Dataset schema: page_no: int64 (values 1–287) | page_content: string (lengths 123–4.15k)
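The rows below appear to be LangChain Document objects extracted page by page from the book's PDF, with the page number mirrored in each row's metadata. As a point of reference, here is a minimal sketch of how rows of this shape are typically produced; the use of PyPDFLoader from the langchain-community package is an assumption inferred from the metadata layout, not something stated in this dump:

```python
# Minimal sketch (assumption: langchain-community's PyPDFLoader, or a loader
# with the same Document shape, produced these rows).
# Requires: pip install langchain-community pypdf
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("/content/Practical AI for Cybersecurity.pdf")
pages = loader.load()  # one Document per PDF page

for doc in pages[:3]:
    # Each Document carries the raw page text plus {'source': ..., 'page': ...}
    print(doc.page_content[:60], doc.metadata)
```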
1
page_content='Practical AI for Cybersecurity' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 1}
2
page_content='Practical AI for Cybersecurity\nRavi Das' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 3}
3
page_content='First edition published 2021\nby CRC Press\n6000 Broken Sound Parkway NW, Suite 300\nBoca Raton, FL 33487-2742\nand by CRC Press\n2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN\n© 2021 Taylor & Francis Group, LLC\nCRC Press is an imprint of Taylor & Francis Group, LLC\nThe right of Ravi Das to be identified as author of this work has been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.\nReasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.\nExcept as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.\nFor permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact mpkbookspermissions@tandf.co.uk\nTrademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.\nLibrary of Congress Cataloging-in-Publication Data\nA catalog record has been requested for this book\nISBN: 978-0-367-70859-7 (hbk)\nISBN: 978-0-367-43715-2 (pbk)\nISBN: 978-1-003-00523-0 (ebk)' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 4}
4
page_content='This book is dedicated to my Lord and Savior, Jesus Christ. It is also dedicated in loving memory to Dr. Gopal Das and Mrs. Kunda Das, and also to my family in Australia, Mr. Kunal Hinduja and his wife, Mrs. Sony Hinduja, and their two wonderful children.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 5}
5
page_content='Contents\nAcknowledgments .......... xv\nNotes on Contributors .......... xvii\n1 Artificial Intelligence .......... 1\nThe Chronological Evolution of Cybersecurity .......... 3\nAn Introduction to Artificial Intelligence .......... 7\nThe Sub-Fields of Artificial Intelligence .......... 9\nMachine Learning .......... 9\nNeural Networks .......... 10\nComputer Vision .......... 11\nA Brief Overview of This Book .......... 12\nThe History of Artificial Intelligence .......... 13\nThe Origin Story .......... 16\nThe Golden Age for Artificial Intelligence .......... 17\nThe Evolution of Expert Systems .......... 19\nThe Importance of Data in Artificial Intelligence .......... 21\nThe Fundamentals of Data Basics .......... 22\nThe Types of Data that are Available .......... 23\nBig Data .......... 25\nUnderstanding Preparation of Data .......... 26\nOther Relevant Data Concepts that are Important to Artificial Intelligence .......... 30\nResources .......... 31\n2 Machine Learning .......... 33\nThe High Level Overview .......... 34\nThe Machine Learning Process .......... 35\nData Order .......... 36\nPicking the Algorithm .......... 36\nTraining the Model .......... 37\nModel Evaluation .......... 37\nFine Tune the Model .......... 37' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 7}
6
page_content='The Machine Learning Algorithm Classifications .......... 37\nThe Machine Learning Algorithms .......... 39\nKey Statistical Concepts .......... 42\nThe Deep Dive into the Theoretical Aspects of Machine Learning .......... 43\nUnderstanding Probability .......... 43\nThe Bayesian Theorem .......... 44\nThe Probability Distributions for Machine Learning .......... 45\nThe Normal Distribution .......... 45\nSupervised Learning .......... 46\nThe Decision Tree .......... 49\nThe Problem of Overfitting the Decision Tree .......... 52\nThe Random Forest .......... 53\nBagging .......... 53\nThe Naïve Bayes Method .......... 54\nThe KNN Algorithm .......... 56\nUnsupervised Learning .......... 58\nGenerative Models .......... 59\nData Compression .......... 59\nAssociation .......... 60\nThe Density Estimation .......... 61\nThe Kernel Density Function .......... 62\nLatent Variables .......... 62\nGaussian Mixture Models .......... 62\nThe Perceptron .......... 62\nTraining a Perceptron .......... 64\nThe Boolean Functions .......... 66\nThe Multiple Layer Perceptrons .......... 67\nThe Multi-Layer Perceptron (MLP): A Statistical Approximator .......... 68\nThe Backpropagation Algorithm .......... 69\nThe Nonlinear Regression .......... 69\nThe Statistical Class Descriptions in Machine Learning .......... 70\nTwo Class Statistical Discrimination .......... 70\nMulticlass Distribution .......... 70\nMultilabel Discrimination .......... 71\nOvertraining .......... 71\nHow a Machine Learning System can Train from Hidden, Statistical Representation .......... 72\nAutoencoders .......... 74\nThe Word2vec Architecture .......... 75\nApplication of Machine Learning to Endpoint Protection .......... 76\nFeature Selection and Feature Engineering for Detecting Malware .......... 79' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 8}
7
page_content='Common Vulnerabilities and Exposures (CVE) .......... 80\nText Strings .......... 80' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 8}
8
page_content='Byte Sequences .......... 81\nOpcodes .......... 81\nAPI, System Calls, and DLLs .......... 81\nEntropy .......... 81\nFeature Selection Process for Malware Detection .......... 82\nFeature Selection Process for Malware Classification .......... 82\nTraining Data .......... 83\nTuning of Malware Classification Models Using a Receiver Operating Characteristic Curve .......... 83\nDetecting Malware after Detonation .......... 85\nSummary .......... 86\nApplications of Machine Learning Using Python .......... 86\nThe Use of Python Programming in the Healthcare Sector .......... 87\nHow Machine Learning is Used with a Chatbot .......... 87\nThe Strategic Advantages of Machine Learning In Chatbots .......... 88\nAn Overall Summary of Machine Learning and Chatbots .......... 90\nThe Building of the Chatbot— A Diabetes Testing Portal .......... 90\nThe Initialization Module .......... 92\nThe Graphical User Interface (GUI) Module .......... 92\nThe Splash Screen Module .......... 93\nThe Patient Greeting Module .......... 93\nThe Diabetes Corpus Module .......... 94\nThe Chatbot Module .......... 95\nThe Sentiment Analysis Module .......... 98\nThe Building of the Chatbot— Predicting Stock Price Movements .......... 100\nThe S&P 500 Price Acquisition Module .......... 100\nLoading Up the Data from the API .......... 101\nThe Prediction of the Next Day Stock Price Based upon Today's Closing Price Module .......... 102\nThe Financial Data Optimization (Clean-Up) Module .......... 103\nThe Plotting of SP500 Financial Data for the Previous Year + One Month .......... 103\nThe Plotting of SP500 Financial Data for One Month .......... 104\nCalculating the Moving Average of an SP500 Stock .......... 104\nCalculating the Moving Average of an SP500 Stock for just a One Month Time Span .......... 104\nThe Creation of the NextDayOpen Column for SP500 Financial Price Prediction .......... 104\nChecking for any Statistical Correlations that Exist in the NextDayOpen Column for SP500 Financial Price Prediction .......... 105\nThe Creation of the Linear Regression Model to Predict Future SP500 Price Data .......... 105' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 9}
9
page_content='Sources .......... 107\nApplication Sources .......... 107\n3 The High Level Overview into Neural Networks .......... 109\nThe High Level Overview into Neural Networks .......... 110\nThe Neuron .......... 110\nThe Fundamentals of the Artificial Neural Network (ANN) .......... 111\nThe Theoretical Aspects of Neural Networks .......... 114\nThe Adaline .......... 114\nThe Training of the Adaline .......... 115\nThe Steepest Descent Training .......... 116\nThe Madaline .......... 116\nAn Example of the Madaline: Character Recognition .......... 118\nThe Backpropagation .......... 119\nModified Backpropagation (BP) Algorithms .......... 120\nThe Momentum Technique .......... 121\nThe Smoothing Method .......... 121\nA Backpropagation Case Study: Character Recognition .......... 121\nA Backpropagation Case Study: Calculating the Monthly High and Low Temperatures .......... 122\nThe Hopfield Networks .......... 125\nThe Establishment, or the Setting of the Weights in the Hopfield Neural Network .......... 126\nCalculating the Level of Specific Network Stability in the Hopfield Neural Network .......... 127\nHow the Hopfield Neural Network Can Be Implemented .......... 129\nThe Continuous Hopfield Models .......... 130\nA Case Study Using the Hopfield Neural Network: Molecular Cell Detection .......... 131\nCounter Propagation .......... 133\nThe Kohonen Self-Organizing Map Layer .......... 133\nThe Grossberg Layer .......... 134\nHow the Kohonen Input Layers are Preprocessed .......... 135\nHow the Statistical Weights are Initialized in the Kohonen Layer .......... 135\nThe Interpolative Mode Layer .......... 136\nThe Training of the Grossberg Layers .......... 136\nThe Combined Counter Propagation Network .......... 136\nA Counter Propagation Case Study: Character Recognition .......... 137\nThe Adaptive Resonance Theory .......... 137\nThe Comparison Layer .......... 138\nThe Recognition Layer .......... 138\nThe Gain and Reset Elements .......... 139' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 10}
10
page_content='The Establishment of the ART Neural Network .......... 140\nThe Training of the ART Neural Network .......... 140\nThe Network Operations of the ART Neural Network .......... 141\nThe Properties of the ART Neural Network .......... 142\nFurther Comments on Both ART 1 & ART 2 Neural Networks .......... 143\nAn ART 1 Case Study: Making Use of Speech Recognition .......... 143\nThe Cognitron and the Neocognitron .......... 145\nThe Network Operations of the Excitatory and Inhibitory Neurons .......... 146\nFor the Inhibitory Neuron Inputs .......... 147\nThe Initial Training of the Excitatory Neurons .......... 147\nLateral Inhibition .......... 148\nThe Neocognitron .......... 148\nRecurrent Backpropagation Networks .......... 149\nFully Recurrent Networks .......... 149\nContinuously Recurrent Backpropagation Networks .......... 150\nDeep Learning Neural Networks .......... 150\nThe Two Types of Deep Learning Neural Networks .......... 153\nThe LAMSTAR Neural Networks .......... 154\nThe Structural Elements of LAMSTAR Neural Networks .......... 155\nThe Mathematical Algorithms That Are Used for Establishing the Statistical Weights for the Inputs and the Links in the SOM Modules in the ANN System .......... 155\nAn Overview of the Processor in LAMSTAR Neural Networks .......... 157\nThe Training Iterations versus the Operational Iterations .......... 157\nThe Issue of Missing Data in the LAMSTAR Neural Network .......... 158\nThe Decision-Making Process of the LAMSTAR Neural Network .......... 158\nThe Data Analysis Functionality in the LAMSTAR Neural Network .......... 158\nDeep Learning Neural Networks— The Autoencoder .......... 161\nThe Applications of Neural Networks .......... 162\nThe Major Cloud Providers for Neural Networks .......... 163\nThe Neural Network Components of the Amazon Web Services & Microsoft Azure .......... 164\nThe Amazon Web Services (AWS) .......... 164\nThe Amazon SageMaker .......... 165\nFrom the Standpoint of Data Preparation .......... 165\nFrom the Standpoint of Algorithm Selection, Optimization, and Training .......... 165' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 11}
11
page_content='From the Standpoint of AI Mathematical Algorithm and Optimizing .......... 166\nFrom the Standpoint of Algorithm Deployment .......... 167\nFrom the Standpoint of Integration and Invocation .......... 167\nThe Amazon Comprehend .......... 168\nAmazon Rekognition .......... 169\nAmazon Translate .......... 169\nAmazon Transcribe .......... 171\nAmazon Textract .......... 171\nMicrosoft Azure .......... 171\nThe Azure Machine Learning Studio Interactive Workspace .......... 172\nThe Azure Machine Learning Service .......... 173\nThe Azure Cognitive Services .......... 174\nThe Google Cloud Platform .......... 174\nThe Google Cloud AI Building Blocks .......... 175\nBuilding an Application That Can Create Various Income Classes .......... 177\nBuilding an Application That Can Predict Housing Prices .......... 179\nBuilding an Application That Can Predict Vehicle Traffic Patterns in Large Cities .......... 180\nBuilding an Application That Can Predict E-Commerce Buying Patterns .......... 181\nBuilding an Application That Can Recommend Top Movie Picks .......... 182\nBuilding a Sentiment Analyzer Application .......... 184\nApplication of Neural Networks to Predictive Maintenance .......... 185\nNormal Behavior Model Using Autoencoders .......... 186\nWind Turbine Example .......... 187\nResources .......... 192\n4 Typical Applications for Computer Vision .......... 193\nTypical Applications for Computer Vision .......... 194\nA Historical Review into Computer Vision .......... 195\nThe Creation of Static and Dynamic Images in Computer Vision (Image Creation) .......... 199\nThe Geometric Constructs— 2-Dimensional Facets .......... 199\nThe Geometric Constructs— 3-Dimensional Facets .......... 200\nThe Geometric Constructs— 2-Dimensional Transformations .......... 202\nThe Geometric Constructs— 3-Dimensional Transformations .......... 204\nThe Geometric Constructs— 3-Dimensional Rotations .......... 205\nAscertaining Which 3-Dimensional Technique Is the Most Optimized to Use for the ANN System .......... 206\nHow to Implement 3-Dimensional Images onto a Geometric Plane .......... 206\nThe 3-Dimensional Perspective Technique .......... 207\nThe Mechanics of the Camera .......... 208\nDetermining the Focal Length of the Camera .......... 209' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 12}
12
page_content='Determining the Mathematical Matrix of the Camera .......... 210\nDetermining the Projective Depth of the Camera .......... 211\nHow a 3-Dimensional Image Can Be Transformed between Two or More Cameras .......... 212\nHow a 3-Dimensional Image Can Be Projected into an Object-Centered Format .......... 212\nHow to Take into Account the Distortions in the Lens of the Camera .......... 213\nHow to Create Photometric, 3-Dimensional Images .......... 215\nThe Lighting Variable .......... 215\nThe Effects of Light Reflectance and Shading .......... 216\nThe Importance of Optics .......... 220\nThe Effects of Chromatic Aberration .......... 221\nThe Properties of Vignetting .......... 222\nThe Properties of the Digital Camera .......... 223\nShutter Speed .......... 224\nSampling Pitch .......... 224\nFill Factor .......... 224\nSize of the Central Processing Unit (CPU) .......... 225\nAnalog Gain .......... 225\nSensor Noise .......... 225\nThe ADC Resolution .......... 225\nThe Digital Post-Processing .......... 226\nThe Sampling of the 2-Dimensional or 3-Dimensional Images .......... 226\nThe Importance of Color in the 2-Dimensional or 3-Dimensional Image .......... 227\nThe CIE, RGB, and XYZ Theorem .......... 228\nThe Importance of the L*a*b Color Regime for 2-Dimensional and 3-Dimensional Images .......... 228\nThe Importance of Color-Based Cameras in Computer Vision .......... 229\nThe Use of the Color Filter Arrays .......... 229\nThe Importance of Color Balance .......... 230\nThe Role of Gamma in the RGB Color Regime .......... 230\nThe Role of the Other Color Regimes in 2-Dimensional and 3-Dimensional Images .......... 231\nThe Role of Compression in 2-Dimensional and 3-Dimensional Images .......... 232\nImage Processing Techniques .......... 233\nThe Importance of the Point Operators .......... 234\nThe Importance of Color Transformations .......... 235\nThe Impacts of Image Matting .......... 236\nThe Impacts of the Equalization of the Histogram .......... 236\nMaking Use of the Local-Based Histogram Equalization .......... 237' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 13}
13
page_content='The Concepts of Linear Filtering .......... 238\nThe Importance of Padding in the 2-Dimensional or 3-Dimensional Image .......... 239\nThe Effects of Separable Filtering .......... 240\nWhat the Band Pass and Steerable Filters Are .......... 241\nThe Importance of the Integral Image Filters .......... 242\nA Breakdown of the Recursive Filtering Technique .......... 242\nThe Remaining Operating Techniques That Can Be Used by the ANN System .......... 243\nAn Overview of the Median Filtering Technique .......... 243\nA Review of the Bilateral Filtering Technique .......... 244\nThe Iterated Adaptive Smoothing/Anisotropic Diffusion Filtering Technique .......... 245\nThe Importance of the Morphology Technique .......... 245\nThe Impacts of the Distance Transformation Technique .......... 247\nThe Effects of the Connected Components .......... 248\nThe Fourier Transformation Techniques .......... 248\nThe Importance of the Fourier Transformation-Based Pairs .......... 252\nThe Importance of the 2-Dimensional Fourier Transformations .......... 253\nThe Impacts of the Wiener Filtering Technique .......... 254\nThe Functionalities of the Discrete Cosine Transform .......... 255\nThe Concepts of Pyramids .......... 256\nThe Importance of Interpolation .......... 257\nThe Importance of Decimation .......... 258\nThe Importance of Multi-Level Representations .......... 259\nThe Essentials of Wavelets .......... 260\nThe Importance of Geometric-Based Transformations .......... 263\nThe Impacts of Parametric Transformations .......... 264\nResources .......... 265\n5 Conclusion .......... 267\nIndex .......... 271' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 14}
14
page_content='Acknowledgments\nI would like to thank John Wyzalek, my editor, for his help and guidance in the preparation of this book. Many special thanks go out to Randy Groves for his contributions to this book as well.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 15}
15
page_content='Notes on Contributors\nRavi Das is a business development specialist for The AST Cybersecurity Group, Inc., a leading Cybersecurity content firm located in the Greater Chicago area. Ravi holds a Master of Science degree in Agribusiness Economics (Thesis in International Trade), and a Master of Business Administration degree in Management Information Systems.\nHe has authored six books, with two more upcoming: one on COVID-19 and its impacts on Cybersecurity, and one on Cybersecurity Risk and its impact on Cybersecurity Insurance Policies.\nRandy Groves is the SVP of Engineering at SparkCognition, the world leader in industrial artificial intelligence solutions. Before SparkCognition, he was the chief technology officer of Teradici Corporation, where he was responsible for defining the overall technology strategy and technology partnerships which led to the adoption of the industry-leading PCoIP protocol for VMware Virtual Desktop Infrastructure, Amazon WorkSpaces Desktop-as-a-Service, and Teradici Cloud Access Software. He also served as vice president of Engineering at LifeSize Communications, Inc. (acquired by Logitech) and led the team that released the first high-definition video conferencing products into the mainstream video conferencing market. Before joining LifeSize, he served as the chief technology officer of Dell Inc.'s product group, responsible for the architecture and technology direction of all of Dell's product offerings. Prior to that, he served as general manager of the Dell Enterprise Systems Group and led the worldwide development and marketing of Dell's server, storage, and systems management software products. He also spent 21 years with IBM, where he held many product development roles for IBM's Intel- and RISC-based servers, as well as roles in corporate strategy and RISC microprocessor development and architecture.\nHe is the author of numerous technical papers, disclosures, and patents, as well as the recipient of several corporate and industry awards. He holds a Master's in Electrical Engineering from the University of Texas at Austin, a Master's in Management of Technology from the Massachusetts Institute of Technology, and a Bachelor's in Electrical Engineering and Business from Kansas State University.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 17}
16
page_content='Chapter 1\nArtificial Intelligence\nThere is no doubt that the world today is a lot different than it was fifty or even thirty years ago, from the standpoint of technology. Just imagine when we landed the first man on the moon back in 1969. The computers that were used at NASA were all mainframes, developed primarily by IBM and other related computer companies. These computers were very large and massive— in fact, they could even occupy an entire room.\nEven the computers that were used on the Saturn V rocket and in the Command and Lunar Excursion Modules were of the mainframe type. Back then, having just 5 MB of RAM in a small computer was a big thing. By today's standards, the iPhone is light years away from this kind of computing technology, and in just this one device, we perhaps have enough computing power to send the same Saturn V rocket to the moon and back at least 100 times.\nBut just think about it: all that was needed back then was just that much memory. The concepts of the Cloud, virtualization, etc. were barely even heard of. The computers that were designed back then, for example, had just one specific purpose: to process the input and output instructions (also known as "I/O") so that the spacecraft could have a safe journey to the moon, land on it, and return safely back to Earth once again.\nBecause of these limited needs (though considered to be rather gargantuan at the time), all that was needed was just that small amount of memory. But by today's standards, given all of the applications that we have today, we need at least 1,000 times that much just to run the simplest of Cloud-based applications. Also back then, there was one concept that was not even heard of yet: Cybersecurity.\nIn fact, even the term "Cyber" was not heard of. Most of the security issues back then revolved around physical security. Take, for example, NASA again. The' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 19}
17
page_content='main concern was only letting the authorized and legitimate employees into Mission Control. Who would have thought back then that there was even the slightest possibility that a Cyberattacker could literally take over control of the computers and even potentially steer the Saturn V rocket away from its planned trajectory?\nBut today, given all of the recent advancements in technology, this doomsday scenario is now a reality. For example, a Cyberattacker could very easily gain access to the electronic gadgetry that is associated with a modern jetliner, automobile, or even ship. By getting access through a covert backdoor, the Cyberattacker could potentially take over the controls of any of these vessels and literally take it to a destination it was not intended to go.\nSo as a result, the concept of Cybersecurity has now come front and center, especially given the crisis that the world has been in with the Coronavirus, or COVID-19. But when we think of this term, really, what does it mean exactly? Many thoughts and images come to mind. For instance, the thoughts of servers, workstations, and wireless devices (which include notebooks, tablets, and Smartphones such as Android and iOS devices) come into view.\nAlso, one may even think of the Internet and all of the hundreds of thousands of miles of cabling that have been deployed so that we can access the websites of our choice in just a mere second or so. But keep in mind that this is just one aspect of Cybersecurity. Another critical aspect that often gets forgotten about is the physical security that is involved. As described previously with our NASA example, this involves primarily protecting the physical premises of a business or corporation, both the exterior and interior. For instance, this means not only guarding primary access to the premises itself, but also the interior sections, such as the server rooms and the places where confidential corporate information and data are held. It is very important to keep in mind that all of this, both physical and digital, is at grave risk of being attacked.\nNo one individual or business entity is free from this; all parties are at risk of being hit by a Cyberattack. The key thing is how to mitigate that risk from spreading even further once you have discovered that you have indeed become a victim. So, now that we have addressed what the scope of Cybersecurity really is, how is it specifically defined?\nIt can be defined as follows:\nAlso referred to as information security, cybersecurity refers to the practice of ensuring the integrity, confidentiality, and availability (ICA) of information. Cybersecurity is comprised of an evolving set of tools, risk management approaches, technologies, training, and best practices designed to protect networks, devices, programs, and data from attacks or unauthorized access.\n(Forcepoint, n.d.)' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 20}
18
page_content='Granted, this is a very broad definition. In an effort to narrow it down some more, Cybersecurity involves the following components:\n- Network security (protecting the entire network and subnets of a business);\n- Application security (protecting mission critical applications, especially those that are Web-based);\n- Endpoint security (protecting the origination and destination points of a network connection);\n- Data security (protecting the mission critical datasets, especially those that relate to Personal Identifiable Information (PII));\n- Identity management (making sure that only legitimate individuals can gain logical and/or physical access);\n- Database and infrastructure security (protecting those servers that house the PII);\n- Cloud security (protecting the Infrastructure as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS) components of a Cloud-based platform);\n- Mobile security (protecting all aspects of wireless devices and Smartphones, from the hardware, operating system, and mobile standpoints);\n- Disaster recovery/business continuity planning (coming up with the appropriate plans so that a business can bring mission critical applications back up to an operational level, and keep them running, in the wake of a security breach);\n- End-user education (keeping both employees and individuals trained as to how they can mitigate the risk of becoming the next victim).\nNow that we have explored the importance, definition, and components of Cybersecurity, it is important to take a look at its evolution, which is illustrated in the next section.\nThe Chronological Evolution of Cybersecurity\nJust as much as technology has quickly evolved and developed, so too has the world of Cybersecurity. As mentioned, about 50 years ago, during the height of the Apollo space program, the term "Cyber" was probably barely even conceived of. But in today's times, and especially in this decade, that particular term is now almost a part of our everyday lives.\nIn this section, we provide an outline of just how Cybersecurity actually evolved.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 21}
19
page_content='The Morris Worm (1988):\n*This was created by Robert Morris, a grad student at Cornell.\n*It brought down 10% of the 70,000 computers that were connected to the Internet on a worldwide basis.\n*It caused at least $96 Million in total damages.\n*This actually served as the prototype for the Distributed Denial of Service (DDoS) attacks that we see today.\nThe Melissa Virus (March 1999):\n*This was named after a Florida based stripper, and it infected .DOC files which were transmitted to the address books in Microsoft Outlook.\n*This virus caused Microsoft, Lockheed Martin, and Intel to shut down their entire operations for a substantial period of time.\n*This caused $80 Million in damages, and infected well over 1,000,000 computers on a global basis.\n*The inventor of the virus, David L. Smith, spent some 20 months in prison.\nThe United States Department of Defense (DoD) (August 1999):\n*Jonathan James, a 15 year old hacker, broke into the IT/Network Infrastructure at the Defense Threat Reduction Agency.\n*He was the first juvenile to be convicted of a major Cybercrime.\n*NASA had to close down their entire base of operations for at least three weeks.\n*Not only were passwords stolen, but this Cyberattacker also stole software applications worth at least $1.7 Million which supported the International Space Station.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 22}
20
page_content='Mafiaboy (February 2000):\n*Another juvenile hacker, Michael Calce (aka "Mafiaboy"), launched a special threat variant known as "Project Rivolta".\n*This was a series of Denial of Service (DoS) attacks that brought down the websites of major United States corporations.\n*Examples of this include Yahoo, eBay, CNN, E-Trade, and Amazon based servers.\n*This prompted the White House to have their first ever Cybersecurity summit.\n*The financial damage exceeded well over $1.2 Billion.\nTarget (November 2013):\n*This was deemed to be one of the largest retail Cyberattacks in recent history, and it hit right during the 2013 Holiday Season.\n*Because of this Cyberattack, the net profits of Target dropped as much as 46%.\n*Over 40 Million credit card numbers were stolen.\n*The malware was installed into the Point of Sale (PoS) terminals at all of the Target stores.\n*The stolen data was sold on the Dark Web for a huge profit.\n*This served as the model for subsequent retail based Cyberattacks.\nSony Pictures (November 2014):\n*Social Security and credit card numbers were leaked to the public.\n*Confidential payroll information and data were also released.\n*This Cyberattack prompted the Co-Chair of Sony Pictures, Amy Pascal, to step down from her position.\nAnthem (January 2015):\n*This was deemed to be the largest Cyberattack to hit a major health organization.\n*The Personal Identifiable Information (PII) of over 80,000,000 members was stolen, which included Social Security numbers, Email addresses, and employment information.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 23}
21
page_content='The First Ransomworm (2017):\n*WannaCry was deemed to be the first of the Ransomware threat variants, and it targeted computers which ran the Windows OS.\n*The only way that the victim could get their computer to work again was if they paid a ransom to the Cyberattacker, in the form of a Virtual Currency, one example of which is Bitcoin.\n*In just one day, the WannaCry threat variant infected well over 230,000 computers in over 50 countries.\n*A newer version of the threat variant was "NotPetya". This infected well over 12,500 computers on a global basis. The impacted industries included energy firms, banks, and government agencies.\nThe Largest Credit Card Cyberattack (2017):\n*The credit card agency known as Equifax totally failed to install the latest software patches and upgrades to their Apache Struts Server.\n*The Cyberattackers were able to gain access to over 210,000 consumer credit cards, which impacted over 143 Million Americans.\nFacebook, MyHeritage, Marriott Hotels, and British Airways (2018):\n*Facebook was hit with a major Cyberattack involving the analytics firm Cambridge Analytica. The Personal Identifiable Information (PII) that was stolen impacted over 87 Million users.\n*With MyHeritage, over 92 Million users were impacted. Luckily, no credit card or banking information, DNA test results, or passwords were stolen.\n*With Marriott Hotels, over 500 Million users were impacted. Although this breach occurred in 2018, the underlying Malware was actually deployed in 2014. The company was handed a whopping $123 Million fine.\n*With British Airways, over 500,000 credit card transactions were affected. The stolen Personal Identifiable Information (PII) included names, Email addresses, telephone numbers, addresses, and credit card numbers. The company faced a gargantuan $230 Million fine as imposed by the GDPR, or 1.5% of its total revenue.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 24}
22
page_content='The Singapore Health Sector (2019):\n*Singapore's Health Sciences Authority (HSA) outsourced some of their functionality to a third party vendor known as the Secur Solutions Group. The Personal Identifiable Information (PII) of 808,000 donors was revealed online, and the items that were hijacked included names, ID card numbers, gender, dates of the last three donations, and in some instances, blood type, height, and weight of the donors.\n*Singapore's Ministry of Health's National Public Health Unit was impacted when the HIV status of 14,200 people was revealed online.\nSo as you can see, this is a chronological timeline of all of the major Cybersecurity events that have led us up to the point where we are today. Even in the world of Cybersecurity, major technological advancements have been made in order to thwart the Cyberattacker and to keep up with the ever-changing dynamics of the Cyber Threat Landscape.\nOne such area in this regard is known as "Artificial Intelligence," or "AI" for short. This is further reviewed in the next section, and is the primary focal point of this entire book.\nAn Introduction to Artificial Intelligence\nThe concept of Artificial Intelligence is not a new one; rather, it goes back a long time— even to the 1960s. While there were some applications for it being developed at the time, it did not really pick up the huge momentum that it has now until recently, especially as it relates to Cybersecurity. In fact, interest in AI in this industry did not truly take off until late 2019. As of now, along with the other techno jargon that is out there, AI is among the biggest buzzwords today.\nBut it is not just in Cybersecurity that AI is attracting all of this interest. There are many other industries as well, especially manufacturing and the supply chain, and even logistics. You may be wondering at this point, just what is so special about Artificial Intelligence? The key thing is that this is a field that can help bring task automation to a much more optimal and efficient level than any human ever could.\nFor example, in the aforementioned industries (except for Cybersecurity), various robotic processes can be developed from AI tools in order to speed up certain processes. This includes doing those repetitive tasks in the automobile production line, or even in the warehouses of the supply chain and logistics industries. This is an area known as "Robotic Process Automation," or "RPA" for short, and will be examined in more detail later in this book.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 25}
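To make the idea of repetitive task automation a bit more concrete, here is a toy Python sketch of the kind of small, repetitive security chore that lends itself to automation: scanning an authentication log for failed logins and flagging noisy source IPs. The log path, line format, and threshold are illustrative assumptions, not something taken from the book.

```python
# Toy sketch of repetitive-task automation (illustrative assumptions only:
# the log path, line format, and threshold are hypothetical).
import re
from collections import Counter

FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def noisy_ips(log_path, threshold=10):
    """Count failed-login attempts per source IP and return the noisy ones."""
    counts = Counter()
    with open(log_path) as handle:
        for line in handle:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    # Keep only the sources that crossed the alerting threshold
    return {ip: n for ip, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    print(noisy_ips("/var/log/auth.log"))  # path is illustrative
```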
23
page_content='But as it relates to Cybersecurity, one of the main areas where Artificial Intelligence is playing a key role is in task automation, as just discussed. For example, both Penetration Testing and Threat Hunting are very time consuming, laborious, and mentally grueling tasks. There are a lot of smaller steps in both of these processes that have to take place, and once again, many of them are repetitive. This is where the tools of AI can come into play.\nAs a result, the team members on both the Penetration Testing and Threat Hunting sides are freed up to focus on much more important tasks, which include finding both the hidden and unhidden holes and weaknesses in their client's IT and Network Infrastructure, and providing the appropriate courses of action that need to be taken in order to close up these gaps and weaknesses.\nAnother great area in Cybersecurity where Artificial Intelligence tools are being used is the filtering of false positives. For example, the IT security teams of many businesses and corporations, large or small, are being totally flooded with warnings and alerts as a result of the many security tools they make use of, especially when it comes to Firewalls, Network Intrusion Devices, and Routers. At the present time, they have to manually filter through each one so that the alerts can be triaged appropriately.\nBut because of the time it takes to do this, many of the real alerts and warnings that come through often remain unnoticed, thus increasing that business entity's Cyber risk by at least 1,000 times. By using Artificial Intelligence tools, all of these so-called false positives are filtered out, leaving only the real and legitimate ones to be examined and triaged. As a result, the IT security teams can react to these particular threats in a much quicker fashion, and most importantly, maintain the proactive mindset needed to thwart these threat variants. (A brief illustrative sketch of this idea appears after the sub-field definitions below.)\nIt should also be noted that many businesses and corporations are now starting to realize that having too many security tools to beef up their respective lines of defense is not good at all— in fact, it only increases the attack surface for the Cyberattacker. So now, many of these business entities are starting to see the value of implementing various risk analysis tools to see where all of these security technologies can be strategically placed.\nSo rather than taking the mindset that more is better, the thinking is now shifting toward quality of deployment being much more crucial and important. Rather than deploying ten Firewalls, it is far more strategic to deploy perhaps just three where they are needed the most. Also, by taking this kind of mindset, the business or corporation will achieve a far greater Return On Investment (ROI), which means that the CIO and/or CISO will be in a much better position to get more from their security budgets.\nBut you may be asking at this point, just what exactly is Artificial Intelligence? A formal definition of it is here:\nArtificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Most' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 26}
24
page_content='AI examples that you hear about today— from chess-playing computers to self-driving cars— rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data.\n(SAS(a), n.d.)\nAs one can see from the above definition, the main objective of Artificial Intelligence is to learn from past behaviors and project into the future. In this regard, past behavior typically means making use of the large datasets that arise from the various data feeds fed into the AI technologies being used, learning the trends in them, and using that knowledge both to perform the task at hand and to look into the future.\nIn this regard, another great boon that Artificial Intelligence brings to Cybersecurity is its ability to predict what newer potential threat variants could look like. We will be examining the sheer importance of data for Artificial Intelligence later in this chapter. But at this point, it is very important to keep in mind that Artificial Intelligence is just the main field; there are many sub-fields that fall just below it, the most common ones being as follows:\n- Machine Learning;\n- Neural Networks;\n- Computer Vision.\nA formal definition for each of the above is provided in the next section.\nThe Sub-Fields of Artificial Intelligence\nMachine Learning\nThe first sub-field we will take a brief look into is what is known as "Machine Learning," or "ML" for short. A specific definition for it is as follows:\nMachine-learning algorithms use statistics to find patterns in massive amounts of data. And data, here, encompasses a lot of things— numbers, words, images, clicks, what have you. If it can be digitally stored, it can be fed into a machine-learning algorithm.\nMachine learning is the process that powers many of the services we use today— recommendation systems like those on Netflix, YouTube, and Spotify; search engines like Google and Baidu; social-media feeds' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 27}
25
page_content='like Facebook and Twitter; voice assistants like Siri and Alexa. The list goes on.\n(MIT Technology Review, n.d.)\nThe sub-field of Machine Learning is actually very expansive, diverse, and even quite complex. But to put it in very broad terms, as the above definition describes, it uses statistical techniques, more so than purely mathematical ones, to mine and comb through huge datasets and find hidden trends. These can then be fed into the Artificial Intelligence tool, for example, to predict the future Cyber Threat Landscape. But it also has many other applications, as exemplified by the second part of the definition.\nNeural Networks\nThe second sub-field to be examined is that of Neural Networks (also known as NNs). A specific definition for it is as follows:\nNeural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated.\nNeural networks help us cluster and classify. You can think of them as a clustering and classification layer on top of the data you store and manage. They help to group unlabeled data according to similarities among the example inputs, and they classify data when they have a labeled dataset to train on. (Neural networks can also extract features that are fed to other algorithms for clustering and classification; so you can think of deep neural networks as components of larger machine-learning applications involving algorithms for reinforcement learning, classification and regression).\n(Pathmind, n.d.)\nIn a manner similar to that of Machine Learning, Neural Networks are also designed to look at massive datasets in order to recognize both hidden and unhidden patterns. But the primary difference here is that Neural Networks are designed to try to replicate the thinking process of the human brain, by closely examining the neuronic activity of the brain.\nThe human brain consists of billions of neurons, and it is hypothesized that they are the catalyst for the decision-making process that occurs within the brain. Another key difference is that Neural Networks can also be used to organize, filter through, and present those datasets that are the' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 28}
26
page_content='Artificial Intelligence | 11\nmost relevant. Back to our previous example of filtering for false positives: this is a prime example of where Neural Networks are used. The concept of the neuron will be examined in more detail later in this book.\nComputer Vision\nThe third sub-field to be examined is that of Computer Vision. A specific definition for it is as follows:\nComputer vision is the process of using machines to understand and analyze imagery (both photos and videos). While these types of algorithms have been around in various forms since the 1960s, recent advances in Machine Learning, as well as leaps forward in data storage, computing capabilities, and cheap high-quality input devices, have driven major improvements in how well our software can explore this kind of content.\nComputer vision is the broad parent name for any computations involving visual content: that means images, videos, icons, and anything else with pixels involved. But within this parent idea, there are a few specific tasks that are core building blocks:\nIn object classification, you train a model on a dataset of specific objects, and the model classifies new objects as belonging to one or more of your training categories.\nFor object identification, your model will recognize a specific instance of an object, for example, parsing two faces in an image and tagging one as Tom Cruise and one as Katie Holmes.\n(Algorithmia, n.d.)\nAs one can see from the above definition, Computer Vision is used primarily for examining visual datasets, analyzing them, and feeding the results into the Artificial Intelligence tool. As it relates to Cybersecurity, this is most pertinent when it comes to protecting the physical assets of a business or a corporation, not so much the digital ones.\nFor example, CCTV cameras are used to help confirm the identity of those individuals (such as employees) who are trying to gain either primary entrance access or secondary access inside the business or corporation. Facial Recognition is very often used here to track and filter for any sort of malicious or anomalous behavior.\nThis is often viewed as a second tier to the CCTV camera; in addition, a Computer Vision tool can be deployed alongside the Facial Recognition technology in order to collect much more robust samples, and to react to a security breach in a much quicker and more efficient manner.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 29}
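To make the CCTV and Facial Recognition example above concrete, here is a minimal sketch of the detection step using OpenCV's bundled Haar-cascade face detector. The file name frame.jpg and the choice of a Haar cascade (rather than a deep-learning detector) are illustrative assumptions, not details taken from this book.

```python
# A minimal sketch: detecting faces in one CCTV frame with OpenCV.
# Assumes OpenCV is installed (pip install opencv-python) and a local
# image "frame.jpg" exists; the Haar cascade is one possible detector.
import cv2

# Load the frontal-face detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Read the frame and convert it to grayscale, which the cascade expects.
frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Each detection is an (x, y, width, height) bounding box.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")

# Draw the boxes so a security analyst could review the frame.
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_annotated.jpg", frame)
```

In a real deployment, the detected faces would then be passed to a separate recognition model to match them against an employee database; detection and recognition are distinct steps.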
27
page_content='12 | Artificial Intelligence\nThese are the main areas that will be covered in this book, and an overview is provided in the next section.\nA Brief Overview of This Book\nAs mentioned, and as one can tell even from the title of this first chapter, the entire premise of this book is built around Artificial Intelligence. True, there are many books out there focused on this subject matter, but many of them are very theoretical in nature, and perhaps do not offer as much value to businesses and corporations. Rather, they are geared much more toward the academic and government markets, such as research scientists, university professors, defense contractors, and the like. Not many of them have actually dealt with the application side of Artificial Intelligence. This is what separates this book from the others that are out there.\nFor example, there is a theoretical component to each chapter. This is necessary because, in order to understand the application side of Artificial Intelligence, one needs to have a firm background in its theory as well. This encompasses roughly the first half of each chapter. The second half of each chapter is then devoted to the practical side of Artificial Intelligence, namely, the applications.\nWhat is unique about this book is that the applications discussed and reviewed are those that have actually been deployed, or are in the process of being deployed, in various types and kinds of Cybersecurity settings. These are written by the Subject Matter Experts (SMEs) themselves. To the best of our knowledge, there is no other book that does this. As you go through these chapters, you will find it very enriching to read about these particular applications.\nFinally, the very last chapter is devoted to the best practices for Artificial Intelligence. In other words, not only have we covered both the theoretical and application angles, but we also offer a Best Practices guide (or, if you will, a checklist) for both the creation and the deployment of Artificial Intelligence applications.\nTherefore, this book can really serve two types of audiences: 1) the academic and government sector, as discussed before; and 2) the CIOs, CISOs, IT Security Managers, and even the Project Managers who want to deploy Artificial Intelligence applications.\nThe structure and layout of this book is as follows:\nChapter 1: An Introduction to Artificial Intelligence\nChapter 2: An Overview into Machine Learning\nChapter 3: The Importance of Neural Networks\nChapter 4: Examining a Growing Sub-Specialty of Artificial Intelligence, Computer Vision\nChapter 5: Final Conclusions' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 30}
28
page_content='Artificial Intelligence | 13\nTo start the theoretical component of this first chapter, we first provide an examination of Artificial Intelligence and how it came to be such an important component of Cybersecurity today. Secondly, this is followed by a look at the importance of data; after all, as reviewed earlier, data is the fuel that drives the engines of Artificial Intelligence applications.\nThe History of Artificial Intelligence\nTo start off with, probably the first well-known figure in the field of Artificial Intelligence is Alan Turing. He was deemed a pioneer in the field of computer science, and in fact is very often referred to as the “Father of Artificial Intelligence.” Way back in 1936, he wrote a major scientific paper entitled “On Computable Numbers.” In this famous piece of work, he lays down the concepts for what a computer is and what its primary purposes are to be. It is important to keep in mind that computers hardly existed during this time frame; in fact, the first “breed” of computers would not come out until the next decade.\nHis idea of a computer was based upon the premise that it has to be intelligent in some sort of manner or fashion. But at this point in time, it was very difficult to come up with an actual measure of what “intelligence” really is. Thus, he came up with the concept that ultimately became known as the “Turing Test.”\nIn this scenario, there is a game with three players involved. One of the participants is a human being, and another is a computer. The third participant is the moderator, or evaluator. In this scenario, the moderator asks a series of open-ended questions of both of them, in an effort to determine which of the two participants is actually the human being. If a determination cannot be made by asking these open-ended questions, the computer is deemed to be an “intelligent” entity.\nThe Turing Test is illustrated below:\n[Figure: The Turing Test. An Evaluator poses questions to both a Human Participant and a Computer Participant, and tries to tell them apart from the answers.]' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 31}
29
page_content='14 | Artificial Intelligence\nIn this model, it is not necessary that the computer actually know something specific, possess a large amount of information and data, or even be correct in its answers to the open-ended questions. Rather, there should be solid indications that the computer can, in some way or another, communicate with the Evaluator on its own, without any human intervention involved.\nBelieve it or not, the Turing Test has certainly stood the test of time by still being difficult to crack, even in this new decade of the twenty-first century. For example, there have been many contests and competitions to see if computers can hold up to the Turing Test, and some of the most noteworthy ones have been the “Loebner Prize” and the “Turing Test Competition.”\nA turning point occurred at the I/O Conference held by Google in May 2018. The CEO of Google at the time, Sundar Pichai, gave a direct demonstration of one of their newest applications, known as the “Google Assistant.” This application was used to place a direct call to a local hairdresser in order to set up an appointment. Somebody did pick up on the other line, but this scenario failed the Turing Test. Why? Because the questions that were asked were closed-ended ones, not open-ended questions.\nThe next major breakthrough to come after the Turing Test was a scientific paper entitled “Minds, Brains, and Programs.” This was written by the scientist John Searle, and was published in 1980. In this research paper, he formulated another model, which closely paralleled the Turing Test, that became known as the “Chinese Room Argument.”\nHere is the basic premise of it: Suppose there is an individual named “Tracey.” She does not know or comprehend the Chinese language, but she has two manuals in hand with step-by-step rules on how to interpret and respond to it. Just outside of this room is another individual by the name of “Suzanne.” Suzanne does understand the Chinese language, and she passes written questions in Chinese in to Tracey.\nAfter a period of time, Suzanne will get reasonably accurate responses back from Tracey. As such, it is plausible for Suzanne to safely assume that Tracey can understand, to varying degrees, the Chinese language.\nThe thrust of this argument is that if Tracey can produce sensible Chinese responses merely by following the rules in her manuals, without ever actually understanding the Chinese language, then a computer that manipulates symbols according to a program cannot be said to truly understand either; it possesses no more genuine understanding than Tracey does.\nThe paper John Searle wrote also laid down the two types of Artificial Intelligence that could potentially exist:' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 32}
30
page_content='Artificial Intelligence | 15\n1) Strong AI: This is when a computer truly understands and is fully cognizant of what is transpiring around it. This could even involve the computer having some sort of emotions and creativity attached to it. This area of Artificial Intelligence is also technically known as “Artificial General Intelligence,” or “AGI” for short.\n2) Weak AI: This is a form of Artificial Intelligence that is deemed not as strong in nature, and is given a very narrow focus or set of tasks to work on. Prime examples of this include the Virtual Personal Assistants (VPAs) Siri and Alexa (which belong to Apple and Amazon, respectively).\nThe advent of the Turing Test also led to the development of some other noteworthy models, which include the following:\n1) The Kurzweil-Kapor Test: This model was created and developed by Ray Kurzweil and Mitch Kapor. In this test, a computer is required to carry out a conversation with three judges. If two of them deem the conversation to be “intelligent” in nature, then the computer is also deemed to be intelligent. But the exact criteria for what actually defines an “intelligent conversation” were not given.\n2) The Coffee Test: This model was developed by the Apple co-founder Steve Wozniak, and it is actually quite simple: a robot must be able to enter a home, find where the kitchen is located, and make/brew a cup of coffee.\nThe next major breakthrough to come in Artificial Intelligence was a scientific paper entitled “A Logical Calculus of the Ideas Immanent in Nervous Activity,” co-written by Warren McCulloch and Walter Pitts in 1943. The major premise of this paper was that logical deductions could explain the powers of the human brain. The paper was subsequently published in the Bulletin of Mathematical Biophysics.\nIn this paper, McCulloch and Pitts posit that the core functions of the human brain, in particular the neurons and the synaptic activity that takes place, can be explained by mathematical logical operators (for example, And, Not, etc.).\nIn an effort to build off this, Norbert Wiener created and published a scientific book entitled Cybernetics: Or Control and Communication in the Animal and the Machine. This particular book covered such topics as Newtonian Mechanics, Statistics, and Thermodynamics, and it introduced ideas that later fed into what became known as “Chaos Theory.” He also equated the human brain to a computer, arguing that it should be able to play a game of chess and to learn at ever higher levels as it played more games.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 33}
31
page_content='16 | Artificial Intelligence\nThe next major period of time for Artificial Intelligence became known as “The Origin Story,” and it is reviewed in more detail in the next subsection.\nThe Origin Story\nThe next major stepping stone in the world of Artificial Intelligence came when an individual by the name of John McCarthy organized and hosted a ten-week research program at Dartmouth College. It was entitled the “Dartmouth Summer Research Project on Artificial Intelligence,” and this was the first time that the term “Artificial Intelligence” had ever been used. The exact nature of this project is as follows:\nThe study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will thus be made to find out how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.\n(Taulli, 2019)\nDuring this particular retreat, a computer program called the “Logic Theorist” was demonstrated, which had actually been developed at the RAND Corporation. Its focus was to solve complex mathematical theorems from the publication known as the Principia Mathematica. To run this program, an IBM 701 mainframe computer was used, which relied primarily on machine language for the processing of information and data.\nBut in order to further optimize the speed of the “Logic Theorist,” a new processing language was used, and this became known as the “Information Processing Language,” or “IPL” for short. But the IBM 701 mainframe did not have enough memory or processing power for the IPL, so this led to the creation of yet another development: Dynamic Memory Allocation. As a result, the “Logic Theorist” has been deemed the first Artificial Intelligence program ever created.\nAfter this, John McCarthy went on to create other aspects of Artificial Intelligence in the 1950s. Some of these included the following:\n- The LISP Programming Language:\n– This made the use of nonnumerical data possible (such as qualitative data points);\n– Programming functionalities such as Recursion, Dynamic Typing, and Garbage Collection were created and deployed;' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 34}
32
page_content='Artificial Intelligence | 17\n- Time-sharing mainframe computers: McCarthy helped create these, and they were actually a forerunner to the first Internet, called the “ARPANET”;\n- The Computer Controlled Car: This was a scientific paper he published that described how a person could literally type in directions using a keyboard, with a specialized television camera then helping to navigate the vehicle in question. In a way, this was a primitive version of the GPS systems that are available today.\nFrom this point onwards, the era for Artificial Intelligence became known as the “Golden Age for AI,” with key developments taking place. This is reviewed in more detail in the next subsection.\nThe Golden Age for Artificial Intelligence\nDuring this time period, much of the innovation that took place in Artificial Intelligence came from the academic sector. The primary funding source for all AI-based projects was the Advanced Research Projects Agency, also known as “ARPA” for short. Some of the key developments that took place are as follows:\n1) The Symbolic Automatic INTegrator: Also known as “SAINT,” this program was developed by James Slagle, a researcher at MIT, in 1961. It was created to help solve complex calculus problems and equations. Other types of computer programs were created from this, known as “SIN” and “MACSYMA,” which solved much more advanced mathematical problems, with particular usage of linear algebra and differential equations. SAINT was actually deemed to be what became known as the first “Expert System.”\n2) ANALOGY: This was yet another computer program, developed by the MIT professor Thomas Evans in 1963. It was specifically designed to solve the analogy-based problems that are presented in IQ tests.\n3) STUDENT: This computer program was developed by another researcher at MIT, Daniel Bobrow, in 1964. It was the first to use what is known as “Natural Language Processing,” a topic that will be reviewed in more detail later in this book.\n4) ELIZA: This is another Artificial Intelligence program, developed in 1965 by Joseph Weizenbaum, a professor at MIT. It was actually the precursor to the Chatbot, which is in heavy demand today. In this particular application, an end user could type in various questions, and the computer' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 35}
33
page_content='18 | Artificial Intelligence\nin turn would provide some sort of response. The application here was for psychology: the program acted much like a virtual psychoanalyst.\n5) Computer Vision: In 1966, the MIT researcher Marvin Minsky led the way to what is now known as Computer Vision, the subject of a subsequent chapter in this book. He linked a basic camera to a computer and wrote a special program to describe in some detail what it saw. It detected basic visual patterns.\n6) Mac Hack: This was also another Artificial Intelligence program, developed by Richard Greenblatt, a researcher at MIT, in 1968.\n7) Hearsay I: This was considered to be one of the most advanced Artificial Intelligence programs of this time. It was developed by Raj Reddy in 1968, and was used to create the first prototype of Speech Recognition Systems.\nDuring this Golden Age period, two major theories of Artificial Intelligence also came about, and they are as follows:\n- The need for symbolic systems: these would make heavy usage of computer logic, such as “If-Then-Else” statements.\n- The need for Artificial Intelligence systems to behave more like the human brain: this was the first known attempt to map the neurons in the brain and their corresponding activities. This theory was developed by Frank Rosenblatt, who renamed the neurons “perceptrons.”\nBack in 1957, Rosenblatt created the first Artificial Intelligence program to do this, and it was called the “Mark I Perceptron.” The computer that ran this particular program was fitted with cameras to differentiate between two separate images, at a scale of 20 by 20 pixels. The program made use of randomly initialized statistical weightings and went through this step-by-step, iterative process:\n1) Take an input and produce a perceptron-based output.\n2) The output should match the expected value, and if it does not, the following steps are taken:\n– If the output was “1” (instead of the expected “0”), the statistical weights are decreased;\n– In the reverse of the above, if the output was “0” (instead of the expected “1”), the statistical weights are increased by an equal amount.\n3) The first two steps are repeated in a continued, iterative process until every output matches its expected value.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 36}
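To make the update rule just described concrete, here is a minimal sketch of a single perceptron trained with that same increase/decrease logic. The tiny AND-gate dataset and the 0.1 learning rate are illustrative assumptions, not details of the original Mark I hardware.

```python
# A minimal sketch of Rosenblatt's perceptron update rule.
# The toy AND-gate dataset and learning rate are illustrative only.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # expected outputs (AND)

rng = np.random.default_rng(42)
weights = rng.normal(size=2)   # random initial statistical weightings
bias = 0.0
learning_rate = 0.1

for epoch in range(50):
    errors = 0
    for inputs, expected in zip(X, y):
        # Step 1: produce a perceptron-based output (0 or 1).
        output = 1 if inputs @ weights + bias > 0 else 0
        # Step 2: if the output does not match, decrease the weights
        # (output 1, expected 0) or increase them (output 0, expected 1).
        update = learning_rate * (expected - output)
        weights += update * inputs
        bias += update
        errors += int(output != expected)
    # Step 3: repeat until every output matches its expected value.
    if errors == 0:
        print(f"Converged after {epoch + 1} epoch(s): weights={weights}")
        break
```

Because a single perceptron has only one layer of processing, it can only separate classes with a straight line; that is precisely the limitation the text notes next.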
34
page_content='Artificial Intelligence | 19\nThis program also served as the precursor to Neural Networks (which are also the subject of a subsequent chapter in this book), but as successful as it was deemed to be, it also had its fair share of criticisms. One of the major flaws pointed out was that it had only one layer of processing.\nThe next major phase to happen in Artificial Intelligence was the development of Expert Systems, which is reviewed in more detail in the next subsection.\nThe Evolution of Expert Systems\nDuring this era, there were many other events that took place in the field of Artificial Intelligence. One of these was the development of the backpropagation technique. This is a technique which is widely used to adjust the statistical weights assigned to the inputs that go into a Neural Network system. As mentioned earlier, there is a chapter in this book devoted to this topic, from both the theoretical and the application standpoints.\nAnother key development was the creation of what is known as the “Recurrent Neural Network,” or “RNN” for short. This technique permits the connections in the Artificial Intelligence system to form loops, so that the output of a layer can be fed back in as part of its input. Another key catalyst was the evolution of the Personal Computer and its minicomputer counterparts, which in turn led to the development of what are known as “Expert Systems,” which made heavy usage of symbolic logic.\nThe following diagram illustrates the key components of what is involved in an Expert System:\n[Figure: Components of an Expert System. An End User interacts through a User Interface with an Inference Engine, which draws upon a Knowledge Base built and maintained by an Expert.]' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 37}
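To illustrate the split between the Inference Engine and the Knowledge Base shown in the diagram above, here is a small sketch of a forward-chaining rule engine. The cybersecurity-flavored rules and facts are invented for illustration; real Expert Systems such as the XCON discussed next encoded thousands of such rules.

```python
# A small sketch of an Expert System: a generic inference engine that
# applies if-then rules from a knowledge base to a set of known facts.
# The rules and facts below are hypothetical examples.

# Knowledge base: (set of required facts, fact to conclude).
RULES = [
    ({"failed_logins_high", "source_external"}, "possible_brute_force"),
    ({"possible_brute_force", "account_is_admin"}, "raise_critical_alert"),
]

def infer(facts):
    """Forward-chain: keep firing rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule "fires"
                changed = True
    return facts

# The end user supplies observed facts through the user interface.
observed = {"failed_logins_high", "source_external", "account_is_admin"}
print(infer(observed))
# Output includes "possible_brute_force" and "raise_critical_alert".
```

Notice that the engine itself is completely generic; all of the domain knowledge lives in the rule base, which is exactly why, as noted below, such systems had to be updated manually as they grew.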
35
page_content='20 | Artificial Intelligence\nIn this regard, one of the best examples of an Expert System was that of the “eXpert CONfigurer,” also known as “XCON” for short. This was developed by John McDermott at Carnegie Mellon University. The main purpose of this was to further optimize the choice of computer components, and it had about 2,500 rules (both mathematical and statistical) incorporated into it. In a way, this was the forerunner to the Virtual Personal Assistants (VPAs) Siri and Cortana, which allow you to make choices.\nThe development of the XCON further proliferated the growth of Expert Systems. Another successful implementation of an Expert System was the development of “Deep Blue” by IBM in 1996. In fact, its most famous application came when it played a game of chess against the Grandmaster Garry Kasparov. In this regard, Deep Blue could process well over 200 million positions in just one second.\nBut despite all of this, there were a number of serious shortcomings with Expert Systems, which are as follows:\n- They could not be applied to other applications; in other words, they could only be used for just one primary purpose, and thus they had a very narrow focus.\n- As the Expert Systems became larger, it became much more difficult and complicated not only to manage them, but also to keep feeding them data, because these were all mainframe-based technologies. As a result, this led to more errors occurring in the outputs.\n- The testing of these Expert Systems proved to be a much more laborious and time-consuming process than first expected.\n- Unlike the Artificial Intelligence tools of today, Expert Systems could not learn on their own over a period of time. Instead, their core logic models had to be updated manually, which led to much more expense and labor.\nFinally, the 1980s saw the evolution of yet another new era in Artificial Intelligence, known as “Deep Learning.” It can be specifically defined as follows:\nDeep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images, or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing.\n(SAS(b), n.d.)\nIn simpler terms, this kind of system does not need already-established mathematical or statistical algorithms in order to learn from the data that is fed into it. All it needs' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 38}
36
page_content='Artificial Intelligence | 21\nare certain parameters, and from there it can literally learn on its own, and even make projections into the future.\nThere were also two major developments at this time with regards to Deep Learning:\n- In 1980, Kunihiko Fukushima developed an Artificial Intelligence system called the “Neocognitron.” This was the precursor to the birth of what are known as “Convolutional Neural Networks,” or “CNNs” for short. It was based upon the processes that are found in the visual cortex of various kinds of animals.\n- In 1982, John Hopfield developed another Artificial Intelligence system called “Hopfield Networks.” This laid down the groundwork for what are known as “Recurrent Neural Networks,” or “RNNs” for short.\nBoth CNNs and RNNs will be covered in the chapter on Neural Networks.\nThe next section of this book will deal with data and datasets, which are essentially the fuel that drives Artificial Intelligence algorithms and applications of all types and kinds.\nThe Importance of Data in Artificial Intelligence\nSo far in this chapter, we have examined in great detail what Artificial Intelligence is and what its subcomponents are, and provided a very strong foundation in terms of both its theory and its practical applications, which have led to the powerhouse that it is today in Cybersecurity. In this part of the chapter, we now focus upon the key ingredient that drives the engines of Artificial Intelligence today: the data that is fed into it, and the feeds from which it comes.\nWe have all obviously heard of the term “data” before. It is something that has been taught to us ever since we started elementary school. But what really is data? What is the scientific definition of it? It can be defined as follows:\nIn computing, data is information that has been translated into a form that is efficient for movement or processing. Relative to today’s computers and transmission media, data is information converted into binary digital form.\n(TechTarget, n.d.)\nSo, as this can be applied to Artificial Intelligence, the underlying tool will take all of the data that is fed into it (both numerical and non-numerical), convert it into a format that it can understand and process, and from there provide the required output. In a sense, it is just like garbage in/garbage out, but on a much more sophisticated level.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 39}
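As a concrete contrast to the single-layer perceptron shown earlier, here is a minimal sketch of "many layers of processing" in PyTorch. The layer sizes, the synthetic dataset, and PyTorch itself are illustrative choices on our part, not tools prescribed by this book.

```python
# A minimal sketch of deep learning: a small multi-layer network that
# learns its own internal representation from raw (synthetic) data.
# The layer sizes and training settings are illustrative assumptions.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic dataset: 256 samples, 20 features, binary labels driven by
# a hidden rule the network has to discover on its own.
X = torch.randn(256, 20)
y = (X[:, :5].sum(dim=1) > 0).long()

# "Many layers of processing": two hidden layers with nonlinearities.
model = nn.Sequential(
    nn.Linear(20, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()    # backpropagation adjusts every layer's weights
    optimizer.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"final loss {loss.item():.3f}, training accuracy {accuracy:.2%}")
```

No hand-crafted equation was supplied anywhere: the network inferred the underlying rule purely from the data, which is the point of the definition quoted above.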
37
page_content='22 | Artificial Intelligence\nThis section will cover the aspect of data and what it means for Artificial Intelligence from the following perspectives:\n- The fundamentals of data basics;\n- The types of data that are available;\n- Big Data;\n- Understanding preparation of data;\n- Other relevant data concepts that are important to Artificial Intelligence.\nThe Fundamentals of Data Basics\nLet’s face it: everywhere we go, we are exposed to data to some degree or another. Given the advent of the Smartphone, digitalization, wireless technology, social media, the Internet of Things (IoT), etc., we are exposed to it every day in ways that we are not even cognizant of. For example, when we type a text message or reply to an email, that is actually considered to be data, though more of a qualitative kind. Even the videos that you can access on YouTube, or podcasts, can be considered data as well.\nIt is important to keep in mind that data does not have to be just the numerical kind. If you think about it, anything that generates content, whether it is written, in the form of audio or video, or even visual, is considered to be data. But in the world of Information Technology, and to a lesser extent in Artificial Intelligence, data is much more precisely defined, and more often than not symbolically represented, especially when the source code compiles the datasets that it has been given.\nIn this regard, the data most often used by computers consists of binary digits. A binary digit can take the value of either 0 or 1, and in fact it is the smallest piece of data that a computer will process; it is very often referred to merely as a “Bit.” A group of eight bits is referred to as a “Byte,” and the computers of today routinely store and process data many orders of magnitude larger than that, primarily because of the large amounts of memory that they have and their very powerful processing capabilities. The larger data sizes are illustrated in the table below:\nUnit Value\nKilobyte 1,000 Bytes\nMegabyte 1,000 Kilobytes\nGigabyte 1,000 Megabytes\nTerabyte 1,000 Gigabytes\nPetabyte 1,000 Terabytes\nExabyte 1,000 Petabytes\nZettabyte 1,000 Exabytes\nYottabyte 1,000 Zettabytes' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 40}
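The decimal scaling in the table above is easy to express in code. Here is a small sketch of a helper that renders a raw byte count in human-readable units; note that the powers-of-1,000 convention follows the table, while operating systems often report sizes in powers of 1,024 instead.

```python
# A small sketch: convert a raw byte count into the decimal units from
# the table above (each unit is 1,000 of the previous one).
UNITS = ["Bytes", "Kilobytes", "Megabytes", "Gigabytes", "Terabytes",
         "Petabytes", "Exabytes", "Zettabytes", "Yottabytes"]

def human_readable(num_bytes):
    for unit in UNITS:
        if num_bytes < 1000 or unit == UNITS[-1]:
            return f"{num_bytes:.2f} {unit}"
        num_bytes /= 1000

print(human_readable(310_000_000))   # 310.00 Megabytes
print(human_readable(2.5e15))        # 2.50 Petabytes
```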
38
page_content='Artificial Intelligence | 23\nThe Types of Data that are Available\nIn general, there are four types of data that can be used by an Artificial Intelligence system. They are as follows:\n1) Structured Data: These are datasets that have some type or kind of preformatting to them. In other words, the dataset can reside in a fixed field within a record or file in the database being used. Examples of this typically include values such as names, dates, addresses, credit card numbers, stock prices, etc. Probably some of the best examples of structured data are Excel files, and data that is stored in a SQL database. Typically, this type of data accounts for only 20 percent of the datasets consumed by an Artificial Intelligence application or tool. It is also referred to as “Quantitative Data.”\n2) Unstructured Data: These are the datasets that have no specific, predefined formatting to them; there is no way they will fit nicely into an Excel spreadsheet or even a SQL database. This is all of the data out there whose boundaries are not clearly defined. It is important to keep in mind that although it may not have the external appearance of an organized dataset, it does have some sort of internal organization and/or formatting. This is also referred to as “Qualitative Data,” and typical examples include the following:\n- Text files: word processing documents, spreadsheets, presentations, email, logs.\n- Email: email has some internal structure thanks to its metadata, and we sometimes refer to it as semi-structured. However, its message field is unstructured, and traditional analytics tools cannot parse it.\n- Social Media: data from Facebook, Twitter, LinkedIn.\n- Websites: YouTube, Instagram, photo sharing sites.\n- Mobile data: text messages, locations.\n- Communications: chat, IM, phone recordings, collaboration software.\n- Media: MP3s, digital photos, audio and video files.\n- Business applications: MS Office documents, productivity applications (Geeks for Geeks(b), n.d.).\nThese kinds of datasets account for about 70 percent of the data consumed by an Artificial Intelligence tool.\n3) Semi-Structured Data: As its name implies, there is no rigid format to how this data is typically organized, but either externally or internally there is some kind of organization to it. It can be further modified so that it fits into the columns and fields of a database, but very often this requires some sort of human intervention to make sure it is processed in a proper way. Some' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 41}
39
page_content='24 | Artificial Intelligence\nof the typical examples of these kinds of datasets include the “Extensible Markup Language,” also known as “XML” for short. Just like HTML, XML is considered to be a markup language, consisting of various rules used to identify and/or confirm certain elements in a document. Another example of Semi-Structured Data is the “JavaScript Object Notation,” also known as “JSON” for short. This is a way in which information can be transferred from a Web application to any number of Application Programming Interfaces (also known as “APIs” for short), and from there to the server upon which the source code of the web application resides. This process can also happen in reverse. These kinds of datasets account for about 10 percent of the data that is consumed by an Artificial Intelligence tool.\n4) Time Series Data: As its name also implies, these kinds of datasets consist of data points that have some sort of time value attached to them. At times, this can also be referred to as “Journey” data because, as on a trip, there are data points that can be accessed throughout the time from leaving the point of origination to finally arriving at the point of destination. Some typical examples include the price range of a certain stock or commodity as it is traded over an intraday period, or the first time a prospect visits the website of a merchant and the various web pages they click on or materials they download until they log off the website.\nNow that we have defined the four most common types of datasets, you may be wondering at this point just what some examples of them are. They include the following:\nFor Structured Datasets:\n- SQL Databases;\n- Spreadsheets such as Excel;\n- OLTP Systems;\n- Online forms;\n- Sensors such as GPS or RFID tags;\n- Network and Web server logs;\n- Medical devices (Geeks for Geeks(a), n.d.).\nFor Unstructured Datasets:\n- Social media;\n- Location & Geo Data;\n- Machine-Generated & Sensor-based data;\n- Digital streams;\n- Text documents;' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 42}
40
page_content='Artificial Intelligence | 25\n- Logs;\n– Transactions;\n– Micro-blogging.\nFor Semi-Structured Datasets:\n- Emails;\n- XML and other markup languages;\n- Binary executables;\n- TCP/IP packets;\n- Zipped files;\n- Integration of data from different sources;\n- Web pages (Oracle, n.d.).\nFor Time Series Datasets:\n- Statista;\n- Data-Planet Statistical Datasets;\n- Euromonitor Passport;\n- OECD Statistics;\n- United Nations Statistical Databases;\n- World Bank Data;\n- U.S. Census Bureau: International Data Base;\n- Bloomberg;\n- Capital IQ;\n- Datastream;\n- Global Financial Data;\n- International Financial Statistics Online;\n- MarketLine Advantage;\n- Morningstar Direct.\nAs mentioned earlier, it is the Unstructured Datasets that account for the majority of the datasets fed into an Artificial Intelligence application, and there is a beauty to the tools that consume them: they are so powerful that they can take just about any kind or type of dataset presented to them, literally digest it into a format they can understand, process it, and provide the output or outputs that are required. In other words, there are few limiting factors in this regard, and as a result they can give just about any kind of prediction or answer that is asked of them.\nBig Data\nAs also previously reviewed, the size and the number of datasets are growing at an exponential clip on a daily basis, given all of the technological advancements that are' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 43}
41
page_content='26 | Artificial Intelligence\ncurrently taking place. There is a specific term for this: “Big Data.” The technical definition of it is as follows:\nBig data is larger, more complex data sets, especially from new data sources. These data sets are so voluminous that traditional data processing software just can’t manage them. But these massive volumes of data can be used to address business problems that wouldn’t have been able to be tackled before.\n(Datamation, n.d.)\nIn a way, this can also be likened to another concept known as “Data Warehousing.”\nThere are six main characteristics that are associated with “Big Data,” and they are as follows:\n1) Volume: This refers to the sheer size and scale of the datasets. Very often, they will be in the form of Unstructured Data. The dataset size can go as high as into the Terabytes.\n2) Variety: This describes the diversity of all of the datasets that reside in the Big Data, including the Structured Data, the Unstructured Data, the Semi-Structured Data, and the Time Series Data. It also describes the sources that all of these datasets come from.\n3) Velocity: This refers to the rapid speed at which the datasets in the Big Data are being created.\n4) Value: This refers to just how useful the Big Data is. In other words, if it is fed into an Artificial Intelligence system, how close will it come to giving the desired or expected output?\n5) Variability: This describes how fast the datasets in the Big Data will change over a certain period of time. For example, Structured Data, Time Series Data, and Semi-Structured Data will not change that much, but Unstructured Data will, simply due to its dynamic nature.\n6) Visualization: This is how visual aids are used with the datasets that are in the Big Data. For example, these could be graphs, dashboards, etc.\nUnderstanding Preparation of Data\nAs has been mentioned before, it is data that drives the Artificial Intelligence application to do what it does. In other words, data is like the fuel these applications need to run. Although the applications are quite robust in providing the output that' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 44}
42
page_content='Artificial Intelligence | 27\nis asked of them, this is still viewed as a “Garbage In and Garbage Out” process. Meaning, the quality of the outputs you are going to get is only going to be as good as the data that is put into the application.\nTherefore, you must take great effort to make sure that the datasets you are feeding into your Artificial Intelligence systems are very robust, and that they will meet the needs you are expecting in terms of the desired outputs. The first step in this process is known as “Data Understanding”:\n1) Data Understanding: In this regard, you need to carefully assess where the sources of your data and their respective feeds are coming from. Depending upon what your exact circumstances and needs are, they will typically come from the following sources:\n- In-House Data: As the name implies, these are the data points that are generated inside your business or corporation. For example, it could be data that originates from your corporate intranet, or from your external website as customers and prospects download materials from your site or fill out the contact form. It could also be the case that you already have datasets in your organization that you can use.\n- Open Source Data: These are the kinds of data that are freely available on the Internet, which you can often find simply by using Google to search for various data sources. For example, the Federal Government is a great resource for this, as are many private enterprises (for the latter, you will eventually have to pay for a subscription, but they will more than likely offer a free trial at first so you can test drive their respective datasets; this is a great opportunity to see whether what they offer is compatible with your Artificial Intelligence system, and whether it will yield the desired outputs). These kinds of datasets will very likely use a specialized Application Programming Interface (API) to download the data. Other than the advantage of being free, another key advantage of using Open Source Data is that it already comes in a format that can be uploaded and fed into your Artificial Intelligence system.\n- Third Party Data: These are the kinds of datasets that are available exclusively from an outside vendor. Examples of these can be seen in the last subsection of this chapter. The primary advantage of obtaining data from these sources is that you can be guaranteed, to a certain degree, that it has been validated. The disadvantage is that they can be quite expensive, and if you ever need to update your datasets, you will have to go back to the same vendor and pay yet another premium price.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 45}
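As a concrete illustration of pulling Open Source Data over an API, here is a minimal sketch using the requests library. The endpoint, the query parameters, and the JSON layout are entirely hypothetical placeholders; a real provider's documentation would specify its own URL, authentication, and schema.

```python
# A minimal sketch: downloading an open dataset over a JSON API.
# The endpoint and parameters below are invented placeholders, not a
# real government or vendor service.
import requests

BASE_URL = "https://api.example.gov/v1/incidents"  # hypothetical endpoint

response = requests.get(
    BASE_URL,
    params={"year": 2020, "format": "json"},  # hypothetical parameters
    timeout=30,
)
response.raise_for_status()        # fail loudly on HTTP errors

records = response.json()          # assumed to be a list of dicts
print(f"Downloaded {len(records)} records")
```

Because Open Source Data usually arrives already formatted, the downloaded records can often be handed straight to the next stage of the pipeline, such as a pandas DataFrame.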
43
page_content='28 | Artificial Intelligence\nAccording to recent research, about 70 percent of the Artificial Intelligence systems in use today make use of In-House Data, 20 percent of them use Open Source Data, and the remaining 10 percent use data from outside vendors. In order to fully understand the robustness of the datasets you are about to procure, the following questions must first be answered:\n- Are the datasets complete for your needs and requirements? Is there any missing data?\n- How was the data originally collected?\n- How was the data initially processed?\n- Have there been any significant changes made to it that you need to be aware of?\n- Are there any Quality Control (QC) issues with the datasets?\n2) The Preparation of the Data: This part is often referred to as “Data Cleansing,” and it requires the following actions to be taken before you can feed the data into your Artificial Intelligence system:\n- Deduplication: It is absolutely imperative to make sure that your data does not contain duplicate records. If it does, and this goes unnoticed, it could greatly affect and skew the outputs that are produced.\n- Outliers: These are the data points that lie at the extremes of the rest of the dataset. Perhaps they could be useful for some purpose, but you need to make sure first that they are needed for your particular application. If not, then they must be removed.\n- Consistency: In this situation, you must make sure that all of the variables have clear definitions, and that you know what they mean. There should be no overlap in these meanings with the other variables.\n- Validation Rules: This is where you try to find the technical limitations of the datasets that you intend to use. Doing this manually can be very time-consuming and laborious, so there are many software applications available that can help you determine these specific kinds of limitations. Of course, you will first need to decide on and enter the relevant parameters, which can be referred to as the “thresholds.”\n- Binning: When you procure your datasets, it may also be the case that you do not need each and every field to feed into your Artificial Intelligence system. As' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 46}
44
page_content='Artificial Intelligence | 29\na result, you should look at each category and decide which ones are the most relevant for the outputs you are trying to garner.\n- Staleness: This is probably one of the most important factors to consider. Just how timely and relevant are the datasets that you are using? For an Artificial Intelligence application, it is absolutely crucial that you get data that is updated in real time if your desired output is to predict something in the future.\n- Merging: It could be the case that two columns in your dataset contain very similar pieces of information. If so, you may want to consider bringing these two columns together by merging them. By doing so, you are using the processing capabilities of your Artificial Intelligence system much more efficiently.\n- One Hot Encoding: To a certain degree, it may be possible to represent qualitative data as quantitative data, depending upon your needs and requirements. With this technique, each category is represented as its own binary (0-or-1) column.\n- Conversions: This is more a matter of formatting the units of your outputs. For example, if your datasets use one system of measurement, but your output calls for the values to be in the metric system, then applying this technique will be important.\n- Finding Missing Data: When you closely examine your datasets, it can quite often be the case that some pieces are missing. In this regard, there are two types of missing data:\n* Randomly missing data: Here, you can calculate a median or even an average as a replacement value. By doing this, you should only skew the output to a negligible degree.\n* Sequentially missing data: This is when the data is missing in a successive, run-on fashion. Taking the median or average will not work, because too much is unavailable to form a scientific estimate. You could try to extrapolate from the preceding data and the subsequent data to make a hypothesized guess, but this is a riskier proposition. Or you could simply delete the fields in which the sequential data is missing. In either case, the chances are much greater that the output will be skewed and not nearly as reliable.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 47}
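Several of the cleansing steps above map directly onto a few lines of pandas. Here is a small sketch covering deduplication, a simple outlier rule, median replacement for randomly missing values, and One Hot Encoding; the column names, the toy values, and the 1.5 * IQR outlier fence are illustrative assumptions.

```python
# A small sketch of the data cleansing steps described above, using
# pandas. Column names, values, and thresholds are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "bytes_sent": [1200, 1200, 950, 1100, 980, 1500, 88000, None],
    "protocol":   ["tcp", "tcp", "udp", "tcp", "udp", "tcp", "tcp", "udp"],
})

# Deduplication: drop exact duplicate rows.
df = df.drop_duplicates()

# Outliers: drop values outside the Tukey fences (1.5 * IQR rule).
q1, q3 = df["bytes_sent"].quantile([0.25, 0.75])
fence_low, fence_high = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
keep = df["bytes_sent"].between(fence_low, fence_high)
df = df[keep | df["bytes_sent"].isna()]

# Randomly missing data: replace with the median value.
df["bytes_sent"] = df["bytes_sent"].fillna(df["bytes_sent"].median())

# One Hot Encoding: one binary column per protocol category.
df = pd.get_dummies(df, columns=["protocol"])
print(df)
```

Sequentially missing data, as noted above, has no equally safe one-liner; dropping the affected rows is often the more defensible choice.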
45
page_content='30 | Artificial Intelligence\n- Correcting Data Misalignments: It is important, before you merge any fields in your datasets, that the respective data points “align” with the other datasets that you have. To account and correct for this, consider the following actions that you can take:\n* If possible, try to calculate and ascertain any missing data that you may have in your datasets (as previously reviewed);\n* Find any other missing data in all of the other datasets that you have and intend to use;\n* Try to combine the datasets so that you have columns which provide consistent fields;\n* If need be, modify or further enhance the desired outcome that the output produces, in order to accommodate any changes that have been made to correct the data misalignment.\nOther Relevant Data Concepts that are Important to Artificial Intelligence\nFinally, in this subsection we examine some other data concepts that are very pertinent to Artificial Intelligence systems. They are as follows:\n1) Diagnostic Analytics: This is the careful examination of the datasets to see why a certain trend happened the way it did. An example of this is discovering hidden trends which may not have been noticed before. This is very often done in Data Warehousing or Big Data projects.\n2) Extraction, Transformation, and Load (ETL): This is a specialized type of data integration, and is typically used, once again, in Data Warehousing applications.\n3) Feature: This is a column of data.\n4) Instance: This is a row of data.\n5) Metadata: This is the data that is available about the datasets themselves.\n6) Online Analytical Processing (OLAP): This is a technique which allows you to examine datasets from different types of databases in one harmonized view.\n7) Categorical Data: This kind of data does not have a numerical value per se, but has a textual meaning associated with it.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 48}
46
page_content='Artificial Intelligence | 31\n8) Ordinal Data: This is a mixture of both Categorical Data and Numerical Data.\n9) Predictive Analytics: This is where the Artificial Intelligence system attempts to make a certain prediction about the future (displayed as an output), based upon the datasets that are fed into it.\n10) Prescriptive Analytics: This is where the concepts of Big Data (as previously examined) are used to help make better decisions based upon the output that is yielded.\n11) Scalar Variables: These are the types of variables that hold and consist of only single values.\n12) Transactional Data: These are the kinds of datasets that correspond to the actual transactions that have occurred in the course of daily business activities.\nSo far, we have provided an extensive overview of just how important data and datasets are to an Artificial Intelligence system. The remainder of this book will examine Machine Learning, Neural Networks, and Computer Vision in much greater detail.\nResources\nAlgorithmia: “Introduction to Computer Vision: What It Is and How It Works;” n.d. <algorithmia.com/blog/introduction-to-computer-vision>\nAlpaydin E: Introduction to Machine Learning, 4th Edition, Massachusetts: The MIT Press; 2020.\nDatamation: “Structured vs. Unstructured Data;” n.d. <www.datamation.com/big-data/structured-vs-unstructured-data.html>\nForcepoint: “What is Cybersecurity?” n.d. <www.forcepoint.com/cyber-edu/cybersecurity>\nGeeks for Geeks(a): “What is Semi-Structured Data?” n.d. <www.geeksforgeeks.org/what-is-semi-structured-data/>\nGeeks for Geeks(b): “What is Structured Data?” n.d. <www.geeksforgeeks.org/what-is-structured-data/>\nGraph, M: Machine Learning, 2019.\nMIT Technology Review: “What is Machine Learning?” n.d. <www.technologyreview.com/s/612437/what-is-machine-learning-we-drew-you-another-flowchart/>\nOracle: “What is Big Data?” n.d. <www.oracle.com/big-data/guide/what-is-big-data.html>' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 49}
47
page_content='32 | Artificial Intelligence\nPathmind: “A Beginner’s Guide to Neural Networks and Deep Learning;” n.d. <pathmind.com/wiki/neural-network>\nSAS(a): “Artificial Intelligence: What It Is and Why It Matters;” n.d. <www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html>\nSAS(b): “Deep Learning: What It Is and Why It Matters;” n.d. <www.sas.com/en_us/insights/analytics/deep-learning.html>\nTaulli, T: Artificial Intelligence Basics: A Non-Technical Introduction, New York: Apress; 2019.\nTechTarget: “Data;” n.d. <searchdatamanagement.techtarget.com/definition/data>' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 50}
48
page_content='33 | Chapter 2\nMachine Learning\nIn our last chapter (Chapter 1), we reviewed what Artificial Intelligence is by providing an overview. Specifically, the following topics were covered:\n- An introduction to Cybersecurity;\n- The various aspects of Cybersecurity;\n- A chronological timeline of the evolution of Cybersecurity;\n- An introduction to Artificial Intelligence;\n- A definition of Artificial Intelligence;\n- The various components of Artificial Intelligence and their technical definitions (this includes the likes of Machine Learning, Computer Vision, and Neural Networks);\n- An overview of the book;\n- The history of Artificial Intelligence;\n- The importance of data and its role in Artificial Intelligence systems and applications;\n- The applications of Artificial Intelligence.\nIn this chapter, we examine the very first subcomponent of Artificial Intelligence, which is Machine Learning, also known as “ML” for short. We will first do a deep dive into the theoretical aspects of Machine Learning, followed by the various applications, just as in the last chapter. But before we get into all of the theoretical aspects of Machine Learning, we will first provide a high level overview of what it is all about.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 51}
49
page_content='34 | Machine Learning\nThe High Level Overview\nAlthough Machine Learning has been around for a long time (some estimates have it going back several decades), there are a number of key applications in which Machine Learning is used today. Some examples are as follows:\n1) Predictive Maintenance: This kind of application is typically used in the supply chain, manufacturing, distribution, and logistics sectors. For example, this is where the concept of Quality Control comes into play. In manufacturing, you want to be able to predict how many of the batches of products being produced could actually turn out defective. Obviously, you want this number to be as low as possible. Theoretically, you do not want any product to be defective, but in the real world this is almost impossible to achieve. With Machine Learning, you can set up, within the mathematical and statistical algorithms, the different parameters for what is deemed to be a defective product.\n2) Employee Recruiting: There is one common denominator in the recruitment industry, and that is the plethora of resumes that recruiters from all kinds of industries receive. Consider some of these statistics that Career Builder, one of the most widely used job search portals, recently reported:\n* 2.3 million jobs were posted;\n* 680 unique profiles of job seekers were collected;\n* 310 million resumes were collected;\n* 2.5 million background checks were conducted within the Career Builder platform.\n(SOURCE: 1)\nJust imagine how long it would take a team of recruiters to go through all of the above. But with Machine Learning, it can all be done in a matter of minutes, by examining the material for certain keywords in order to find the desired candidates. Also, rather than having the recruiter post each and every job entry manually onto Career Builder, the appropriate Machine Learning tool can be used to completely automate this process, thus freeing up the recruiter’s time to interview the right candidates for the job.\n3) Customer Experience: In American society today, we want to have everything right here and right now, at the snap of a finger. On top of this, we also expect impeccable customer service to be delivered at the same time. And when none of this happens, well, we have the luxury of going to a competitor to see if they can do any better. In this regard, many businesses' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 52}
50
page_content='Machine Learning | 35\nand corporations have started to make use of Virtual Agents. These are the little chat boxes typically found in the lower right part of your web browser. With these, you can communicate with somebody in order to get your questions answered or your shopping issues resolved. The nice thing about them is that they are also on demand, on a 24/7/365 basis. However, in order to provide a seamless experience to the customer or prospect, many business entities are now making use of what are known as “Chat Bots.” These are a much more sophisticated version of the Virtual Agent, because they make use of Machine Learning algorithms. By doing this, the Chat Bot can find much more specific answers to your queries by conducting more “intelligent” searches of the information repositories of the business or corporation. Also, many call centers are making use of Machine Learning as well. In this particular fashion, when a customer calls in, their call history, profile, and past conversations are pulled up in a matter of seconds for the call center agent, so that the agent can much more easily anticipate your questions and provide you with the best level of service possible.\n4) Finance: In this market segment, there is one thing that all people, especially the traders, want to do, and that is to predict the financial markets and what they will do in the future, so that they can hedge their bets and make profitable trades. Although this can be done via a manual process, it can be very laborious and time-consuming. Of course, we all know that the markets can move in a matter of mere seconds with uncertain volatility, as we have seen recently with the Coronavirus. In fact, timing and predicting the financial markets with 100 percent accuracy is an almost impossible feat to accomplish. But this is where the role of Machine Learning can come into play. For example, it can take all of the data that is fed into it, and within a matter of seconds make more accurate predictions as to what the market could potentially do, giving the traders valuable time to make the split-second decisions that are needed to produce quality trades. This is especially useful for what is known as “Intraday Trading,” where financial traders try to time the market minute by minute while it is open.\nThe Machine Learning Process\nWhen you are applying Machine Learning to a particular question that you want answered, or to predict a certain outcome, it is very important to follow a distinct process in order to accomplish these tasks. You want to build an effective model that can serve other purposes and objectives down the road; in other words, you want to train this model in a particular fashion, so that it can provide a very high degree of both accuracy and reliability.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 53}
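As a toy illustration of the finance example above, here is a minimal sketch that fits a model on synthetic intraday prices and predicts the most recent ones. The random-walk data, the five-price lag window, and scikit-learn's LinearRegression are all illustrative assumptions; real trading models are vastly more involved.

```python
# A toy sketch of the finance use case: predict the next intraday
# price from the last few observed prices. The random-walk data and
# lag-of-5 feature window are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
prices = 100 + np.cumsum(rng.normal(0, 0.1, size=390))  # one toy trading day

LAGS = 5
# Each row holds 5 consecutive prices; the target is the price after them.
X = np.array([prices[i:i + LAGS] for i in range(len(prices) - LAGS)])
y = prices[LAGS:]

model = LinearRegression().fit(X[:-30], y[:-30])   # train on the early day
predictions = model.predict(X[-30:])               # predict the last 30 min

error = np.abs(predictions - y[-30:]).mean()
print(f"mean absolute error over the last 30 minutes: {error:.3f}")
```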
51
page_content='36 | Machine Learning\n \nThis process is depicted below:\nData Order -> Picking the Algorithm -> Train the Model -> Evaluate the Model -> Fine Tune the Model\nData Order\nIn this step, you want to make sure that the data is as unorganized and unsorted as possible. Although this sounds quite counterintuitive, if the datasets are sorted or organized in any way, shape, or form, the Machine Learning algorithms that are utilized may detect this ordering as a pattern, which you do not want to happen in this particular instance.\nPicking the Algorithm\nIn this phase, you will want to select the appropriate Machine Learning algorithms for your model. This will be heavily examined in this part of the chapter.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 54}
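To make the “Data Order” step concrete, here is a minimal sketch, assuming NumPy and scikit-learn (neither is prescribed by the book at this point) and purely illustrative data:

import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical dataset: 100 samples with 4 features each, and binary labels.
rng = np.random.default_rng(seed=42)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, size=100)

# shuffle=True randomizes the data order so the algorithm cannot
# mistake any pre-existing sorting for a real pattern.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=True, random_state=100)

print(X_train.shape, X_test.shape)  # (70, 4) (30, 4)

The shuffle=True argument is what enforces the randomized data order described above; the later training, evaluation, and fine-tuning steps would then operate on these held-out splits.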
52
page_content='Machine Learning | 37\n \nTraining the Model\nThe datasets that you have will be fed into the Machine Learning system, in order for it to learn first. In other words, various associations and relationships will be created and examined so that the desired outputs can be formulated. For example, one of the simplest algorithms that can be used in Machine Learning is Linear Regression, which is represented mathematically as follows:\nY = M*X + B\nWhere:\nY = the predicted output;\nX = the input variable;\nM = the slope on a graph;\nB = the Y intercept on the graph.\nModel Evaluation\nIn this step, you will make use of a representative sample of data from the datasets, which is technically known as the “Test Data.” By feeding this initially into the Machine Learning system, you can gauge just how accurate your desired outputs will be in a test environment before you release your datasets into the production environment.\nFine Tune the Model\nIn this last phase, you will adjust the permutations that you have established in the Machine Learning system so that it can reasonably come up with the desired outputs that you are looking for.\nIn the next subsection, we examine the major classifications and types of Machine Learning Algorithms that are commonly used today.\nThe Machine Learning Algorithm Classifications\nThere are four major categorizations of the Machine Learning Algorithms, and they are as follows:\n 1) Supervised Learning :\nThese types of algorithms make use of what are known as “labeled data.” This simply means that each dataset has a certain label associated with it. In this instance, one of the key things to keep in mind is that you need a large number of datasets in order to produce the outputs you are looking for when you are using algorithms based on this category. But if the datasets do not come already labeled, it could be very time-consuming to create and' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 55}
53
page_content='38 | Machine Learning\nassign a label for each and every one of them. This is the primary downside of using Machine Learning algorithms from this particular category.\n 2) Unsupervised Learning :\nThese kinds of algorithms work with data that is typically not labeled. Because of the time constraints it would take to create and assign the labels for each category (as just previously mentioned), you will have to make use of what are known as “Deep Learning Algorithms” in order to detect any unseen trends that lie within all of your datasets. In this regard, one of the most typical approaches that is used in this category is that of “Clustering.” With this, you are merely taking all of the unlabeled datasets and using the various algorithms that are available from within this particular category to put these datasets into various groups that share common denominators or affiliations. To help out with this, there are a number of ways to do this, which are the following:\n\t{The Euclidean Metric :\nThis is the straight-line distance between two independent datasets.\n\t{The Cosine Similarity Metric :\nIn this instance, a trigonometric function known as the “Cosine” is used to measure any given angles between the datasets. The goal here is to find any closeness or similarities between at least two or more independent datasets based upon their geometric orientation.\n\t{The Manhattan Metric :\nThis technique involves taking the summation of at least two or more absolute value distances from the datasets that you have.\n\t{The Association :\nThe basic thrust here is that if a specific instance occurs in one of your datasets, then it will also likely occur in the datasets that have some sort of relationship with the initial dataset that has been used.\n\t{The Anomaly Detection :\nWith this methodology, you are statistically identifying those outliers or other anomalous patterns that may exist within your datasets. This technique has found great usage in Cybersecurity, especially when it relates to filtering out false positives from the log files that are collected from the Firewalls, Network Intrusion Devices, and Routers, as well as any behavior that may be deemed suspicious or malicious in nature.\n\t{The Autoencoders :\nWith this particular technique, the datasets that you have on hand will be formatted and put into a compressed type of format, and from that, the original will be reconstructed once again. The idea behind this is to detect and find any sort of new patterns or hidden trends that may exist from within your datasets.\n 3) The Reinforcement Learning:\nIn this instance, you are learning and harnessing the power of your datasets through a trial and error process, as the name of this category implies.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 56}
54
page_content='Machine Learning | 39\n \n 4) The Semi- Supervised Learning:\nThis methodology is actually a mixture of both Supervised Learning and Unsupervised Learning. However, this technique is only used when you have a small amount of datasets that are actually labeled. Within this, there is a sub-technique which is called “Pseudo- Labeling.” In this regard, you literally translate all of the unsupervised datasets into a supervised state of nature.\nThe Machine Learning Algorithms\nThere are many types and kinds of both mathematical and statistical algorithms that are used in Machine Learning. In this subsection, we examine some of the more common ones, and we will do a deeper dive into them later in this chapter. Here are the algorithms:\n 1) The Naïve Bayes Classifier :\nThis particular algorithm is called “naïve” because its underlying assumption is that the variables in each of the datasets that you have are actually all independent from one another. In other words, the statistical occurrence of one variable in one dataset will have nothing to do whatsoever with the variables in the remaining datasets. But there is a counterargument to this, which states that this assumption will prove to be statistically incorrect if any of the datasets have actually changed in terms of their corresponding values.\nIt should be noted that there are also specific alterations or variations to this particular algorithm, and they are as follows:\n\t{The Bernoulli :\nThis is only used if you have binary values in your datasets.\n\t{The Multinomial :\nThis technique is only used if the values in your datasets are discrete, in other words, if they contain whole-number counts.\n\t{The Gaussian :\nThis methodology is used only if your datasets follow a statistically normal distribution.\nIt should be noted that this overall technique is heavily used for analyzing in granular detail those datasets that have a text value assigned to them. In Cybersecurity, this technique proves to be extremely useful when it comes to identifying and confirming phishing emails by examining the key features and patterns in the body of the email message, the sender address, and the content in the subject line.\n 2) The K- Nearest Neighbor :\nThis specific methodology is used for classifying any dataset or datasets that you have. The basic theoretical construct is that values that are closely related or associated with one another in your datasets will statistically be good predictors for a Machine Learning model. In order to use this model, you first' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 57}
55
page_content='40 | Machine Learning\nneed to compute the numerical distance between the closest values. If these values are quantitative, you could then use the Euclidean Distance formula. But if your datasets have some sort of qualitative value, you could then use what is known as the “Overlap Metric.” Next, you will then have to ascertain the total number of values that are closely aligned with one another. While having more of these kinds of values in your datasets could mean a much more efficient and robust Machine Learning model, this also translates into using much more of the processing resources of your Machine Learning system. To help accommodate this, you can always assign higher statistical weights to those particular values that are closely affiliated with one another.\n 3) The Linear Regression :\nThis kind of methodology is strictly statistical. This means that it tries to examine and ascertain the relationship between preestablished variables that reside from within your datasets. With this, a line is typically plotted, and it can be further smoothed out using a technique called “Least Squares.”\n 4) The Decision Tree :\nThis methodology actually provides an alternative to the other techniques described thus far. In fact, the Decision Tree works far better and much more efficiently with non-numerical data, such as data that deals with text values. The main starting point of the decision is at the root node, which typically sits at the top of any given chart. From this point onwards, there will be a series of decision branches that come stemming out, thus giving the technique its name. The following depicts a very simple example of a Decision Tree:\nAm I hungry? -> No: Stay home and watch TV\nAm I hungry? -> Yes: Do I have $30.00?\nDo I have $30.00? -> No: Get a pizza\nDo I have $30.00? -> Yes: Go to a nice restaurant' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 58}
56
page_content='Machine Learning | 41\nThe above is, of course, a very simple Decision Tree to illustrate the point. But when it comes to Machine Learning, Decision Trees can become very long, detailed, and much more complex. One of the key advantages of using a Decision Tree is that they can actually work very well with very large datasets and provide a degree of transparency during the Machine Learning Model building process.\nBut, on the flip side, a Decision Tree can also have its serious disadvantages as well. For example, if just one branch of it fails, it will have a negative, cascading effect on the other branches of the Decision Tree.\n 5) The Ensemble Model :\nAs its name implies, this particular technique means using more than just one model; it uses a combination of what has been reviewed so far.\n 6) The K- Means Clustering :\nThis methodology is very useful for extremely large datasets; it groups together the unlabeled datasets into various other types of groups. The first step in this process is to select a group of clusters, which is denoted with the value of “k.” For illustration purposes, the diagrams below represent two different clusters:\n[Figure: two separate scatterings of data points, each forming its own cluster]\nOnce you have decided upon these clusters, the next step will be to calculate what is known as the “Centroid.” This is technically the midpoint of each of the two clusters, illustrated below:\n[Figure: the same two clusters, each with its Centroid marked at the midpoint]' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 59}
57
page_content='42 | Machine Learning\n \nFinally, this specific algorithm will then recalculate the position of each Centroid from the average of the data points assigned to it, and will keep doing so in an iterative fashion until the two Centroids reach the point of convergence, that is, when their positions no longer change between iterations. It should be noted that this technique suffers from two different drawbacks:\n\t{It does not work well with non- spherical datasets;\n\t{There could be some clusters with many data points in them, and some with hardly any at all. In this particular instance, this technique will not pick up on the latter.\nKey Statistical Concepts\nApart from the mathematical side of the algorithms, Machine Learning also makes heavy usage of the principles of statistics, and some of the most important ones that are used are described in this subsection:\n 1) The Standard Deviation :\nThis measures the average distance of the data points in any dataset from their mean.\n 2) The Normal Distribution :\nThis is the “bell- shaped curve” that we have heard so often about. In more technical terms, it represents the distribution of the statistical properties in the variables of all the datasets that you are going to use for the Machine Learning system.\n 3) The Bayes Theorem:\nThis theorem describes how the statistical probability of an event in your datasets can be updated as new evidence becomes available.\n 4) The Correlation :\nThis is where the statistical correlations or commonalities (or even associations) are found amongst all of the datasets. Here are the guiding principles behind it:\n\t{Greater than 0 :\nThis occurs when the variables move together. When one variable increases, the other variables will also tend to increase.\n\t{0:\nThere is no statistical correlation between any of the variables in the datasets.\n\t{Less than 0 :\nThis occurs when the variables move in opposite directions. When one variable increases, the other variables will tend to decrease.\nSo far, we have provided a high level overview of the theoretical aspects of Machine Learning. In the next section of this book, we will now do the “Deep Dive.”' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 60}
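Before the deep dive, a minimal numerical sketch of the statistical concepts just listed. This example assumes NumPy (which the text itself does not prescribe here), and the variables are purely illustrative:

import numpy as np

# Two small, made-up variables: y rises with x, z falls as x rises.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
z = np.array([9.7, 8.2, 5.9, 4.1, 2.0])

print(np.std(x))                # standard deviation: the average spread around the mean
print(np.corrcoef(x, y)[0, 1])  # close to +1: the variables move together
print(np.corrcoef(x, z)[0, 1])  # close to -1: the variables move in opposite directions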
58
page_content='Machine Learning | 43\n \n \n \nThe Deep Dive into the Theoretical Aspects of Machine Learning\nUnderstanding Probability\nIf you haven’t noticed already, one of the key drivers behind any Machine Learning system is the quality and the robustness of the data sets that you have for the system that you are using. In fact, it is probably safe to say that the data is roughly 80 percent of the battle to get your Machine Learning system up and running and to produce the outputs that you need for your project. So in this regard, you will probably rely upon the concepts of statistics much more so than pure and discrete mathematics, as your data sets will be heavily reliant upon this.\nIn the field of statistics, the concepts of probability are used quite often. Probability, in much more specific terms, is the science of trying to quantify the uncertainty of an event, or even a chain of events. The value “E” is most commonly used to represent a particular event, and the value P(E) will represent the level of probability that it will occur. Each attempt at observing whether the event actually happens is known as a “Trial.” In fact, many of the algorithms that are used for Machine Learning come from the principles of probability and the naïve Bayesian models.\nIt should be noted at this point that there are three specific categories for the purposes of further defining probability, and they are as follows:\n 1) The Theoretical Probability :\nThis can be defined as the number of ways that a specific event can occur, which is mathematically divided by the total number of possible outcomes that can actually happen. This concept is very often used for Machine Learning systems in order to make better predictions for the future, such as predicting what the subsequent Cyberthreat Landscape will look like down the road.\n 2) The Empirical Probability :\nThis describes the specific number of times that an event actually occurs, which is then mathematically divided by the total number of trials that took place.\n 3) The Class Membership :\nIn this instance, when a particular dataset is assigned and given a label, this is known technically as “Classification Predictive Modeling.” In this case, the probability that a certain observation will actually happen, such as assigning a particular dataset to each class, can be predicted. This makes it easier to lay down the actual objectives for what the Machine Learning system will accomplish before you select the algorithms that you will need.\nIt should be noted that the above- mentioned classifications of probability can also be converted into what are known as “Crisp Class Labels.” In order to conduct this' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 61}
59
page_content='44 | Machine Learning\n \n \nspecific procedure, you need to choose the dataset that has the largest levels of probability, as well as those that can be scaled through a specific calibration process.\nKeep in mind that at least 90 percent of the Machine Learning models are actually formulated by using a specific sequencing of various iterative algorithms. One of the most commonly used techniques to accomplish this task is what is known as the “Expectation Maximization Algorithm,” which is most suited for clustering the unsupervised data sets. In other words, it specifically minimizes the difference between the observed probability distribution and the predicted probability distribution.\nAs it will be further reviewed in the next subsection, Bayesian Optimization is used for what is known as “Hyperparameter Optimization.” This technique helps to discover the total number of possible outcomes that can happen for all of your datasets that you are making use of in your Machine Learning system. Also, probabilistic measures can be used to evaluate the robustness of these algorithms. One such other technique that can be used in this case is known as “Receiver Operating Characteristic Curves,” or “ROC” for short.\nFor example, these curves can be used to further examine the tradeoffs of these specific algorithms.\nThe Bayesian Theorem\nAt the heart of formulating any kind or type of Machine Learning algorithm is what is known as the “Bayesian Probability Theory.” In this regard, the degree of uncertainty, or risk, of collecting your datasets before you start the optimization process is known as the “Prior Probability,” and the examining of this level of risk after the dataset optimization process has been completed is known as the “Posterior Probability.” This is also known in looser terms as the “Bayes Theorem.”\nThis simply states that the relationship between the probability of a hypothesis before getting any kind of statistical evidence (which is represented as P[H]) and after receiving that evidence can be driven into the Machine Learning system by making use of the following mathematical computation:\nPr(H|E) = Pr(E|H) * Pr(H) / Pr(E)\nIn the world of Machine Learning, there are two fields of statistics that are the most relevant, and they are as follows:\n 1) Descriptive Statistics :\nThis is the sub- branch of statistics that further calculates any useful properties of your datasets that are needed for your Machine Learning system. This actually involves simple summaries, such as figuring out the mean, median, and mode values amongst all of your datasets. Here:\n\t{The Mean: This is the average value of the dataset;\n\t{The Mode: This is the most frequent value that occurs in your datasets;' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 62}
60
page_content='Machine Learning | 45\n \n \n \n \n\t{The Median: This is the middle value which physically separates the higher half of the values in your dataset from the lower half of the values in your dataset.\n 2) Inferential Statistics :\nThis grouping of statistics is implemented into the various methods that actually support the various quantifying properties of the datasets that you are using for your Machine Learning system. These specific techniques are used to help quantify the statistical likelihood of any given dataset that is used in creating the assumptions for the Machine Learning model formulation process.\nThe Probability Distributions for Machine Learning\nIn Machine Learning, the statistical relationship between the various events of what is known as a “Continuous Random Variable” and its associated probabilities is known as the “Continuous Probability Distribution.” These specific distribution sets are in fact a key component of the operations that are performed by the Machine Learning models in terms of optimizing the numerical input and output variables.\nAlso, the statistical probability of an event that is equal to or less than a particular defined value is technically known as the “Cumulative Distribution Function,” or “CDF” for short. The inverse, or reverse, of this function is called the “Percentage Point Function,” or “PPF” for short. In other words, the Probability Density Function calculates the statistical probability of a certain, continuous outcome, and the Cumulative Density Function calculates the statistical probability that a value that is less than or equal to a certain outcome will actually transpire in the datasets that you are using in your Machine Learning system.\nThe Normal Distribution\nThe Normal Distribution is also known as the “Gaussian Distribution.” The premise for this is that there is a statistical probability of a real time event occurring in your Machine Learning system from your given datasets. This distribution also consists of what is known as a “Continuous Random Variable,” and this possesses a Normal Distribution that is evenly divided amongst your datasets.\nFurther, the Normal Distribution is defined by making use of two distinct and established parameters, which are the Mean (denoted as “mu”) and the Variance (which is denoted as “Sigma^2”). Also, the Standard Deviation is typically the average spread from the mean and is denoted as “Sigma” as well. The Normal Distribution can be represented mathematically as follows:\nf(x) = [1 / (Sigma * SQRT(2*PI))] * e^[-(x - mu)^2 / (2*Sigma^2)]\nIt should be also noted that this mathematical formula can be used in the various Machine Learning Algorithms in order to calculate both distance and gradient' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 63}
61
page_content='46 | Machine Learning\n \n \ndescent measures, which also include the “K- Means” and the “K- Nearest Neighbors.” At times, it will be necessary to rescale the above- mentioned formula until the appropriate statistical distribution is actually reached. In order to perform the rescaling process, the “Z- Score Normalization” and the “Min- Max Transformation” are used.\nFinally, in terms of the Machine Learning Algorithms, the independent variables that are used in your datasets are also known as “Features.” The dependent variables are also known as the “Outputs.”\nSupervised Learning\nEarlier in this chapter, Supervised Learning was reviewed. Although just a high level overview of it was provided, in this subsection we now go into a much deeper exploration of it. It should be noted that many of the Machine Learning algorithms actually fall under this specific category. In general, Supervised Learning works by using a target dependent variable (it can even be a series of dependent variables). From this point onwards, a specific mathematical function can then be created which can associate, or map, the inputs from the datasets to what the desired or expected outputs should be.\nThis is an iterative process that keeps going until an optimal level of accuracy is reached, and the desired output has an expected outcome with it as well. The following are typical examples of some of the statistical techniques that are used in this iterative process:\n 1) Linear Regression :\nThis is probably the best approach to be used in order to statistically estimate any real or absolute values that are based upon the continuous variables that are present in the Machine Learning model. With this technique, a linear relationship (as its name implies) is actually established between the independent variable and the dependent variables that are present in the Machine Learning model. Technically, this is known as the “Regression Line,” and the mathematical formula for this is as follows:\nY = a*X + b\n(here, “a” is the slope and “b” is the intercept, consistent with the earlier formula Y = M*X + B).\nWith this kind of modeling technique, the statistical relationships are actually created and filtered via numerous Linear Predictor Functions. From here, the parameters of these particular functions are then estimated from the datasets that are used in the Machine Learning system. Although Linear Regression is widely used in Machine Learning, there are also a number of specific other uses for it as well, which are as follows:\n\t{Determining the strength of the predictors, which can be a very subjective task to accomplish;' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 64}
62
page_content='Machine Learning | 47\n \n \n \n\t{Trend Forecasting, in that it can be used to estimate the level of the impact of any changes that may transpire from within the datasets;\n\t{Predicting or forecasting a specific event into the future. For example, as it relates to Cybersecurity, it can be used to help predict what a new threat vector variant could potentially look like.\n\t{In the case that there are multiple independent variables that are being used (typically, there is just one, as denoted by the value of “X” in the above equation), then other techniques have to be used as well, which include those of Forward Selection, Step Wise Elimination, and Backward Elimination.\n 2) Logistic Regression :\nThis statistical technique is used for determining the levels of probability of both an outcome success and an outcome failure. Thus, the dependent variables that are present must be in binary format, which is either a 0 or a 1. This kind of technique can be mathematically represented as follows:\nOdds = p / (1 - p)\nln(Odds) = ln[p / (1 - p)]\nLogit(p) = ln[p / (1 - p)]\nIt should be noted that this technique also makes use of what are known as “Binomial Distributions.” In other words, a Link Function must be selected for the specific distribution that is at hand. Unlike the previously mentioned technique, there is no linear relationship that is required. Further, this kind of technique is mostly used for the purposes of problem classification for the Machine Learning system.\n 3) Stepwise Regression :\nAs mentioned previously, this kind of technique works best when there are multiple independent variables present. In this regard, these independent variables can be further optimized with the following tools:\n\t{The AIC Metric;\n\t{The T- Test;\n\t{The R Squared, as well as the Adjusted R Squared.\nOne of the main benefits of this technique is that Covariant Variables can be added one at a time, but the permutations for doing this have to be established first. One of the key differences between Stepwise Regression and Forward Regression is that the former can actually remove any kind of statistical predictor, but with the latter, a “Significant Predictor” can add any other extra statistical variables that are needed in the development of the Machine Learning' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 65}
63
page_content='48 | Machine Learning\n \n \nmodel. Also, Backward Elimination starts this process with all of the statistical predictors present in the Machine Learning model, and from there removes the least significant variable at each step of this entire iterative cycle.\n 4) Polynomial Regression :\nIf it were to be the case that the power of an independent variable happens to be greater than one (this can be mathematically represented as “X^l, where l > 1”), this then becomes what is known as the “Polynomial Regression Equation.” This can be mathematically represented as follows:\nY = a + b*X^2\n 5) Ridge Regression :\nThis technique is specifically used when the datasets that are used for the Machine Learning system suffer from a condition which is known as “Multicollinearity.” This typically occurs when the independent variables are highly correlated, or associated, amongst one another, and from there, the Least Squares calculations remain at a neutral or unchanged point.\nTo counter the Multicollinearity effect, a certain degree of statistical bias is added in order to help reduce any Standard Errors or other types of statistical deviations that may occur in the Machine Learning model. A model affected by Multicollinearity can be mathematically represented as follows:\nY = a + b1*x1 + b2*x2 + b3*x3, etc.\nAlso in this technique, the “Regularization Method” can be used to make sure that the values of the coefficients that are present in the above formula will never reach zero during the time that the Machine Learning system is in use.\n 6) Least Absolute Shrinkage & The Selector Operator Regression (aka the “Lasso Regression”) :\nThis specific technique possesses the ability to reduce any of the statistical variability that is present in the Machine Learning model, by shrinking the coefficient estimates, some of them all the way down to zero. This can be deemed also as an optimization or a “regularization” technique in that only one single statistical option is picked from an aggregate group of predictors. This technique can also make future predictions much more accurate in nature.\nThe fundamental question that often gets asked at this point is, what type of Regression Technique should be used for the Machine Learning model? The basic rule of thumb is that if the outputs should be continuous (or linear) in nature, then Linear Regression should be used. However, if the output is multiple options in nature, such as being binary, then either the Binary or the Logistic Regression models should be used.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 66}
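As a minimal sketch of this rule of thumb, assuming scikit-learn and purely illustrative data (none of which comes from the book itself), a continuous output can be fitted with Linear Regression and a binary output with Logistic Regression:

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Illustrative feature matrix: five samples, one feature each.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])

# Continuous output: use Linear Regression.
y_continuous = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
linear = LinearRegression().fit(X, y_continuous)
print(linear.predict([[6.0]]))  # a continuous estimate

# Binary output (0 or 1): use Logistic Regression.
y_binary = np.array([0, 0, 0, 1, 1])
logistic = LogisticRegression().fit(X, y_binary)
print(logistic.predict([[6.0]]))        # a class label
print(logistic.predict_proba([[6.0]]))  # the probability of each class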
64
page_content='Machine Learning | 49\n \nBut, there are other factors that need to be taken into consideration, which include the following:\n\t{The type of the independent and the dependent variables that are being used;\n\t{The characteristics of the datasets that are being used, as well as their mathematical dimensionality.\nThe Decision Tree\nAn overview of the Decision Tree was provided earlier in this chapter, and in this subsection, we do a deeper dive into it. This technique is actually considered to be a part of Supervised Learning. The ultimate goal of the Decision Tree is to create a Machine Learning model which has the potential to predict a certain value of a target variable by learning the decision rules, or permutations, that have been initially deployed into the datasets, in order to make a more effective learning environment for the Machine Learning system.\nIt should be noted that Decision Trees can also be called “Classification and Regression Trees,” or “CART” for short. In this particular situation, the ability to predict the value of a target variable is created by what are known as “If/Then Statements.” Some of the attributes of a Decision Tree include the following:\n 1) The Attribute :\nThis is a numerical quantity that describes the value of an instance.\n 2) The Instance :\nThese are the attributes that further define the input space and are also referred to as the “Vector of Features.”\n 3) The Sample :\nThis is the set of inputs that are associated with or combined with a specific label. This then becomes known as the “Training Set.”\n 4) The Concept :\nThis is a mathematical function that associates or maps a specific input to a specific output.\n 5) The Target Concept :\nThis can be deemed to be the output that has provided the desired results or outcome.\n 6) The Hypothesis Class :\nThis is a set or category of possible outcomes.\n 7) The Testing Set:\nThis is the data that is used to further optimize the performance of the “Candidate Concept.”\n 8) The Candidate Concept :\nThis is also referred to as the “Target Concept.”' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 67}
65
page_content='50 | Machine Learning\nA graphical example of a Decision Tree was provided earlier in this chapter. It should be noted that this technique also makes further usage of Boolean functions, the AND, OR, and XOR mathematical operators, as well as Boolean gates.\nThe specific steps for creating any kind of Machine Learning- based Decision Tree are as follows:\n\t{Obtain the datasets that will be needed, and from there compute the statistical uncertainty for each of them;\n\t{Establish a list of questions that have to be asked at every specific node of the Decision Tree;\n\t{After the questions have been formulated, create the “True” and “False” rows that are needed;\n\t{Compute the information that has been established from the partitioning that took place in the previous step;\n\t{Next, update the questions that are being asked from the results of the process that have been garnered in the last step;\n\t{Finally, divide, and if need be, sub- divide the nodes and keep repeating this iterative process until you have completed the objective of the Decision Tree and it can be used for the Machine Learning system.\nIt should also be noted that in Machine Learning, the Python Programming Language is used quite extensively. This will be examined in much greater detail later, but the below provides an example of how it can be used in creating a Decision Tree as well (note that the dataset path here is a placeholder, since the original source truncates it):\nimport numpy as np\nimport pandas as pd\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import classification_report\n# Function importing the dataset\ndef importdata():\n    balance_data = pd.read_csv("balance-scale.csv")  # placeholder path\n    # Printing the dataset shape\n    print("Dataset Length: ", len(balance_data))\n    print("Dataset Shape: ", balance_data.shape)\n    # Printing the dataset observations\n    print("Dataset: ", balance_data.head())\n    return balance_data' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 68}
66
page_content='Machine Learning | 51\n# Function to split the dataset\ndef splitdataset(balance_data):\n    # Separating the target variable\n    X = balance_data.values[:, 1:5]\n    Y = balance_data.values[:, 0]\n    # Splitting the dataset into train and test\n    X_train, X_test, y_train, y_test = train_test_split(\n        X, Y, test_size=0.3, random_state=100)\n    return X, Y, X_train, X_test, y_train, y_test\n# Function to perform training with the Gini index\ndef train_using_gini(X_train, X_test, y_train):\n    # Creating the classifier object\n    clf_gini = DecisionTreeClassifier(criterion="gini",\n        random_state=100, max_depth=3, min_samples_leaf=5)\n    # Performing training\n    clf_gini.fit(X_train, y_train)\n    return clf_gini\n# Function to perform training with entropy\ndef train_using_entropy(X_train, X_test, y_train):\n    # Decision tree with entropy\n    clf_entropy = DecisionTreeClassifier(\n        criterion="entropy", random_state=100,\n        max_depth=3, min_samples_leaf=5)\n    # Performing training\n    clf_entropy.fit(X_train, y_train)\n    return clf_entropy\n# Function to make predictions\ndef prediction(X_test, clf_object):\n    # Prediction on the test set\n    y_pred = clf_object.predict(X_test)\n    print("Predicted values:")\n    print(y_pred)\n    return y_pred\n# Function to compute accuracy\ndef cal_accuracy(y_test, y_pred):' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 69}
67
page_content='52 | Machine Learning\n \n    print("Confusion Matrix: ",\n        confusion_matrix(y_test, y_pred))\n    print("Accuracy: ",\n        accuracy_score(y_test, y_pred) * 100)\n    print("Report: ",\n        classification_report(y_test, y_pred))\n# Driver code\ndef main():\n    # Building Phase\n    data = importdata()\n    X, Y, X_train, X_test, y_train, y_test = splitdataset(data)\n    clf_gini = train_using_gini(X_train, X_test, y_train)\n    clf_entropy = train_using_entropy(X_train, X_test, y_train)\n    # Operational Phase\n    print("Results Using Gini Index:")\n    # Prediction using gini\n    y_pred_gini = prediction(X_test, clf_gini)\n    cal_accuracy(y_test, y_pred_gini)\n    print("Results Using Entropy:")\n    # Prediction using entropy\n    y_pred_entropy = prediction(X_test, clf_entropy)\n    cal_accuracy(y_test, y_pred_entropy)\n# Calling the main function\nif __name__ == "__main__":\n    main()\n(Sharma, n.d.)\n(SOURCE: 2).\nThe Problem of Overfitting the Decision Tree\nOnce the Decision Tree has been completed, one of the major drawbacks of it is that it is very susceptible to what is known as “Overfitting.” This simply means that there are more datasets than what is needed for the Machine Learning system; therefore, further optimization is thus needed in order to gain the desired outputs. In order to prevent this phenomenon from happening, you need to' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 70}
68
page_content='Machine Learning | 53\n \n \n \ncarefully study those branches on the Decision Tree that are deemed to be not as important.\nIn these instances, these specific branches, or nodes, then need to be removed. This process is also called “Post Pruning,” or simply “Pruning.” In this particular instance, there are two more specific techniques, which are as follows:\n 1) The Minimum Error :\nIn this instance, the Decision Tree is pruned back to the point where the Cross Validation Error is at its minimum point.\n 2) The Smallest Tree :\nIn this case, the Decision Tree is reduced even more than the established value for the Minimum Error. As a result, this process will create a Decision Tree with a Cross Validation Error that is within at least one Standard Deviation away from the Minimum Error.\nBut, it is always very important to check for Overfitting as you build the Decision Tree. In this case, you can use what is known as the “Early Stopping Heuristic.”\nThe Random Forest\nRandom Forests are a combination of many Decision Trees, numbering at a minimum in the hundreds or even the thousands. Each of the individual trees is trained and simulated in a slightly different fashion from the others. Once the Random Forest has been completed and optimized, the final outputs are computed by the Machine Learning system in a process known as “Predictive Averaging.”\nWith Random Forests, the datasets are split into much smaller subsets that are based upon their specific features at hand, and which also reside only under one particular Label Type. These splits are also made according to a statistical measure calculated at each node from within the Decision Tree.\nBagging\nThis is also known as “Bootstrap Aggregation.” This is a specific approach that is used to combine the predictions from the various Machine Learning systems that you are using and put them together for the sole purpose of accomplishing more accurate Model Predictions than any individual model that is presently being used. Because of this, the Decision Tree can be statistically very sensitive to the specific datasets that they have been trained and optimized for.\nBagging can also be considered to be a further subset in the sense that it is typically applied to those Machine Learning algorithms that are deemed to be of “High' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 71}
69
page_content='54 | Machine Learning\n \n \nVariance” in nature. The Decision Trees that are created from Bootstrap Aggregation can also be highly sensitive in nature, once again, to the datasets that are being used for the tasks that they have been trained to do. The primary reason for this is that any small or incremental changes can drastically alter the composition and makeup of the Decision Tree structure.\nWith the Bagging technique, the datasets are not actually further subdivided; instead, each node of the Decision Tree is associated with a specific sample of the dataset in question. A random size is typically assigned. This stands in sharp contrast to a more normalized Decision Tree in which the randomness typically happens when that specific node is further subdivided, and from there, a greater degree of statistical separation can thus be achieved.\nA question that typically gets asked at this point is, which is better: the Random Forest, or making use of multiple Decision Trees that are not interlinked or otherwise connected with one another? In most cases, the choice of the former is a much better one, because better Pooling Techniques, as well as various other types of Machine Learning algorithms, can be used as well, all bonded together into one cohesive unit.\nThe Naïve Bayes Method\nThis is a well- known technique that is typically used for Predictive Modeling scenarios by the Machine Learning system. It should be noted that with Machine Learning, the computations are done on a specific dataset in which the best statistical hypothesis must be figured out in order to yield the desired outputs. The Naïve Bayes Method can be mathematically represented as follows:\nP(h|d) = [P(d|h) * P(h)] / P(d)\nWhere:\nP(h|d) = the statistical probability of a given hypothesis (known as “h”) holding for a particular dataset (which is known as “d”);\nP(d|h) = the probability of dataset “d,” assuming the hypothesis “h” is actually statistically correct;\nP(h) = the prior probability that hypothesis “h” is correct;\nP(d) = the probability of dataset “d,” absent of any kind of hypothesis “h.”\nIn this regard, if all of the above are also correct, then one can conclude that the hypothesis “h” is also correct. What is known as a “Posterior Probability” is further associated with this concept as well.\nThe above methodology can also be used to compute the “Posterior Probability” for any given number of statistical hypotheses. Of course, the one that has the highest level of probability will be selected for the Machine Learning system because it is' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 72}
70
page_content='Machine Learning | 55\n \n \ndeemed the most successful and most robust in nature. But, if the situation arises where all levels of statistical hypotheses are equal in value, then this can be mathematically represented as follows:\nMAP(h) = max[P(d|h)]\nIt is also worth mentioning that this methodology consists of yet another algorithm which is known as the “Naïve Bayes Classification.” This technique is typically used to determine and ascertain if a certain statistical value is either Categorical or Binary by design. The Class Probabilities and their associated conditional sets are also known as the “representations” of the Naïve Bayes Model. Also, the Class Probabilities are the statistical odds of each class that is present in the datasets; the Conditional Probabilities are ascertained from the given input values for each Value Class from the datasets that are used in the Machine Learning system.\nAnother common question that typically gets asked at this point is, how does the Naïve Bayes Theorem actually work, at least on a high level? Well, one needs to first compute the Posterior Probability (which is denoted as P(c|x)) from P(c), P(x), and P(x|c). In other words, the foundations for this algorithm can be mathematically represented as follows:\nP(c|x) = [P(x|c) * P(c)] / P(x)\nWhere:\nP(c|x) = the Posterior Probability;\nP(x|c) = the Statistical Likelihood;\nP(c) = the Class Prior Probability;\nP(x) = the Predictor Prior Probability.\nGiven the above mathematical representation, the specific class that has the highest level of statistical Posterior Probability will likely be the candidate to be used in computing the final output from the Machine Learning system.\nThe advantages of the Naïve Bayes Method are as follows:\n\t{It is one of the most widely used algorithms in Machine Learning to date;\n\t{It gives very robust results for all sorts of Multi- Class predictions;\n\t{It requires much less training versus some of the other methods just reviewed;\n\t{It is best suited for Real Time Prediction purposes, especially for Cybersecurity purposes when it comes to filtering for false positives;\n\t{It can predict the statistical probability of various Multiple Classes of the targeted variables;\n\t{It can be used for text classification purposes (this is typically where the datasets are not quantitative in nature, but rather qualitative);' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 73}
71
page_content='56 | Machine Learning\n \n\t{With the filtering approaches that it has, it can very easily find hidden trends much more quickly than the other previously reviewed methods.\nThe disadvantages of the Naïve Bayes Method are as follows:\n\t{It is not efficient for predicting the class of a test data set;\n\t{If any Transformation Methods are used, it cannot convert the datasets into a Standard Normal Distribution curve;\n\t{It cannot deal with certain Correlated Features because they are considered to be an overhead in terms of processing power on the Machine Learning system;\n\t{There are no Variance Minimization Techniques that are used, and thus it cannot make use of the “Bagging Technique”;\n\t{It has a very limited set for Parameter Tuning;\n\t{It makes the further assumption that every unique feature in each and every dataset that is present and used for the Machine Learning system is unrelated to any other, and thus, it will not have any positive impact on the other features which may be present in the datasets.\nThe KNN Algorithm\nThis is also known as the “K Nearest Neighbors” algorithm. This is also deemed to be a Supervised Machine Learning algorithm. It is typically used by those Machine Learning systems in order to specifically solve Classification and Regression scenarios. This is a very widely- used Machine Learning algorithm for the reason that it has two distinct properties, unlike the ones previously examined. They are as follows:\n 1) It is a “Lazy” Algorithm :\nIt is lazy in the sense that this algorithm has no specialized training segment that is associated with it, and thus it makes use of all of the datasets that are available to it when it performs its Classification Phase.\n 2) It is Non- Parametric by nature :\nThis simply means that this specific algorithm never makes any assumptions about the underlying datasets.\nIn order to fully implement the KNN Algorithm for any kind of Machine Learning system, the following steps have to be taken:\n 1) Deploy the datasets, and initialize the “K” value to the preestablished set of the total number of nearest neighbors that are present. Also, any training and other forms of testing datasets must be deployed as well.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 74}
72
page_content='Machine Learning | 57\n \n 2) It is important to also calculate the values of “K” as well as the distance between the training datasets and the test datasets.\n 3) From within every point in the test dataset, you also need to compute the distance between the test datasets as well as each and every row for each of the training datasets as well.\n 4) Once the above step has been accomplished, then sort the “K” values in an ascending order format which is based upon the distance values that have been calculated previously. From this point, then choose the top “K” rows, and assign a specific class to them.\n 5) Finally, get the preestablished Labels for the “K” entries that you have just selected.\nAnother key advantage of the KNN Algorithm is that there is no learning that is typically required, and because of that, it is very easy to update as new datasets become available. This algorithm can store other forms of datasets as well by taking complex dataset structures and matching new learning patterns as it tries to predict the values of the various outputs. Thus, if any new types of predictions have to be made for the outputs, it can just use the pre- existing training datasets.\nAs we alluded to earlier, various distances must be calculated for the KNN Algorithm. The most commonly used one is what is known as the “Euclidean Distance,” which is represented by the following mathematical formula:\nEuclidean Distance(X, Xi) = SQRT[sum((Xj - Xij)^2)]\nIt should also be noted that other distancing formulas can be used as well, especially that of the Cosine Distance. Also, the computational complexity of the KNN Algorithm can also increase in tandem with the size of the training dataset. This simply means that there is a positive, statistical relationship that exists: as the size increases, so will the complexity.\nAs mentioned, Python is very often used for Machine Learning, but the following example is written in R; it can be used to predict the outputs that the KNN Algorithm will provide (the ED() helper, a Euclidean distance function, is assumed to be defined as in the original source):\nknn_predict <- function(test, train, k_value){\n  pred <- c()\n  # LOOP 1: loop over the rows of the test data\n  for(i in c(1:nrow(test))){\n    dist <- c()\n    char <- c()\n    setosa <- 0\n    versicolor <- 0\n    virginica <- 0' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 75}
73
page_content='58 | Machine Learning\n \n    # LOOP 2: looping over the trained data\n    for(j in c(1:nrow(train))){\n      dist <- c(dist, ED(test[i,], train[j,]))\n      char <- c(char, as.character(train[j,][[5]]))\n    }\n    df <- data.frame(char, dist)\n    # sorting the dataframe by distance\n    df <- df[order(df$dist),]\n    df <- df[1:k_value,]\n    # LOOP 3: loops over df and counts the classes of all neighbors\n    for(k in c(1:nrow(df))){\n      if(as.character(df[k, "char"]) == "setosa"){\n        setosa <- setosa + 1\n      } else if(as.character(df[k, "char"]) == "versicolor"){\n        versicolor <- versicolor + 1\n      } else {\n        virginica <- virginica + 1\n      }\n    }\n    n <- table(df$char)\n    pred <- c(pred, names(n)[which(n == max(n))])\n  }\n  return(pred)  # return the prediction vector\n}\n# Predicting the value for K=1\nK <- 1\npredictions <- knn_predict(test, train, K)\nOutput:\nFor K=1\n[1] "Iris-virginica"\n(SOURCE: 2).\nUnsupervised Learning\nWhen it comes to Machine Learning, the Unsupervised Algorithms can be used to create inferences from datasets that are composed of the input data when they do not have Labeled Responses associated with them. In this category, the various models that are used (and which will be examined in much more detail) make use of the input data type of [X], and further, do not have any association with the output values that are calculated.\nThis forms the basis for Unsupervised Learning, primarily because the goal of the models is to find and represent the hidden trends without any previous learning cycles. In this, there are two major categories: Clustering and Association.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 76}
74
page_content='Machine Learning | 59\n \n \n \n 1) Clustering :\nThis typically occurs when inherent groups must be discovered in the datasets. \nBut in this category, the Machine Learning system has to deal with a tre -\nmendous amount of large datasets, which are often referred to as “Big Data.” \nWith Clustering, the goal is to find any and all associations (which are hidden \nand unhidden) in these large datasets. The following are the major types of \nClustering Properties that are very often used today in Machine Learning:\n\t{Probabilistic Clustering :\nThis involves grouping the various datasets into their respective clusters \nthat are based upon a predetermined probabilistic scale.\n\t{K- Means Clustering :\nThis involves the clustering of all of the datasets into a “K” number of stat -\nistically mutually exclusive clusters.\n\t{Hierarchical Clustering :\nThis classifies and categorizes the specific data points in all of the datasets \ninto what are known as “Parent- Child Clusters.”\n\t{Gaussian Mixture Models :\nThis consists of both Multivariate and Normal Density Components.\n\t{Hidden Markov Models :\nThis technique is used to analyze all of the datasets that are used by the \nMachine Learning systems, as well as to discover any sequential states that \ncould exist amongst them.\n\t{Self- Organizing Maps :\nThis maps the various Neural Network structures which can learn the \nStatistical Distribution as well as the Topology of the datasets.\nGenerative Models\nThese types of models make up the bulk of Unsupervised Learning Models. The pri -\nmary reason for this is that they can generate brand new data samples from the same \ndistribution of any established training dataset. These kinds of models are created \nand implemented to learn the data about the datasets. This is very often referred to \nas the “Metadata.”\nData Compression\nThis refers to the process for keeping the datasets as small as possible. This is purely \nan effort to keep them as smooth and efficient as possible so as not to drain the \nprocessing power of the Machine Learning system. This is very often done through \nwhat is known as the “Dimensionality Reduction Process.” Other techniques that \ncan be used in this regard include those of “Singular Value Decomposition” and \n“Principal Component Analysis.”' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 77}
75
page_content='60 | Machine Learning\n \nSingular Value Decomposition mathematically factors the datasets into a product of three other datasets, using the concepts of Matrix Algebra. With Principal Component Analysis, various Linear Combinations are used to find the specific statistical variances amongst all of the datasets.\nAssociation\nAs its name implies, this is actually a Rule- based Machine Learning methodology which can be used to find both hidden and unhidden relationships in all of the datasets. In order to accomplish this, the “Association Rule” is typically applied. It consists of both a consequent and an antecedent. An example of this is given in the matrix below:\nFrequency Count | Items That Are Present\n1 | Bread, Milk\n2 | Bread, Biscuits, Drink, Eggs\n3 | Milk, Biscuits, Drink, Diet Coke\n4 | Bread, Milk, Biscuits, Diet Coke\n5 | Bread, Milk, Diet Coke, and Coke\n(SOURCE: 2).\nThere are two very important properties to be aware of here:\n\t{The Support Count :\nThis is the actual count for the frequency of occurrence of any set that is present in the above matrix. For example, [(Milk, Bread, Biscuit)] = 2. Here, the mathematical representation can be given as follows:\nX -> Y, where the values of X and Y can be any two of the sets in the above matrix. For example, (Milk, Biscuits) -> (Drinks).\n\t{The Frequent Item :\nThis is the statistical set that is present when it is equal to or even greater than the minimum threshold of the datasets. In this regard, there are three key metrics that one needs to be aware of:\n1) The Support :\nThis specific metric describes just how frequently an Item Set actually occurs in all of the data processing transactions. The mathematical formula to calculate this level of occurrence is as follows:\nSupport[(X) -> (Y)] = (The transactions containing both X and Y) / (The total number of transactions)' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 78}
76
page_content='Machine Learning | 61\n \n2) Confidence :\nThis metric is used to gauge the statistical likeliness of an occurrence having any subsequent, consequential effects. The mathematical formula to calculate this is as follows:\nConfidence[(X) -> (Y)] = (The total transactions containing both X and Y) / (The transactions containing X)\n3) Lift:\nThis metric is used to statistically gauge the frequency of a consequent, from which the conditional property of the occurrence of (Y) given the state of (X) can be computed. More specifically, this can be defined as the statistical rise in the probability of (Y) occurring given (X), over what would be expected if the two were independent. The mathematical formula to calculate this is as follows:\nLift[(X) -> (Y)] = [(The total transactions containing both X and Y) / (The transactions containing X)] / (The fraction of transactions containing Y)\nIt should be noted that the Association Rule relies heavily upon using data patterns as well as statistical co- occurrences. Very often in these situations, “If/Then” statements are utilized. There are also three other Machine Learning algorithms that fit into this category, and they are as follows:\n 1) The AIS Algorithm :\nWith this, a Machine Learning system can scan in and provide the total count of the number of datasets that are being fed into the Machine Learning system.\n 2) The SETM Algorithm :\nThis is used to further optimize the transactions that take place within the datasets as they are being processed by the Machine Learning system.\n 3) The Apriori Algorithm :\nThis allows for the Candidate Item to be set as a specific variable known as “S” to generate only those support amounts that are needed for a Large Item that resides within the datasets.\nThe Density Estimation\nThis is deemed to be the statistical relationship between the total number of observations and their associated levels of probability. It should be noted here that when it comes to the outputs that have been derived from the Machine Learning system, the density probabilities can vary from high to low, and anything in between.\nBut in order to fully ascertain this, one needs to also determine whether or not a given statistical observation will actually happen or not.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 79}
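Returning to the Support, Confidence, and Lift metrics just defined, here is a minimal sketch in plain Python, using the five-transaction matrix shown earlier (the function names are purely illustrative and not prescribed by the book):

# The five transactions from the matrix above.
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Biscuits", "Drink", "Eggs"},
    {"Milk", "Biscuits", "Drink", "Diet Coke"},
    {"Bread", "Milk", "Biscuits", "Diet Coke"},
    {"Bread", "Milk", "Diet Coke", "Coke"},
]

def support(X, Y):
    # Fraction of transactions containing both X and Y.
    both = sum(1 for t in transactions if X <= t and Y <= t)
    return both / len(transactions)

def confidence(X, Y):
    # Of the transactions containing X, the fraction also containing Y.
    both = sum(1 for t in transactions if X <= t and Y <= t)
    has_X = sum(1 for t in transactions if X <= t)
    return both / has_X

def lift(X, Y):
    # Confidence divided by the baseline fraction of transactions with Y.
    frac_Y = sum(1 for t in transactions if Y <= t) / len(transactions)
    return confidence(X, Y) / frac_Y

print(support({"Milk", "Biscuits"}, {"Drink"}))     # 0.2
print(confidence({"Milk", "Biscuits"}, {"Drink"}))  # 0.5
print(lift({"Milk", "Biscuits"}, {"Drink"}))        # 1.25, above 1: a positive association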
page_content='The Kernel Density Function

This mathematical function is used to estimate the statistical probability of a Continuous Variable taking on particular values in the datasets. In these instances, the Kernel Function contributions from all of the data instances are summed up and divided by the total number of instances. This is meant to provide assurances that the Probability Density Function remains a non-negative value, and that its mathematical integral over the datasets used by the Machine Learning system equals one.

The pseudocode for this is as follows:

For i = 1 to n:
    For all x:
        Dens(x) += (1/n) * (1/w) * K[(x - xi)/w]

Where:

• The Input = the Kernel Function K(x), with the Kernel Width w, and the data instances x1, …, xn.
• The Output = the estimated Probability Density Function that underlies the training datasets.
• The Process: This initializes Dens(x) = 0 at all points “x” which occur in the datasets.

Latent Variables

These variables are those that are statistically inferred from other variables in the datasets, rather than being observed directly. These kinds of variables are not present in the training sets, and are not quantitative by nature. Rather, they are qualitative.

Gaussian Mixture Models

These are Latent Variable models as well. They are heavily used in Machine Learning applications because they can model all of the data in the datasets, including those portions that form Clusters. Each cluster can be represented as one of the components N1, …, NK, and because the statistical distribution within each component is Gaussian by nature, the overall model is known as a Gaussian Mixture.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 80}
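As a concrete illustration of this estimator, here is a minimal runnable Python sketch (not from the book; the Gaussian kernel choice and the sample data are assumptions made for illustration):

import math

def gaussian_kernel(u):
    # A standard Gaussian kernel: non-negative and integrates to 1.
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kde(x, samples, w):
    # Dens(x) = (1/n) * (1/w) * sum over i of K((x - xi)/w)
    n = len(samples)
    return sum(gaussian_kernel((x - xi) / w) for xi in samples) / (n * w)

samples = [1.2, 1.9, 2.1, 2.8, 3.5]   # hypothetical data instances x1..xn
for x in (1.0, 2.0, 3.0):
    print(x, kde(x, samples, w=0.5))  # estimated density at each point

Because each kernel term is non-negative and each integrates to 1, the resulting estimate is itself a valid probability density, as noted above.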
page_content='The Perceptron

As you might have inferred, probably one of the biggest objectives of Artificial Intelligence, Machine Learning, and Neural Networks is to model the processes of the human brain. Obviously, we know that the human brain is extremely complicated, and we have probably only hit upon 1 percent of it. Truth be told, we may never fully understand the human brain; and if we ever do reach that point, it is safe to say that it is literally centuries away.

As we know, the Central Processing Unit (CPU) is the main processing component of a computer. But if one were to look for its equivalent in the brain, that would be what is called the “Neuron.” This will be covered in more detail in the chapter which deals with Neural Networks, but we will provide somewhat of an overview here, in this part of the chapter.

The human brain consists of literally billions and billions of Neurons; according to some scientific studies there are as many as almost 90 billion of them. Research has also shown that a Neuron is typically much slower than the CPU of a computer, but the brain compensates for this by having such a high quantity of them, as well as so much connectivity. These connections are known as “Synapses,” and interestingly enough, they work in parallel with one another, much like the parallel processing in a computer. It should be noted that in a computer, the CPU is always active and the memory (such as the RAM) is a separate entity. But in the human brain, the Synapses are distributed evenly over its own network. To once again equate the brain to the computer: the actual processing takes place in the Neurons, while the memory lies in the Synapses.

Within the infrastructure of the Neuron lies what is known as the “Perceptron.” Just like a Machine Learning system, it can process inputs and deliver outputs in its own way. Its inputs and weights can be mathematically represented as follows:

xj ∈ R, j = 1, …, d

Where:

d = the number of inputs;
wj ∈ R = the connection weight (also known as the “Synaptic Weight”) on input j;
y = the output, which is the weighted sum of the inputs.

The sum of the weighted inputs can be mathematically represented as follows:

y = ∑(j=1 to d) wj*xj + w0

Where:

w0 = the intercept value used to further generalize the Perceptron Model.

The actual output of the Perceptron is mathematically represented as follows:

y = w^T * x.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 81}
page_content='In this situation:

w = [w0, w1, …, wd]^T
x = [1, x1, …, xd]^T.

These are known as the “Augmented Vectors,” which include a statistically oriented “Bias Weight” (w0) as well as the specific values for the inputs. When the Perceptron Model is going through its testing phase, the statistical weights (denoted as “w”) and the inputs (denoted as “x”) are used to compute the desired output, which is denoted as “y.”

However, the Machine Learning system first needs to learn these particular statistical weights, as well as its other parameters, so that it can generate the needed outputs. In the simplest case this can be mathematically represented as follows:

y = wx + w0.

The above represents just one input and one output, and it becomes a solid linear line when it is plotted on a Cartesian Geometric Plane. But if there is more than just one input, then this linear line becomes what is known as a “Hyperplane.” In this particular instance, the inputs can be used to implement what is known as a “Multivariate Linear Fit.” From here, the input space of the Perceptron Model can be divided in half, where one half contains positive values, and the other half contains negative values.

This division can be done using a technique which is known as the “Linear Discriminant Function,” and the operation by which it is carried out is known as the “Threshold Function.” This can be mathematically represented as follows:

s(a) = {1 if a > 0; 0 otherwise}
Choose {C1 if s(w^T x) > 0; C2 otherwise}.

It should be noted that each Perceptron is actually a locally-based function of its various inputs and synaptic weights. However, when there are K classes, deploying a Perceptron Model into the Machine Learning system is a two-step process. This can be mathematically represented as follows:

oi = wi^T * x
yi = exp(oi) / ∑k exp(ok).' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 82}
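To make the preceding formulas concrete, here is a minimal Python sketch of the Perceptron forward pass (not from the book; the input values and weights are arbitrary assumptions): the weighted sum w^T x over an augmented vector, the Threshold Function s(a), and the two-step softmax outputs for the multi-class case.

import math

def dot(w, x):
    # Weighted sum w^T x over the augmented vector (x[0] = 1 carries the bias w[0]).
    return sum(wj * xj for wj, xj in zip(w, x))

def threshold(a):
    # The Threshold Function: s(a) = 1 if a > 0, and 0 otherwise.
    return 1 if a > 0 else 0

def softmax_outputs(W, x):
    # Two-step process: oi = wi^T x, then yi = exp(oi) / sum_k exp(ok).
    o = [dot(wi, x) for wi in W]
    m = max(o)                          # subtract max for numerical stability
    e = [math.exp(oi - m) for oi in o]
    return [ei / sum(e) for ei in e]

x = [1.0, 0.4, 0.7]                     # augmented input: leading 1 for the bias
w = [-0.5, 1.0, 1.0]                    # hypothetical weights; w[0] is the bias weight
print("class:", "C1" if threshold(dot(w, x)) == 1 else "C2")

W = [[-0.5, 1.0, 1.0], [0.2, -1.0, 0.5]]  # one weight vector per class
print("softmax outputs:", softmax_outputs(W, x))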
page_content='Training a Perceptron

Since the Perceptron actually defines a Hyperplane, a technique known as “Online Learning” can be used to train it. In this specific scenario, the entire datasets are not fed into the Perceptron Model all at once; instead, it is given representative samples of them one at a time. There are two advantages to this approach, which are as follows:

• It makes efficient use of the processing power and resources of the Perceptron Model;
• The Perceptron Model can decipher rather quickly what the old datasets and the new datasets are in the training data.

With the “Online Learning” technique, the Error Functions that are associated with the datasets are not computed over the entire training set at once. Instead, the statistical weights that were first assigned are fine-tuned on each individual instance, in order to further minimize any future errors that are found in the datasets. The per-instance error can be mathematically represented as follows:

E^t(w | x^t, r^t) = 1/2 (r^t - y^t)^2 = 1/2 [r^t - (w^T x^t)]^2.

The Online Updates can be represented as follows:

Δwj^t = η(r^t - y^t) * xj^t

Where:

η = the learning factor.

The learning factor is slowly decreased over a predefined period of time in order for Convergence to take place. When the training set is fixed rather than arriving as a live stream, the instances are simply presented one at a time in random order; in technical terms, this is known as “Stochastic Gradient Descent” (a runnable sketch of this update rule is given below). Under normal conditions, it is usually a very good idea to normalize the various inputs so that they are all centered around the value of 0, and yet maintain the same type of scalar properties.

In a similar fashion, the Update Rules can also be mathematically derived for any kind of Classification scenario which makes use of a particular technique called “Logistic Discrimination.” In this instance, the Updates are actually done after each Pattern Instance, instead of waiting until the very end and then taking their mathematical summation.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 83}
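A minimal Python sketch of these online updates for a single linear output (not from the book; the toy data, learning rate, and epoch count are assumptions) might look like this:

import random

def train_online(data, d, eta=0.1, epochs=50):
    # data: list of (x, r) pairs, with x already augmented by a leading 1 for the bias.
    w = [random.uniform(-0.01, 0.01) for _ in range(d + 1)]
    for _ in range(epochs):
        random.shuffle(data)                           # stochastic gradient descent
        for x, r in data:
            y = sum(wj * xj for wj, xj in zip(w, x))   # y = w^T x
            for j in range(d + 1):
                w[j] += eta * (r - y) * x[j]           # delta w_j = eta * (r - y) * x_j
    return w

# Toy regression data following r = 2*x1 + 1 (hypothetical):
data = [([1.0, x], 2 * x + 1) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
print(train_online(data, d=1))   # the weights should approach [1.0, 2.0]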
page_content='For example, when there are two Classes involved in the Machine Learning system, a Single Instance can be represented as follows:

(x^t, r^t)

Where:

r^t = 1 if x^t ∈ C1, and r^t = 0 if x^t ∈ C2.

From here, the single output can be calculated as follows:

y^t = sigmoid(w^T x^t).

From here, the Cross Entropy error is then calculated from this mathematical formula:

E^t(w | x^t, r^t) = -r^t log y^t - (1 - r^t) log(1 - y^t).

For the more general case of K Classes, all of the above can be represented by the following pseudocode:

For i = 1, …, K
    For j = 0, …, d
        wij ← rand(-0.01, 0.01)
Repeat
    For all (x^t, r^t) ∈ X in random order
        For i = 1, …, K
            oi = 0
            For j = 0, …, d
                oi ← oi + wij * xj^t
        For i = 1, …, K
            yi ← exp(oi) / ∑k exp(ok)
        For i = 1, …, K
            For j = 0, …, d
                wij ← wij + η(ri^t - yi) * xj^t.

(SOURCE: 3).

The Boolean Functions

Within the Boolean Functions, the inputs are considered to be binary in nature, and the output value is 1 if the associated Function Value is deemed to be “True,” and 0 otherwise. Thus, this can also be characterized as a two-class classification problem. For the Boolean AND function, for example, the discriminant can be computed from the following formula:

y = s(x1 + x2 - 1.5)

Where:

x = [1, x1, x2]^T
w = [-1.5, 1, 1]^T.' metadata={'source': '/content/Practical AI for Cybersecurity.pdf', 'page': 84}
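The pseudocode above can be turned into runnable Python. The following is a minimal sketch (not the book's own code; the training interface, learning rate, and epoch count are assumptions), followed by a quick check of the Boolean AND discriminant:

import math, random

def train_softmax_perceptron(X, R, K, d, eta=0.1, epochs=200):
    # X: augmented inputs [1, x1, ..., xd]; R: one-hot target vectors of length K.
    W = [[random.uniform(-0.01, 0.01) for _ in range(d + 1)] for _ in range(K)]
    for _ in range(epochs):
        order = list(range(len(X)))
        random.shuffle(order)                      # present instances in random order
        for t in order:
            x, r = X[t], R[t]
            o = [sum(W[i][j] * x[j] for j in range(d + 1)) for i in range(K)]
            m = max(o)                             # subtract max for numerical stability
            e = [math.exp(oi - m) for oi in o]
            y = [ei / sum(e) for ei in e]          # yi = exp(oi) / sum_k exp(ok)
            for i in range(K):
                for j in range(d + 1):
                    W[i][j] += eta * (r[i] - y[i]) * x[j]
    return W

# Quick check of the Boolean AND discriminant y = s(x1 + x2 - 1.5):
s = lambda a: 1 if a > 0 else 0
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", s(x1 + x2 - 1.5))      # prints 1 only when both inputs are 1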