---
license: apache-2.0
task_categories:
  - text-classification
language:
  - en
size_categories:
  - 10K<n<100K
tags:
  - phishing
  - url
  - html
  - text
---

# Phishing Dataset

A phishing dataset compiled from several sources for classification and phishing-detection tasks.

## Dataset Details

The dataset has two columns: `text` and `label`. The `text` field contains samples of:

  • URLs
  • SMS messages
  • Email messages
  • HTML code

Each sample is labeled as 1 (phishing) or 0 (benign).
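As a minimal sketch, the expected two-column schema can be illustrated with toy rows. The `text` values below are invented examples for illustration, not actual entries from the dataset:

```python
# Toy rows illustrating the text/label schema described above.
# These strings are hypothetical examples, not real dataset entries.
samples = [
    {"text": "http://secure-login.example.com/verify", "label": 1},  # phishing-style URL
    {"text": "Your package has shipped. See you soon!", "label": 0},  # benign-style SMS
]

# A simple label-distribution check, as one might run after loading the data.
phishing = [s for s in samples if s["label"] == 1]
benign = [s for s in samples if s["label"] == 0]
print(len(phishing), len(benign))  # 1 1
```

The same kind of check scales directly to the full dataset once it is loaded into a dataframe or a Hugging Face `datasets` object.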

## Source Data

This dataset is a compilation of four sources, described below:

  • An email dataset containing the body text of various emails, usable for detecting phishing emails through text analysis and machine-learning classification. It contains over 18,000 emails generated by Enron Corporation employees.

  • An SMS dataset of 5,971 text messages: 489 spam, 638 smishing, and 4,844 ham messages. The dataset contains attributes extracted from malicious messages that can be used to classify messages as malicious or legitimate. The data was collected by converting images obtained from the Internet into text using Python code.

  • A URL dataset of more than 800,000 URLs, where about 52% of the domains are legitimate and 47% are phishing domains. It is a collection of samples from various sources: the URLs were gathered from the JPCERT website, existing Kaggle datasets, GitHub repositories where the URLs are updated once a year, and some open-source databases, including Excel files.

  • A website dataset of 80,000 instances: 50,000 legitimate and 30,000 phishing websites. Each instance contains the URL and the HTML page. Legitimate data were collected from two sources: 1) simple keyword searches on Google, collecting the first 5 URLs of each search; to keep the collection diverse, domain restrictions limited it to at most 10 pages per domain. 2) 25,874 active URLs collected from the Ebbu2017 Phishing Dataset repository. Three sources were used for the phishing data: PhishTank, OpenPhish, and PhishRepo.

## Dataset Processing

This dataset is intended primarily to be used with the BERT language model. Therefore, it has not been subjected to the traditional preprocessing usually applied in NLP tasks such as text classification.

Is stemming, lemmatization, stop-word removal, etc., necessary to improve the performance of BERT?

In general, NO. Such preprocessing will not improve the output predictions. In fact, removing stop words (which are considered noise in conventional text representations such as bag-of-words or TF-IDF) can and probably will worsen your BERT model's predictions. Because BERT uses the self-attention mechanism, these "stop words" are valuable information for it. The same goes for punctuation: a question mark can certainly change the overall meaning of a sentence. Eliminating stop words and punctuation marks would therefore only remove context that BERT could have used to get better results.
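The point above can be sketched with a small, self-contained example of conventional bag-of-words cleanup. The stop-word list and sample strings are invented for illustration; the point is that aggressive cleanup erases exactly the negation and punctuation cues that an attention-based model could exploit:

```python
# Hypothetical illustration: conventional bag-of-words preprocessing
# (lowercasing, punctuation stripping, stop-word removal) discards
# signal that BERT's self-attention could otherwise use.
STOP_WORDS = {"is", "this", "your", "not", "a", "the", "to"}

def strip_for_bow(text):
    """Lowercase, replace punctuation with spaces, and drop stop words."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in text.lower())
    return [t for t in cleaned.split() if t not in STOP_WORDS]

suspicious = "Is this not your account? Verify now!"
ordinary = "This is your account verification."

# The question mark and the negation "not" vanish, collapsing two
# quite different sentences toward similar token bags.
print(strip_for_bow(suspicious))  # ['account', 'verify', 'now']
print(strip_for_bow(ordinary))    # ['account', 'verification']
```

With raw text fed to a BERT tokenizer instead, those punctuation and stop-word tokens are preserved and attended to, which is why this dataset is distributed without such cleanup.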

However, if this dataset is to be used with another type of model, NLP preprocessing may be worth considering. That is at the discretion of whoever employs the dataset.

For more information, check these links: