---
dataset_info:
  features:
  - name: label
    dtype: int64
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 1452847
    num_examples: 1000
  - name: validation
    num_bytes: 1685310
    num_examples: 1000
  download_size: 1637839
  dataset_size: 3138157
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
task_categories:
- text-classification
---
# Dataset Card for enron_spam_small
This dataset is a condensed subset derived from the SetFit/enron_spam dataset. It comprises two columns, `text` and `label`, and contains 1000 samples for training and 1000 samples for validation, making it suitable for binary text classification tasks.
## Dataset Details
### Columns
- **text:** The content of an email.
- **label:** Indicates whether the email is 'spam' or 'ham' (non-spam), encoded as an integer (`int64`).
### Dataset Sources
- **Repository:** https://huggingface.co/datasets/SetFit/enron_spam
## Uses
### Direct Use
```python
from datasets import load_dataset

spam_dataset = load_dataset("likhith231/enron_spam_small")
```
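`load_dataset` returns a `DatasetDict` with `train` and `validation` splits, whose records follow the schema above (`text` string, `label` int64). The sketch below shows what working with such records typically looks like, using hypothetical in-memory samples rather than the real data; the 0 = ham / 1 = spam mapping is an assumption based on the upstream SetFit/enron_spam convention, not something stated in this card.

```python
# Hypothetical records mirroring the card's schema: 'text' (string), 'label' (int64).
samples = [
    {"text": "Meeting moved to 3pm, see the attached agenda.", "label": 0},
    {"text": "WINNER!! Claim your free prize now!!!", "label": 1},
]

def label_name(label: int) -> str:
    """Map the integer label to a readable class name (assumed: 0 = ham, 1 = spam)."""
    return "spam" if label == 1 else "ham"

for sample in samples:
    # Print a short preview of each email with its class name.
    print(f"{label_name(sample['label'])}: {sample['text'][:40]}")
```

With the real dataset you would iterate over `spam_dataset["train"]` and `spam_dataset["validation"]` in the same way.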