update
Browse files
- README.md +8 -6
- data/trec07p.jsonl +3 -0
- examples/preprocess/process_trec07p.py +89 -0
- examples/preprocess/samples_count.py +2 -1
- spam_detect.py +1 -0
README.md
CHANGED
@@ -12,17 +12,18 @@ license: apache-2.0
 | Dataset | Language | Task type | Original data / project | Sample count | Original data description | Alternative download |
 | :--- | :---: | :---: | :---: | :---: | :---: | :---: |
 | enron_spam | English | spam email classification | [enron_spam_data](https://github.com/MWiechmann/enron_spam_data); [Enron-Spam](https://www2.aueb.gr/users/ion/data/enron-spam/); [spam-mails-dataset](https://www.kaggle.com/datasets/venky73/spam-mails-dataset) | ham: 16545; spam: 17171 | The Enron-Spam dataset is an excellent resource collected by V. Metsis, I. Androutsopoulos and G. Paliouras. | [SetFit/enron_spam](https://huggingface.co/datasets/SetFit/enron_spam); [enron-spam](https://www.kaggle.com/datasets/wanderfj/enron-spam) |
-| enron_spam_subset | English | spam email classification | [email-spam-dataset](https://www.kaggle.com/datasets/nitishabharathi/email-spam-dataset) | ham: 5000; spam: 5000
+| enron_spam_subset | English | spam email classification | [email-spam-dataset](https://www.kaggle.com/datasets/nitishabharathi/email-spam-dataset) | ham: 5000; spam: 5000 | | |
 | ling_spam | English | spam email classification | [lingspam-dataset](https://www.kaggle.com/datasets/mandygu/lingspam-dataset); [email-spam-dataset](https://www.kaggle.com/datasets/nitishabharathi/email-spam-dataset) | ham: 2172; spam: 433 | The Ling-Spam dataset is a collection of 2,893 spam and non-spam messages compiled from the Linguist List. | |
 | sms_spam | English | SMS spam classification | [SMS Spam Collection](https://archive.ics.uci.edu/dataset/228/sms+spam+collection); [SMS Spam Collection Dataset](https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset) | ham: 4827; spam: 747 | The SMS Spam Collection is a public set of labeled SMS messages collected for mobile phone spam research. | [sms_spam](https://huggingface.co/datasets/sms_spam) |
-| sms_spam_collection | English | SMS spam classification | [spam-emails](https://www.kaggle.com/datasets/abdallahwagih/spam-emails) | ham: 4825; spam: 747
+| sms_spam_collection | English | SMS spam classification | [spam-emails](https://www.kaggle.com/datasets/abdallahwagih/spam-emails) | ham: 4825; spam: 747 | This dataset contains a collection of emails. | [email-spam-detection-dataset-classification](https://www.kaggle.com/datasets/shantanudhakadd/email-spam-detection-dataset-classification); [spam-identification](https://www.kaggle.com/datasets/amirdhavarshinis/spam-identification); [sms-spam-collection](https://www.kaggle.com/datasets/thedevastator/sms-spam-collection-a-more-diverse-dataset); [spam-or-ham](https://www.kaggle.com/datasets/arunasivapragasam/spam-or-ham) |
 | spam_assassin | English | spam email classification | [datasets-spam-assassin](https://github.com/stdlib-js/datasets-spam-assassin); [Apache SpamAssassin’s public datasets](https://spamassassin.apache.org/old/publiccorpus/); [Spam or Not Spam Dataset](https://www.kaggle.com/datasets/ozlerhakan/spam-or-not-spam-dataset) | ham: 4150; spam: 1896 | Derived from the completeSpamAssassin.csv file of [email-spam-dataset](https://www.kaggle.com/datasets/nitishabharathi/email-spam-dataset). | [email-spam-dataset](https://www.kaggle.com/datasets/nitishabharathi/email-spam-dataset); [talby/SpamAssassin](https://huggingface.co/datasets/talby/spamassassin); [spamassassin-2002](https://www.kaggle.com/datasets/cesaber/spam-email-data-spamassassin-2002) |
-| spam_base | English | spam email classification | [spambase](https://archive.ics.uci.edu/dataset/94/spambase) |
+| spam_base | English | spam email classification | [spambase](https://archive.ics.uci.edu/dataset/94/spambase) | | Classify emails as spam or non-spam. | [spam-email-data-uci](https://www.kaggle.com/datasets/kaggleprollc/spam-email-data-uci) |
-| spam_detection | English | SMS spam classification | [Deysi/spam-detection-dataset](https://huggingface.co/datasets/Deysi/spam-detection-dataset) | ham: 5400; spam: 5500
+| spam_detection | English | SMS spam classification | [Deysi/spam-detection-dataset](https://huggingface.co/datasets/Deysi/spam-detection-dataset) | ham: 5400; spam: 5500 | | |
 | spam_message | Chinese | SMS spam classification | [SpamMessage](https://github.com/hrwhisper/SpamMessage) | ham: 720000; spam: 80000 | The spam samples are genuine but anonymized (e.g. 招生电话:xxxxxxxxxxx), so the masked x characters may become a dominant feature; the ham samples look like snippets cut out of ordinary text to serve as samples. Using this data is not recommended. | |
 | spam_message_lr | Chinese | SMS spam classification | [SpamMessagesLR](https://github.com/x-hacker/SpamMessagesLR) | ham: 3983; spam: 6990 | | |
-
-
+| trec07p | English | spam email classification | [2007 TREC Public Spam Corpus](https://plg.uwaterloo.ca/~gvcormac/treccorpus07/); [Spam Track](https://trec.nist.gov/data/spam.html) | ham: 25220; spam: 50199 | 2007 TREC Public Spam Corpus | [trec07p.tar.gz](https://pan.baidu.com/s/1jC9CxVaxwizFCvGtI1JvJA?pwd=g72z) |
+| trec06c | Chinese | spam email classification | [2006 TREC Public Spam Corpora](https://plg.uwaterloo.ca/~gvcormac/treccorpus06/) | | 2006 TREC Public Spam Corpora | |
+| youtube_spam_collection | English | spam comment classification | [youtube+spam+collection](https://archive.ics.uci.edu/dataset/380/youtube+spam+collection); [YouTube Spam Collection Data Set](https://www.kaggle.com/datasets/lakshmi25npathi/images) | ham: 951; spam: 1005 | A public set of comments collected for spam research. | |
 
 
 
@@ -297,6 +298,7 @@ ham
 <details>
 <summary>Reference data sources (expand to view)</summary>
 <pre><code>
+https://huggingface.co/datasets/dbarbedillo/SMS_Spam_Multilingual_Collection_Dataset
 
 https://huggingface.co/datasets/FredZhang7/all-scam-spam
 https://huggingface.co/datasets/Deysi/spam-detection-dataset
data/trec07p.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:157d6a981f70a3789282ffb79c300a19a7332aa635ba3c479757a970aa9ba4b3
+size 609190484
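The committed file is a Git LFS pointer; the actual JSON-lines data (about 609 MB) is stored in LFS. Each line is one email record with the fields written by examples/preprocess/process_trec07p.py below; the record below is illustrative only, not an actual line from the file:

```python
# Illustrative record shape only; field names follow the preprocessing script below.
record = {
    "text": "<raw email source>",   # full message read from the trec07p corpus
    "label": "spam",                # "spam" or "ham", taken from the corpus index
    "category": None,
    "data_source": "trec07p",
    "split": "train",               # "train" / "validation" / "test", assigned at random
}
```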
examples/preprocess/process_trec07p.py
ADDED
@@ -0,0 +1,89 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+from pathlib import Path
+import random
+import re
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, '../../'))
+
+from datasets import load_dataset
+from tqdm import tqdm
+
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+
+    parser.add_argument("--data_dir", default="data/trec07p/full", type=str)
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/trec07p.jsonl"),
+        type=str
+    )
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    data_dir = Path(args.data_dir)
+
+    full_index = data_dir / "index"
+
+    with open(args.output_file, "w", encoding="utf-8") as fout:
+        with open(full_index.as_posix(), "r", encoding="utf-8") as fin:
+            for row in fin:
+                row = str(row).strip()
+                row = row.split(" ", maxsplit=1)
+
+                if len(row) != 2:
+                    print(row)
+                    raise AssertionError
+
+                label = row[0]
+                fn = row[1]
+                filename = data_dir / fn
+
+                for encoding in ("utf-8", "gbk", "ANSI"):
+                    try:
+                        with open(filename.as_posix(), "r", encoding=encoding) as finmail:
+                            text = finmail.read()
+                    except UnicodeDecodeError:
+                        # print(filename.as_posix())
+                        # print("UnicodeDecodeError")
+                        continue
+
+                if label not in ("spam", "ham"):
+                    raise AssertionError
+
+                num = random.random()
+                if num < 0.9:
+                    split = "train"
+                elif num < 0.95:
+                    split = "validation"
+                else:
+                    split = "test"
+
+                row = {
+                    "text": text,
+                    "label": label,
+                    "category": None,
+                    "data_source": "trec07p",
+                    "split": split
+                }
+                row = json.dumps(row, ensure_ascii=False)
+                fout.write("{}\n".format(row))
+
+    return
+
+
+if __name__ == '__main__':
+    main()
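Two notes on the script above. The encoding-fallback loop has no break after a successful read, so each mail is re-read with every later encoding and the last successful decode is the one kept; "ANSI" also appears to be available only as an alias of the Windows mbcs codec, so on other platforms the open() call may raise an uncaught LookupError. As a quick check on the output, the sketch below (assuming the default --output_file location) tallies records per label and split; exact counts vary slightly between runs because the split assignment is random:

```python
# Sanity-check sketch: count records per (label, split) in the generated file.
# Assumes the default output path, data/trec07p.jsonl.
import json
from collections import Counter

counter = Counter()
with open("data/trec07p.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        counter[(row["label"], row["split"])] += 1

for (label, split), count in sorted(counter.items()):
    print(label, split, count)
```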
examples/preprocess/samples_count.py
CHANGED
@@ -15,7 +15,8 @@ dataset_dict = load_dataset(
     # name="sms_spam_collection",
     # name="spam_message",
     # name="spam_message_lr",
-    name="
+    name="trec07p",
+    # name="youtube_spam_collection",
     split=None,
     cache_dir=None,
     download_mode=DownloadMode.FORCE_REDOWNLOAD
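The first positional argument to load_dataset is truncated in this diff, so the sketch below assumes the subset is loaded through the spam_detect.py loading script changed at the end of this commit; treat that path as a placeholder rather than the repository's actual call:

```python
# Minimal sketch, assuming the spam_detect.py loading script at the repo root
# ("spam_detect.py" is a placeholder; the real first argument is truncated above).
from datasets import DownloadMode, load_dataset

dataset_dict = load_dataset(
    "spam_detect.py",
    name="trec07p",
    split=None,
    cache_dir=None,
    download_mode=DownloadMode.FORCE_REDOWNLOAD,
)
print(dataset_dict)  # shows the available splits and per-split sample counts
```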
spam_detect.py
CHANGED
@@ -20,6 +20,7 @@ _urls = {
     "spam_emails": "data/spam_emails.jsonl",
     "spam_message": "data/spam_message.jsonl",
     "spam_message_lr": "data/spam_message_lr.jsonl",
+    "trec07p": "data/trec07p.jsonl",
     "youtube_spam_collection": "data/youtube_spam_collection.jsonl",
 
 }
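Independently of how spam_detect.py consumes the new _urls entry, the registered file can also be inspected directly with the generic JSON-lines builder. The sketch below mirrors the jsonl schema produced by the preprocessing script, not the loading script's own logic:

```python
# Sketch: load data/trec07p.jsonl with the generic "json" builder and split it
# by the "split" field written by process_trec07p.py.
from datasets import load_dataset

raw = load_dataset("json", data_files="data/trec07p.jsonl", split="train")
subsets = {
    name: raw.filter(lambda row, name=name: row["split"] == name)
    for name in ("train", "validation", "test")
}
print({name: len(subset) for name, subset in subsets.items()})
```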