
Model Card for Spam Detection Model

This model card describes a spam detection model trained on the SetFit/enron_spam and Deysi/spam-detection-dataset datasets from Hugging Face. The model aims to classify emails or text messages as spam or not spam (ham) with high accuracy, leveraging the BERT architecture for natural language processing tasks.

Model Details

Model Description

This spam detection model was developed to identify and filter out unwanted or harmful emails and messages automatically. It was fine-tuned on two significant datasets featuring real-world spam examples, demonstrating a high level of accuracy in distinguishing between spam and ham.

  • Developed by: AI and cybersecurity researchers.
  • Model type: BERT for Sequence Classification.
  • Language(s) (NLP): English.
  • License: Unknown.
  • Finetuned from model: bert-base-uncased.

Uses

Direct Use

The model is intended for direct use in email filtering systems, cybersecurity applications, and any platform needing to identify spam content within text data.

Out-of-Scope Use

The model is not designed for identifying phishing attempts, detecting malware within attachments, or other security threats beyond the scope of text-based spam content. It may not perform well on texts significantly different from those found in the training datasets, such as messages in languages other than English or texts from domains vastly different from emails.

Bias, Risks, and Limitations

The model's performance is highly dependent on the nature and diversity of the training data. There might be biases in the datasets that could affect the model's predictions, particularly for edge cases or underrepresented categories of spam. Users should be aware of these limitations and consider additional layers of security and content moderation according to their specific needs.

How to Get Started with the Model

To get started, load the pretrained model and tokenizer from the model repository (or a local directory containing the fine-tuned weights), use the tokenizer to preprocess your text, and pass the encoded inputs to the model to classify each text as spam or not spam, as in the sketch below.
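
A minimal inference sketch with the transformers library follows. The repository id cybert79/spamai is taken from this page, and the label mapping is not specified in the card, so substitute a local directory or check model.config.id2label as needed.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Assumed repository id; replace with the local directory holding the fine-tuned weights if needed.
model_id = "cybert79/spamai"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

texts = [
    "Congratulations! You have won a free cruise. Reply now to claim your prize!",
    "Hi team, the meeting has been moved to 3pm tomorrow.",
]

# Tokenize and classify; truncation keeps inputs within BERT's 512-token limit.
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
preds = logits.argmax(dim=-1)

# The id-to-label mapping is whatever was saved with the model; inspect it rather than assuming an order.
for text, pred in zip(texts, preds):
    print(model.config.id2label[pred.item()], "-", text[:60])
```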

Training Details

Training Data

The model was trained on the SetFit/enron_spam and Deysi/spam-detection-dataset datasets, which include a variety of spam and ham examples collected from real-world email data.
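
As a sketch, both corpora can be pulled from the Hugging Face Hub with the datasets library; their split and column names differ and should be inspected before preprocessing.

```python
from datasets import load_dataset

# Load the two spam corpora named in this card.
enron = load_dataset("SetFit/enron_spam")
deysi = load_dataset("Deysi/spam-detection-dataset")

# Splits and column names differ between the datasets; inspect them before merging or tokenizing.
print(enron)
print(deysi)
```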

Training Procedure

The model was fine-tuned for 3 epochs, achieving a final training loss of 0.0239 and an accuracy of 99.55% on the evaluation set. Training was conducted using a batch size of 8, with a learning rate of 2e-5.
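
A hedged sketch of that setup using the Trainer API is shown below; for brevity it fine-tunes on only one of the two corpora, and the split and column names ("train"/"test", "text"/"label") are assumptions rather than details taken from the original training script.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# One of the two corpora named in the card; combining both would require aligning their label columns.
ds = load_dataset("SetFit/enron_spam")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def tokenize(batch):
    # A "text" column and an integer "label" column are assumed to be present.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

encoded = ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="spam-bert",
    num_train_epochs=3,              # as reported in the card
    per_device_train_batch_size=8,   # as reported in the card
    learning_rate=2e-5,              # as reported in the card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
)
trainer.train()
print(trainer.evaluate())
```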

Evaluation

Testing Data, Factors & Metrics

The evaluation was performed on a test split from the datasets, focusing on the accuracy metric to assess the model's performance.
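
The accuracy metric can be computed by attaching a metric function to the Trainer; the sketch below uses the evaluate library and is not the author's original evaluation script.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair that transformers.Trainer passes to this hook.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=preds, references=labels)

# Pass compute_metrics=compute_metrics when constructing the Trainer, then:
#   metrics = trainer.evaluate()
#   print(metrics["eval_accuracy"], metrics["eval_loss"])
```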

Results

The model achieved an evaluation accuracy of 99.55% with an evaluation loss of 0.0448, indicating excellent performance in distinguishing between spam and ham messages.

Recommendations

Given the high accuracy and low loss, this model presents a robust solution for spam detection tasks. However, users are encouraged to assess the model's applicability to their specific use cases, considering potential biases and the model's limitations.

Model size: 109M parameters, stored as F32 tensors in Safetensors format.

Datasets used to train cybert79/spamai: SetFit/enron_spam and Deysi/spam-detection-dataset.