
WebSector-Flexible

Model description

The WebSector-Flexible model is a RoBERTa-based transformer designed for high-recall website classification into one of ten broad sectors. It is part of the WebSector framework, which introduces a Single Positive Label (SPL) paradigm for multi-label classification using only the primary sector of websites. The flexible mode of this model focuses on maximizing recall by identifying both primary and secondary sectors, making it ideal for exploratory tasks or when it's critical to capture all possible sector associations.

Intended uses & limitations

Intended uses:

  • Website categorization: Classifies websites into multiple sectors for general exploration or broader categorization tasks.
  • Research: Suitable for research on multi-sector classification or multi-label classification tasks where label dependencies are important.
  • Content management: Useful for platforms that need to categorize content across multiple industries or sectors.

Limitations:

  • Single Positive Label: Only primary sector labels are observable during training, which might limit performance when predicting secondary sectors.
  • Flexible mode: Focuses on recall, which may lead to over-predicting some sectors in websites with ambiguous content.
  • Dataset imbalance: Some sectors are underrepresented, which may affect performance in predicting those categories.

How to use

To use this model with Hugging Face's transformers library:

from transformers import pipeline

classifier = pipeline("text-classification", model="Shahriar/WebSector-Flexible")
result = classifier("Your website content or URL here")
print(result)

By default, this returns the single highest-scoring sector for the given content.
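
Because flexible mode is meant to surface secondary sectors as well as the primary one, you may want scores for all ten sectors at once. A minimal sketch using the pipeline's top_k argument (the 0.3 cutoff is an illustrative assumption, not a threshold from the WebSector paper):

from transformers import pipeline

classifier = pipeline("text-classification", model="Shahriar/WebSector-Flexible")

# top_k=None returns a score for every sector, sorted by confidence
results = classifier("Your website content or URL here", top_k=None)

# keep every sector above an illustrative cutoff (0.3 is an assumption)
likely_sectors = [r for r in results if r["score"] > 0.3]
print(likely_sectors)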

Dataset

The model was trained on the WebSector Corpus, which consists of 254,702 websites categorized into 10 broad sectors. The dataset is split as follows:

  • Training set: 109,476 websites
  • Validation set: 27,370 websites
  • Test set: 58,649 websites

The 10 sectors used for classification are:

  • Finance, Marketing & HR
  • Information Technology & Electronics
  • Consumer & Supply Chain
  • Civil, Mechanical & Electrical
  • Medical
  • Sports, Media & Entertainment
  • Education
  • Government, Defense & Legal
  • Travel, Food & Hospitality
  • Non-Profit
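
To check how these sector names map onto the model's output indices, you can inspect the label metadata shipped with the checkpoint (this assumes the repository includes the standard id2label mapping in its config):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("Shahriar/WebSector-Flexible")
print(config.id2label)  # mapping from class index to sector name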

Training Procedure

Hyperparameters (wired together in the sketch after this list):

  • Number of epochs: 7
  • Batch size: 8
  • Learning rate: 5 × 10⁻⁶
  • Weight decay: 0.1
  • LoRA rank: 128
  • LoRA alpha: 512
  • Dropout rate: 0.1
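
As a rough illustration of how these hyperparameters fit together, the sketch below wires them into a LoRA fine-tuning setup with the peft and transformers libraries. The base checkpoint (roberta-base), target modules, and output directory are assumptions; the exact training script is not published with this card.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

# base RoBERTa classifier with one output per sector (10 sectors);
# roberta-base is an assumption consistent with the 125M-parameter size
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=10
)

# LoRA settings from the list above; target_modules is an assumption
lora_config = LoraConfig(
    r=128,
    lora_alpha=512,
    lora_dropout=0.1,
    target_modules=["query", "value"],
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora_config)

# optimizer settings from the list above
training_args = TrainingArguments(
    output_dir="websector-flexible",  # assumed path
    num_train_epochs=7,
    per_device_train_batch_size=8,
    learning_rate=5e-6,
    weight_decay=0.1,
)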

Training Setup:

  • Hardware: Four GPUs (two NVIDIA RTX A5000 and two NVIDIA TITAN RTX) were used for distributed training.
  • Software: The model was trained with PyTorch, using the Hugging Face Transformers library for the transformer implementation.
  • Strategy: Training was distributed across the four GPUs, and the checkpoint with the lowest validation loss was selected.

Evaluation

The model was evaluated on the WebSector Corpus using metrics appropriate for multi-label classification:

  • Top-1 Recall: 68%
  • Top-3 Recall: 85%
  • Recall: 86%
  • Precision: 68%

These numbers reflect the flexible mode's emphasis on recall: it captures multiple relevant sectors per website, at the cost of some precision (68%).
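
For reference, top-k recall under the Single Positive Label setup reduces to checking whether a site's primary sector appears among the model's k highest-scoring predictions. A minimal sketch with NumPy (array names and the toy values are illustrative):

import numpy as np

def top_k_recall(scores, labels, k):
    """scores: (n_samples, n_sectors) predicted probabilities;
    labels: (n_samples,) index of each site's primary sector."""
    top_k = np.argsort(scores, axis=1)[:, -k:]        # indices of the k best sectors
    hits = np.any(top_k == labels[:, None], axis=1)   # primary sector among the top k?
    return hits.mean()

# toy check: primary sector 2 is in the top 3 but not the top 1
scores = np.array([[0.1, 0.3, 0.25, 0.35, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
print(top_k_recall(scores, np.array([2]), k=1))  # 0.0
print(top_k_recall(scores, np.array([2]), k=3))  # 1.0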

Ethical Considerations

  • Privacy Enforcement: The model can assist in classifying websites into sectors relevant to privacy regulations like CCPA or HIPAA.
  • Bias: As the model was trained on self-declared sector labels, there is potential for bias due to inaccurate or incomplete labeling.

Citation

If you use this model in your research, please cite the following paper:

@article{?,
  title={WebSector: A New Insight into Multi-Sector Website Classification Using Single Positive Labels},
  author={Shayesteh, Shahriar and Srinath, Mukund and Matheson, Lee and Schaub, Florian and Giles, C. Lee and Wilson, Shomir},
  journal={?},
  year={?},
}