Fine-tuned ALBERT Model for Constructiveness Detection in Steam Reviews


Model Summary

This model is a fine-tuned version of albert-base-v2, designed to classify whether Steam game reviews are constructive or non-constructive. It was trained on the steam-reviews-constructiveness-binary-label-annotations-1.5k dataset, containing user-generated game reviews labeled as either:

  • 1 (constructive)
  • 0 (non-constructive)

The dataset features were combined into a single string per review, formatted as follows:

"Review: {review}, Playtime: {author_playtime_at_review}, Voted Up: {voted_up}, Upvotes: {votes_up}, Votes Funny: {votes_funny}" and then fed to the model accompanied by the respective constructive labels.

Concatenating the features into a single plain string in this way offers a good trade-off between simplicity and performance compared to more elaborate feature-encoding schemes.
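
For reference, here is a minimal sketch of how the individual dataset fields could be assembled into this input format. The helper function is hypothetical and not the original preprocessing code; it simply follows the template shown above.

# Hypothetical helper (not the original preprocessing code) that assembles the
# model's input string from the dataset fields named in the template above.
def build_input_text(review: str, author_playtime_at_review: int,
                     voted_up: bool, votes_up: int, votes_funny: int) -> str:
    return (
        f"Review: {review}, "
        f"Playtime: {author_playtime_at_review}, "
        f"Voted Up: {voted_up}, "
        f"Upvotes: {votes_up}, "
        f"Votes Funny: {votes_funny}"
    )

text = build_input_text(
    review="I think this is a great game but it still has some room for improvement.",
    author_playtime_at_review=12,
    voted_up=True,
    votes_up=1,
    votes_funny=0,
)
print(text)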

Intended Use

The model can be applied in any scenario where it is important to distinguish between helpful and unhelpful textual feedback, particularly in gaming communities or online reviews. Potential use cases include platforms such as Steam, Discord, or other community-driven feedback systems where assessing the quality of feedback is critical.

Limitations

  • Domain Specificity: The model was trained on Steam reviews and may not generalize well outside gaming.
  • Dataset Imbalance: The training data has an approximate 63.04%-36.96% split between non-constructive and constructive reviews.

Evaluation Results

The model was trained and evaluated using an 80/10/10 train/dev/test split and achieved the following metrics on the test set:

  • Accuracy: 0.796
  • Precision: 0.800
  • Recall: 0.818
  • F1-score: 0.794

These results indicate that the model identifies the correct label roughly 80% of the time.
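
As an illustration, the sketch below shows how metrics like these can be computed for binary predictions on a held-out test split. It uses scikit-learn and placeholder labels/predictions; it is not the original evaluation code.

# Minimal sketch (not the original evaluation code): computing accuracy,
# precision, recall and F1 for binary predictions on a held-out test set.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder test-set labels and model predictions
# (1 = constructive, 0 = non-constructive).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"Accuracy: {accuracy:.2f}, Precision: {precision:.2f}, "
      f"Recall: {recall:.2f}, F1: {f1:.2f}")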


How to Use

Hugging Face Space

Explore and test the model interactively on its Hugging Face Space.

Transformers Library

To use the model programmatically, run the following Python snippet:

from transformers import pipeline
import torch

# Run on GPU if available; use float16 there to save memory, float32 on CPU.
device = 0 if torch.cuda.is_available() else -1
torch_d_type = torch.float16 if torch.cuda.is_available() else torch.float32

base_model_name = "albert-base-v2"
finetuned_model_name = "abullard1/albert-v2-steam-review-constructiveness-classifier"

# Text-classification pipeline using the fine-tuned model with the base ALBERT tokenizer.
# top_k=None returns scores for both labels; inputs longer than 512 tokens are truncated.
classifier = pipeline(
    task="text-classification",
    model=finetuned_model_name,
    tokenizer=base_model_name,
    device=device,
    top_k=None,
    truncation=True,
    max_length=512,
    torch_dtype=torch_d_type)

# The input follows the same feature-concatenation format used during training.
review = "Review: I think this is a great game but it still has some room for improvement., Playtime: 12, Voted Up: True, Upvotes: 1, Votes Funny: 0"
result = classifier(review)
print(result)
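
Because top_k=None returns scores for every label, the result is a list (one entry per input) of label/score dictionaries. A short follow-up example of picking the highest-scoring label from that output:

# Pick the highest-scoring label for the single review above.
scores = result[0]
best = max(scores, key=lambda item: item["score"])
print(f"Predicted label: {best['label']} (confidence: {best['score']:.3f})")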

License

This model is licensed under the MIT License, allowing open and flexible use of the model for both academic and commercial purposes.
