deberta-v3-base-zyda-2-transformed-quality

This model is a fine-tuned version of agentlans/deberta-v3-base-zyda-2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2755
  • MSE: 0.2755

Model description

More information needed

Intended uses & limitations

The model assigns a single regression score to an input text; higher scores indicate higher-quality text, as the example output below illustrates.

Example use:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Select device, then load the model and tokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_name = "agentlans/deberta-v3-base-zyda-2-quality"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Function to perform inference
def predict_score(text):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True).to(device)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.item()

# Example usage
input_text = "This product is excellent and works perfectly!"
predicted_score = predict_score(input_text)
print(f"Predicted score: {predicted_score}")

Example output:

Text | Quality
Discover the secret to eternal youth with our revolutionary skincare product! | -1.93
Get rich quick with our foolproof investment strategy - no experience needed! | -0.89
Congratulations! You've won a $1,000 gift card! Click here to claim your prize! | -0.63
Act now! Limited time offer on miracle weight loss pills! | -0.33
Your computer is infected! Click here for a free scan and fix your issues now! | 0.22
Unlock the secrets of the universe with our exclusive online astronomy course! | 0.32
Earn money from home by participating in online surveys - sign up today! | 0.55
The Eiffel Tower can be 15 cm taller during the summer due to thermal expansion. | 1.29
Did you know? The average person spends 6 years of their life dreaming. | 1.52
Did you know that honey never spoils? Archaeologists have found pots of honey in ancient Egyptian tombs that are over 3,000 years old and still edible. | 2.68
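
To score many texts at once, the same model can be applied in batches. Below is a minimal sketch that reuses the model, tokenizer, and device loaded above; the predict_scores helper, the batch size, and max_length are illustrative choices, not part of the model card:

# Batched variant of predict_score (illustrative helper, not from the card)
def predict_scores(texts, batch_size=32, max_length=512):
    """Score a list of texts; returns one float per text."""
    scores = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        inputs = tokenizer(
            batch, return_tensors="pt", padding=True,
            truncation=True, max_length=max_length,
        ).to(device)
        with torch.no_grad():
            logits = model(**inputs).logits  # shape: (batch_size, 1)
        scores.extend(logits.squeeze(-1).tolist())
    return scores

texts = [
    "Act now! Limited time offer on miracle weight loss pills!",
    "The Eiffel Tower can be 15 cm taller during the summer due to thermal expansion.",
]
for text, score in zip(texts, predict_scores(texts)):
    print(f"{score:+.2f}  {text}")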

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
  • lr_scheduler_type: linear
  • num_epochs: 3.0
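
A sketch of how these settings map onto transformers.TrainingArguments, assuming the standard Trainer API; output_dir is a placeholder, and dataset loading and the MSE metric are omitted because the training data is not documented:

from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above;
# output_dir is a placeholder, not the author's actual path.
training_args = TrainingArguments(
    output_dir="deberta-v3-base-zyda-2-quality",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)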

Training results

Training Loss | Epoch | Step | Validation Loss | MSE
0.302 | 1.0 | 12649 | 0.2972 | 0.2972
0.2027 | 2.0 | 25298 | 0.2775 | 0.2775
0.1387 | 3.0 | 37947 | 0.2755 | 0.2755

Framework versions

  • Transformers 4.46.3
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3
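
The pinned versions above can be installed directly; a hedged example (the +cu124 PyTorch build needs the matching CUDA wheel index, while a plain pip install torch==2.5.1 suffices on CPU-only machines):

pip install transformers==4.46.3 datasets==3.1.0 tokenizers==0.20.3
pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124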