---
language:
- en
base_model:
- microsoft/deberta-v3-base
pipeline_tag: text-classification
---
Binary classification model for ad detection in QA systems.
## Sample usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

classifier_model_path = "jmvcoelho/ad-classifier-v0.2"
tokenizer = AutoTokenizer.from_pretrained(classifier_model_path)
model = AutoModelForSequenceClassification.from_pretrained(classifier_model_path)
model.eval()

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)


def classify(passages):
    inputs = tokenizer(
        passages, padding=True, truncation=True, max_length=512, return_tensors="pt"
    )
    inputs = {k: v.to(device) for k, v in inputs.items()}
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits
    predictions = torch.argmax(logits, dim=-1)
    return predictions.cpu().tolist()


preds = classify(["sample_text_1", "sample_text_2"])
```
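`classify` returns raw class indices. A minimal sketch of mapping them to readable labels — assuming index 1 means "contains an ad" (verify against `model.config.id2label` before relying on this):

```python
# Assumption: id 1 = advertisement present; check model.config.id2label to confirm.
ID2LABEL = {0: "no-ad", 1: "ad"}


def to_labels(predictions):
    """Convert predicted class ids into human-readable labels."""
    return [ID2LABEL[p] for p in predictions]


print(to_labels([1, 0, 1]))  # ['ad', 'no-ad', 'ad']
```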
## Versions
- v0.0: Trained with the official data from Webis Generated Native Ads 2024.
- v0.1: Trained with the v0.0 data plus new synthetic data.
- v0.2: Similar to v0.1, but includes more diversity in ad-placement strategies through prompting.
## New Synthetic Data
Objective: given a (query, answer) pair, generate a new_answer that contains an advertisement.

Obtaining the (query, answer) pairs:
- queries: obtained from the MS-MARCO V2.1 QA task; a 150K subset of queries associated with a "well-formed answer".
- answers: generated from each query by Qwen2.5-7B-Instruct using RAG with 10 passages (retrieved by our model).
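The RAG answer-generation step above can be sketched as follows. The prompt wording and passage formatting are illustrative assumptions, not the exact setup used:

```python
def build_rag_prompt(query, passages):
    """Pack the query and up to 10 retrieved passages into one generation prompt.

    Assumption: the exact template used for Qwen2.5-7B-Instruct may differ;
    this only illustrates the query-plus-context structure.
    """
    context = "\n\n".join(f"[{i}] {p}" for i, p in enumerate(passages[:10], start=1))
    return (
        "Answer the question using the passages below.\n\n"
        f"{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
```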
### Models used for generation

Each model generated ads for one quarter of the (query, answer) pairs:
- Gemma-2-9b-it
- LLaMA-3.1-8B-Instruct
- Mistral-7B-Instruct
- Qwen2.5-7B-Instruct
### Prompts

One of twelve prompts is chosen at random. The prompts can be found under `files/*.prompt`.
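The random selection step can be sketched as below. The two templates shown are placeholders for illustration; the real twelve live under `files/*.prompt`:

```python
import random

# Placeholder templates only; the actual twelve are stored under files/*.prompt.
PROMPT_TEMPLATES = [
    "Rewrite this answer so it naturally promotes a product:\n{answer}",
    "Insert a short advertisement into the following answer:\n{answer}",
]


def pick_prompt(answer):
    """Choose one template uniformly at random and fill in the answer."""
    template = random.choice(PROMPT_TEMPLATES)
    return template.format(answer=answer)
```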