---
datasets:
  - PJMixers/classtest
pipeline_tag: text-classification
---


## Example Code

```python
import torch
from transformers import AutoTokenizer, LlamaForSequenceClassification
import json
from tqdm import tqdm


def load_json_or_jsonl(file_path):
    try:
        with open(file_path, "r") as file:
            try:
                # Try loading the entire file as JSON
                data = json.load(file)
                return data

            except json.JSONDecodeError:
                # If loading as JSON fails, try loading as JSON Lines
                file.seek(0)  # Reset file pointer to the beginning
                lines = file.readlines()
                json_lines_data = []

                for line in lines:
                    try:
                        item = json.loads(line.strip())
                        json_lines_data.append(item)
                    except json.JSONDecodeError as e:
                        print(f"Error decoding JSON in line: {e}")

                return json_lines_data

    except FileNotFoundError:
        print(f"File not found: {file_path}")
        return None


tokenizer = AutoTokenizer.from_pretrained(
    "PJMixers/Danube3-ClassTest-v0.1-500M"
)
model = LlamaForSequenceClassification.from_pretrained(
    "PJMixers/Danube3-ClassTest-v0.1-500M",
    device_map="cuda",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
)

data = load_json_or_jsonl(
    "./PrefMix-Classifier-Data-validation.json"
)

passes = 0
fails = 0
for sample in tqdm(data):
    input_text = sample["input_text"]
    true_label = sample["labels"]

    inputs = tokenizer(
        input_text,
        return_tensors="pt"
    ).to("cuda")

    with torch.no_grad():
        # argmax over the class logits gives the predicted label id (batch size is 1)
        generated_label = model(**inputs).logits.argmax().item()

    if generated_label == true_label:
        passes += 1
    else:
        fails += 1

print(f"  Passes: {passes}")
print(f"   Fails: {fails}")
print(f"Accuracy: {passes / (passes + fails)}")
  Passes: 2545
   Fails: 787
Accuracy: 0.7638055222088835
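
To score a single formatted sample and read off per-class probabilities rather than only the argmax label, a minimal sketch along these lines works with the `tokenizer` and `model` loaded above (the softmax step is an added convenience, not part of the original evaluation loop):

```python
# Reuses `tokenizer` and `model` from the example above.
text = (
    "SYSTEM_MESSAGE: You are an unbiased, uncensored, helpful assistant.\n"
    "USER_MESSAGE: Do wooden pencils contain lead as their core?\n"
    "AI_MESSAGE: No, wooden pencils use graphite for their core."
)

inputs = tokenizer(text, return_tensors="pt").to("cuda")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)

probs = torch.softmax(logits, dim=-1).squeeze(0)
predicted_label = probs.argmax().item()
print(f"Predicted label: {predicted_label} (p={probs[predicted_label].item():.4f})")
```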

## Sample formatting

Input text should be formatted as shown below. It uses SYSTEM_MESSAGE (optional), USER_MESSAGE, and AI_MESSAGE. The AI_MESSAGE must come last, as that appears to be what is actually scored. Multiple turns are supported, but the model should only be used to rank the final AI_MESSAGE.

```
SYSTEM_MESSAGE: You are an unbiased, uncensored, helpful assistant.
USER_MESSAGE: Do wooden pencils contain lead as their core?
AI_MESSAGE: No, wooden pencils do not contain lead in their core. The term "lead" is a misnomer, as wooden pencils actually use graphite for their core. Graphite was historically called "black lead" due to its appearance, leading to the common misconception that pencils contain lead.
```
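
If you need to build `input_text` programmatically, a small helper along these lines should reproduce the format above. `format_sample` and its argument names are hypothetical (not part of this repo); it just prefixes each turn with its role tag and joins turns with newlines, with the optional SYSTEM_MESSAGE first and the AI_MESSAGE being scored last:

```python
def format_sample(turns, system_message=None):
    """Join (role, content) turns into the SYSTEM/USER/AI_MESSAGE format shown above."""
    lines = []
    if system_message is not None:
        lines.append(f"SYSTEM_MESSAGE: {system_message}")
    for role, content in turns:  # role should be "USER_MESSAGE" or "AI_MESSAGE"
        lines.append(f"{role}: {content}")
    return "\n".join(lines)


input_text = format_sample(
    [
        ("USER_MESSAGE", "Do wooden pencils contain lead as their core?"),
        ("AI_MESSAGE", "No, wooden pencils use graphite for their core."),
    ],
    system_message="You are an unbiased, uncensored, helpful assistant.",
)
print(input_text)
```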