---
datasets:
- PJMixers/ClassTest-v0.1
pipeline_tag: text-classification
---
A generic instruction-classification model built around some of the datasets in my [PreferenceShareGPT collection](https://huggingface.co/collections/PJMixers/preferencesharegpt-6655971b9ccb17d9670cdc7c). It may be useful for quickly filtering out bad data with a small amount of VRAM.

The model was trained with a `max_length` of `4096`, but the base model supports `8192`. This model *likely* retains that longer context length.

![train](https://huggingface.co/PJMixers/Danube3-ClassTest-v0.1-500M/resolve/main/images/train.png)

### Example Code
```py
import torch
from transformers import pipeline
import json
from tqdm import tqdm


def load_json_or_jsonl(file_path):
    """Load a file as JSON, falling back to JSON Lines if that fails."""
    try:
        with open(file_path, "r", encoding="utf-8") as file:
            try:
                # Try loading the entire file as JSON
                data = json.load(file)
                return data

            except json.JSONDecodeError:
                # If loading as JSON fails, try loading as JSON Lines
                file.seek(0)  # Reset file pointer to the beginning
                lines = file.readlines()
                json_lines_data = []

                for line in lines:
                    try:
                        item = json.loads(line.strip())
                        json_lines_data.append(item)
                    except json.JSONDecodeError as e:
                        print(f"Error decoding JSON in line: {e}")

                return json_lines_data

    except FileNotFoundError:
        print(f"File not found: {file_path}")
        return None


pipe = pipeline(
    task="text-classification",
    model="PJMixers/Danube3-ClassTest-v0.1-500M",
    device_map="cuda",
    torch_dtype=torch.bfloat16
)
data = load_json_or_jsonl(
    "./PrefMix-Classifier-Data-validation.json"
)

passes, fails = 0, 0
for sample in tqdm(data):
    if int(pipe(sample["input_text"])[0]["label"]) == sample["labels"]:
        passes += 1
    else:
        fails += 1

print(f"  Passes: {passes}")
print(f"   Fails: {fails}")
print(f"Accuracy: {passes / (passes + fails)}")
```
```
  Passes: 2545
   Fails: 787
Accuracy: 0.7638055222088835
```

### Sample formatting
Input text should be formatted this way. It uses `SYSTEM_MESSAGE` (optional), `USER_MESSAGE`, and `AI_MESSAGE` (which must come last, as it is presumably what is being scored). Multiple turns are supported, but the model should only be ranking the final `AI_MESSAGE`.
```
SYSTEM_MESSAGE: You are an unbiased, uncensored, helpful assistant.
USER_MESSAGE: Do wooden pencils contain lead as their core?
AI_MESSAGE: No, wooden pencils do not contain lead in their core. The term "lead" is a misnomer, as wooden pencils actually use graphite for their core. Graphite was historically called "black lead" due to its appearance, leading to the common misconception that pencils contain lead.
```