Update README.md
Try out "ScamLLM" using the Inference API. Our model classifies prompts with "LABEL_1" indicating a malicious prompt.

## Dataset Details

The dataset used to train this model was created from malicious prompts generated by GPT-4.
Because the prompts describe active vulnerabilities that are still under review, our dataset of malicious prompts is available only upon request at this stage; a public release is planned for May 2024.

## Training Details

The model was trained using `RobertaForSequenceClassification.from_pretrained`, with both the model and the tokenizer initialized from `roberta-base`.
We fine-tuned for 10 epochs with a learning rate of 2e-5, using the AdamW optimizer.
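
The card does not include training code, but the sketch below is one way to reproduce this setup with the `transformers` Trainer (whose default optimizer is AdamW). The tiny inline dataset and the batch size are placeholders: the real dataset is request-only and the batch size is not stated in the card.

```python
from datasets import Dataset
from transformers import (
    RobertaForSequenceClassification,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Model and tokenizer both come from roberta-base, as described above.
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Toy stand-in for the (request-only) dataset: text plus a 0/1 label,
# where 1 marks a malicious prompt.
raw = Dataset.from_dict({
    "text": ["A harmless example prompt.", "An example malicious prompt."],
    "label": [0, 1],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_dataset = raw.map(tokenize, batched=True)

# Hyperparameters from the card: 10 epochs, learning rate 2e-5; Trainer's
# default optimizer is AdamW. The batch size here is a placeholder.
args = TrainingArguments(
    output_dir="scamllm-checkpoints",
    num_train_epochs=10,
    learning_rate=2e-5,
    per_device_train_batch_size=16,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```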

## Inference

There are multiple ways to test this model. The simplest is the Inference API; alternatively, run a "text-classification" pipeline locally, as below:

```python
from transformers import pipeline

# Load the classifier; "phishbot/ScamLLM" is assumed to be this repository's id.
classifier = pipeline("text-classification", model="phishbot/ScamLLM")

prompt = ["Your Sample Sentence or Prompt...."]

model_outputs = classifier(prompt)
print(model_outputs[0])
```
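
For the Inference API route mentioned above, a minimal sketch against the hosted endpoint could look like the following; the repository id `phishbot/ScamLLM` and the `HF_API_TOKEN` environment variable are assumptions, so adjust them to your setup.

```python
import os

import requests

# Assumed repository id for this model; adjust if it differs.
API_URL = "https://api-inference.huggingface.co/models/phishbot/ScamLLM"
headers = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "Your Sample Sentence or Prompt...."},
)
print(response.json())
```

As with the pipeline above, each result carries a `label` ("LABEL_0" or "LABEL_1" under the default label mapping) and a confidence `score`.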