Update README.md
@@ -38,7 +38,23 @@ This model was trained on 115,943 manually annotated sentences to classify text
@@ -112,14 +128,24 @@ Overall count: 8,330

## Intended uses & limitations

```python
from transformers import pipeline
import pandas as pd

# Load the fine-tuned classifier from the Hugging Face Hub
classifier = pipeline(
    task="text-classification",
    model="niksmer/PolicyBERTa-7d")

# Load the text data you want to classify
text = pd.read_csv("text.csv")

# Inference: the pipeline expects a string or a list of strings
# (this assumes the CSV has a "text" column)
output = classifier(text["text"].tolist())

# Print output
pd.DataFrame(output).head()
```

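The pipeline returns one `{'label', 'score'}` dict per input sentence, which is why the output converts cleanly to a `DataFrame`. A minimal post-processing sketch; the label strings and the 0.7 threshold below are illustrative assumptions, not necessarily the model's actual `id2label` names:

```python
import pandas as pd

# Hypothetical pipeline output for three sentences; real label names
# come from the model's id2label config and may differ.
output = [
    {"label": "economy", "score": 0.91},
    {"label": "welfare and quality of life", "score": 0.77},
    {"label": "external relations", "score": 0.64},
]

df = pd.DataFrame(output)

# Keep only confident predictions (the threshold is an arbitrary choice)
confident = df[df["score"] >= 0.7]
print(confident)
```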
## Training and evaluation data

### Training hyperparameters

The following hyperparameters were used during training:

```
training_args = TrainingArguments(
    warmup_steps=0,
    weight_decay=0.1,
    learning_rate=1e-05,
    fp16=True,
    evaluation_strategy="epoch",
    num_train_epochs=5,
    per_device_train_batch_size=16,
    overwrite_output_dir=True,
    per_device_eval_batch_size=16,
    save_strategy="no",
    logging_dir='logs',
    logging_strategy='steps',
    logging_steps=10,
    push_to_hub=True,
    hub_strategy="end")
```
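Since no gradient accumulation is configured, the number of optimizer steps per epoch follows directly from `per_device_train_batch_size=16`. A small sketch of that arithmetic; the training-set size and device count here are hypothetical, not taken from this card:

```python
import math

def steps_per_epoch(n_examples, per_device_batch_size=16, n_devices=1,
                    grad_accum=1):
    # Effective batch size = per-device batch * devices * accumulation steps
    effective = per_device_batch_size * n_devices * grad_accum
    return math.ceil(n_examples / effective)

# Hypothetical training-set size, for illustration only
print(steps_per_epoch(100_000))      # 6250 optimizer steps per epoch
print(5 * steps_per_epoch(100_000))  # total steps across num_train_epochs=5
```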

### Training results
