Improving Black-box Robustness with In-Context Rewriting
This model is a fine-tuned version of google-bert/bert-base-uncased on an unspecified dataset; its per-epoch results on the evaluation set are reported in the training results table below.
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
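The intended use is not documented, but a checkpoint like this is typically loaded for sequence classification as sketched below. The repository id, the binary label setup, and the example input are placeholders, not details taken from this card.

```python
# Minimal usage sketch, assuming this checkpoint is a binary sequence classifier.
# MODEL_ID is a placeholder; the actual repository id is not stated in this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "your-org/bert-base-uncased-finetuned"  # placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

text = "Example input sentence to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
pred = int(probs.argmax())
print(f"Predicted class {pred} (p={probs[pred]:.3f})")
```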
The hyperparameters used during training are not listed here (more information needed). Per-epoch training results on the evaluation set:
| Training Loss | Epoch | Step | F1 | Accuracy | Validation Loss |
|---|---|---|---|---|---|
| No log | 1.0 | 375 | 0.6887 | 0.8526 | 0.3768 |
| 0.5463 | 2.0 | 750 | 0.7002 | 0.8500 | 0.3333 |
| 0.27 | 3.0 | 1125 | 0.6929 | 0.8417 | 0.3775 |
| 0.1582 | 4.0 | 1500 | 0.7082 | 0.8564 | 0.5663 |
| 0.1582 | 5.0 | 1875 | 0.6730 | 0.8230 | 0.7996 |
| 0.0617 | 6.0 | 2250 | 0.6581 | 0.8051 | 1.1498 |
| 0.0267 | 7.0 | 2625 | 0.7068 | 0.8555 | 0.9111 |
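Since the hyperparameter list is missing, the sketch below only illustrates, under placeholder settings, how per-epoch F1, accuracy, and validation loss like those in the table can be produced with the transformers Trainer. The toy dataset, learning rate, batch size, label count, and macro-averaged F1 are all assumptions, not values from this card.

```python
# Sketch of a Trainer setup that reports per-epoch F1, accuracy, and validation loss
# as in the table above. Every hyperparameter value below is a placeholder: the real
# values are not listed in this card, and the toy dataset stands in for the
# undocumented training/evaluation data.
import numpy as np
from datasets import Dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=2  # label count is an assumption
)

# Toy stand-in data; replace with the real (undocumented) dataset.
raw = Dataset.from_dict({"text": ["good example", "bad example"] * 8,
                         "label": [1, 0] * 8})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32)

data = raw.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Macro F1 is an assumption; the card does not say how F1 was averaged.
    return {"f1": f1_score(labels, preds, average="macro"),
            "acc": accuracy_score(labels, preds)}

args = TrainingArguments(
    output_dir="bert-finetuned",       # placeholder
    eval_strategy="epoch",             # per-epoch evaluation, as in the table
                                       # (named evaluation_strategy in older transformers)
    logging_strategy="epoch",
    learning_rate=2e-5,                # placeholder value
    per_device_train_batch_size=16,    # placeholder value
    num_train_epochs=7,                # matches the seven epochs shown above
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=data,
    eval_dataset=data,                 # the real evaluation split is not documented
    compute_metrics=compute_metrics,
)
trainer.train()
```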
Base model: google-bert/bert-base-uncased