---
language:
- en
datasets:
- imdb
metrics:
- accuracy
---

# bert-imdb-1hidden

## Model description

A `bert-base-uncased` model was restricted to 1 hidden layer and fine-tuned for sequence classification on the imdb dataset loaded using the `datasets` library.

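The card doesn't spell out how the single hidden layer was obtained. A minimal sketch of one plausible way to do it with `transformers`, assuming the layer count is simply overridden in the config before the pretrained weights are loaded (only the first encoder layer then receives pretrained weights; the remaining checkpoint layers are ignored with a warning):

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

# Assumption: the 1-hidden-layer variant is built by overriding
# num_hidden_layers; this is not the documented training recipe.
config = AutoConfig.from_pretrained(
    "bert-base-uncased", num_hidden_layers=1, num_labels=2
)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", config=config
)
```
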
## Intended uses & limitations

#### How to use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

pretrained = "lannelin/bert-imdb-1hidden"

tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AutoModelForSequenceClassification.from_pretrained(pretrained)

LABELS = ["negative", "positive"]

def get_sentiment(text: str):
    # Tokenize a single review and run it through the classifier.
    inputs = tokenizer.encode_plus(text, return_tensors='pt')
    output = model(**inputs)[0].squeeze()
    # argmax over the two logits picks the predicted label.
    return LABELS[output.argmax()]

print(get_sentiment("What a terrible film!"))
```

#### Limitations and bias

No special consideration has been given to limitations and bias.

Any bias present in the imdb dataset may be reflected in the model's output.

## Training data

Initialised from [bert-base-uncased](https://huggingface.co/bert-base-uncased).

Fine-tuned on [imdb](https://huggingface.co/datasets/imdb).

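The exact preprocessing isn't recorded in the card; a hedged sketch of loading and tokenizing imdb with the `datasets` library, assuming plain truncation/padding to the 512-token maximum listed under Training procedure:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Assumption: pad/truncate every review to the 512-token maximum.
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=512
    )

imdb = load_dataset("imdb")
encoded = imdb.map(tokenize, batched=True)
```
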
## Training procedure

The model was fine-tuned for 1 epoch with a batch size of 64, a learning rate of 5e-5, and a maximum sequence length of 512.

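The training script itself isn't part of this card; a sketch of an equivalent run with the `Trainer` API, using the hyperparameters above (the output directory and the reuse of `model`/`encoded` from the sketches earlier in this card are assumptions):

```python
from transformers import Trainer, TrainingArguments

# Hyperparameters taken from the card; everything else is illustrative.
args = TrainingArguments(
    output_dir="bert-imdb-1hidden",
    num_train_epochs=1,
    per_device_train_batch_size=64,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,                     # 1-hidden-layer model from the sketch above
    args=args,
    train_dataset=encoded["train"],  # tokenized imdb splits from the sketch above
    eval_dataset=encoded["test"],
)
trainer.train()
```
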
## Eval results

Accuracy on imdb test set: 0.87132
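
The evaluation script isn't included either; a sketch of how an accuracy figure like this could be reproduced on the imdb test split (the batch size and the plain argmax over logits are assumptions):

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

pretrained = "lannelin/bert-imdb-1hidden"
tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AutoModelForSequenceClassification.from_pretrained(pretrained).eval()

test = load_dataset("imdb", split="test")

correct = 0
with torch.no_grad():
    for i in range(0, len(test), 32):  # batch size of 32 is an assumption
        batch = test[i : i + 32]
        inputs = tokenizer(
            batch["text"], truncation=True, max_length=512,
            padding=True, return_tensors="pt"
        )
        preds = model(**inputs).logits.argmax(dim=-1)
        correct += (preds == torch.tensor(batch["label"])).sum().item()

print(f"accuracy: {correct / len(test):.5f}")
```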