Removed asterisk and added space after colon in lists
README.md (CHANGED)
```diff
@@ -27,11 +27,11 @@ The base model for fine-tuning was the [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
 
 ### Model Description
 
-- Developed by
-- Model type
-- Language(s) (NLP)
-- License
-- Finetuned from model
+- Developed by: Zakia
+- Model type: Text Classification
+- Language(s) (NLP): English
+- License: Apache 2.0
+- Finetuned from model: distilbert-base-uncased
 
 ## Uses
 
```
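The filled-in metadata pins down the setup: distilbert-base-uncased fine-tuned for binary English text classification. As a minimal sketch under that assumption (the diff does not show the card's own code, and the label names are borrowed from the Glossary hunk further down), the base checkpoint could be loaded for this task like so:

```python
# Minimal sketch, assuming the card's setup: distilbert-base-uncased
# configured for binary text classification. The id2label names mirror
# the card's glossary entries; they are an assumption, not shown in the diff.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,
    id2label={0: "low_quality_review", 1: "high_quality_review"},
    label2id={"low_quality_review": 0, "high_quality_review": 1},
)
```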
```diff
@@ -104,9 +104,9 @@ Train dataset high_quality_review counts: Counter({0: 2120, 1: 2120})
 
 #### Training Hyperparameters
 
-- Learning Rate:
-- Batch Size
-- Epochs
+- Learning Rate: 3e-5
+- Batch Size: 16
+- Epochs: 1
 
 ## Evaluation
 
```
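The hyperparameters added here map directly onto transformers' `TrainingArguments`. A sketch under that assumption (the hunk context also shows the training set was balanced, 2120 examples per class):

```python
# Sketch of TrainingArguments matching the values the card now lists.
# output_dir is an illustrative placeholder, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",          # assumed; not specified in the diff
    learning_rate=3e-5,              # Learning Rate: 3e-5
    per_device_train_batch_size=16,  # Batch Size: 16
    num_train_epochs=1,              # Epochs: 1
)
```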
```diff
@@ -176,8 +176,8 @@ Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.
 
 ## Glossary
 
-- Low Quality Review:
-- High Quality Review
+- Low Quality Review: high_quality_review=0
+- High Quality Review: high_quality_review=1
 
 ## More Information
 
```
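The glossary now makes the label encoding explicit: the binary `high_quality_review` flag is 0 for a low-quality review and 1 for a high-quality one. A small illustrative helper (not from the card) showing how a predicted class index maps back to these terms:

```python
# Illustrative helper: map the high_quality_review flag from the card's
# glossary to its human-readable term.
GLOSSARY = {0: "Low Quality Review", 1: "High Quality Review"}

def describe(high_quality_review: int) -> str:
    """Return the glossary term for a predicted class index."""
    return GLOSSARY[high_quality_review]

assert describe(0) == "Low Quality Review"
assert describe(1) == "High Quality Review"
```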