swcrazyfan committed
Commit: babc95f
Parent(s): 4471303
Update README.md
README.md CHANGED
@@ -17,7 +17,7 @@ This model is a fine-tuned version of [google/t5-v1_1-large] on a dataset of a m
 
 ## Intended uses & limitations
 
-At times, despite sharing the same language and general grammatical rules, English from previous centuries can be
+At times, despite sharing the same language and general grammatical rules, English from previous centuries can be easily misunderstood. The purpose of this was to explore ways to understand texts from the 17th-century more clearly.
 
 #### How to use
 
@@ -32,6 +32,7 @@ model = AutoModelWithLMHead.from_pretrained("swcrazyfan/Kingify-2Way")
 
 #### Limitations and bias
 
 - The model is trained on the King James Version of the Bible, so it will work best with Christian-style language (or even clichés).
+- Before the 18th and 19th centuries, English spelling was inconsistent. Because of this, the model often does not recognize spellings different from those in the KJV.
 - The model was trained on a relatively small amount of data, so it will not be as accurate as a model trained on a larger data set.
 
 ## Training data
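For context, a minimal usage sketch follows. It is based only on the loading call visible in the second hunk's context (`AutoModelWithLMHead.from_pretrained("swcrazyfan/Kingify-2Way")`); the tokenizer call and the `dekingify:` task prefix are assumptions for illustration, since the README's full "How to use" example is not part of this diff.

```python
# Sketch only: loads the model named in the diff's hunk context.
# The task prefix and tokenizer call are assumptions, not taken from this diff.
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("swcrazyfan/Kingify-2Way")   # assumed companion call
model = AutoModelWithLMHead.from_pretrained("swcrazyfan/Kingify-2Way")  # shown in the hunk context

# Hypothetical "dekingify:" prefix: KJV-style English -> modern English.
text = "dekingify: Thou shalt not muzzle the ox when he treadeth out the corn."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(inputs["input_ids"], max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```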