CryptoBERT is a pre-trained NLP model to analyse the language and sentiments of cryptocurrency-related social media posts and messages.
## Classification Training

The model was trained on the following labels: "Bearish" : 0, "Neutral" : 1, "Bullish" : 2

CryptoBERT's sentiment classification head was fine-tuned on a balanced dataset of 2M labelled StockTwits posts, sampled from [ElKulako/stocktwits-crypto](https://huggingface.co/datasets/ElKulako/stocktwits-crypto).

CryptoBERT was trained with a max sequence length of 128. Technically, it can handle sequences of up to 514 tokens; however, going beyond 128 is not recommended.
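The label scheme above can be sketched in plain Python. This is an illustrative helper, not part of the model's own API: `decode_prediction` is a hypothetical function that applies a softmax to the three class logits the classification head would emit and maps the argmax index back to its label name, using the id-to-label mapping stated in this README.

```python
import math

# Label ids used to train CryptoBERT's classification head (from this README)
ID2LABEL = {0: "Bearish", 1: "Neutral", 2: "Bullish"}
MAX_SEQ_LEN = 128  # recommended maximum sequence length (hard limit: 514 tokens)

def decode_prediction(logits):
    """Map three raw classifier logits to (label, probability).

    Illustrative only: softmax over the class logits, then argmax.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift by max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[best], probs[best]

# Example: logits favouring class 2 map to the "Bullish" label
label, prob = decode_prediction([-1.2, 0.3, 2.1])
```

In practice the logits would come from running a tokenized post (truncated to `MAX_SEQ_LEN`) through the model; the helper only shows how the integer class ids correspond to the sentiment labels.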