Commit 686d7f7
Parent(s): 49f2841
Update README.md
README.md CHANGED
@@ -36,13 +36,13 @@ datasets:

 license: mit
 ---
-# Model Card for the test-version of
+# Model Card for the test-version of instructionRoberta-base for Bertology

 A minimalistic instruction model with an already well-analysed and pretrained encoder like RoBERTa.
 So we can research [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).

-The
+The training code is released at the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert).
 We used the Hugging Face API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder-Models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose.

 ## Run the model with a longer output
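The warm-starting approach the README points to is available through the public `transformers` API. Below is a minimal sketch, assuming `roberta-base` for both encoder and decoder and illustrative generation settings; it is not the released training code from the instructionBERT repository.

```python
# Minimal sketch of warm-starting an encoder-decoder model from RoBERTa
# checkpoints, following the Hugging Face warm-starting blog post.
# Checkpoint names and generation settings are illustrative assumptions.
from transformers import AutoTokenizer, EncoderDecoderModel

# Encoder and decoder both start from roberta-base; the decoder's
# cross-attention weights are freshly initialized and need fine-tuning.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "roberta-base", "roberta-base"
)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# The seq2seq wrapper needs explicit special-token ids before generation.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# "Run the model with a longer output": raise the generation budget
# with max_new_tokens; a larger budget allows a longer completion.
inputs = tokenizer("Describe the BERT architecture.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that a freshly warm-started model will only produce meaningful text after instruction fine-tuning; the sketch shows the API surface, not a trained checkpoint.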