# Dataset for evaluation of (zero-shot) discourse marker prediction with language models
This is the Big-Bench version of our discourse marker prediction dataset, [Discovery](https://huggingface.co/datasets/discovery).
Design considerations:
<https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/discourse_marker_prediction>
GPT-2 achieves 15% zero-shot accuracy on this multiple-choice task, scored by language-modeling perplexity. For comparison, a fully supervised RoBERTa model, trained for one epoch with default hyperparameters on 10k examples per marker, reaches 30% accuracy with 174 possible markers. This shows that the task is hard for GPT-2 and that the model has not memorized the discourse markers, while higher accuracies remain achievable.
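The perplexity-based protocol can be sketched as follows: each candidate marker is spliced between the two sentences, the full sequence is scored by a language model, and the candidate yielding the lowest perplexity is predicted. This is a minimal illustration, not the official evaluation code; a tiny add-one-smoothed bigram model stands in for GPT-2 so the example runs without downloading a model, and the corpus and helper names are invented for the sketch.

```python
import math
from collections import Counter

# Toy training corpus (invented for this sketch) for a bigram LM that
# stands in for GPT-2 in the perplexity-based scoring below.
CORPUS = [
    "it rained , therefore the ground was wet",
    "the plan was risky , however it worked",
]
bigrams = Counter()
unigrams = Counter()
for sentence in CORPUS:
    tokens = sentence.split()
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))
VOCAB = len(unigrams)

def perplexity(tokens):
    """Add-one-smoothed bigram perplexity: exp of mean negative log-prob."""
    log_prob = sum(
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + VOCAB))
        for a, b in zip(tokens, tokens[1:])
    )
    return math.exp(-log_prob / max(len(tokens) - 1, 1))

def predict_marker(sentence1, sentence2, markers):
    """Splice each candidate marker between the sentences and return the
    candidate whose full sequence has the lowest perplexity."""
    def score(marker):
        return perplexity(f"{sentence1} , {marker} {sentence2}".split())
    return min(markers, key=score)

print(predict_marker("it rained", "the ground was wet",
                     ["however", "therefore"]))
```

With GPT-2 the only change is the scoring function: the spliced sequence would be tokenized and scored by the model's average token log-likelihood instead of the toy bigram counts.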