Update README.md
README.md CHANGED
@@ -65,6 +65,9 @@ All evaluations were done using our [evaluation script](https://github.com/asahi
 | ------ | -------- | ------ | --------- | ---------- |
 | 26.16  | 53.84    | 27.07  | 91.00     | 64.99      |
 
+- [metric file](https://huggingface.co/asahi417/lmqg-bart-large-squad/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_squad.default.json)
+
+
 ## Fine-tuning Parameters
 We ran grid search to find the best hyper-parameters and continued fine-tuning until the validation metric decreased.
 The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-bart-large-squad/raw/main/trainer_config.json), and the fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
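For quick inspection of the files linked above, the sketch below downloads the trainer config and the newly added metric file and prints their contents. This is a minimal illustration only, assuming the JSON files remain available at these URLs; the field names inside them are not guaranteed and this snippet is not part of the lmqg release.

```python
# Minimal sketch: fetch the linked trainer config and metric file and print
# their key/value pairs. Assumes both URLs serve plain JSON (an assumption,
# not something guaranteed by this README).
import json
import urllib.request

BASE = "https://huggingface.co/asahi417/lmqg-bart-large-squad/raw/main"
URLS = {
    "trainer_config": f"{BASE}/trainer_config.json",
    "metrics": f"{BASE}/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_squad.default.json",
}

for name, url in URLS.items():
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    print(f"--- {name} ---")
    for key, value in data.items():
        print(f"{key}: {value}")
```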