zhuqiming committed · Commit 80557f2 · 1 Parent(s): 77e4a24
Update text
text_content.py (+3 -2)
text_content.py
CHANGED
@@ -3,7 +3,8 @@ Based on the DomainEval benchmark, we evaluate code generation ability of differ
 
 More details about how to evaluate the LLM are available in the [DomainEval GitHub repository](https://github.com/domaineval/DomainEval).
 
-For a complete description of DomainEval benchmark and related experimental analysis, please refer to the paper:
+For a complete description of DomainEval benchmark and related experimental analysis, please refer to the paper:
+[DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation](https://arxiv.org/abs/2408.13204). [![](https://img.shields.io/badge/arXiv-2408.13204-b31b1b.svg)](https://arxiv.org/abs/2408.13204)
 
 **_Latest News_** 🔥
 - [24/08/26] We release our DomainEval benchmark, leaderboard and paper.
@@ -32,6 +33,6 @@ NOTES_TEXT = """
 - Evaluate using pass@k as the evaluation metric.
 - `Mean` denotes the macro average results of pass@k across 6 different domains.
 - `Std` denotes the standard deviation of pass@k across 6 different domains.
--
+- You can choose differt pass@k in `⏬ Pass@k`.
 - `⏬ Domains` can choose domains you want to show in the leaderboard.
 """
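The `NOTES_TEXT` above leans on pass@k and its macro `Mean`/`Std` across the six domains. Below is a minimal sketch of how those numbers are conventionally computed, using the standard unbiased pass@k estimator from Chen et al. (2021); the `pass_at_k` helper, domain labels, and counts are illustrative assumptions, not code from the DomainEval repository:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    1 - C(n-c, k) / C(n, k), i.e. the probability that at least one
    of k samples drawn from n generations (c of which pass) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Hypothetical per-domain results for one model: (n generations, c passing).
# The six labels merely stand in for DomainEval's six domains.
per_domain = {
    "domain_1": (20, 12), "domain_2": (20, 9),  "domain_3": (20, 15),
    "domain_4": (20, 7),  "domain_5": (20, 10), "domain_6": (20, 5),
}

k = 1
scores = [pass_at_k(n, c, k) for n, c in per_domain.values()]
mean = float(np.mean(scores))  # `Mean`: macro average of pass@k over domains
std = float(np.std(scores))    # `Std`: spread of pass@k across domains
print(f"pass@{k}: Mean={mean:.3f}, Std={std:.3f}")
```

Note that `np.std` computes the population standard deviation; a leaderboard could equally report the sample variant (`ddof=1`), so treat this choice as an assumption.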