Add description to card metadata #1
opened by julien-c
README.md
CHANGED
@@ -1,6 +1,6 @@
 ---
 title: BERT Score
-emoji: 🤗
+emoji: 🤗
 colorFrom: blue
 colorTo: red
 sdk: gradio
@@ -8,10 +8,26 @@ sdk_version: 3.0.2
 app_file: app.py
 pinned: false
 tags:
-- evaluate
-- metric
----
-
+- evaluate
+- metric
+description: >-
+  BERTScore leverages the pre-trained contextual embeddings from BERT and
+  matches words in candidate and reference
+
+  sentences by cosine similarity.
+
+  It has been shown to correlate with human judgment on sentence-level and
+  system-level evaluation.
+
+  Moreover, BERTScore computes precision, recall, and F1 measure, which can be
+  useful for evaluating different language
+
+  generation tasks.
+
+
+  See the project's README at https://github.com/Tiiiger/bert_score#readme for
+  more information.
+---
 # Metric Card for BERT Score
 
 ## Metric description
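The `description:` field added by this PR uses YAML's folded block scalar style (`>-`): single line breaks inside the block fold into spaces, blank lines become hard newlines, and the trailing `-` chomps the final newline. A minimal sketch of how such metadata parses, assuming PyYAML is available (the snippet below is a trimmed, hypothetical front-matter fragment, not the exact text from the PR):

```python
import yaml  # PyYAML; assumed available

# Hypothetical front-matter fragment in the same `>-` folded style as the PR.
metadata = yaml.safe_load("""\
title: BERT Score
tags:
- evaluate
- metric
description: >-
  BERTScore leverages contextual embeddings and
  matches words in candidate and reference

  sentences by cosine similarity.
""")

# The two adjacent indented lines fold into one line joined by a space;
# the blank line inside the block is preserved as a newline.
print(metadata["description"])
```

This is why the stray blank lines inside the diff's `description:` block matter: each one produces a literal newline in the parsed description string rather than being ignored.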