Update README.md
README.md CHANGED
@@ -57,7 +57,7 @@ Our custom dataset has accurate manual labels created jointly by an undergraduat
 
 ### Evaluation
 
-Our evaluation metric is F1 at the full entity-level. That is, we aggregated adjacent-indexed entities into full entities and computed F1 scores requiring an exact match. These scores are below.
+Our evaluation metric is F1 at the full entity-level. That is, we aggregated adjacent-indexed entities into full entities and computed F1 scores requiring an exact match. These scores for the test set are below.
 
 <table>
 <thead>
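The metric described in the changed line amounts to merging adjacent tokens that carry the same entity label into one full entity span and then scoring F1 over exact span matches. The sketch below illustrates that computation; it is a minimal, hypothetical version assuming token-level type labels with `O` for non-entity tokens, and the function names and label scheme are illustrative assumptions rather than code from this repository.

```python
# Minimal sketch of full entity-level F1 (exact span match), assuming
# token-level labels where adjacent tokens with the same entity type
# form one full entity. Labels and helper names are illustrative.

def to_spans(labels):
    """Merge adjacent identically-typed entity tokens into (type, start, end) spans."""
    spans, start = set(), None
    for i, lab in enumerate(labels + ["O"]):  # sentinel "O" flushes the last span
        if start is not None and (lab == "O" or lab != labels[start]):
            spans.add((labels[start], start, i - 1))
            start = None
        if lab != "O" and start is None:
            start = i
    return spans

def entity_f1(gold_seqs, pred_seqs):
    """Micro-averaged F1 over full entity spans, requiring an exact match."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_seqs, pred_seqs):
        g, p = to_spans(gold), to_spans(pred)
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: the single gold span ("LOC", 2, 3) is predicted exactly, so F1 = 1.0.
print(entity_f1([["O", "O", "LOC", "LOC", "O"]],
                [["O", "O", "LOC", "LOC", "O"]]))
```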