APP_TITLE = "NER Metrics Comparison"
APP_INTRO = "The NER task is performed over a piece of text and involves recognizing entities belonging to a desired entity set and classifying them. The various metrics are explained in the explanation tab. Once you have gone through them, head to the comparison tab to test out some examples."
### EXPLANATION TAB ###
EVAL_FUNCTION_INTRO = "An evaluation function tells us how well a model is performing. At its core, any evaluation function compares the model's output with the ground truth to produce a score of correctness."
EVAL_FUNCTION_PROPERTIES = """
Some basic properties of an evaluation function are:
1. It gives an output score equal to the upper bound when the prediction is completely correct (in some tasks, multiple variations of a prediction can be considered correct).
2. It gives an output score equal to the lower bound when the prediction is completely wrong.
3. It gives an output score between the upper and lower bounds in all other cases, corresponding to the degree of correctness.
"""
NER_TASK_EXPLAINER = """
The output of the NER task can be represented in either token format or span format. | |
""" | |
### COMPARISON TAB ###
PREDICTION_ADDITION_INSTRUCTION = """
Add predictions to the list of predictions on which the evaluation metric will be calculated.
- Select the entity type/label name and then highlight the span in the text below.
- To remove a span, double-click on the highlighted text.
- Once you have your desired prediction, click on the 'Add' button. (The prediction created is shown as JSON below.)
"""