# Evaluation script (evalution.py) instructions
* We provide an evaluation script that computes the scores achieved per task, per cluster, and overall.
- The predictions must be in .txt format.
- All prediction files should be in the same directory and follow the same naming conventions as the files in the "dummy_predictions" directory provided.
- Each prediction file must contain the predictions made on the test set of the respective task, with each line corresponding to one test example.
- The predictions should be written in full (e.g. 'strongly negative' instead of 0) and, for tasks with multiple outputs (e.g. TweetTopic), should be comma-separated.
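Before running the evaluation, it can help to sanity-check that each prediction file exists and has one line per test example. The sketch below is a hypothetical helper (not part of the provided script); the file names and expected counts are assumptions you would fill in to match the "dummy_predictions" templates.

```python
import os

def check_predictions(pred_dir, expected_counts):
    """Hypothetical sanity check: expected_counts maps a prediction
    file name (as in "dummy_predictions") to the number of test
    examples for that task. Returns a list of problems found."""
    problems = []
    for name, n_expected in expected_counts.items():
        path = os.path.join(pred_dir, name)
        if not os.path.exists(path):
            problems.append(f"missing file: {name}")
            continue
        with open(path) as f:
            lines = [line.rstrip("\n") for line in f]
        if len(lines) != n_expected:
            problems.append(
                f"{name}: {len(lines)} lines, expected {n_expected}"
            )
    return problems
```

An empty return value means every listed file is present with the right number of lines; anything else names the file and the mismatch.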
* "dummy_predictions" directory provides the prediciton files produced by the roberta-base model tested and can be used as a template. | |
* Steps to run the script:
    1. Install the dependencies listed in requirements.txt (e.g. pip install -r requirements.txt)
    2. Run the script: python evalution.py -p {directory with predictions}, e.g. python evalution.py -p ./dummy_predictions