---
dataset_info:
- config_name: ADG
  features:
  - name: text
    dtype: string
  - name: target_entity
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 729565
    num_examples: 3201
  - name: validation
    num_bytes: 168501
    num_examples: 759
  - name: test
    num_bytes: 114693
    num_examples: 470
  download_size: 346052
  dataset_size: 1012759
- config_name: WN
  features:
  - name: text
    dtype: string
  - name: target_entity
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 3010007
    num_examples: 14331
  - name: validation
    num_bytes: 665886
    num_examples: 3320
  - name: test
    num_bytes: 862510
    num_examples: 3463
  download_size: 1253281
  dataset_size: 4538403
configs:
- config_name: ADG
  data_files:
  - split: train
    path: ADG/train-*
  - split: validation
    path: ADG/validation-*
  - split: test
    path: ADG/test-*
- config_name: WN
  data_files:
  - split: train
    path: WN/train-*
  - split: validation
    path: WN/validation-*
  - split: test
    path: WN/test-*
---
# Named-Entities Recognition on Multi-Domain Documents (NERMUD)
Original paper: https://iris.unitn.it/retrieve/d833b9e4-e997-4ee4-b6aa-f5144a85f708/paper42.pdf
NERMuD is a task presented at EVALITA 2023 that consists of extracting and classifying named entities in a document, such as persons, organizations, and locations.
The original dataset is framed as word-level classification; we reframed the task as multiclass classification, dropping the extractive part, so that it can be prompted to generative LLMs.
The prompt consists of a text and one of the named entities it contains; the model is then asked to return the correct class of that entity (Location, Organization, Person).
To do this, **we generated a separate sample for each named entity** in the original dataset, which uses the Inside-Outside-Beginning (IOB) tagging scheme (a conversion sketch follows the table below):
| Original format | Output Format |
| ------- | ------ |
| "L'":O, "astronauta":O, "Umberto":B-PER, "Guidoni":I-PER, "dell'":O "Agenzia":B-ORG, "Spaziale":I-ORG | ("L'astronauta Umberto Guidoni dell'Agenzia Spaziale", "Umberto Guidoni", PERSONA), ("L'astronauta Umberto Guidoni dell'Agenzia Spaziale", "Agenzia Spaziale", ORGANIZZAZIONE) |
NERMuD comes with three different domains: ADG (Alcide De Gasperi's writings), FIC (fiction books), and WN (Wikinews). After our reframing, with additional cleaning steps (e.g., removing duplicate and ambiguous samples), we decided to keep only the ADG and WN domains, since FIC turned out to be highly imbalanced on the test set.
## Example
Here you can see the structure of a single sample in this dataset; a loading example follows the schema below.
```json
{
    "text": string,          # text of the sentence
    "target_entity": string, # text of the entity to classify
    "label": int,            # 0: Luogo, 1: Organizzazione, 2: Persona
}
```
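A minimal loading example with the 🤗 `datasets` library; the repository id below is inferred from this page and may need to be adjusted:

```python
from datasets import load_dataset

# Repository id assumed from this page; replace it if your copy lives elsewhere.
wn = load_dataset("iperbole/nermud", "WN")  # configs: "ADG" or "WN"
print(wn)  # DatasetDict with train / validation / test splits

sample = wn["train"][0]
print(sample["text"], "->", sample["target_entity"], sample["label"])
```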
## Statistics
| NERMUD WN | Luogo | Persona | Organizzazione |
| :--------: | :----: | :----: | :----: |
| Training | 4661 | 5291 | 4379 |
| Validation | 1217 | 1056 | 1047 |
| Test | 859 | 1373 | 1231 |

| NERMUD ADG | Luogo | Persona | Organizzazione |
| :--------: | :----: | :----: | :----: |
| Training | 891 | 839 | 1471 |
| Validation | 220 | 198 | 341 |
| Test | 97 | 162 | 221 |
## Proposed Prompts
Here we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer, we choose the prompt with the lowest perplexity.
Moreover, for each subtask we define a description that is prepended to the prompts, which the model needs in order to understand the task.
Description of the task: "Data una frase e un'entità, indica se tale entità rappresenta un luogo, un'organizzazione o una persona.\n\n"
### Cloze style:
Label (**Luogo**): "Data la frase: '{{text}}'\nL'entità {{target_entity}} è un luogo"
Label (**Persona**): "Data la frase: '{{text}}'\nL'entità {{target_entity}} è una persona"
Label (**Organizzazione**): "Data la frase: '{{text}}'\nL'entità {{target_entity}} è un'organizzazione"
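As an illustration, here is a minimal zero-shot sketch of this perplexity-based selection using the `transformers` library. The model id is a placeholder and this is not our exact evaluation code (the reported results below use 5-shot prompts):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B"  # placeholder: any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

DESCRIPTION = ("Data una frase e un'entità, indica se tale entità rappresenta "
               "un luogo, un'organizzazione o una persona.\n\n")
TEMPLATES = {  # label ids follow the dataset: 0 Luogo, 1 Organizzazione, 2 Persona
    0: "Data la frase: '{text}'\nL'entità {target_entity} è un luogo",
    1: "Data la frase: '{text}'\nL'entità {target_entity} è un'organizzazione",
    2: "Data la frase: '{text}'\nL'entità {target_entity} è una persona",
}

@torch.no_grad()
def perplexity(prompt: str) -> float:
    ids = tok(prompt, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean per-token negative log-likelihood
    return torch.exp(loss).item()

def predict(sample: dict) -> int:
    scores = {label: perplexity(DESCRIPTION + tpl.format(**sample))
              for label, tpl in TEMPLATES.items()}
    return min(scores, key=scores.get)  # the lowest-perplexity prompt wins
```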
### MCQA style:
```txt
Data la frase: \"{{text}}\"\nDomanda: A quale tipologia di entità appartiene \"{{target_entity}}\" nella frase precedente?\nA. Luogo\nB. Organizzazione\nC. Persona\nRisposta:
```
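For concreteness, a sketch of how the full MCQA prompt could be assembled for one sample; the candidate answers would then be scored in the same way as the cloze prompts:

```python
DESCRIPTION = ("Data una frase e un'entità, indica se tale entità rappresenta "
               "un luogo, un'organizzazione o una persona.\n\n")
MCQA_TEMPLATE = (
    'Data la frase: "{text}"\n'
    'Domanda: A quale tipologia di entità appartiene "{target_entity}" '
    "nella frase precedente?\n"
    "A. Luogo\nB. Organizzazione\nC. Persona\nRisposta:"
)

sample = {"text": "L'astronauta Umberto Guidoni dell'Agenzia Spaziale",
          "target_entity": "Umberto Guidoni"}
prompt = DESCRIPTION + MCQA_TEMPLATE.format(**sample)
# The answers "A", "B", "C" would then be compared, e.g. by the perplexity
# of prompt + " A", prompt + " B", prompt + " C".
```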
## Results
The following results are obtained with Cloze-style prompting on several English and Italian-adapted LLMs.
| NERMUD (AVG) | ACCURACY (5-shots) |
| :-----: | :--: |
| Gemma-2B | 55.25 |
| QWEN2-1.5B | 65.82 |
| Mistral-7B | 83.42 |
| ZEFIRO | 83.24 |
| Llama-3-8B | 85.64 |
| Llama-3-8B-IT | 89.50 |
| ANITA | 88.43 |
## Acknowledgments
We would like to thank the authors of this resource for publicly releasing such an intriguing benchmark.
We also want to thank the students of the [MNLP-2024 course](https://naviglinlp.blogspot.com/), who in their first homework tried several interesting prompting and reframing strategies that enabled us to generate this resource.
The original dataset is freely available for download [here](https://github.com/dhfbk/KIND/tree/main/evalita-2023).
## License
All the texts used are publicly available under a license that permits both research and commercial use. In particular, the texts used for NERMuD are taken from:
- Wikinews (WN), as a source of news texts from the last decades;
- writings and speeches by the Italian politician Alcide De Gasperi (ADG).