---
viewer: true
annotations_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<50K
source_datasets:
- extended|other
task_categories:
- other
task_ids:
- named-entity-recognition
pretty_name: TweetTER
tags:
- Tweet_ter
- natural language processing
configs:
- config_name: tweet_ter
  data_files:
  - split: train
    path: data/train.tsv
  - split: test
    path: data/test.tsv
  - split: validation
    path: data/val.tsv
---

# TweetTER

## Dataset Summary
TweetTER (Tweet Target Entity Retrieval) is a novel benchmark designed to address the challenges in entity linking, particularly in noisy domains like social media. Unlike traditional entity linking tasks that rely on a comprehensive knowledge base, TweetTER reframes entity linking as a binary entity retrieval task. This approach allows for the evaluation of language models’ performance without depending on a conventional knowledge base, offering a more practical and versatile framework for assessing the effectiveness of language models in entity retrieval tasks.
More details on the task and an evaluation of language models can be found in the paper: [TweetTER: A Benchmark for Target Entity Retrieval on Twitter without Knowledge Bases](https://aclanthology.org/2024.lrec-main.1468).
## Features

- `target` (string): The target named entity.
- `context` (string): The tweet in which the target entity appears.
- `start` (int): The character index at which the target starts in the provided context.
- `end` (int): The character index at which the target ends in the provided context.
- `definition` (string): A candidate definition collected from Wikidata, to be matched against the target entity.
- `date` (string): The date of the tweet.
- `label` (int): The binary label indicating whether the provided definition is a match (1) or a non-match (0) with the target entity.
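To make the schema concrete, the sketch below builds a single record by hand; the values are illustrative only (they mirror the example table further down), and only the field names come from the list above.

```python
# Hypothetical record following the schema above; the values are illustrative only.
example = {
    "target": "Python",
    "context": "Learning Python programming is fun!",
    "start": 9,
    "end": 15,
    "definition": "A high-level programming language",
    "date": "2023-01-02",
    "label": 1,
}

# `start` and `end` are character offsets into `context`, so slicing the
# context recovers the target mention.
assert example["context"][example["start"]:example["end"]] == example["target"]
```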
## Usage
To load the dataset:
```python
from datasets import load_dataset

data = load_dataset('cardiffnlp/tweet_ter')
```
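Once loaded, each split exposes the fields listed above. The snippet below is a minimal sketch, assuming the split names declared in the configs (`train`, `validation`, `test`):

```python
from datasets import load_dataset

data = load_dataset('cardiffnlp/tweet_ter')

# Split names follow the configs section above.
train = data["train"]
print(train.column_names)

# Inspect one example: does the candidate definition match the target mention?
first = train[0]
print(first["target"], "|", first["definition"], "| label:", first["label"])
```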
## Dataset Structure

### Example
| target | context | start | end | definition | date | label |
|--------|---------|-------|-----|------------|------|-------|
| Python | Learning Python programming is fun! | 9 | 15 | A high-level programming language | 2023-01-02 | 1 |
| Paris | Paris is beautiful in the spring. | 0 | 5 | Capital city of France | 2023-01-03 | 1 |
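One possible way to consume these rows, shown as a sketch rather than the setup used in the paper, is to cast each example as a sentence-pair classification instance that pairs the tweet with the candidate definition; the `to_pair` helper below is hypothetical.

```python
from datasets import load_dataset

data = load_dataset('cardiffnlp/tweet_ter')

def to_pair(example):
    # Hypothetical helper: turn a TweetTER row into a text pair plus a binary
    # label, mirroring the binary entity retrieval framing of the task.
    return {
        "text_a": example["context"],
        "text_b": f'{example["target"]}: {example["definition"]}',
        "labels": int(example["label"]),
    }

pairs = data["train"].map(to_pair)
print(pairs[0]["text_a"], "||", pairs[0]["text_b"], "->", pairs[0]["labels"])
```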
## Citation
If you use this dataset, please cite the following paper:
```bibtex
@inproceedings{rezaee-etal-2024-tweetter-benchmark,
title = "{T}weet{TER}: A Benchmark for Target Entity Retrieval on {T}witter without Knowledge Bases",
author = "Rezaee, Kiamehr and
Camacho-Collados, Jose and
Pilehvar, Mohammad Taher",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italy",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.1468",
pages = "16890--16896",
abstract = "Entity linking is a well-established task in NLP consisting of associating entity mentions with entries in a knowledge base. Current models have demonstrated competitive performance in standard text settings. However, when it comes to noisy domains such as social media, certain challenges still persist. Typically, to evaluate entity linking on existing benchmarks, a comprehensive knowledge base is necessary and models are expected to possess an understanding of all the entities contained within the knowledge base. However, in practical scenarios where the objective is to retrieve sentences specifically related to a particular entity, strict adherence to a complete understanding of all entities in the knowledge base may not be necessary. To address this gap, we introduce TweetTER (Tweet Target Entity Retrieval), a novel benchmark that aims to bridge the challenges in entity linking. The distinguishing feature of this benchmark is its approach of re-framing entity linking as a binary entity retrieval task. This enables the evaluation of language models{'} performance without relying on a conventional knowledge base, providing a more practical and versatile evaluation framework for assessing the effectiveness of language models in entity retrieval tasks.",
}
```