---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: trecqa
pretty_name: Text Retrieval Conference Question Answering
dataset_info:
  features:
  - name: text
    dtype: string
  - name: coarse_label
    dtype:
      class_label:
        names:
          '0': ABBR
          '1': ENTY
          '2': DESC
          '3': HUM
          '4': LOC
          '5': NUM
  - name: fine_label
    dtype:
      class_label:
        names:
          '0': ABBR:abb
          '1': ABBR:exp
          '2': ENTY:animal
          '3': ENTY:body
          '4': ENTY:color
          '5': ENTY:cremat
          '6': ENTY:currency
          '7': ENTY:dismed
          '8': ENTY:event
          '9': ENTY:food
          '10': ENTY:instru
          '11': ENTY:lang
          '12': ENTY:letter
          '13': ENTY:other
          '14': ENTY:plant
          '15': ENTY:product
          '16': ENTY:religion
          '17': ENTY:sport
          '18': ENTY:substance
          '19': ENTY:symbol
          '20': ENTY:techmeth
          '21': ENTY:termeq
          '22': ENTY:veh
          '23': ENTY:word
          '24': DESC:def
          '25': DESC:desc
          '26': DESC:manner
          '27': DESC:reason
          '28': HUM:gr
          '29': HUM:ind
          '30': HUM:title
          '31': HUM:desc
          '32': LOC:city
          '33': LOC:country
          '34': LOC:mount
          '35': LOC:other
          '36': LOC:state
          '37': NUM:code
          '38': NUM:count
          '39': NUM:date
          '40': NUM:dist
          '41': NUM:money
          '42': NUM:ord
          '43': NUM:other
          '44': NUM:period
          '45': NUM:perc
          '46': NUM:speed
          '47': NUM:temp
          '48': NUM:volsize
          '49': NUM:weight
  splits:
  - name: train
    num_bytes: 385090
    num_examples: 5452
  - name: test
    num_bytes: 27983
    num_examples: 500
  download_size: 359212
  dataset_size: 413073
---
# Dataset Card for "trec"

## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- Homepage: https://cogcomp.seas.upenn.edu/Data/QA/QC/
- Repository: More Information Needed
- Paper: More Information Needed
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 0.36 MB
- Size of the generated dataset: 0.41 MB
- Total amount of disk used: 0.78 MB
### Dataset Summary
The Text REtrieval Conference (TREC) Question Classification dataset contains 5,500 labeled questions in the training set and another 500 in the test set.
The dataset has 6 coarse class labels and 50 fine class labels. The average sentence length is 10 words, and the vocabulary size is 8,700.
The data are collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and 500 questions from TREC 10, which serve as the test set. These questions were manually labeled.
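
As a quick orientation (an illustrative sketch, not part of the original card), the dataset can be loaded with the 🤗 `datasets` library; the split names and sizes below match the metadata at the top of this card:

```python
from datasets import load_dataset

# Load both splits of the TREC question classification dataset from the Hub.
trec = load_dataset("trec")

print(trec["train"].num_rows, trec["test"].num_rows)  # 5452 500
print(trec["train"].features["coarse_label"].names)   # ['ABBR', 'ENTY', 'DESC', 'HUM', 'LOC', 'NUM']
```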
### Supported Tasks and Leaderboards

### Languages

The language in this dataset is English (`en`).
## Dataset Structure

### Data Instances

- Size of downloaded dataset files: 0.36 MB
- Size of the generated dataset: 0.41 MB
- Total amount of disk used: 0.78 MB

An example of 'train' looks as follows.
```
{
    'text': 'How did serfdom develop in and then leave Russia ?',
    'coarse_label': 2,
    'fine_label': 26
}
```
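
A row can be fetched by index and comes back as a plain Python dict with exactly these fields (a small sketch; it is assumed here that the card's example above is the first row of the `train` split):

```python
from datasets import load_dataset

trec = load_dataset("trec")

# A row is returned as a plain dict with the three fields shown above.
example = trec["train"][0]
print(example["text"])          # 'How did serfdom develop in and then leave Russia ?'
print(example["coarse_label"])  # 2  (DESC)
print(example["fine_label"])    # 26 (DESC:manner)
```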
### Data Fields
The data fields are the same among all splits.
- `text` (`str`): Text of the question.
- `coarse_label` (`ClassLabel`): Coarse class label. Possible values are:
  - 'ABBR' (0): Abbreviation.
  - 'ENTY' (1): Entity.
  - 'DESC' (2): Description and abstract concept.
  - 'HUM' (3): Human being.
  - 'LOC' (4): Location.
  - 'NUM' (5): Numeric value.
- `fine_label` (`ClassLabel`): Fine class label. Possible values are:
  - ABBREVIATION:
    - 'ABBR:abb' (0): Abbreviation.
    - 'ABBR:exp' (1): Expression abbreviated.
  - ENTITY:
    - 'ENTY:animal' (2): Animal.
    - 'ENTY:body' (3): Organ of body.
    - 'ENTY:color' (4): Color.
    - 'ENTY:cremat' (5): Invention, book and other creative piece.
    - 'ENTY:currency' (6): Currency name.
    - 'ENTY:dismed' (7): Disease and medicine.
    - 'ENTY:event' (8): Event.
    - 'ENTY:food' (9): Food.
    - 'ENTY:instru' (10): Musical instrument.
    - 'ENTY:lang' (11): Language.
    - 'ENTY:letter' (12): Letter like a-z.
    - 'ENTY:other' (13): Other entity.
    - 'ENTY:plant' (14): Plant.
    - 'ENTY:product' (15): Product.
    - 'ENTY:religion' (16): Religion.
    - 'ENTY:sport' (17): Sport.
    - 'ENTY:substance' (18): Element and substance.
    - 'ENTY:symbol' (19): Symbols and sign.
    - 'ENTY:techmeth' (20): Techniques and method.
    - 'ENTY:termeq' (21): Equivalent term.
    - 'ENTY:veh' (22): Vehicle.
    - 'ENTY:word' (23): Word with a special property.
  - DESCRIPTION:
    - 'DESC:def' (24): Definition of something.
    - 'DESC:desc' (25): Description of something.
    - 'DESC:manner' (26): Manner of an action.
    - 'DESC:reason' (27): Reason.
  - HUMAN:
    - 'HUM:gr' (28): Group or organization of persons.
    - 'HUM:ind' (29): Individual.
    - 'HUM:title' (30): Title of a person.
    - 'HUM:desc' (31): Description of a person.
  - LOCATION:
    - 'LOC:city' (32): City.
    - 'LOC:country' (33): Country.
    - 'LOC:mount' (34): Mountain.
    - 'LOC:other' (35): Other location.
    - 'LOC:state' (36): State.
  - NUMERIC:
    - 'NUM:code' (37): Postcode or other code.
    - 'NUM:count' (38): Number of something.
    - 'NUM:date' (39): Date.
    - 'NUM:dist' (40): Distance, linear measure.
    - 'NUM:money' (41): Price.
    - 'NUM:ord' (42): Order, rank.
    - 'NUM:other' (43): Other number.
    - 'NUM:period' (44): Lasting time of something.
    - 'NUM:perc' (45): Percent, fraction.
    - 'NUM:speed' (46): Speed.
    - 'NUM:temp' (47): Temperature.
    - 'NUM:volsize' (48): Size, area and volume.
    - 'NUM:weight' (49): Weight.
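
Because both label fields are `ClassLabel` features, the integer ids can be mapped to and from their string names (an illustrative sketch using the `datasets` ClassLabel API):

```python
from datasets import load_dataset

trec = load_dataset("trec")
coarse = trec["train"].features["coarse_label"]
fine = trec["train"].features["fine_label"]

# Convert between integer ids and label strings.
print(coarse.int2str(2))      # 'DESC'
print(fine.int2str(26))       # 'DESC:manner'
print(coarse.str2int("NUM"))  # 5
```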
### Data Splits

| name    | train | test |
|---------|------:|-----:|
| default |  5452 |  500 |
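
If per-class counts are needed rather than just the split totals, they can be tallied directly from the label column (illustrative only; exact counts depend on the dataset revision on the Hub):

```python
from collections import Counter
from datasets import load_dataset

trec = load_dataset("trec")
names = trec["train"].features["coarse_label"].names

# Tally questions per coarse class in each split.
for split in ("train", "test"):
    counts = Counter(trec[split]["coarse_label"])
    print(split, {names[i]: n for i, n in sorted(counts.items())})
```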
## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Citation Information
```
@inproceedings{li-roth-2002-learning,
    title = "Learning Question Classifiers",
    author = "Li, Xin and
      Roth, Dan",
    booktitle = "{COLING} 2002: The 19th International Conference on Computational Linguistics",
    year = "2002",
    url = "https://www.aclweb.org/anthology/C02-1150",
}

@inproceedings{hovy-etal-2001-toward,
    title = "Toward Semantics-Based Answer Pinpointing",
    author = "Hovy, Eduard and
      Gerber, Laurie and
      Hermjakob, Ulf and
      Lin, Chin-Yew and
      Ravichandran, Deepak",
    booktitle = "Proceedings of the First International Conference on Human Language Technology Research",
    year = "2001",
    url = "https://www.aclweb.org/anthology/H01-1069",
}
```