---
license: cc-by-4.0
task_categories:
- text-classification
- zero-shot-classification
task_ids:
- multi-label-classification
language:
- en
tags:
- Human Values
- Value Detection
- Multi-Label
pretty_name: Human Value Detection Dataset
size_categories:
- 1K<n<10K
---
The Touché23-ValueEval Dataset
Dataset Description
- Homepage: https://webis.de/data/touche23-valueeval.html
- Repository: Zenodo
- Paper: The Touché23-ValueEval Dataset for Identifying Human Values behind Arguments.
- Leaderboard: https://touche.webis.de/
- Point of Contact: Webis Group
Dataset Summary
The Touché23-ValueEval Dataset comprises 9324 arguments from six different sources. An argument's source is indicated by the first letter of its Argument ID:

- `A`: IBM-ArgQ-Rank-30kArgs
- `C`: Chinese question-answering website Zhihu
- `D`: Group Discussion Ideas (GD IDEAS)
- `E`: The Conference on the Future of Europe
- `F`: Contribution by the language.ml lab (Doratossadat, Omid, Mohammad, Ehsaneddin) [1]: arguments from the "Nahj al-Balagha" [2] and "Ghurar al-Hikam wa Durar al-Kalim" [3]
- `G`: The New York Times

The annotated labels are based on the value taxonomy published in Identifying the Human Values behind Arguments (Kiesel et al. 2022) at ACL'22.
[1] https://language.ml [2] https://en.wikipedia.org/wiki/Nahj_al-Balagha [3] https://en.wikipedia.org/wiki/Ghurar_al-Hikam_wa_Durar_al-Kalim
Dataset Usage
The default configuration name is `main`.

```python
from datasets import load_dataset

dataset = load_dataset("webis/Touche23-ValueEval")
print(dataset['train'].info.description)
for argument in iter(dataset['train']):
    print(f"{argument['Argument ID']}: {argument['Stance']} '{argument['Conclusion']}': {argument['Premise']}")
```
Supported Tasks and Leaderboards
Human Value Detection
Languages
The Argument Instances are monolingual; they include only English (mostly en-US) documents. The Metadata Instances for some dataset parts additionally state the arguments in their original language and phrasing.
Dataset Structure
Argument Instances
Each argument instance has the following attributes:
- `Argument ID`: The unique identifier for the argument within the dataset
- `Conclusion`: Conclusion text of the argument
- `Stance`: Stance of the `Premise` towards the `Conclusion`; one of "in favor of", "against"
- `Premise`: Premise text of the argument
- `Labels`: The `Labels` for each example are an array of 1s (argument resorts to value) and 0s (argument does not resort to value). The order is the same as in the original files (a short decoding sketch is shown after the configuration list below).

Additionally, the labels are separated into value-categories, aka. level 2 labels of the value taxonomy (Kiesel et al. 2022b), and human values, aka. level 1 labels of the value taxonomy. This distinction is also reflected in the configuration names:
- `<config>`: As the task focuses mainly on the detection of value categories, each base configuration (listed below) has the 20 value categories as labels:

  ```python
  labels = ["Self-direction: thought", "Self-direction: action", "Stimulation", "Hedonism",
            "Achievement", "Power: dominance", "Power: resources", "Face",
            "Security: personal", "Security: societal", "Tradition", "Conformity: rules",
            "Conformity: interpersonal", "Humility", "Benevolence: caring",
            "Benevolence: dependability", "Universalism: concern", "Universalism: nature",
            "Universalism: tolerance", "Universalism: objectivity"]
  ```

- `<config>-level1`: The 54 human values from level 1 of the value taxonomy are not used for the 2023 task (except for the annotation), but are still listed here, as some might find them useful for understanding the value categories. Their order is also the same as in the original files. For more details see the `value-categories` configuration.

The configuration names (as replacements for `<config>`) in this dataset are:
- `main`: 8865 arguments (sources: `A`, `D`, `E`) with splits `train`, `validation`, and `test` (default configuration name)

  ```python
  dataset_main_train = load_dataset("webis/Touche23-ValueEval", split="train")
  dataset_main_validation = load_dataset("webis/Touche23-ValueEval", split="validation")
  dataset_main_test = load_dataset("webis/Touche23-ValueEval", split="test")
  ```

- `nahjalbalagha`: 279 arguments (source: `F`) with split `test`

  ```python
  dataset_nahjalbalagha_test = load_dataset("webis/Touche23-ValueEval", name="nahjalbalagha", split="test")
  ```

- `nyt`: 80 arguments (source: `G`) with split `test`

  ```python
  dataset_nyt_test = load_dataset("webis/Touche23-ValueEval", name="nyt", split="test")
  ```

- `zhihu`: 100 arguments (source: `C`) with split `validation`

  ```python
  dataset_zhihu_validation = load_dataset("webis/Touche23-ValueEval", name="zhihu", split="validation")
  ```

Please note that, due to copyright reasons, there is currently no direct download link to the arguments contained in The New York Times dataset. Accessing any of the `nyt` or `nyt-level1` configurations will therefore use the specifically created nyt-downloader program to create and access the arguments locally. See the program's README for further details.
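Since the `Labels` array of each instance follows the same order as the 20 value categories listed above, the binary vector can be mapped back to the names of the value categories an argument resorts to. A minimal sketch, assuming the default `main` configuration; the `VALUE_CATEGORIES` constant and the helper function are illustrative and not part of the dataset:

```python
from datasets import load_dataset

# The 20 value categories, in the same order as the entries of the 'Labels' array.
VALUE_CATEGORIES = [
    "Self-direction: thought", "Self-direction: action", "Stimulation", "Hedonism",
    "Achievement", "Power: dominance", "Power: resources", "Face",
    "Security: personal", "Security: societal", "Tradition", "Conformity: rules",
    "Conformity: interpersonal", "Humility", "Benevolence: caring",
    "Benevolence: dependability", "Universalism: concern", "Universalism: nature",
    "Universalism: tolerance", "Universalism: objectivity",
]

def resorted_values(labels):
    """Illustrative helper: names of the value categories marked with a 1."""
    return [name for name, flag in zip(VALUE_CATEGORIES, labels) if flag == 1]

dataset = load_dataset("webis/Touche23-ValueEval", split="validation")
for argument in dataset.select(range(3)):  # first three arguments, as a demonstration
    print(argument["Argument ID"], "->", resorted_values(argument["Labels"]))
```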
Metadata Instances
The following lists all configuration names for metadata; a short sketch showing how to join metadata with the argument instances follows the list. Each configuration has only a single split named `meta`.
- `ibm-meta`: Each row corresponds to one argument (IDs starting with `A`) from the IBM-ArgQ-Rank-30kArgs
  - `Argument ID`: The unique identifier for the argument
  - `WA`: the quality label according to the weighted-average scoring function
  - `MACE-P`: the quality label according to the MACE-P scoring function
  - `stance_WA`: the stance label according to the weighted-average scoring function
  - `stance_WA_conf`: the confidence in the stance label according to the weighted-average scoring function

  ```python
  dataset_ibm_metadata = load_dataset("webis/Touche23-ValueEval", name="ibm-meta", split="meta")
  ```

- `zhihu-meta`: Each row corresponds to one argument (IDs starting with `C`) from the Chinese question-answering website Zhihu
  - `Argument ID`: The unique identifier for the argument
  - `Conclusion Chinese`: The original Chinese conclusion statement
  - `Premise Chinese`: The original Chinese premise statement
  - `URL`: Link to the original statement the argument was taken from

  ```python
  dataset_zhihu_metadata = load_dataset("webis/Touche23-ValueEval", name="zhihu-meta", split="meta")
  ```

- `gdi-meta`: Each row corresponds to one argument (IDs starting with `D`) from GD IDEAS
  - `Argument ID`: The unique identifier for the argument
  - `URL`: Link to the topic the argument was taken from

  ```python
  dataset_gdi_metadata = load_dataset("webis/Touche23-ValueEval", name="gdi-meta", split="meta")
  ```

- `cofe-meta`: Each row corresponds to one argument (IDs starting with `E`) from the Conference on the Future of Europe
  - `Argument ID`: The unique identifier for the argument
  - `URL`: Link to the comment the argument was taken from

  ```python
  dataset_cofe_metadata = load_dataset("webis/Touche23-ValueEval", name="cofe-meta", split="meta")
  ```

- `nahjalbalagha-meta`: Each row corresponds to one argument (IDs starting with `F`). This file contains information on the 279 arguments in `nahjalbalagha` (or `nahjalbalagha-level1`) and on 1047 additional arguments that have not been labeled so far. This data was contributed by the language.ml lab.
  - `Argument ID`: The unique identifier for the argument
  - `Conclusion Farsi`: Conclusion text of the argument in Farsi
  - `Stance Farsi`: Stance of the `Premise` towards the `Conclusion`, in Farsi
  - `Premise Farsi`: Premise text of the argument in Farsi
  - `Conclusion English`: Conclusion text of the argument in English (translated from Farsi)
  - `Stance English`: Stance of the `Premise` towards the `Conclusion`; one of "in favor of", "against"
  - `Premise English`: Premise text of the argument in English (translated from Farsi)
  - `Source`: Source text of the argument; one of "Nahj al-Balagha", "Ghurar al-Hikam wa Durar al-Kalim"; their Farsi translations were used
  - `Method`: How the premise was extracted from the source; one of "extracted" (taken directly), "deduced"; the conclusions are deduced

  ```python
  dataset_nahjalbalagha_metadata = load_dataset("webis/Touche23-ValueEval", name="nahjalbalagha-meta", split="meta")
  ```

- `nyt-meta`: Each row corresponds to one argument (IDs starting with `G`) from The New York Times
  - `Argument ID`: The unique identifier for the argument
  - `URL`: Link to the article the argument was taken from
  - `Internet Archive timestamp`: Timestamp of the article's version in the Internet Archive that was used

  ```python
  dataset_nyt_metadata = load_dataset("webis/Touche23-ValueEval", name="nyt-meta", split="meta")
  ```

- `value-categories`: Contains a single JSON entry with the structure of the level 2 and level 1 values of the value taxonomy:

  ```
  {
    "<value category>": {
      "<level 1 value>": [
        "<exemplary effect a corresponding argument might target>",
        ...
      ],
      ...
    },
    ...
  }
  ```

  As this configuration contains just a single entry, an example usage could be:

  ```python
  value_categories = load_dataset("webis/Touche23-ValueEval", name="value-categories", split="meta")[0]
  ```

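Because each metadata configuration shares the `Argument ID` field with the argument configurations, metadata can be joined with the corresponding arguments. A minimal sketch that combines the `zhihu` arguments with their original Chinese phrasing from `zhihu-meta`; using pandas for the join is an assumption here, any join on `Argument ID` works:

```python
from datasets import load_dataset

# Load the Zhihu arguments and their metadata, then join them on 'Argument ID'.
arguments = load_dataset("webis/Touche23-ValueEval", name="zhihu", split="validation").to_pandas()
metadata = load_dataset("webis/Touche23-ValueEval", name="zhihu-meta", split="meta").to_pandas()

joined = arguments.merge(metadata, on="Argument ID", how="left")
print(joined[["Argument ID", "Conclusion", "Conclusion Chinese", "URL"]].head())
```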
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
Citation Information
```bibtex
@Article{mirzakhmedova:2023a,
  author =    {Nailia Mirzakhmedova and Johannes Kiesel and Milad Alshomary and Maximilian Heinrich and
               Nicolas Handke and Xiaoni Cai and Valentin Barriere and Doratossadat Dastgheib and
               Omid Ghahroodi and {Mohammad Ali} Sadraei and Ehsaneddin Asgari and Lea Kawaletz and
               Henning Wachsmuth and Benno Stein},
  doi =       {10.48550/arXiv.2301.13771},
  journal =   {CoRR},
  month =     jan,
  publisher = {arXiv},
  title =     {{The Touch{\'e}23-ValueEval Dataset for Identifying Human Values behind Arguments}},
  volume =    {abs/2301.13771},
  year =      2023
}
```