Add ICCS dataset card metadata
# IPCC Confidence in Climate Statements

_What do LLMs know about climate? Let's find out!_

## ICCS Dataset

We introduce the **ICCS dataset (IPCC Confidence in Climate Statements)**: a novel, curated, expert-labeled natural language dataset of 8094 statements extracted or paraphrased from the IPCC Sixth Assessment Report (AR6) [Working Group I](https://www.ipcc.ch/report/ar6/wg1/), [Working Group II](https://www.ipcc.ch/report/ar6/wg2/), and [Working Group III](https://www.ipcc.ch/report/ar6/wg3/) reports.

Each statement is labeled with its IPCC report source, the page number in the report PDF, and its confidence level (`low`, `medium`, `high`, or `very high`) as assessed by IPCC climate scientists based on the available evidence and the degree of agreement among their peers.
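For concreteness, a single record might look like the following sketch. The field names and the sample statement are illustrative assumptions, not the dataset's published schema or contents:

```python
# Illustrative ICCS-style record; field names and the sample statement
# are assumptions for illustration, not the dataset's actual schema.
LABELS = ["low", "medium", "high", "very high"]

example = {
    "statement": "Global surface temperature will continue to rise until at least mid-century.",
    "report": "AR6 WGI",   # source working group report
    "page": 14,            # page number in the report PDF
    "confidence": "high",  # one of the four confidence labels
}

# Every record carries exactly one of the four retained labels.
assert example["confidence"] in LABELS
```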

## Dataset Construction

To construct the dataset, we retrieved the complete raw text of each of the three IPCC report PDFs available online using the open-source library [pypdf2](https://pypi.org/project/PyPDF2/). We then normalized the whitespace, tokenized the text into sentences with [NLTK](https://www.nltk.org/), and used a regex search to filter for complete sentences ending in a parenthetical confidence label of the form _sentence (low|medium|high|very high confidence)_. The final ICCS dataset contains 8094 labeled sentences.
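The filtering step can be sketched as follows. This is a minimal reconstruction, not the authors' code: a naive period-based splitter stands in for NLTK's `sent_tokenize` so the example stays self-contained, and the exact regex pattern is an assumption.

```python
import re

# Trailing parenthetical confidence label, e.g. "... (medium confidence)".
# The pattern is an illustrative reconstruction of the filter described above.
CONF_RE = re.compile(r"\((low|medium|high|very high) confidence\)$")

def extract_labeled(text: str):
    """Return (sentence, label) pairs for sentences ending in a confidence tag."""
    text = " ".join(text.split())               # normalize whitespace
    sentences = re.split(r"(?<=\.)\s+", text)   # naive stand-in for sent_tokenize
    pairs = []
    for sent in sentences:
        m = CONF_RE.search(sent.strip().rstrip("."))
        if m:
            pairs.append((sent.strip(), m.group(1)))
    return pairs

sample = ("Warming is unequivocal (high confidence). "
          "This sentence has no label. "
          "Snow cover has declined (medium confidence).")
pairs = extract_labeled(sample)
```

Sentences without a trailing label (like the second one above) are simply dropped, which is what restricts the dataset to explicitly confidence-rated statements.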

Then, we manually reviewed and cleaned each sentence in the test set:

- We split 19 compound statements with conflicting confidence sub-labels, and removed 6 extraneous mid-sentence labels of the same category as the end-of-sentence label;
- We added light context to 23 sentences, and replaced 5 sentences with others when they were meaningless outside of a longer paragraph;
- We removed qualifiers at the beginning of 29 sentences (e.g. 'But...', 'In summary...', 'However...') to avoid biasing classification.

**The remaining 7794 sentences not allocated to the 300-sentence test split form our train split.**

Of note: while the IPCC reports use a five-level confidence scale, almost no `very low confidence` statement makes it through the peer-review process into the final reports, so no statement of the form _sentence (very low confidence)_ was retrievable. We therefore built our dataset with only statements labeled `low`, `medium`, `high`, and `very high` confidence.

## Dataset Card

---
language:
- en
pretty_name: "ICCS (IPCC Confidence in Climate Statements)"
tags:
- climate
- nlp
license: mit
task_categories:
- classification
---

## Code Download

The code to reproduce the dataset collection and our LLM benchmarking experiments is available on [GitHub](https://github.com/rlacombe/Climate-LLMs).

## Paper

We use this dataset to evaluate how recent LLMs fare at classifying the scientific confidence associated with each statement in a statistically representative, carefully constructed test split of the dataset.