---
license: mit
task_categories:
- zero-shot-classification
- text-classification
- feature-extraction
language:
- en
tags:
- climate
pretty_name: ICCS (IPCC Confidence in Climate Statements)
size_categories:
- 1K<n<10K
---

# IPCC Confidence in Climate Statements

_What do LLMs know about climate? Let's find out!_

Of note: while the IPCC reports use a five-level confidence scale, almost no `very low confidence` statement survives the peer review process into the final reports, so no statement of the form _sentence (very low confidence)_ was retrievable. We therefore built our dataset using only statements labeled `low`, `medium`, `high`, and `very high` confidence.
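The collection step described above can be sketched as a small parser that strips the trailing confidence tag from each IPCC sentence. This is a minimal illustration, not the repository's actual code; the function and variable names are ours:

```python
import re

# The four confidence labels retained in the ICCS dataset.
LABELS = ("low", "medium", "high", "very high")

# Matches a trailing parenthesized tag such as "(medium confidence)",
# optionally followed by a final period.
TAG = re.compile(r"\s*\((low|medium|high|very high) confidence\)\.?\s*$")

def parse_statement(sentence: str):
    """Split an IPCC sentence into (text, confidence label); None if untagged."""
    match = TAG.search(sentence)
    if match is None:
        return None
    return sentence[: match.start()].strip(), match.group(1)
```

Sentences that carry no confidence tag (or a `very low confidence` tag, which never survives review) simply yield no example.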

## Code Download

The code to reproduce dataset collection and our LLM benchmarking experiments is available on [GitHub](https://github.com/rlacombe/Climate-LLMs).

## Paper

We use this dataset to evaluate how recent LLMs fare at classifying the scientific confidence associated with each statement in a statistically representative, carefully constructed test split of the dataset.

We show that `gpt3.5-turbo` and `gpt4` assess the correct confidence level with reasonable accuracy even in the zero-shot setting, but that, along with the other language models we tested, they consistently overstate the certainty level associated with low- and medium-confidence labels. Models generally perform better on reports published before their knowledge cutoff, and produce intuitive classifications on a baseline of non-climate statements. However, we caution that it is still not fully clear why these models perform well, and whether they may be picking up on linguistic cues within the climate statements rather than drawing solely on prior exposure to climate knowledge and/or the IPCC reports.
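One simple way to quantify the overconfidence pattern described above is to map the four labels onto an ordinal scale and measure the mean signed error of a model's predictions. This is a hypothetical sketch of such a metric, not necessarily the one used in the paper:

```python
# Ordinal encoding of the four ICCS confidence labels.
SCALE = {"low": 0, "medium": 1, "high": 2, "very high": 3}

def mean_signed_error(true_labels, predicted_labels):
    """Average of (predicted - true) on the ordinal scale.

    A positive value means the model tends to overstate confidence;
    a negative value means it tends to understate it.
    """
    diffs = [SCALE[p] - SCALE[t] for t, p in zip(true_labels, predicted_labels)]
    return sum(diffs) / len(diffs)
```

For example, a model that bumps every `low` and `medium` statement up one level while getting `high` and `very high` right would score a positive mean signed error, matching the overconfidence we observe.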

Our results have implications for climate communications and the use of generative language models in knowledge retrieval systems. We hope the ICCS dataset provides the NLP and climate sciences communities with a valuable tool with which to evaluate and improve model performance in this critical domain of human knowledge.

Pre-print upcoming.