OfekGlick committed on
Commit
9c752d1
1 Parent(s): 78011fa

Update README.md

Files changed (1)
  1. README.md +125 -30
README.md CHANGED
@@ -8,68 +8,159 @@ tags:
  - Discourse
  - Discourse Evaluation
  - NLP
- pretty_name: DiscoEval Benchmar
  size_categories:
- - 10K<n<100K
  ---

  # DiscoEval Benchmark Datasets

- ## Overview

- The DiscoEval benchmark datasets offer a collection of tasks designed to evaluate natural language understanding models in the context of discourse analysis and coherence. These tasks encompass various aspects of discourse comprehension and generation:

  1. **Sentence Positioning**
- - **Datasets**: Arxiv, Wikipedia, Rocstory
- - **Description**: Determine the correct placement of a sentence within a given context or document.

  2. **Binary Sentence Ordering**
- - **Datasets**: Arxiv, Wikipedia, Rocstory
- - **Description**: Choose between two possible orders for a pair of sentences, identifying the more coherent structure.

  3. **Discourse Coherence**
- - **Datasets**: Ubuntu IRC channel, Wikipedia
- - **Description**: Evaluate the coherence of a discourse or conversation by determining if a sequence of sentences forms a coherent conversation.

  4. **Sentence Section Prediction**
- - **Dataset**: Constructed from PeerRead
- - **Description**: Predict the section or category to which a sentence belongs within a scientific paper, based on the content and context.

  5. **Discourse Relations**
- - **Datasets**: RST Discourse Treebank, Penn Discourse Treebank
- - **Description**: Identify and classify discourse relations between sentences or text segments, helping to reveal the structure and flow of discourse.

- ## Dataset Sources

- - **Arxiv**: A repository of scientific papers and research articles.
- - **Wikipedia**: An extensive online encyclopedia with articles on diverse topics.
- - **Rocstory**: A dataset consisting of fictional stories.
- - **Ubuntu IRC channel**: Conversational data extracted from the Ubuntu Internet Relay Chat (IRC) channel.
- - **PeerRead**: A dataset of scientific papers frequently used for discourse-related tasks.
- - **RST Discourse Treebank**: A dataset annotated with Rhetorical Structure Theory (RST) discourse relations.
- - **Penn Discourse Treebank**: Another dataset with annotated discourse relations, facilitating the study of discourse structure.

- ## Use Cases

- The DiscoEval benchmark datasets are valuable for training and evaluating natural language understanding models, particularly those focusing on discourse analysis, document structure prediction, and text generation tasks.

- ## Citation

- Ensure you cite the appropriate sources and papers associated with each dataset when using them in your research or applications.

- ## License

- Each dataset within the DiscoEval benchmark may have distinct licensing terms. Always review and comply with the individual dataset's licensing terms and conditions when using them.

  ## Loading Data Examples

- ### Loading Data for Sentence Positioning Task

  ```python
  from datasets import load_dataset

  # Load the Sentence Positioning dataset
- dataset = load_dataset("disco_eval/sentence_positioning")

  # Access the train, validation, and test splits
  train_data = dataset["train"]
@@ -79,3 +170,7 @@ test_data = dataset["test"]
  # Example usage: Print the first few training examples
  for example in train_data.select(range(5)):
      print(example)
  - Discourse
  - Discourse Evaluation
  - NLP
+ pretty_name: DiscoEval
  size_categories:
+ - 100K<n<1M
  ---

  # DiscoEval Benchmark Datasets

+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Dataset Sources](#dataset-sources)
+   - [Supported Tasks](#supported-tasks)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Additional Information](#additional-information)
+   - [Benchmark Creators](#benchmark-creators)
+   - [Citation Information](#citation-information)
+ - [Loading Data Examples](#loading-data-examples)
+   - [Loading Data for Sentence Positioning Task with the Arxiv data source](#loading-data-for-sentence-positioning-task-with-the-arxiv-data-source)
+
+ ## Dataset Description
+
+ - **Repository:** [DiscoEval repository](https://github.com/ZeweiChu/DiscoEval)
+ - **Paper:** [Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations](https://arxiv.org/pdf/1909.00142)
+
+ ### Dataset Summary
+
+ DiscoEval is an English-language benchmark containing a test suite of seven
+ tasks that evaluate whether sentence representations capture semantic
+ information relevant to discourse processing. The benchmark datasets offer a
+ collection of tasks designed to evaluate natural language understanding models
+ in the context of discourse analysis and coherence.
+
+ ### Dataset Sources
+
+ - **Arxiv**: A repository of scientific papers and research articles.
+ - **Wikipedia**: An extensive online encyclopedia with articles on diverse topics.
+ - **Rocstory**: A dataset consisting of fictional stories.
+ - **Ubuntu IRC channel**: Conversational data extracted from the Ubuntu Internet Relay Chat (IRC) channel.
+ - **PeerRead**: A dataset of scientific papers frequently used for discourse-related tasks.
+ - **RST Discourse Treebank**: A dataset annotated with Rhetorical Structure Theory (RST) discourse relations.
+ - **Penn Discourse Treebank**: Another dataset with annotated discourse relations, facilitating the study of discourse structure.
 
+
+ ### Supported Tasks

  1. **Sentence Positioning**
+ - **Dataset Sources**: Arxiv, Wikipedia, Rocstory
+ - **Description**: Determine the correct placement of a sentence within a given context of five sentences. To form the input when training classifiers, encode the five sentences into vector representations $x_i$. The classifier input is $x_1$ concatenated with $x_1 - x_i$ for each remaining sentence: $[x_1, x_1 - x_2, x_1 - x_3, x_1 - x_4, x_1 - x_5]$

  2. **Binary Sentence Ordering**
+ - **Dataset Sources**: Arxiv, Wikipedia, Rocstory
+ - **Description**: Determine whether two sentences are in the correct consecutive order. To form the input when training classifiers, concatenate the embeddings of both sentences with their element-wise difference: $[x_1, x_2, x_1 - x_2]$

  3. **Discourse Coherence**
+ - **Dataset Sources**: Ubuntu IRC channel, Wikipedia
+ - **Description**: Determine whether a sequence of six sentences forms a coherent paragraph. To form the input when training classifiers, encode all sentences into vector representations and concatenate them: $[x_1, x_2, x_3, x_4, x_5, x_6]$

  4. **Sentence Section Prediction**
+ - **Dataset Sources**: Constructed from PeerRead
+ - **Description**: Determine the section or category to which a sentence belongs within a scientific paper, based on its content and context. The classifier input is simply the sentence embedding.

  5. **Discourse Relations**
+ - **Dataset Sources**: RST Discourse Treebank, Penn Discourse Treebank
+ - **Description**: Identify and classify discourse relations between sentences or text segments, helping to reveal the structure and flow of discourse. For the classifier input construction, refer to the [original paper](https://arxiv.org/pdf/1909.00142).
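The classifier input constructions described above can be sketched in a few lines of NumPy, assuming each sentence has already been encoded into a fixed-size vector (the sentence encoder itself is outside the benchmark, and these function names are illustrative, not part of the dataset's API):

```python
import numpy as np

def sentence_positioning_input(embs):
    # [x1, x1-x2, x1-x3, x1-x4, x1-x5] for five sentence embeddings
    x1 = embs[0]
    return np.concatenate([x1] + [x1 - x for x in embs[1:]])

def binary_ordering_input(x1, x2):
    # [x1, x2, x1-x2] for a sentence pair
    return np.concatenate([x1, x2, x1 - x2])

def discourse_coherence_input(embs):
    # [x1, ..., x6]: plain concatenation of all six embeddings
    return np.concatenate(embs)

# Toy 4-dimensional "embeddings" for shape checking
embs = [np.full(4, float(i)) for i in range(1, 6)]
print(sentence_positioning_input(embs).shape)  # (20,)
print(binary_ordering_input(embs[0], embs[1]).shape)  # (12,)
```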
 
 


+ ### Languages
+
+ The text in all datasets is in English. The associated BCP-47 code is `en`.
+
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ All tasks are classification tasks, and they differ in the number of sentences per example and the type of label.
+
+ An example from the Sentence Positioning task looks as follows:
+ ```
+ {'sentence_1': 'Dan was overweight as well.',
+ 'sentence_2': "Dan's parents were overweight.",
+ 'sentence_3': 'The doctors told his parents it was unhealthy.',
+ 'sentence_4': 'His parents understood and decided to make a change.',
+ 'sentence_5': 'They got themselves and Dan on a diet.',
+ 'label': '1'
+ }
+ ```
+ The label is '1' since the first sentence should go at position number 1 (counting from zero).

+ An example from the Binary Sentence Ordering task looks as follows:
+ ```
+ {'sentence_1': 'When she walked in, she felt awkward.',
+ 'sentence_2': "Janet decided to go to her high school's party.",
+ 'label': '0'
+ }
+ ```
+ The label is '0' because this is not the correct order of the sentences: it should be sentence_2 followed by sentence_1.

+ For more examples, refer to the [original paper](https://arxiv.org/pdf/1909.00142).
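Assuming the label gives the zero-indexed target position of `sentence_1` (as in the Sentence Positioning example above), recovering the intended order can be sketched as follows; `restore_order` is an illustrative helper, not part of the dataset:

```python
def restore_order(example):
    # Reinsert sentence_1 at the position named by the label
    # (zero-indexed); the remaining sentences keep their order.
    rest = [example[f"sentence_{i}"] for i in range(2, 6)]
    pos = int(example["label"])
    return rest[:pos] + [example["sentence_1"]] + rest[pos:]

example = {
    "sentence_1": "Dan was overweight as well.",
    "sentence_2": "Dan's parents were overweight.",
    "sentence_3": "The doctors told his parents it was unhealthy.",
    "sentence_4": "His parents understood and decided to make a change.",
    "sentence_5": "They got themselves and Dan on a diet.",
    "label": "1",
}
print(restore_order(example))
```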
114
 
115
+ ### Data Fields
116
+ In this benchmark, all data fields are string, including the labels.
117
 
118
+ ### Data Splits
119
 
120
+ The data is split into training, validation and test set for each of the tasks in the benchmark.
121
 
122
+ | Task and Dataset | Train | Valid | Test |
123
+ | ----- | ------ | ----- | ---- |
124
+ | Sentence Positioning: Arxiv| 10000 | 4000 | 4000|
125
+ | Sentence Positioning: Rocstory| 10000 | 4000 | 4000|
126
+ | Sentence Positioning: Wiki| 10000 | 4000 | 4000|
127
+ | Binary Sentence Ordering: Arxiv| 20000 | 8000 | 8000|
128
+ | Binary Sentence Ordering: Rocstory| 20000 | 8000 | 8000|
129
+ | Binary Sentence Ordering: Wiki| 20000 | 8000 | 8000|
130
+ | Discourse Coherence: Chat| 5816 | 1834 | 2418|
131
+ | Discourse Coherence: Wiki| 10000 | 4000 | 4000|
132
+ | Sentence Section Prediction | 10000 | 4000 | 4000 |
133
+ | Discourse Relation: Penn Discourse Tree Bank: Implicit | 8693 | 2972 | 3024 |
134
+ | Discourse Relation: Penn Discourse Tree Bank: Explicit | 9383 | 3613 | 3758 |
135
+ | Discourse Relation: RST Discourse Tree Bank | 17051 | 2045 | 2308 |
136
+
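For programmatic sanity checks, the table above can be transcribed into a dictionary. Note the keys are dataset config names, and mapping each table row to a config name is an assumption inferred from the name prefixes (SP = Sentence Positioning, BSO = Binary Sentence Ordering, DC = Discourse Coherence):

```python
# (train, valid, test) sizes transcribed from the table above;
# the row-to-config mapping is inferred, not stated by the dataset.
SPLIT_SIZES = {
    "SParxiv": (10000, 4000, 4000),
    "SProcstory": (10000, 4000, 4000),
    "SPwiki": (10000, 4000, 4000),
    "BSOarxiv": (20000, 8000, 8000),
    "BSOrocstory": (20000, 8000, 8000),
    "BSOwiki": (20000, 8000, 8000),
    "DCchat": (5816, 1834, 2418),
    "DCwiki": (10000, 4000, 4000),
    "SSPabs": (10000, 4000, 4000),
    "PDTB-I": (8693, 2972, 3024),
    "PDTB-E": (9383, 3613, 3758),
    "RST": (17051, 2045, 2308),
}

def total_examples(name):
    # Total examples across all three splits of a config
    return sum(SPLIT_SIZES[name])

print(total_examples("SParxiv"))  # 18000
```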
+ ## Additional Information
+
+ ### Benchmark Creators
+
+ This benchmark was created by Mingda Chen, Zewei Chu and Kevin Gimpel during work done at the University of Chicago and the Toyota Technological Institute at Chicago.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{mchen-discoeval-19,
+   title = {Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations},
+   author = {Mingda Chen and Zewei Chu and Kevin Gimpel},
+   booktitle = {Proc. of {EMNLP}},
+   year = {2019}
+ }
+ ```
154
 
155
  ## Loading Data Examples
156
 
157
+ ### Loading Data for Sentence Positioning Task with the Arxiv data source
158
 
159
  ```python
160
  from datasets import load_dataset
161
 
162
  # Load the Sentence Positioning dataset
163
+ dataset = load_dataset(path="OfekGlick/DiscoEval", name="SParxiv")
164
 
165
  # Access the train, validation, and test splits
166
  train_data = dataset["train"]
 
170
  # Example usage: Print the first few training examples
171
  for example in train_data[:5]:
172
  print(example)
173
+ ```
174
+
175
+ The other possible inputs for the `name` parameter are:
176
+ `SParxiv`, `SProcstory`, `SPwiki`, `SSPabs`, `PDTB-I`, `PDTB-E`, `BSOarxiv`, `BSOrocstory`, `BSOwiki`, `DCchat`, `DCwiki`, `RST`
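To pick a config by task rather than by raw name, the list above can be grouped by task family. This grouping is inferred from the name prefixes and is not part of the dataset's API:

```python
# Grouping inferred from config-name prefixes (SP = Sentence Positioning,
# BSO = Binary Sentence Ordering, DC = Discourse Coherence, SSP = Sentence
# Section Prediction); the Discourse Relations configs are named directly.
TASK_CONFIGS = {
    "Sentence Positioning": ["SParxiv", "SProcstory", "SPwiki"],
    "Binary Sentence Ordering": ["BSOarxiv", "BSOrocstory", "BSOwiki"],
    "Discourse Coherence": ["DCchat", "DCwiki"],
    "Sentence Section Prediction": ["SSPabs"],
    "Discourse Relations": ["PDTB-I", "PDTB-E", "RST"],
}

def configs_for(task):
    """Return the dataset config names belonging to a task family."""
    return TASK_CONFIGS[task]

all_names = [n for names in TASK_CONFIGS.values() for n in names]
print(len(all_names))  # 12
```

Each returned name can then be passed as the `name` argument of `load_dataset("OfekGlick/DiscoEval", name=...)` as in the example above.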