Joelito committed
Commit 7c48587
1 parent: 85f5e39

uploaded dataset files

Files changed (4)
  1. README.md +122 -0
  2. all_v1.json +0 -0
  3. prepare_data.py +29 -0
  4. train.jsonl.xz +3 -0
README.md ADDED
@@ -0,0 +1,122 @@
+ # Dataset Card for PlainEnglishContractsSummarization
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/lauramanor/legal_summarization)
+ - **Repository:**
+ - **Paper:** [ACL Anthology](https://aclanthology.org/W19-2201/)
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ This dataset, introduced in [Plain English Summarization of Contracts](https://aclanthology.org/W19-2201/) (Manor and Li, 2019), pairs sections of legal contracts, such as software terms of service and license agreements, with plain-English summaries.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The contract texts and their summaries are in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ Manor, Laura, and Junyi Jessy Li. 2019. [Plain English Summarization of Contracts](https://aclanthology.org/W19-2201/). In *Proceedings of the Natural Legal Language Processing Workshop 2019*. Association for Computational Linguistics.
+
+ ### Contributions
+
+ Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
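Since the card's usage sections are still placeholders, here is a minimal loading sketch with 🤗 Datasets. The repository id below is a placeholder, not confirmed by this commit; substitute the dataset's actual Hub path once it is published.

```python
from datasets import load_dataset

# placeholder repository id; replace with the dataset's real Hub path
dataset = load_dataset("user/plain_english_contracts_summarization", split="train")
print(dataset[0])
```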
all_v1.json ADDED
The diff for this file is too large to render. See raw diff
 
prepare_data.py ADDED
@@ -0,0 +1,29 @@
+ import json
+ import os
+ from typing import Union
+
+ import datasets
+ import pandas as pd
+
+
+ def save_and_compress(dataset: Union[datasets.Dataset, pd.DataFrame], name: str, idx=None):
+     if idx is not None:
+         path = f"{name}_{idx}.jsonl"
+     else:
+         path = f"{name}.jsonl"
+
+     print("Saving to", path)
+     dataset.to_json(path, force_ascii=False, orient='records', lines=True)
+
+     print("Compressing...")
+     os.system(f'xz -zkf -T0 {path}')  # -T0 compresses with all available threads
+
+
+ entries = []
+ # read the JSON file into a Python dictionary and keep only its values
+ with open('all_v1.json') as f:
+     data = json.load(f)
+     for _, obj in data.items():
+         entries.append(obj)
+ os.makedirs("data", exist_ok=True)  # ensure the output directory exists
+ save_and_compress(pd.DataFrame(entries), "data/train")
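A quick sanity check after running the script: both pandas and 🤗 Datasets can read xz-compressed JSON Lines directly, so the output can be inspected without decompressing it by hand. This is a minimal sketch; it assumes the script above was run from the repository root, so the output sits at `data/train.jsonl.xz`.

```python
import pandas as pd
from datasets import load_dataset

# pandas infers the xz compression from the file extension
df = pd.read_json("data/train.jsonl.xz", lines=True)
print(df.shape)

# the generic "json" loader also accepts compressed JSON Lines files
ds = load_dataset("json", data_files="data/train.jsonl.xz", split="train")
print(ds)
```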
train.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c2dfeb33626c09ea7e81398b88000c64e2ac8f24f84c30abbd111c0e51380ee
+ size 84060
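The `train.jsonl.xz` entry is a Git LFS pointer rather than the file itself: it records only the SHA-256 (`oid`) and the byte size of the real object. Below is a small sketch for checking that a downloaded copy matches the pointer; the local path is an assumption.

```python
import hashlib
import os

path = "train.jsonl.xz"  # assumed local path of the downloaded LFS object

# hash in 1 MiB chunks so large files never need to fit in memory
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

print("oid sha256:", digest.hexdigest())  # expect 6c2dfeb3...e51380ee
print("size:", os.path.getsize(path))     # expect 84060
```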