endomorphosis committed on
Commit 2a73d06 · verified · 1 Parent(s): 822fc76

Upload 4 files

Files changed (3)
  1. README.md +50 -1
  2. cap.png +3 -0
  3. paraquet_to_json.py +37 -0
README.md CHANGED
@@ -1,3 +1,52 @@
  ---
- license: unknown
+ license: cc0-1.0
+ task_categories:
+ - text-generation
+ language:
+ - en
+ tags:
+ - legal
+ - law
+ - caselaw
+ pretty_name: Caselaw Access Project
+ size_categories:
+ - 1M<n<10M
  ---
+
+ <img src="https://huggingface.co/datasets/TeraflopAI/Caselaw_Access_project/resolve/main/cap.png" width="800">
+
+ # The Caselaw Access Project
+
+ In collaboration with Ravel Law, the Harvard Law Library digitized over 40 million pages of U.S. court decisions, comprising 6.7 million cases from the last 360 years, into a dataset that is widely accessible. Access a bulk download of the data through the Caselaw Access Project API (CAPAPI): https://case.law/caselaw/
+
+ Find more information about accessing state and federal written common-law court decisions in the bulk data service documentation: https://case.law/docs/
+
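+ As a quick illustration, the sketch below pulls a single case from CAPAPI with the `requests` library. The endpoint and parameter names follow the CAPAPI documentation as we understand it; treat them as assumptions and check https://case.law/docs/ for the current interface.
+
+ ```python
+ import requests
+
+ # Hypothetical CAPAPI query: fetch one Illinois case, including full text.
+ # The /v1/cases/ endpoint and the jurisdiction/full_case/page_size
+ # parameters are assumptions drawn from the CAPAPI docs.
+ resp = requests.get(
+     "https://api.case.law/v1/cases/",
+     params={"jurisdiction": "ill", "full_case": "true", "page_size": 1},
+ )
+ resp.raise_for_status()
+ for case in resp.json()["results"]:
+     print(case["name_abbreviation"], case["decision_date"])
+ ```
+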
+ Learn more about the Caselaw Access Project and all of the phenomenal work done by Jack Cushman, Greg Leppert, and Matteo Cargnelutti here: https://case.law/about/
+
+ Watch a live stream of the data release here: https://lil.law.harvard.edu/about/cap-celebration/stream
+
+ # Post-processing
+
+ Teraflop AI is excited to help support the Caselaw Access Project and the Harvard Library Innovation Lab in the release of over 6.6 million state and federal court decisions published throughout U.S. history. It is important to democratize fair access to this data for the public, the legal community, and researchers. This is a processed and cleaned version of the original CAP data.
+
+ The digitization of these texts introduced OCR errors. To prepare the texts for model training, we post-processed each one to fix encoding, normalization, repetition, redundancy, parsing, and formatting issues.
+
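+ The exact cleaning rules are not published in this README, but a minimal sketch of the kinds of fixes involved, using only the standard library, might look like the following; the specific regexes are illustrative assumptions, not the production pipeline.
+
+ ```python
+ import re
+ import unicodedata
+
+ def clean_ocr_text(text: str) -> str:
+     # Normalize Unicode so mis-encoded ligatures and accents become consistent.
+     text = unicodedata.normalize("NFKC", text)
+     # Rejoin words hyphenated across OCR line breaks ("deci-\nsion" -> "decision").
+     text = re.sub(r"(\w)-\n(\w)", r"\1\2", text)
+     # Collapse repeated spaces/tabs and runs of blank lines.
+     text = re.sub(r"[ \t]+", " ", text)
+     text = re.sub(r"\n{3,}", "\n\n", text)
+     return text.strip()
+
+ print(clean_ocr_text("The  court's deci-\nsion"))  # -> "The court's decision"
+ ```
+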
+ Teraflop AI’s data engine allows for the massively parallel processing of web-scale datasets into cleaned text form. Our one-click deployment made it easy to split the computation across thousands of nodes on our managed infrastructure.
+
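+ For working with the cleaned release itself, the simplest route is the `datasets` library. A minimal sketch, assuming the repository id from the citation below and a default "train" split:
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream the cleaned corpus without downloading every shard up front.
+ ds = load_dataset("TeraflopAI/Caselaw_Access_Project", streaming=True)
+ for example in ds["train"]:  # "train" split name is an assumption
+     print(example)  # inspect the fields of the first record
+     break
+ ```
+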
+ # Licensing Information
+
+ The Caselaw Access Project dataset is licensed under the [CC0 License](https://creativecommons.org/public-domain/cc0/).
+
+ # Citation Information
+ ```
+ The President and Fellows of Harvard University. "Caselaw Access Project." 2024, https://case.law/
+ ```
+ ```
+ @misc{ccap,
+   title={Cleaned Caselaw Access Project},
+   author={Enrico Shippole and Aran Komatsuzaki},
+   howpublished={\url{https://huggingface.co/datasets/TeraflopAI/Caselaw_Access_Project}},
+   year={2024}
+ }
+ ```
cap.png ADDED

Git LFS Details

  • SHA256: 3f1b7f484d26ee0052b27eb7e5b6dacd563ccfde29d306b02e2c1b8e02059d00
  • Pointer size: 131 Bytes
  • Size of remote file: 617 kB
paraquet_to_json.py ADDED
@@ -0,0 +1,37 @@
+ import os
+ import json
+
+ import pyarrow.parquet as pq
+
+ # Collect every subdirectory of the directory containing this script;
+ # each one is expected to hold Parquet shards of the dataset.
+ this_dir = os.path.dirname(os.path.abspath(__file__))
+ folders = []
+ for folder in os.listdir(this_dir):
+     if os.path.isdir(os.path.join(this_dir, folder)):
+         print(os.path.join(this_dir, folder))
+         folders.append(os.path.join(this_dir, folder))
+
+ # Convert every Parquet shard into one JSON file per record.
+ for this_path in folders:
+     print(this_path)
+     for file in os.listdir(this_path):
+         print(file)
+         if file.endswith(".parquet"):
+             this_file = os.path.join(this_path, file)
+             print(this_file)
+
+             # Read the Parquet file into an Arrow table, then a DataFrame.
+             table = pq.read_table(this_file)
+             df = table.to_pandas()
+             del table
+
+             # Serialize the DataFrame to a list of plain-dict records.
+             records = json.loads(df.to_json(orient="records"))
+             del df
+
+             # Write each record to its own JSON file named by its "id" field.
+             for record in records:
+                 record_id = record["id"]
+                 with open(os.path.join(this_path, f"{record_id}.json"), "w") as f:
+                     json.dump(record, f)