---
license: mit
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: output
    dtype: string
  - name: task
    dtype: string
  splits:
  - name: train
    num_bytes: 8972956600
    num_examples: 503698
  - name: validation
    num_bytes: 1259708059
    num_examples: 71638
  download_size: 4925396868
  dataset_size: 10232664659
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
# Lawma fine-tuning dataset
This fine-tuning dataset contains 260 legal classification tasks derived from the Supreme Court and Songer Court of Appeals databases, totalling over 500k training examples and 2B tokens. It was used to train Lawma 8B and Lawma 70B, which outperform GPT-4 on 95% of these legal tasks, by over 17 accuracy points on average. See our [arXiv preprint](https://arxiv.org/abs/2407.16615) and GitHub repository for more details.
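As a minimal sketch of how to work with the schema above (string features `instruction`, `output`, and `task`, with `train` and `validation` splits), the dataset can be loaded with the 🤗 `datasets` library. The repository ID below is an assumption; substitute this dataset's actual Hugging Face ID:

```python
from datasets import load_dataset

# Hypothetical repository ID -- replace with this dataset card's actual ID.
dataset = load_dataset("ricdomolm/lawma-fine-tuning")

# Each example pairs a task-specific classification prompt with its gold label.
example = dataset["train"][0]
print(example["task"])         # identifier of the legal classification task
print(example["instruction"])  # prompt presented to the model
print(example["output"])       # gold answer
```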
Our reasons for studying these legal classification tasks are both technical and substantive. From a technical machine-learning perspective, these tasks pose highly non-trivial classification problems on which even the best models leave much room for improvement. From a substantive legal perspective, efficient solutions to such classification problems have rich and important applications in legal research.
This dataset was created for the project:

*Lawma: The Power of Specialization for Legal Tasks*. Ricardo Dominguez-Olmedo, Vedant Nanda, Rediet Abebe, Stefan Bechtold, Christoph Engel, Jens Frankenreiter, Krishna Gummadi, Moritz Hardt, and Michael Livermore. 2024.
Please cite as:
```bibtex
@misc{dominguezolmedo2024lawmapowerspecializationlegal,
      title={Lawma: The Power of Specialization for Legal Tasks},
      author={Ricardo Dominguez-Olmedo and Vedant Nanda and Rediet Abebe and Stefan Bechtold and Christoph Engel and Jens Frankenreiter and Krishna Gummadi and Moritz Hardt and Michael Livermore},
      year={2024},
      eprint={2407.16615},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.16615},
}
```