---
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - first section/first_section_train.csv
    - second section/second_section_train.csv
    - third section/third_section_train.csv
    - forth section/forth_section_train.csv
    - fifth section/fifth_section_train.csv
    - sixth section/sixth_section_train.csv
    - seventh section/seventh_section_train.csv
    - contentious/contencioso_section_train.csv
  - split: test
    path:
    - first section/first_section_test.csv
    - second section/second_section_test.csv
    - third section/third_section_test.csv
    - forth section/forth_section_test.csv
    - fifth section/fifth_section_test.csv
    - sixth section/sixth_section_test.csv
    - seventh section/seventh_section_test.csv
    - contentious/contencioso_section_test.csv
license: mit
task_categories:
- token-classification
language:
- pt
tags:
- legal
size_categories:
- 1K<n<10K
---
Work developed as part of the [IRIS](https://www.inesc-id.pt/projects/PR07005/) project.
# Extreme Multi-Label Classification of Descriptors
The goal of this dataset is to support training an extreme multi-label classifier that, given a judgment from the Supreme Court of Justice of Portugal (STJ), assigns the relevant descriptors to that judgment.
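Assuming the dataset is hosted on the Hugging Face Hub, it can be loaded with the `datasets` library as sketched below; the repository id is a placeholder, and the `train`/`test` splits follow the configuration declared in the metadata above.

```python
from datasets import load_dataset

# "<user>/<dataset-name>" is a placeholder; replace it with this dataset's
# actual Hub repository id. The default config concatenates the CSV files of
# all STJ sections into "train" and "test" splits.
dataset = load_dataset("<user>/<dataset-name>")

print(dataset)              # DatasetDict with "train" and "test" splits
print(dataset["train"][0])  # first training example; column names come from the CSV headers
```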
## Dataset Contents
- Judgment ID: Unique identifier for each judgment.
- STJ Section: The section of the STJ to which the judgment belongs.
- Judgment Text: Full text of the judgment.
- Descriptors List: A list of binary values (0s and 1s) in which a 1 marks each descriptor that is active for the judgment.
The dataset is organized by STJ section, and each section is further split into training and test subsets.
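A minimal sketch of turning one section's CSV into (judgment text, multi-hot descriptor vector) pairs. The column names used here are assumptions (check the actual CSV headers), and the descriptor list is assumed to be serialized as a string such as `"[0, 1, 0, ...]"`.

```python
import ast
import pandas as pd

# Assumed column names; adjust them to the actual CSV headers.
TEXT_COLUMN = "text"
LABELS_COLUMN = "labels"

df = pd.read_csv("first section/first_section_train.csv")

def parse_row(row):
    """Return (judgment text, multi-hot descriptor vector) for one judgment."""
    text = row[TEXT_COLUMN]
    # Assumed serialization: the binary descriptor list stored as a string.
    labels = ast.literal_eval(row[LABELS_COLUMN])
    return text, labels

texts, label_vectors = zip(*(parse_row(row) for _, row in df.iterrows()))
print(len(texts), "judgments,", len(label_vectors[0]), "descriptor positions each")
```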
## Additional Files
- `label.py`: A Python file containing one list of descriptor names per section. The order of each list matches the order of the 0/1 positions in that section's descriptor vectors.
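A small sketch of mapping a judgment's 0/1 vector back to descriptor names using the lists in `label.py`; the variable name imported below is hypothetical and should be replaced with the actual list name for the section of interest.

```python
# FIRST_SECTION_DESCRIPTORS is a placeholder for the corresponding list
# defined in label.py for the section you are working with.
from label import FIRST_SECTION_DESCRIPTORS

def active_descriptors(binary_labels, descriptor_names):
    """Return the names of the descriptors flagged with 1 in `binary_labels`.

    Position i in the binary vector corresponds to the i-th name in the list,
    matching the ordering described above.
    """
    return [name for name, flag in zip(descriptor_names, binary_labels) if flag == 1]

# Usage with a row parsed as in the previous sketch:
# print(active_descriptors(label_vectors[0], FIRST_SECTION_DESCRIPTORS))
```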
## Contributions
insert paper :)