
Pre-trained Language Model for England and Wales Court of Appeal (Criminal Division) Decisions

Introduction

Research into bias in criminal court decisions needs the support of natural language processing tools.

Pre-trained language models have greatly improved the accuracy of text mining on general texts. There is now a pressing need for a pre-trained language model tailored to the automatic processing of court decision texts.

We used the text of decisions published on the Bailii website as the training set. Based on the RoBERTa deep language model framework, we built the bailii-roberta pre-trained language model using the transformers/run_mlm.py and transformers/mlm_wwm scripts.
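As an illustrative sketch of such a pre-training run (the file name and hyperparameters below are placeholders, not the actual training configuration), an invocation of the Hugging Face run_mlm.py example script might look like:

```shell
# Hypothetical run_mlm.py invocation; the training file and
# hyperparameters are illustrative, not the actual setup used.
python run_mlm.py \
    --model_name_or_path roberta-base \
    --train_file bailii_decisions.txt \
    --do_train \
    --per_device_train_batch_size 8 \
    --max_seq_length 512 \
    --output_dir ./bailii-roberta
```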

How to use

Huggingface Transformers

The bailii-roberta model can be fetched directly online via the from_pretrained method of Huggingface Transformers.

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("tsantosh7/bailii-roberta")
model = AutoModel.from_pretrained("tsantosh7/bailii-roberta")
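As a quick check that the checkpoint loads and produces masked-token predictions, a fill-mask pipeline can be used; the example sentence below is illustrative, not drawn from the training data.

```python
from transformers import pipeline

# Fill-mask pipeline using the bailii-roberta checkpoint;
# RoBERTa-style models use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model="tsantosh7/bailii-roberta")

# Illustrative sentence; prints the top predicted tokens and scores.
predictions = fill_mask("The appellant was convicted of <mask>.")
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```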

Download Models

  • The model is provided in PyTorch format.

From Huggingface

Disclaimer

  • The experimental results presented in the report reflect performance only under a specific dataset and hyperparameter combination, and do not characterize each model in general. Results may vary with random seeds and computing hardware.
  • Users may use the model freely within the scope of the license, but we are not responsible for direct or indirect losses caused by use of this project.

Acknowledgment
