---
license: mit
---
# Model description
LegalBert is a BERT-base-cased model fine-tuned on a subset of the `case.law` corpus. Further details can be found in the following paper:

[A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering](http://ceur-ws.org/Vol-2645/paper5.pdf)
Nils Holzenberger, Andrew Blair-Stanek and Benjamin Van Durme
*Proceedings of the 2020 Natural Legal Language Processing (NLLP) Workshop, 24 August 2020*
# Usage
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("jhu-clsp/LegalBert")
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/LegalBert")
```
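Once loaded, the model can be used like any other `transformers` encoder to produce contextual embeddings for legal text. A minimal sketch is shown below; the example sentence is illustrative and not from the original card:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("jhu-clsp/LegalBert")
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/LegalBert")

# Tokenize an example sentence and run it through the encoder
inputs = tokenizer(
    "The court granted the defendant's motion to dismiss.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```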
# Citation
```bibtex
@inproceedings{holzenberger20dataset,
  author    = {Nils Holzenberger and
               Andrew Blair{-}Stanek and
               Benjamin Van Durme},
  title     = {A Dataset for Statutory Reasoning in Tax Law Entailment and Question
               Answering},
  booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2020
               co-located with the 26th {ACM} {SIGKDD} International Conference on
               Knowledge Discovery {\&} Data Mining {(KDD} 2020), Virtual Workshop,
               August 24, 2020},
  series    = {{CEUR} Workshop Proceedings},
  volume    = {2645},
  pages     = {31--38},
  publisher = {CEUR-WS.org},
  year      = {2020},
  url       = {http://ceur-ws.org/Vol-2645/paper5.pdf}
}
```