---
license: cc-by-nc-sa-4.0
pipeline_tag: fill-mask
language: en
datasets:
- OpenSubtitles
library_name: transformers
---
## Model description
This model is based on the paper [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529) by Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott (2022, arXiv:2210.05529, preprint).

Initial weights were taken from [google/bert_uncased_L-8_H-256_A-4](https://huggingface.co/google/bert_uncased_L-8_H-256_A-4).
The model was additionally pretrained for 20,000 steps on 5M lines of text from the English portion of the [OpenSubtitles](http://www.opensubtitles.org/) dataset.
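
Since the card declares `pipeline_tag: fill-mask` and `library_name: transformers`, the model should be usable with the standard fill-mask pipeline. Below is a minimal sketch; the repository id `your-namespace/this-model` is a placeholder for this model's actual Hub id.

```python
from transformers import pipeline

# Placeholder id: substitute the actual Hub repository id of this model.
fill_mask = pipeline("fill-mask", model="your-namespace/this-model")

# BERT-style checkpoints use the [MASK] token for masked positions.
for prediction in fill_mask("I will [MASK] you tomorrow."):
    print(prediction["token_str"], round(prediction["score"], 4))
```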

The maximum input length is 512 tokens, which is enough to encode a dialog together with a few previous utterances (the average utterance length in SWDA, MAPTASK, MRDA, BT_OASIS, FRAMES, AMI, and DSTC3 is less than 11 tokens).
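
As a rough illustration of how a dialog with preceding utterances could be packed into that 512-token window, the sketch below joins utterances with the tokenizer's separator token and truncates to the maximum length. This is an assumed usage pattern, not the authors' prescribed preprocessing, and the repository id is again a placeholder.

```python
from transformers import AutoTokenizer

# Placeholder id: substitute the actual Hub repository id of this model.
tokenizer = AutoTokenizer.from_pretrained("your-namespace/this-model")

utterances = [
    "Hi, how are you?",
    "Pretty good, thanks. Did you finish the report?",
    "Almost, I will send it to you tomorrow.",
]

# Join utterances with the separator token so the whole context fits in
# one input; at ~11 tokens per utterance, 512 tokens cover a long history.
dialog = f" {tokenizer.sep_token} ".join(utterances)
encoded = tokenizer(dialog, truncation=True, max_length=512, return_tensors="pt")
print(encoded["input_ids"].shape)
```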