igorktech committed on
Commit
37e663c
1 Parent(s): 5af068f

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -8,4 +8,6 @@ datasets:
 ## Model description
 This model is based on [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).
 
- Initial weights were taken from [google/bert_uncased_L-8_H-256_A-4](https://huggingface.co/google/bert_uncased_L-8_H-256_A-4).
+ Initial weights were taken from [google/bert_uncased_L-8_H-256_A-4](https://huggingface.co/google/bert_uncased_L-8_H-256_A-4).
+
+ Maximum input length is 512 tokens, which is enough to encode a dialog with a few previous utterances (the average utterance length in SWDA, MAPTASK, MRDA, BT_OASIS, FRAMES, AMI, and DSTC3 is less than 11 tokens).