aslessor, julien-c (HF staff) committed on
Commit d37221f
0 Parent(s):

Duplicate from microsoft/layoutlmv2-base-uncased


Co-authored-by: Julien Chaumond <julien-c@users.noreply.huggingface.co>

Files changed (6)
  1. .gitattributes +16 -0
  2. README.md +17 -0
  3. config.json +31 -0
  4. preprocessor_config.json +7 -0
  5. pytorch_model.bin +3 -0
  6. vocab.txt +0 -0
.gitattributes ADDED
@@ -0,0 +1,16 @@
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tar.gz filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
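
These `.gitattributes` entries tell Git LFS which file patterns to store as pointers rather than regular blobs. As a rough, illustrative check (not part of the commit), the matching can be approximated with Python's `fnmatch`; the file names below are just examples from this repository.

```python
import fnmatch

# LFS patterns copied from the .gitattributes diff above.
lfs_patterns = ["*.bin.*", "*.lfs.*", "*.bin", "*.h5", "*.tflite", "*.tar.gz", "*.ot",
                "*.onnx", "*.arrow", "*.ftz", "*.joblib", "*.model", "*.msgpack",
                "*.pb", "*.pt", "*.pth"]

def tracked_by_lfs(filename: str) -> bool:
    # Rough approximation of gitattributes glob matching on basenames.
    return any(fnmatch.fnmatch(filename, p) for p in lfs_patterns)

print(tracked_by_lfs("pytorch_model.bin"))  # True - stored as an LFS pointer
print(tracked_by_lfs("config.json"))        # False - stored as a regular text file
```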
README.md ADDED
@@ -0,0 +1,17 @@
+ ---
+ language: en
+ license: cc-by-nc-sa-4.0
+
+ ---
+
+ # LayoutLMv2
+ **Multimodal (text + layout/format + image) pre-training for document AI**
+
+ The documentation of this model in the Transformers library can be found [here](https://huggingface.co/docs/transformers/model_doc/layoutlmv2).
+
+ [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://github.com/microsoft/unilm/tree/master/layoutlmv2)
+ ## Introduction
+ LayoutLMv2 is an improved version of LayoutLM with new pre-training tasks to model the interaction among text, layout, and image in a single multi-modal framework. It outperforms strong baselines and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 → 0.8420), CORD (0.9493 → 0.9601), SROIE (0.9524 → 0.9781), Kleister-NDA (0.834 → 0.852), RVL-CDIP (0.9443 → 0.9564), and DocVQA (0.7295 → 0.8672).
+
+ [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740)
+ Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou, ACL 2021
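
The model card above only links out to the Transformers documentation; for orientation, here is a minimal usage sketch (not part of the committed files). It assumes the `LayoutLMv2Processor` and `LayoutLMv2Model` classes from Transformers plus their extra dependencies (detectron2, pytesseract); `document.png` stands in for any document image.

```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2Model

# Load the processor and model weights from this (duplicated) checkpoint.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2Model.from_pretrained("microsoft/layoutlmv2-base-uncased")

image = Image.open("document.png").convert("RGB")
# The processor runs OCR (apply_ocr=True) and builds input_ids, bbox and image tensors.
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)  # (batch, text tokens + 7*7 visual tokens, 768)
```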
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+ "attention_probs_dropout_prob": 0.1,
+ "coordinate_size": 128,
+ "fast_qkv": true,
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "image_feature_pool_shape": [
+ 7,
+ 7,
+ 256
+ ],
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "layer_norm_eps": 1e-12,
+ "max_2d_position_embeddings": 1024,
+ "max_position_embeddings": 512,
+ "max_rel_2d_pos": 256,
+ "max_rel_pos": 128,
+ "model_type": "layoutlmv2",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "output_past": true,
+ "pad_token_id": 0,
+ "shape_size": 128,
+ "rel_2d_pos_bins": 64,
+ "rel_pos_bins": 32,
+ "type_vocab_size": 2,
+ "vocab_size": 30522
+ }
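
Since `config.json` above holds the full architecture definition, a quick way to sanity-check it is to load it through `LayoutLMv2Config` in Transformers, whose attributes map onto these keys. This is an illustrative sketch, not part of the commit.

```python
from transformers import LayoutLMv2Config

# from_pretrained reads the config.json shown in the diff above.
config = LayoutLMv2Config.from_pretrained("microsoft/layoutlmv2-base-uncased")
print(config.hidden_size)               # 768
print(config.num_hidden_layers)         # 12
print(config.image_feature_pool_shape)  # [7, 7, 256] -> 49 visual tokens
```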
preprocessor_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "apply_ocr": true,
+ "do_resize": true,
+ "feature_extractor_type": "LayoutLMv2FeatureExtractor",
+ "resample": 2,
+ "size": 224
+ }
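
`preprocessor_config.json` configures the `LayoutLMv2FeatureExtractor`: resize page images to 224x224 with `resample=2` (PIL bilinear) and run OCR (`apply_ocr: true`) to extract words and bounding boxes. A hedged sketch of how those settings are consumed (not part of the commit; the OCR step needs pytesseract, and `document.png` is a placeholder):

```python
from PIL import Image
from transformers import LayoutLMv2FeatureExtractor

# from_pretrained reads the preprocessor_config.json shown above.
feature_extractor = LayoutLMv2FeatureExtractor.from_pretrained(
    "microsoft/layoutlmv2-base-uncased"
)
features = feature_extractor(Image.open("document.png").convert("RGB"))
# With apply_ocr=True the output holds resized pixel data plus OCR words/boxes,
# e.g. keys like ['pixel_values', 'words', 'boxes'] (exact keys depend on the version).
print(list(features.keys()))
```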
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8cffd5ed065ff81e1e5c9a38968372c8541ecb8499999c89a8d9e10d65de3406
+ size 802243295
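
`pytorch_model.bin` is committed as a Git LFS pointer: the three lines above record the spec version, the SHA-256 of the real weight file, and its size (~802 MB). After downloading the actual file, it can be checked against the pointer with a short hash comparison (illustrative sketch; the local path is an assumption):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file in 1 MiB chunks to avoid loading ~802 MB into memory at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# "pytorch_model.bin" is the locally downloaded weight file (path is an assumption).
expected = "8cffd5ed065ff81e1e5c9a38968372c8541ecb8499999c89a8d9e10d65de3406"
print(sha256_of("pytorch_model.bin") == expected)
```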
vocab.txt ADDED
The diff for this file is too large to render.