zohaib99k committed on
Commit
8c08bd3
1 Parent(s): 8b97a4f

Upload 11 files
README (2).md ADDED
@@ -0,0 +1,145 @@
+ ---
+ language: en
+ license: cc-by-4.0
+ datasets:
+ - squad_v2
+ model-index:
+ - name: deepset/roberta-base-squad2
+   results:
+   - task:
+       type: question-answering
+       name: Question Answering
+     dataset:
+       name: squad_v2
+       type: squad_v2
+       config: squad_v2
+       split: validation
+     metrics:
+     - type: exact_match
+       value: 79.9309
+       name: Exact Match
+       verified: true
+       verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhhNjg5YzNiZGQ1YTIyYTAwZGUwOWEzZTRiYzdjM2QzYjA3ZTUxNDM1NjE1MTUyMjE1MGY1YzEzMjRjYzVjYiIsInZlcnNpb24iOjF9.EH5JJo8EEFwU7osPz3s7qanw_tigeCFhCXjSfyN0Y1nWVnSfulSxIk_DbAEI5iE80V4EKLyp5-mYFodWvL2KDA
+     - type: f1
+       value: 82.9501
+       name: F1
+       verified: true
+       verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjk5ZDYwOGQyNjNkMWI0OTE4YzRmOTlkY2JjNjQ0YTZkNTMzMzNkYTA0MDFmNmI3NjA3NjNlMjhiMDQ2ZjJjNSIsInZlcnNpb24iOjF9.DDm0LNTkdLbGsue58bg1aH_s67KfbcmkvL-6ZiI2s8IoxhHJMSf29H_uV2YLyevwx900t-MwTVOW3qfFnMMEAQ
+     - type: total
+       value: 11869
+       name: total
+       verified: true
+       verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFkMmI2ODM0NmY5NGNkNmUxYWViOWYxZDNkY2EzYWFmOWI4N2VhYzY5MGEzMTVhOTU4Zjc4YWViOGNjOWJjMCIsInZlcnNpb24iOjF9.fexrU1icJK5_MiifBtZWkeUvpmFISqBLDXSQJ8E6UnrRof-7cU0s4tX_dIsauHWtUpIHMPZCf5dlMWQKXZuAAA
+ ---
+
+ # roberta-base for QA
+
+ This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It has been trained on question-answer pairs, including unanswerable questions, for the task of extractive question answering.
+
+ ## Overview
+ **Language model:** roberta-base
+ **Language:** English
+ **Downstream-task:** Extractive QA
+ **Training data:** SQuAD 2.0
+ **Eval data:** SQuAD 2.0
+ **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
+ **Infrastructure:** 4x Tesla V100
+
+ ## Hyperparameters
+
+ ```
+ batch_size = 96
+ n_epochs = 2
+ base_LM_model = "roberta-base"
+ max_seq_len = 386
+ learning_rate = 3e-5
+ lr_schedule = LinearWarmup
+ warmup_proportion = 0.2
+ doc_stride = 128
+ max_query_length = 64
+ ```
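+
+ These values come from the original FARM-based training run. Purely as an illustration, here is a hedged sketch of how the optimizer-side settings might map onto Hugging Face `TrainingArguments`; the `output_dir` and per-device batch size are assumptions, not part of the original setup.
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Sketch: mapping the card's hyperparameters onto the transformers trainer API.
+ args = TrainingArguments(
+     output_dir="roberta-base-squad2",  # assumption: any local path works
+     per_device_train_batch_size=24,    # 24 x 4 GPUs = effective batch size 96
+     num_train_epochs=2,
+     learning_rate=3e-5,
+     lr_scheduler_type="linear",        # LinearWarmup schedule
+     warmup_ratio=0.2,                  # warmup_proportion
+ )
+ # max_seq_len=386, doc_stride=128 and max_query_length=64 are applied when
+ # question/context pairs are tokenized into features, not set here.
+ ```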
+
+ ## Using a distilled model instead
+ Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has comparable prediction quality and runs at twice the speed of the base model.
+
+ ## Usage
+
+ ### In Haystack
+ Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
+ ```python
+ # Haystack v1.x imports
+ from haystack.nodes import FARMReader, TransformersReader
+
+ reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
+ # or
+ reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2", tokenizer="deepset/roberta-base-squad2")
+ ```
+ For a complete example of ``roberta-base-squad2`` being used for question answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system).
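+
+ As a rough sketch of the "at scale" case, the reader can be combined with a retriever in an `ExtractiveQAPipeline`; the document text and query below are illustrative, and the snippet assumes Haystack v1.x APIs.
+
+ ```python
+ from haystack.document_stores import InMemoryDocumentStore
+ from haystack.nodes import BM25Retriever, FARMReader
+ from haystack.pipelines import ExtractiveQAPipeline
+
+ # Index a few documents in memory (illustrative content).
+ document_store = InMemoryDocumentStore(use_bm25=True)
+ document_store.write_documents([{"content": "Haystack is an NLP framework by deepset."}])
+
+ retriever = BM25Retriever(document_store=document_store)
+ reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
+ pipe = ExtractiveQAPipeline(reader, retriever)
+
+ # Retrieve candidate documents, then extract answer spans from them.
+ result = pipe.run(query="Who makes Haystack?", params={"Retriever": {"top_k": 5}, "Reader": {"top_k": 3}})
+ print(result["answers"])
+ ```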
+
+ ### In Transformers
+ ```python
+ from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
+
+ model_name = "deepset/roberta-base-squad2"
+
+ # a) Get predictions
+ nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
+ QA_input = {
+     'question': 'Why is model conversion important?',
+     'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
+ }
+ res = nlp(QA_input)
+
+ # b) Load model & tokenizer
+ model = AutoModelForQuestionAnswering.from_pretrained(model_name)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ ```
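+
+ For completeness, a minimal sketch of running the `model` and `tokenizer` from step b) directly, without the pipeline. It simply picks the highest-scoring start/end logits and decodes that span; it does not handle the no-answer case, which SQuAD2.0-style usage would also need.
+
+ ```python
+ import torch
+
+ inputs = tokenizer(QA_input['question'], QA_input['context'], return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # Most likely start and end token positions of the answer span.
+ start = int(outputs.start_logits.argmax())
+ end = int(outputs.end_logits.argmax())
+ answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
+ print(answer)
+ ```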
+
+ ## Performance
+ Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
+
+ ```
+ "exact": 79.87029394424324,
+ "f1": 82.91251169582613,
+
+ "total": 11873,
+ "HasAns_exact": 77.93522267206478,
+ "HasAns_f1": 84.02838248389763,
+ "HasAns_total": 5928,
+ "NoAns_exact": 81.79983179142137,
+ "NoAns_f1": 81.79983179142137,
+ "NoAns_total": 5945
+ ```
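+
+ If you want to score your own predictions in the same format, here is a hedged sketch using the Hugging Face `evaluate` library's `squad_v2` metric; the id, texts, and answer offset below are illustrative, not taken from the dev set.
+
+ ```python
+ import evaluate
+
+ squad_v2_metric = evaluate.load("squad_v2")
+
+ # One answerable example; no_answer_probability > 0.5 would mark "no answer".
+ predictions = [{"id": "q1", "prediction_text": "deepset", "no_answer_probability": 0.0}]
+ references = [{"id": "q1", "answers": {"text": ["deepset"], "answer_start": [0]}}]
+
+ print(squad_v2_metric.compute(predictions=predictions, references=references))
+ # -> includes "exact", "f1", "HasAns_*" and "NoAns_*" keys as in the block above
+ ```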
+
+ ## Authors
+ **Branden Chan:** branden.chan@deepset.ai
+ **Timo Möller:** timo.moeller@deepset.ai
+ **Malte Pietsch:** malte.pietsch@deepset.ai
+ **Tanay Soni:** tanay.soni@deepset.ai
+
+ ## About us
+
+ <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
+ <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+ <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
+ </div>
+ <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+ <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
+ </div>
+ </div>
+
+ [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.
+
+ Some of our other work:
+ - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
+ - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
+ - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
+
+ ## Get in touch and join the Haystack community
+
+ <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
+
+ We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
+
+ [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
+
+ By the way: [we're hiring!](http://www.deepset.ai/jobs)
config (1).json ADDED
@@ -0,0 +1,24 @@
+ {
+   "architectures": [
+     "RobertaForQuestionAnswering"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "language": "english",
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "name": "Roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "type_vocab_size": 1,
+   "vocab_size": 50265
+ }
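
These fields are what `transformers` reads when instantiating the model; a minimal sketch of inspecting them, assuming only the standard `AutoConfig` API:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("deepset/roberta-base-squad2")
print(config.model_type)          # "roberta"
print(config.hidden_size)         # 768
print(config.num_hidden_layers)   # 12
print(config.architectures)       # ["RobertaForQuestionAnswering"]
```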
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a8d759d881d9c1b39dbf4ee451fb8a8c2d43ccbd180218863a54ffd9b4d2447
+ size 496233457
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac5db66fdcfecb400345d09787b71009d60805ef9883451071669cf951b5e2c7
+ size 496254442
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0b64ccefc1bcb569b604baea27eb873e5482fdf6eb3ceff1fb5368397db5aed
+ size 496313727
rust_model.ot ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a16ed126bbc8c4cf794406bac0c7946f62d0f175c02dc54d77a00a6255597ed
+ size 498638704
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}}
tf_model (1).h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b672dd16f09f6f805d407800278e60217b9d7c040df1dde5098765a40cdc88a
+ size 496513256
tokenizer_config (1).json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "model_max_length": 512, "full_tokenizer_file": null}
vocab.json ADDED
The diff for this file is too large to render. See raw diff