techysanoj committed
Commit 4ecdd05
1 Parent(s): ec8fdb0

updating readme main

Files changed (1): README.md (+112 −2)
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
- license: openrail
+ license: cc-by-4.0
 datasets:
 - squad_v2
 language:
@@ -14,4 +14,114 @@ tags:
 - physics
 - chemistry
 - ancient
- ---
+ ---

# roberta-fine-tuned-squadv2 for QA

This is the [roberta-fine-tuned-squadv2](https://huggingface.co/techysanoj/roberta-fine-tuned-squadv2) model: roberta-base fine-tuned on the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It was trained on question-answer pairs, including unanswerable questions, for the task of extractive Question Answering.

## Overview
**Language model:** roberta-fine-tuned-squadv2
**Language:** English, Hindi (upcoming)
**Downstream task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure:** 4x Tesla V100

## Hyperparameters

```
batch_size = 4
n_epochs = 50
base_LM_model = "roberta-base"
max_seq_len = 512
learning_rate = 9e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```
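
As a rough illustration, here is how these values might map onto Hugging Face `TrainingArguments`. This is a minimal sketch, not the original training setup: the output directory is a placeholder, and the exact warmup/scheduler wiring of the original run is an assumption.

```python
from transformers import TrainingArguments

# A minimal sketch mapping the hyperparameters above onto TrainingArguments.
# Note: max_seq_len, doc_stride, and max_query_length are applied at
# tokenization time, not here.
training_args = TrainingArguments(
    output_dir="roberta-fine-tuned-squadv2",  # hypothetical output path
    per_device_train_batch_size=4,            # batch_size = 4
    num_train_epochs=50,                      # n_epochs = 50
    learning_rate=9e-5,                       # learning_rate = 9e-5
    lr_scheduler_type="linear",               # lr_schedule = LinearWarmup
    warmup_ratio=0.2,                         # warmup_proportion = 0.2
)
```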

## Usage

### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="techysanoj/roberta-fine-tuned-squadv2")
# or
reader = TransformersReader(model_name_or_path="techysanoj/roberta-fine-tuned-squadv2", tokenizer="techysanoj/roberta-fine-tuned-squadv2")
```
For a complete example of a reader like this being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system).
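
Once loaded, the reader can also be queried directly, without building a full retrieval pipeline. A minimal sketch, assuming the Haystack 1.x API (`haystack.nodes` / `haystack.schema`) and an illustrative in-memory document:

```python
from haystack.nodes import FARMReader
from haystack.schema import Document

# Run the reader over a single in-memory document instead of a full
# retriever-reader pipeline; the document text is illustrative only.
reader = FARMReader(model_name_or_path="techysanoj/roberta-fine-tuned-squadv2")
docs = [Document(content="SQuAD2.0 combines the questions in SQuAD1.1 with "
                         "over 50,000 unanswerable questions.")]
prediction = reader.predict(query="What does SQuAD2.0 combine?", documents=docs, top_k=1)
print(prediction["answers"][0].answer)
```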

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "techysanoj/roberta-fine-tuned-squadv2"

# a) Get predictions with a pipeline
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer directly
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
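
For a closer look at what the pipeline does under the hood, the model and tokenizer loaded in step (b) can be run by hand. A minimal sketch continuing the snippet above (it reuses `model`, `tokenizer`, and `QA_input`):

```python
import torch

# Manual inference: pick the highest-scoring start and end positions
# from the QA head's logits and decode the answer span.
inputs = tokenizer(QA_input['question'], QA_input['context'], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```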

## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).

```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
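
These metrics can also be computed without the standalone script, for example with the `evaluate` library's `squad_v2` metric. A minimal sketch; the ids and answers below are illustrative placeholders, not data from the actual eval run:

```python
import evaluate

# Compute SQuAD v2 metrics (exact match, F1, HasAns/NoAns breakdown)
# from predictions in the squad_v2 format.
squad_v2 = evaluate.load("squad_v2")
predictions = [{
    "id": "q1",                    # hypothetical question id
    "prediction_text": "Normandy",
    "no_answer_probability": 0.0,  # required by the squad_v2 metric
}]
references = [{
    "id": "q1",
    "answers": {"text": ["Normandy"], "answer_start": [159]},
}]
print(squad_v2.compute(predictions=predictions, references=references))
```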

## Authors
**Branden Chan:** branden.chan@deepset.ai
**Timo Möller:** timo.moeller@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Tanay Soni:** tanay.soni@deepset.ai

## About us

<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
    <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
         <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
    </div>
    <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
         <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
    </div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.

Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.

We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)