julianrisch committed on
Commit 78fb38a
1 Parent(s): c419f18

Update README.md

Files changed (1)
  1. README.md +27 -14
README.md CHANGED
@@ -129,7 +129,7 @@ model-index:
   name: F1
 ---
 
-# roberta-large for QA
+# roberta-large for Extractive QA
 
 This is the [roberta-large](https://huggingface.co/roberta-large) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
 
@@ -140,7 +140,7 @@ This is the [roberta-large](https://huggingface.co/roberta-large) model, fine-tu
 **Downstream-task:** Extractive QA
 **Training data:** SQuAD 2.0
 **Eval data:** SQuAD 2.0
-**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
+**Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline)
 **Infrastructure**: 4x Tesla v100
 
 ## Hyperparameters
@@ -155,13 +155,27 @@ Please note that we have also released a distilled version of this model called 
 ## Usage
 
 ### In Haystack
-Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
+Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents.
+To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):
 ```python
-reader = FARMReader(model_name_or_path="deepset/roberta-large-squad2")
-# or
-reader = TransformersReader(model_name_or_path="deepset/roberta-large-squad2", tokenizer="deepset/roberta-large-squad2")
+# After running pip install haystack-ai "transformers[torch,sentencepiece]"
+
+from haystack import Document
+from haystack.components.readers import ExtractiveReader
+
+docs = [
+    Document(content="Python is a popular programming language"),
+    Document(content="python ist eine beliebte Programmiersprache"),
+]
+
+reader = ExtractiveReader(model="deepset/roberta-large-squad2")
+reader.warm_up()
+
+question = "What is a popular programming language?"
+result = reader.run(query=question, documents=docs)
+# {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]}
 ```
-For a complete example of ``roberta-large-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
+For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).
 
 ### In Transformers
 ```python
@@ -199,13 +213,12 @@ tokenizer = AutoTokenizer.from_pretrained(model_name)
 </div>
 </div>
 
-[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
-
+[deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).
 
 Some of our other work:
-- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
-- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
-- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
+- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
+- [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1)
+- [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio)
 
 ## Get in touch and join the Haystack community
 
@@ -213,6 +226,6 @@ Some of our other work:
 
 We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
 
-[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
+[Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai)
 
-By the way: [we're hiring!](http://www.deepset.ai/jobs)
+By the way: [we're hiring!](http://www.deepset.ai/jobs)
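For readers skimming the diff: both the removed `FARMReader`/`TransformersReader` snippet and the new `ExtractiveReader` snippet perform extractive span selection — the model scores every token as a potential answer start and end, and the reader picks the highest-scoring valid span. A minimal, self-contained sketch of that selection step over toy logits (illustrative only; `best_span`, `max_len`, and the numbers are invented for this sketch and are not Haystack's or Transformers' actual implementation):

```python
# Toy sketch of extractive-QA span selection, NOT the real reader code.
from itertools import product


def best_span(start_logits, end_logits, max_len=15):
    """Return (start, end, score) of the highest-scoring valid span.

    A span is valid if it starts before it ends and is at most
    max_len tokens long; its score is start_logit + end_logit.
    """
    best = (0, 0, float("-inf"))
    for s, e in product(range(len(start_logits)), range(len(end_logits))):
        if s <= e < s + max_len:
            score = start_logits[s] + end_logits[e]
            if score > best[2]:
                best = (s, e, score)
    return best


# Toy logits over 5 tokens; token 2 is the likely answer start, token 3 the end.
start_logits = [0.1, 0.2, 3.0, 0.5, 0.1]
end_logits = [0.1, 0.3, 0.4, 2.5, 0.2]
print(best_span(start_logits, end_logits))  # (2, 3, 5.5)
```

Real SQuAD 2.0-style readers additionally compare the best span's score against a no-answer score, which is how unanswerable questions — which this model was trained on — get rejected rather than forced to produce a span.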