Update README.md
README.md CHANGED
@@ -48,6 +48,34 @@ question_answerer(question=question, context=context)
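The change adds a second usage example that runs the model directly with `AutoTokenizer` and `AutoModelForQuestionAnswering`: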
```python
import torch
from transformers import AutoTokenizer
from transformers import AutoModelForQuestionAnswering

question = "What human advancement first emerged around 12,000 years ago during the Neolithic era?"
context = "The development of agriculture began around 12,000 years ago during the Neolithic Revolution. Hunter-gatherers transitioned to cultivating crops and raising livestock. Independent centers of early agriculture thrived in the Fertile Crescent, Egypt, China, Mesoamerica and the Andes. Farming supported larger, settled societies leading to rapid cultural development and population growth."

# Tokenize the question/context pair into model inputs
tokenizer = AutoTokenizer.from_pretrained("Falconsai/question_answering")
inputs = tokenizer(question, context, return_tensors="pt")

# Run the model and take the most likely start/end token positions
model = AutoModelForQuestionAnswering.from_pretrained("Falconsai/question_answering")
with torch.no_grad():
    outputs = model(**inputs)

answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()

# Decode the predicted answer span back into text
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
answer = tokenizer.decode(predict_answer_tokens)
print(answer)
```
## Ethical Considerations
Care has been taken to minimize biases in the training data. However, biases may still be present, and users are encouraged to evaluate the model's predictions for potential bias and fairness concerns, especially when applied to different demographic groups.
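One lightweight way to follow this advice is to spot-check predictions on contexts that differ only in a demographic detail. The snippet below is an illustrative sketch, not part of the model card; the example sentences and the comparison approach are assumptions.

```python
from transformers import pipeline

# Hypothetical spot check: compare predictions on contexts that differ
# only in the name of the person involved.
qa = pipeline("question-answering", model="Falconsai/question_answering")

question = "Who received the job offer?"
contexts = [
    "After the final interview, Maria received the job offer.",
    "After the final interview, Mohammed received the job offer.",
]

for context in contexts:
    result = qa(question=question, context=context)
    # Large score gaps between otherwise-identical contexts may warrant a closer look.
    print(context, "->", result["answer"], round(result["score"], 3))
```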