sjrhuschlee committed
Commit 05f588c
1 Parent(s): 6909c41

Update README.md

Files changed (1)
  1. README.md +21 -1
README.md CHANGED
@@ -96,7 +96,12 @@ This model was trained using LoRA available through the [PEFT library](https://g
### Using Transformers
This uses the merged weights (base model weights + LoRA weights) to allow for simple use in Transformers pipelines. It has the same performance as using the weights separately when using the PEFT library.
```python
- from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
+ import torch
+ from transformers import (
+     AutoModelForQuestionAnswering,
+     AutoTokenizer,
+     pipeline
+ )
model_name = "sjrhuschlee/deberta-v3-large-squad2"

# a) Using pipelines
@@ -106,10 +111,25 @@ qa_input = {
  'context': 'My name is Sarah and I live in London'
}
res = nlp(qa_input)
+ # {'score': 0.984, 'start': 30, 'end': 37, 'answer': ' London'}

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ question = 'Where do I live?'
+ context = 'My name is Sarah and I live in London'
+ encoding = tokenizer(question, context, return_tensors="pt")
+ start_scores, end_scores = model(
+     encoding["input_ids"],
+     attention_mask=encoding["attention_mask"],
+     return_dict=False
+ )
+
+ all_tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
+ answer_tokens = all_tokens[torch.argmax(start_scores):torch.argmax(end_scores) + 1]
+ answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
+ # 'London'
```
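For reference, the argmax span selection that the added snippet performs in step (b) can be sketched without loading the model. The token strings and scores below are invented placeholders for this illustration, not real tokenizer or model outputs:

```python
# Toy illustration of answer-span extraction in a QA head: the model emits
# one start score and one end score per token, and the answer is the token
# span from the argmax of the start scores to the argmax of the end scores
# (inclusive). Tokens and scores here are made up for the example.
tokens = ["[CLS]", "Where", "do", "I", "live", "?", "[SEP]",
          "My", "name", "is", "Sarah", "and", "I", "live", "in", "London", "[SEP]"]
start_scores = [0.0] * len(tokens)
end_scores = [0.0] * len(tokens)
start_scores[15] = 9.1  # pretend the highest start logit falls on "London"
end_scores[15] = 8.7    # pretend the highest end logit also falls on "London"

start = max(range(len(tokens)), key=start_scores.__getitem__)
end = max(range(len(tokens)), key=end_scores.__getitem__)
answer_tokens = tokens[start:end + 1]
print(" ".join(answer_tokens))  # London
```

In the real snippet the same indices are applied to `all_tokens`, and the span is decoded back to text with the tokenizer rather than joined with spaces.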

### Using with Peft