---
license: apache-2.0
datasets:
- hakurei/open-instruct-v1
language:
- en
tags:
- code
- instruction-following
widget:
- text: Tell me how to bake a cake
example_title: Baking cakes
- text: How can I print a Fibonacci series up to N in C++
example_title: Coding
---
# DialoGPT2 Instruction Following
This is a fine-tuned version of [microsoft/dialogpt-small](https://huggingface.co/microsoft/DialoGPT-small) for the instruction-following task, trained on the [hakurei/open-instruct-v1](https://huggingface.co/datasets/hakurei/open-instruct-v1) dataset.
## Using the model
### Using `model.generate()`
To use the model, first load the checkpoint and initialize the tokenizer and model:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("smji/dialogpt2-instruct-following")
model = AutoModelForCausalLM.from_pretrained("smji/dialogpt2-instruct-following")
```
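If a GPU is available, you can optionally move the model onto it first (a minimal sketch; everything below also runs on CPU, just more slowly):
```python
import torch

# Optional: use a GPU when one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```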
Then generate text with a small helper function:
```python
def generate_text(prompt):
    # Encode the prompt and place it on the same device as the model
    inputs = tokenizer.encode(prompt, return_tensors='pt').to(model.device)
    outputs = model.generate(inputs, max_length=512, pad_token_id=tokenizer.eos_token_id)
    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Trim any trailing partial sentence
    return generated_text[:generated_text.rfind('.') + 1]

generate_text("How can I bake a cake?")
```
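The helper above decodes greedily. If you want more varied responses, you can switch the `model.generate()` call inside `generate_text` to sampling; the `temperature` and `top_p` values below are illustrative defaults, not tuned for this model:
```python
outputs = model.generate(
    inputs,
    max_length=512,
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # soften the next-token distribution
    top_p=0.9,           # nucleus (top-p) sampling
    pad_token_id=tokenizer.eos_token_id,
)
```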
### Using the pipeline
Alternatively, you can use the `pipeline` helper:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="smji/dialogpt2-instruct-following")
pipe("How can I bake a cake?", max_length=512)
```
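Generation keyword arguments pass through the pipeline call as well, and the `device` argument pins the pipeline to a GPU (a sketch, assuming CUDA device 0 exists; omit `device` to stay on CPU):
```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="smji/dialogpt2-instruct-following",
    device=0,  # assumption: a CUDA GPU is available
)
pipe("How can I bake a cake?", max_length=512, do_sample=True, temperature=0.7)
```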
---
Done by [S M Jishanul Islam](https://github.com/S-M-J-I)