---
library_name: peft
base_model: shpotes/codegen-350M-mono
datasets:
- flytech/python-codes-25k
pipeline_tag: text-generation
tags:
- code
license: mit
---
## How to Get Started with the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load the base model with 4-bit quantization (QLoRA-style)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "shpotes/codegen-350M-mono",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("shpotes/codegen-350M-mono")

# Attach the QLoRA adapter on top of the quantized base model
peft_model = PeftModel.from_pretrained(model, "yamete4/codegen-350M-mono-QLoRa-flytech")

text = "Help me manage my subscriptions!?"
inputs = tokenizer(text, return_tensors="pt").to(peft_model.device)
outputs = peft_model.generate(inputs.input_ids, max_new_tokens=250, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Framework versions
- PEFT 0.9.0