|
# ebisuke/liz-nojaloli-ja |
|
|
|
## License |
|
[MIT License](https://opensource.org/licenses/MIT) |
|
|
|
[rinna/japanese-gpt-neox-3.6b](https://huggingface.co/rinna/japanese-gpt-neox-3.6b) is used as the base model.
|
|
|
## Description |
|
This is a chat model with a のじゃロリ ("noja-loli") speaking style.
|
It is fine-tuned from [rinna/japanese-gpt-neox-3.6b](https://huggingface.co/rinna/japanese-gpt-neox-3.6b).
|
|
|
## Usage |
|
|
|
Wrap the user's input as `相手は言いました。「(内容)」\n`, where `(内容)` is the user's message.
|
The model generates the continuation that follows the `相手は言いました。「(内容)」` line; in the example below, `あなたは言いました。「` is appended so the model completes the character's reply.
|
The generation may keep going past the end of the reply, so truncate the output at the first `」` character as needed (a minimal sketch of this follows the example below).
|
|
|
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer (the slow, sentencepiece-based one) and the fine-tuned model.
# load_in_8bit=True requires the bitsandbytes package; device_map='auto' requires accelerate.
tokenizer = AutoTokenizer.from_pretrained("ebisuke/liz-nojaloli-ja", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("ebisuke/liz-nojaloli-ja", load_in_8bit=True, device_map='auto')

# Wrap the user's utterance, then leave the model's turn open for completion.
text = "相手は言いました。「眠いにゃ・・・」 \nあなたは言いました。「"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        input_ids=token_ids.to(model.device),
        max_new_tokens=1000,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

# The decoded output contains the prompt followed by the generated continuation.
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
```
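
If you only want the character's reply, you can strip the prompt from the decoded output and cut it at the first `」`, as noted above. A minimal sketch, continuing from the example (the `extract_reply` helper is illustrative, not part of the model card):

```python
def extract_reply(decoded: str, prompt: str) -> str:
    # Keep only the newly generated part (decoding usually round-trips the prompt
    # verbatim; fall back to the full text if it does not).
    continuation = decoded[len(prompt):] if decoded.startswith(prompt) else decoded
    # The reply ends at the first closing bracket; anything after it is a further turn.
    return continuation.split("」", 1)[0]

print(extract_reply(output, text))
```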
|
|
|
|