# FLAN T5
FLAN T5 is a model built by instruction finetuning the paust/pko-t5-large model on a variety of tasks.
Instruction finetuning is still in progress, and we keep updating the model with intermediate checkpoints.
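For context, the sketch below shows what instruction finetuning of a seq2seq model looks like in general: a task example is rendered into an instruction-style source/target text pair and trained with the ordinary seq2seq cross-entropy objective. This is not the actual training code for this model, and the NSMC-style template is hypothetical; only the base checkpoint name comes from this card.

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Start from the base checkpoint named in this card.
tokenizer = T5TokenizerFast.from_pretrained('paust/pko-t5-large')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-t5-large')

# Hypothetical instruction-style rendering of one NSMC classification
# example: the instruction plus review becomes the source text, the label
# word becomes the target text.
source = "다음 영화 리뷰가 긍정인지 부정인지 분류하세요: 정말 재미있게 봤습니다."  # "Classify this review..."
target = "긍정"  # "positive"

inputs = tokenizer(source, return_tensors='pt')
labels = tokenizer(target, return_tensors='pt').input_ids

# One optimization step of the standard seq2seq cross-entropy objective.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
```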
## Trained tasks
| Task name | Task type |
| --- | --- |
| NSMC | Classification |
| KLUE YNAT | Classification |
| KorNLI | Classification |
| KorSTS | Classification |
| QuestionPair | Classification |
| KLUE STS | Classification |
| AIHub news Summary | Summarization |
| AIHub document Summary | Summarization |
| AIHub book Summary | Summarization |
| AIHub conversation Summary | Summarization |
| AIHub ko-to-en | Translation |
| AIHub ko-to-en Expert | Translation |
| AIHub ko-to-en Tech | Translation |
| AIHub ko-to-en Social | Translation |
| AIHub ko-to-jp | Translation |
| AIHub ko-to-cn Tech | Translation |
| AIHub Translation Corpus | Translation |
| KorQuAD | QA |
| KLUE MRC | QA |
| AIHub mindslab's MRC | QA |
## Usage example
```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained('paust/pko-flan-t5-large')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-flan-t5-large', device_map='cuda')

# A Korean passage about Seoul, followed by the question
# "Where is the capital of Korea?"
prompt = """서울특별시(서울特別市, 영어: Seoul Metropolitan Government)는 대한민국 수도이자 최대 도시이다. 선사시대부터 사람이 거주하였으나 본 역사는 백제 첫 수도 위례성을 시초로 한다. 삼국시대에는 전략적 요충지로서 고구려, 백제, 신라가 번갈아 차지하였으며, 고려 시대에는 왕실의 별궁이 세워진 남경(南京)으로 이름하였다.
한국의 수도는 어디입니까?"""

input_ids = tokenizer(prompt, add_special_tokens=True, return_tensors='pt').input_ids
output_ids = model.generate(input_ids=input_ids.cuda(), max_new_tokens=32, num_beams=12)
text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(text)  # 서울특별시 ("Seoul Metropolitan City")
```
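The same `generate` call works for the other trained task types, reusing the tokenizer and model loaded above. Below is a minimal sketch for a summarization-style prompt; the instruction wording is illustrative, since the exact templates used during finetuning are not documented in this card.

```python
# Reuses the tokenizer and model loaded in the snippet above.
article_text = "..."  # replace with the document you want summarized
prompt = "다음 글을 요약하세요:\n" + article_text  # "Summarize the following text:"
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output_ids = model.generate(input_ids=input_ids.cuda(), max_new_tokens=64, num_beams=4)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```

A larger `max_new_tokens` budget suits free-form summaries better than the short-answer QA setting above; the beam width is a quality/latency trade-off you can tune.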
## License
pko-t5, created by PAUST, is released under the MIT license.