|
---
language:
- fi
tags:
- finnish
- gpt2
widget:
- text: "Jotta voidaan luoda tekstiä"
library_name: transformers
license: apache-2.0
---
|
|
|
## DEPRECATED! |
|
This model is old and no longer relevant now that all-around better Finnish models have been released, such as the GPT-3 models from [TurkuNLP](https://huggingface.co/TurkuNLP).
|
|
|
You may of course still use this for experiments and benchmarking, but I doubt it will perform any better than those newer models.
|
|
|
## Background and model name |
|
|
|
This model was trained for my master's thesis, "A generative pre-trained transformer model for Finnish" (2022).
|
|
|
In my thesis the model was called FinnGPT, but I chose not to pollute the namespace and to leave that kind of name for a more serious attempt at Finnish GPT models. You may call this model whatever you want; example names are Väinö's GPT-FI or simply hatanp/gpt-fi. If you really want, you can also refer to it as FinnGPT, as I did in my thesis.
|
|
|
## Versions |
|
|
|
- 300M parameter distilled model, [gpt-fi-distill](https://huggingface.co/hatanp/gpt-fi-distill) |
|
- 125M parameter small model, [gpt-fi-small](https://huggingface.co/hatanp/gpt-fi-small) |
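All variants share the same architecture and tokenizer, so they load interchangeably; only the repository id changes. A minimal sketch (the `load_generator` helper is illustrative, not part of the release):

```python
from transformers import pipeline

# Repository ids for the released checkpoints
VARIANTS = {
    "full": "hatanp/gpt-fi",
    "distill": "hatanp/gpt-fi-distill",
    "small": "hatanp/gpt-fi-small",
}

def load_generator(variant: str = "full"):
    """Build a text-generation pipeline for one of the checkpoints.

    Downloads the model weights on first use.
    """
    return pipeline("text-generation", model=VARIANTS[variant])

# Example (downloads the 125M-parameter checkpoint):
# generator = load_generator("small")
# generator("Jotta voidaan luoda tekstiä", max_length=30)
```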
|
|
|
### How to use |
|
|
|
Example with text generation pipeline: |
|
|
|
|
|
```python |
|
>>> from transformers import pipeline |
|
>>> generator = pipeline('text-generation', model='hatanp/gpt-fi') |
|
>>> generator("Testilauseella voidaan testata tokenisointia. Tämän jatkaminen on luultavasti vaikeaa, mutta", max_length=50, do_sample=True, top_p=0.9, top_k=12, temperature=0.9, num_return_sequences=3)
|
|
|
[{'generated_text': 'Testilauseella voidaan testata tokenisointia. Tämän jatkaminen on luultavasti vaikeaa, mutta ei mahdotonta. \n Jos et ole kiinnostunut tokenis'}, |
|
{'generated_text': 'Testilauseella voidaan testata tokenisointia. Tämän jatkaminen on luultavasti vaikeaa, mutta sen toteuttaminen onnistuu, jos testilaboratorio osaa analysoida'}, |
|
{'generated_text': 'Testilauseella voidaan testata tokenisointia. Tämän jatkaminen on luultavasti vaikeaa, mutta sen testaaminen on silti hyödyllistä. Jos testisuorit'}] |
|
``` |
|
|
|
Example to generate text manually: |
|
|
|
```python |
|
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
|
>>> model = AutoModelForCausalLM.from_pretrained("hatanp/gpt-fi") |
|
>>> tokenizer = AutoTokenizer.from_pretrained("hatanp/gpt-fi") |
|
>>> prompt = "Testilauseella voidaan testata tokenisointia. Tämän jatkaminen on luultavasti vaikeaa, mutta" |
|
>>> inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") |
|
>>> prompt_len = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
|
>>> outputs = model.generate(inputs, max_length=len(inputs[0])+20, do_sample=True, top_p=0.9, top_k=12, temperature=0.9) |
|
>>> text_out = tokenizer.decode(outputs[0])[prompt_len:] |
|
>>> print(text_out) |
|
|
|
" on olemassa joitain keinoja, joilla voit testata tokenisointia. Tässä artikkelissa käydään läpi testilauseiden" |
|
``` |
|
|