dkleczek committed
Commit 96c0d42
1 Parent(s): a2dc7c2

corrections

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -86,7 +86,7 @@ generator('Największym polskim poetą był')
  >>> [{'generated_text': 'Największym polskim poetą był Adam Mickiewicz - uważany za jednego z dwóch geniuszów języka polskiego. "Pan Tadeusz" był jednym z najpopularniejszych dzieł w historii Polski. W 1801 został wystawiony publicznie w Teatrze Wilama Horzycy. Pod jego'}]
  ```
 
- The pipeline uses `model.generate()` method in the background. In [our notebook](https://huggingface.co/flax-community/papuGaPT2/blob/main/papuGaPT2_text_generation.ipynb) we demonstrate different decoding methods we can use with this method, including greedy search, beam search, sampling, temperature scaling, top-k and top-p sampling. As an example, the below snippet
+ The pipeline uses `model.generate()` method in the background. In [our notebook](https://huggingface.co/flax-community/papuGaPT2/blob/main/papuGaPT2_text_generation.ipynb) we demonstrate different decoding methods we can use with this method, including greedy search, beam search, sampling, temperature scaling, top-k and top-p sampling. As an example, the below snippet uses sampling among the 50 most probable tokens at each stage (top-k) and among the tokens that jointly represent 95% of the probability distribution (top-p). It also returns 3 output sequences.
 
  ```python
  from transformers import AutoTokenizer, AutoModelWithLMHead
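
The top-k and top-p (nucleus) filtering named in the corrected line can be sketched in a few lines of pure Python over a toy distribution. This is an illustrative sketch only: the helper functions below are hypothetical, not the transformers API — inside `model.generate()` equivalent filters are applied to the next-token distribution before sampling.

```python
# Toy sketch of the two filtering steps mentioned above (top-k and
# top-p / nucleus sampling). The function names are illustrative,
# not part of the transformers library.

def top_k_filter(probs, k):
    """Zero out everything except the k highest-probability tokens."""
    keep = set(sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k])
    return [p if i in keep else 0.0 for i, p in enumerate(probs)]

def top_p_filter(probs, p):
    """Keep the smallest set of most-probable tokens whose cumulative
    probability reaches p; zero out the rest."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in order:
        keep.add(i)
        cum += probs[i]
        if cum >= p:
            break
    return [q if i in keep else 0.0 for i, q in enumerate(probs)]

# A toy next-token distribution over a 5-token vocabulary.
probs = [0.5, 0.25, 0.15, 0.07, 0.03]
print(top_k_filter(probs, k=2))     # only the two most probable tokens survive
print(top_p_filter(probs, p=0.95))  # tokens covering 95% of the mass survive
```

In the snippet that follows in the README, these correspond to the `top_k=50`, `top_p=0.95`, and `num_return_sequences=3` arguments of `model.generate()` (together with `do_sample=True`), which are real parameters of the transformers generation API.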