---
license: cc-by-2.0
tags:
- logprobs
- logits
- CausalLM
---

The *OpenAI* API allows retrieving log-probabilities per token (for both prompt and completion tokens) through the ``logprobs`` return field. Currently, the ``CausalLM`` classes only provide ``logits`` return values, which are the prediction scores of the language modeling head (scores for each vocabulary token before the softmax).
The following code provides an example of how to retrieve the log-probabilities per token of ``CausalLMs`` with the Hugging Face API:
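A minimal sketch of the idea: run a forward pass, apply a log-softmax over the vocabulary dimension of the ``logits``, and gather the log-probability of each token that was actually observed (the checkpoint name ``gpt2`` and the example prompt are illustrative choices; any ``AutoModelForCausalLM`` checkpoint works the same way):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM checkpoint works; gpt2 is used here for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Hello, my name is"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# logits: (batch, seq_len, vocab_size) -- scores before the softmax.
# Positions are shifted: the logits at position i predict token i + 1,
# so drop the last position and align with the input ids from position 1 on.
logits = outputs.logits[:, :-1, :]
target_ids = inputs["input_ids"][:, 1:]

# Log-softmax over the vocabulary, then gather the log-probability
# of each actually observed token.
log_probs = F.log_softmax(logits, dim=-1)
token_log_probs = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)

for token_id, lp in zip(target_ids[0], token_log_probs[0]):
    print(f"{tokenizer.decode(token_id)!r}: {lp.item():.4f}")
```

Because the model scores every position of the prompt in a single forward pass, this yields log-probabilities for the prompt tokens as well, mirroring what the OpenAI ``logprobs`` field returns for prompt and completion tokens.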