Endre Stølsvik (stolsvik)
AI & ML interests

LLMs, generative

stolsvik's activity

reacted to santiviquez's post with ❤️ about 1 year ago
Confidence *may be* all you need.

A simple average of the log probabilities of the output tokens from an LLM might be all it takes to tell if the model is hallucinating.🫨

The idea is that if a model is not confident (low output token probabilities), the model may be inventing random stuff.

In these two papers:
1. https://aclanthology.org/2023.eacl-main.75/
2. https://arxiv.org/abs/2303.08896

The authors claim that this simple method is the best heuristic for detecting hallucinations. The beauty is that it only uses the generated token probabilities, so it can be implemented at inference time ⚡
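The heuristic described above can be sketched in a few lines. This is a minimal illustration, not the papers' exact implementation: the token log probabilities would come from the model at inference time (most LLM APIs can return per-token logprobs), and the threshold value here is purely hypothetical, something you would tune on a validation set.

```python
import math

def avg_logprob(token_logprobs):
    """Average of the per-token log probabilities of a generated sequence."""
    return sum(token_logprobs) / len(token_logprobs)

def likely_hallucination(token_logprobs, threshold=-2.0):
    """Flag a generation as a likely hallucination when the model's
    average confidence is low.

    The threshold of -2.0 is an illustrative placeholder; in practice
    it would be calibrated against labeled data.
    """
    return avg_logprob(token_logprobs) < threshold

# Confident generation: probabilities near 1.0 give logprobs near 0.
confident = [math.log(p) for p in (0.9, 0.8, 0.95)]

# Unconfident generation: low probabilities give very negative logprobs.
shaky = [math.log(p) for p in (0.1, 0.05, 0.2)]
```

With these illustrative numbers, the confident sequence averages about -0.13 and passes, while the shaky one averages about -2.3 and is flagged.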
New activity in Qwen/Qwen-72B-Chat about 1 year ago

TheBloke's quants?

#5 opened about 1 year ago by stolsvik