ofermend committed
Commit e2aca33
1 Parent(s): 3c0cb66

Update src/display/about.py

Files changed (1):
  1. src/display/about.py +2 -2
src/display/about.py CHANGED
@@ -23,7 +23,7 @@ TITLE = """<h1 align="center" id="space-title">Hughes Hallucination Evaluation M
 
 # What does your leaderboard evaluate?
 INTRODUCTION_TEXT = """
-This leaderboard by [Vectara](https://vectara.com) evaluates how often an LLM introduces hallucinations when summarizing a document.
+This leaderboard evaluates how often an LLM introduces hallucinations when summarizing a document.
 
 
 """
@@ -38,7 +38,7 @@ Hallucinations refer to instances where a model introduces factually incorrect o
 
 ## How it works
 
-Using Vectara's HHEM, we measure the occurrence of hallucinations in generated summaries.
+Using [Vectara](https://vectara.com)'s HHEM, we measure the occurrence of hallucinations in generated summaries.
 Given a source document and a summary generated by an LLM, HHEM outputs a hallucination score between 0 and 1, with 0 indicating complete hallucination and 1 representing perfect factual consistency.
 The model card for HHEM can be found [here](https://huggingface.co/vectara/hallucination_evaluation_model).
 
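For context, the second hunk describes HHEM's scoring contract: given a (source document, summary) pair, it returns a factual-consistency score in [0, 1]. A minimal sketch of scoring one pair against the checkpoint linked in the diff, assuming the sentence-transformers CrossEncoder interface (check the linked model card for the current loading API; the example strings are made up):

```python
# Minimal sketch of HHEM scoring; assumes the sentence-transformers
# CrossEncoder interface rather than any code from this commit.
from sentence_transformers import CrossEncoder

model = CrossEncoder("vectara/hallucination_evaluation_model")

# Hypothetical (source document, generated summary) pair.
source = "The plane landed in Denver two hours late because of a storm."
summary = "The flight arrived in Denver after a weather delay."

# predict() takes a list of [premise, hypothesis] pairs and returns one
# score per pair: 0 = complete hallucination, 1 = perfect factual consistency.
score = model.predict([[source, summary]])[0]
print(f"HHEM factual consistency score: {score:.3f}")
```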