diff --git "a/index.html" "b/index.html" --- "a/index.html" +++ "b/index.html" @@ -1,3701 +1,19 @@ - - - - - - - - -Transparent LLMs Evaluation Metrics - - - - - - - - - - - - - - - - - - - - - - - - - - -
- -
- -
-
-

Transparent LLMs Evaluation Metrics


Introducing TLEM: The Future of Language Model Evaluation 🌐✨


In an era where the globe is racing to train and launch increasingly sophisticated language models, there’s a pressing need for a unified standard to gauge their effectiveness. That’s where TLEM, or Transparent LLMs Evaluation Metrics – a nod to the French phrase tout le monde (everyone) – steps in. TLEM is not just another framework; it’s a revolution in the way we assess language models. Its name embodies our commitment to transparency and decentralization in the evaluation of large language models.


🌟 Why TLEM? Here’s Why!

  • Universal Standardization: With the international community eager to develop and unveil large language models, TLEM offers a much-needed standardized criterion to differentiate the good from the great.

  • Developer & User-Friendly: Existing open-source implementations often suffer from deep encapsulation, posing challenges for both developers and users. TLEM changes the game by being incredibly user-friendly and accessible.

  • Addressing the Self-Evaluation Bias: A common hurdle in the current landscape is that model builders evaluate their own models with in-house harnesses, relying on their own assessments and merely citing open-source evaluations. This has resulted in redundant effort and reduced reproducibility within the open-source community. TLEM tackles this issue head-on.

  • Designed for Ease and Decentralization: TLEM stands out with its extreme ease of use. Forget the hassle of manually pulling repositories and installing – TLEM simplifies it all. Moreover, its metrics are designed to be decentralized, empowering users to extend and contribute new evaluation metrics, fostering a community-driven approach.


🚀 Join the TLEM Movement!


TLEM is more than a framework; it’s a movement towards a more transparent, decentralized, and community-driven future in language model evaluation. Be a part of this exciting journey. Dive into the world of TLEM, where every contribution counts, and every evaluation brings us closer to excellence in language model development.


Let’s shape the future together with TLEM! 🌟💻🔍


Usage


Start evaluating your model in 3 lines


You can start evaluating your model with TLEM in 3 lines; TLEM is designed to work without any installation step.

import evaluate

suite = evaluate.EvaluationSuite.load("SUSTech/tlem", download_mode="force_redownload")
suite.load("gsm8k")  # You can check the available datasets via suite.supported_datasets
suite.run(pipe := lambda x: x)
<class 'evaluate_modules.metrics.sustech--tlem.a09e0e4b7368f89944eb7781a52f3519caa4ffb8677312fbb90e48a613c8efdc.tlem.ReasoningMetric'>
{'gsm8k': 0.022744503411675512}
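The run returns its scores as a plain dict, as shown above. You can also list every benchmark the suite supports before picking one; a one-line sketch, assuming the same suite object as above:

# List the benchmarks the suite can load; "gsm8k" above is one of them.
print(suite.supported_datasets)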

The identity lambda above stands in for a model pipeline, which takes a list of strings as input and returns a list of strings as output. You can use any model you want, as long as it can be wrapped this way. We use the popular vLLM and OpenAI APIs as examples:

import aiohttp
from openai import AsyncOpenAI

# A week-long timeout so long evaluation runs are not cut off mid-flight.
session = aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=60 * 60 * 24 * 7))
url = "xxx"  # address of your inference server
client = AsyncOpenAI(base_url=f"http://{url}/v1/", api_key="EMPTY")


@suite.utils.async_pipe
async def chatgpt(msg):
    prompt = f"### Human: {msg}\n\n### Assistant: "
    try:
        resp = await client.completions.create(
            model="gpt-3.5-turbo",
            max_tokens=None,
            prompt=prompt,
            temperature=0,
        )
        return resp.choices[0].text
    except Exception:
        return "OpenAI Error"


@suite.utils.async_pipe
async def vllm(msg):
    prompt = f"### Human: {msg}\n\n### Assistant: "
    data = {
        "prompt": prompt,
        "max_tokens": 4096,
        "n": 1,
        "temperature": 0,
    }

    try:
        # Query the vLLM server's /generate endpoint and strip the echoed prompt.
        async with session.post(f"http://{url}/generate", json=data) as response:
            response_json = await response.json()
            return response_json["text"][0][len(prompt):]
    except Exception:
        return "Vllm Error"
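Either wrapper can then be passed to the suite exactly like the identity lambda; a minimal sketch, assuming the suite from above is still loaded:

# Evaluate the whole suite through the vLLM-backed pipeline;
# returns a dict of scores such as {'gsm8k': ...}.
results = suite.run(vllm)  # or suite.run(chatgpt)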

Hackable


TLEM is designed to be hackable. Every benchmark in TLEM is a task in the suite, and suite.run simply runs all the tasks in the suite. For each task, you can inspect its inputs, labels, and outputs with:

import pandas as pd

task = suite[0]
# task.outputs is available after suite.run or task.run
pd.DataFrame({"input": task.samples, "label": task.labels, "output": task.outputs})
|      | input | label | output |
|------|-------|-------|--------|
| 0    | Janet’s ducks lay 16 eggs per day. She eats th... | Janet sells 16 - 3 - 4 = <<16-3-4=9>>9 duck eg... | Janet’s ducks lay 16 eggs per day. She eats th... |
| 1    | A robe takes 2 bolts of blue fiber and half th... | It takes 2/2=<<2/2=1>>1 bolt of white fiber\nS... | A robe takes 2 bolts of blue fiber and half th... |
| 2    | Josh decides to try flipping a house. He buys... | The cost of the house and repairs came out to ... | Josh decides to try flipping a house. He buys... |
| 3    | James decides to run 3 sprints 3 times a week.... | He sprints 3*3=<<3*3=9>>9 times\nSo he runs 9*... | James decides to run 3 sprints 3 times a week.... |
| 4    | Every day, Wendi feeds each of her chickens th... | If each chicken eats 3 cups of feed per day, t... | Every day, Wendi feeds each of her chickens th... |
| ...  | ... | ... | ... |
| 1314 | John had a son James when he was 19. James is... | Dora is 12-3=<<12-3=9>>9\nSo James is 9*2=<<9*... | John had a son James when he was 19. James is... |
| 1315 | There are some oranges in a basket. Ana spends... | There are 60 minutes in an hour. Ana peels an ... | There are some oranges in a basket. Ana spends... |
| 1316 | Mark's car breaks down and he needs to get a n... | The discount on the radiator was 400*.8=$<<400... | Mark's car breaks down and he needs to get a n... |
| 1317 | Farmer Brown has 20 animals on his farm, all e... | Let C be the number of chickens.\nThere are 20... | Farmer Brown has 20 animals on his farm, all e... |
| 1318 | Henry and 3 of his friends order 7 pizzas for ... | There are 7*8=<<7*8=56>>56 slices in total.\nT... | Henry and 3 of his friends order 7 pizzas for ... |

1319 rows × 3 columns


and you can verify our metric by

task.metric(task.labels, task.labels)
{'gsm8k': 1.0}
task.metric(task.outputs, task.labels)
{'gsm8k': 0.022744503411675512}
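Because task.samples, task.labels, and task.outputs are plain lists, it is also easy to dump them for manual error analysis; a small sketch reusing the DataFrame from above (the file name is illustrative):

# Persist the per-sample triples so failing generations can be inspected by hand.
df = pd.DataFrame({"input": task.samples, "label": task.labels, "output": task.outputs})
df.to_csv("gsm8k_outputs.csv", index=False)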

Contribution


You can easily add your own task with the Task class, exposed as suite.task_class. For example, if you want to add a task that evaluates the model’s ability to generate a specific type of text, you can do it this way:

task = suite.task_class(
    dataset_name=("gsm8k", "main"),
    input_column="question",
    label_column="answer",
    metric_name="evaluate-metric/competition_math",
)
task.run(pipe)
<class 'evaluate_modules.metrics.evaluate-metric--competition_math.b85814e0172dae97fa4bd6eff6f33caba2ff9547860acabd50222c6dee474a24.competition_math.CompetitionMathMetric'>
{'accuracy': 0.0}

The metric can live in any Hugging Face Space; TLEM is designed to be decentralized, allowing you to run evaluations on private datasets without needing to contribute your code back to TLEM. You can also define the metric locally:

import random

import numpy as np


def my_metric(responses, references):
    # Toy metric: score each sample with a coin flip, then average.
    scores = [random.choice([0, 1]) for resp, ans in zip(responses, references)]
    return np.mean(scores)


task.metric = my_metric
task.run(pipe)
0.5140257771038665
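If you want a deterministic local metric instead of the coin-flip sketch above, exact match is a natural replacement (a sketch; it assumes string responses and references):

def exact_match(responses, references):
    # Fraction of responses that equal their reference after trimming whitespace.
    scores = [resp.strip() == ref.strip() for resp, ref in zip(responses, references)]
    return np.mean(scores)


task.metric = exact_match
task.run(pipe)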

TLEM Leaderboard


If you wish to add your model results to the TLEM leaderboard, you are required to provide the code used for running TLEM and its outcomes in your model card. We do not actively replicate your code; you are responsible for the accuracy of your results.
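There is no fixed template for this, but a model card entry can simply quote the exact invocation alongside its printed scores; an illustrative sketch (my_model_pipe is a placeholder for your own pipeline):

# Reproduction snippet for a model card (illustrative).
suite = evaluate.EvaluationSuite.load("SUSTech/tlem")
suite.load("gsm8k")
print(suite.run(my_model_pipe))  # paste the printed scores into your model card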

| model | mmlu-chat | cmmlu-chat | ceval-chat | gsm8k | BBH | MATH | average |
|-------|-----------|------------|------------|-------|-----|------|---------|
| SUS-Chat-34B | 77.35 | 78.68 | 82.42 | 80.06 | 67.62 | 28.80 | 69.155000 |
| Qwen-72B-Chat | 74.52 | 77.02 | 77.22 | 76.57 | 72.63 | 35.90 | 68.976667 |
| DeepSeek-67B-Chat | 69.43 | 48.51 | 59.70 | 74.45 | 69.73 | 29.56 | 58.563333 |
| Yi-34B-Chat | 66.96 | 55.16 | 77.16 | 63.76 | 61.54 | 10.02 | 55.766667 |
| OrionStar-34B | 68.51 | 66.88 | 65.13 | 54.36 | 62.88 | 12.80 | 55.093333 |
Figure 1: TLEM leaderboard



Embrace the change. Embrace TLEM.
