---
license: apache-2.0
model-index:
  - name: YugoGPT
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 58.11
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gordicaleksa/YugoGPT
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 81.45
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gordicaleksa/YugoGPT
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 60.68
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gordicaleksa/YugoGPT
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 36.6
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gordicaleksa/YugoGPT
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 76.56
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gordicaleksa/YugoGPT
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 30.71
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gordicaleksa/YugoGPT
          name: Open LLM Leaderboard
---

This repo contains YugoGPT, the best open-source base 7B LLM for BCS (Bosnian, Croatian, Serbian) languages, developed by Aleksa Gordić.

More powerful iterations of YugoGPT are already available through RunaAI's recently announced API platform!

Serbian LLM eval results compared to Mistral 7B, LLaMA 2 7B, and GPT2-orao (also see this LinkedIn post): *(comparison chart image)*

The eval was computed using https://github.com/gordicaleksa/serbian-llm-eval

It was trained on tens of billions of BCS tokens and is based on Mistral 7B.

## Notes

  1. YugoGPT is a base model and therefore does not have any moderation mechanisms.

  2. Since it is a base model, it will not follow your instructions; it is essentially a powerful autocomplete engine.

  3. If you want access to much more powerful BCS LLMs (some of which power yugochat), they are available through RunaAI's API.
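Because it is a base model, you prompt it with text to be continued rather than with instructions. A minimal sketch of doing that with the Hugging Face `transformers` library (the model id comes from this repo; the sampling settings and the example prompt are illustrative assumptions, not recommendations):

```python
# Minimal sketch: YugoGPT is a base (autocomplete) model, so phrase tasks as
# text to be continued rather than as instructions.
MODEL_ID = "gordicaleksa/YugoGPT"  # this repo


def complete(prompt: str, max_new_tokens: int = 64) -> str:
    """Continue `prompt` with the base model; no chat template is applied."""
    # Imported lazily so the sketch can be read without downloading the weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,   # illustrative sampling settings, not recommendations
        temperature=0.7,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)


# Usage (downloads roughly 14 GB of weights for a 7B fp16 model):
#   complete("Glavni grad Srbije je")   # "The capital of Serbia is ..."
```

Note how the hypothetical example prompt states a sentence to finish instead of asking a question; that is the natural way to use a base model.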

## Credits

The data for the project was obtained with the help of Nikola Ljubešić, CLARIN.SI, and CLASSLA. Thank you!

## Project Sponsors

A big thank you to the project sponsors!

### Platinum sponsors 🌟

### Gold sponsors 🟡

### Silver sponsors ⚪

psk.rs, OmniStreak, Luka Važić, Miloš Durković, Marjan Radeski, Marjan Stankovic, Nikola Stojiljkovic, Mihailo Tomić, Bojan Jevtic, Jelena Jovanović, Nenad Davidović, Mika Tasich, TRENCH-NS, Nemanja Grujičić, tim011

Also a big thank you to the following individuals:

## Citation

@misc{YugoGPT,
  author       = "Gordić, Aleksa",
  title        = "YugoGPT - an open-source LLM for Serbian, Bosnian, and Croatian languages",
  year         = "2024",
  howpublished = {\url{https://huggingface.co/gordicaleksa/YugoGPT}},
}

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 57.35 |
| AI2 Reasoning Challenge (25-Shot) | 58.11 |
| HellaSwag (10-Shot)               | 81.45 |
| MMLU (5-Shot)                     | 60.68 |
| TruthfulQA (0-shot)               | 36.60 |
| Winogrande (5-shot)               | 76.56 |
| GSM8k (5-shot)                    | 30.71 |
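The Avg. row is simply the arithmetic mean of the six benchmark scores listed above, which is easy to check:

```python
# Verify that the leaderboard "Avg." is the mean of the six benchmark scores.
scores = {
    "ARC (25-shot)": 58.11,
    "HellaSwag (10-shot)": 81.45,
    "MMLU (5-shot)": 60.68,
    "TruthfulQA (0-shot)": 36.60,
    "Winogrande (5-shot)": 76.56,
    "GSM8k (5-shot)": 30.71,
}
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 57.35
```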