---
language:
  - en
library_name: transformers
pipeline_tag: text-generation
datasets:
  - jondurbin/airoboros-2.2
  - Open-Orca/OpenOrca
  - garage-bAInd/Open-Platypus
  - WizardLM/WizardLM_evol_instruct_V2_196k
  - TokenBender/python_eval_instruct_51k
  - ise-uiuc/Magicoder-OSS-Instruct-75K
  - meta-math/MetaMathQA
tags:
  - code
license: apache-2.0
model-index:
  - name: SpeechlessCoder
    results:
      - task:
          type: text-generation
        dataset:
          type: openai_humaneval
          name: HumanEval
        metrics:
          - name: pass@1
            type: pass@1
            value: null
            verified: false
---

# speechless-code-mistral-7b-v2.0

Code: https://github.com/uukuguy/speechless

The following datasets were used to fine-tune mistralai/Mistral-7B-v0.1 in order to improve the model's reasoning and planning abilities.

Total: 343,370 samples (603 MB)

  • jondurbin/airoboros-2.2: filtered to categories related to coding, reasoning, and planning. 21,923 samples.
  • Open-Orca/OpenOrca: filtered to the 'cot' category of the 1M-GPT4 dataset. 62,973 samples.
  • garage-bAInd/Open-Platypus: used in full (100%). 22,760 samples.
  • WizardLM/WizardLM_evol_instruct_V2_196k: coding-conversation part. 30,077 samples.
  • TokenBender/python_eval_instruct_51k: samples with “python” in the output. 39,596 samples.
  • OpenHermes: samples with a code block in the output. 18,969 samples.
  • CollectiveCognition-2023-09-27: 200 samples.
  • ise-uiuc/Magicoder-OSS-Instruct-75K: 75,197 samples.
  • meta-math/MetaMathQA: 20% of the 395K samples. 71,706 samples.
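The per-dataset filters above (category filters, keyword matching, code-block detection) amount to simple predicates over instruction/output records. A minimal sketch of that idea, using toy records and assumed field names (the real schemas and the actual speechless pipeline may differ):

```python
import re

# Toy records standing in for rows from the source datasets.
# Field names ("category", "output") are illustrative assumptions.
records = [
    {"category": "coding", "output": "```python\nprint(1)\n```"},
    {"category": "trivia", "output": "Paris is the capital of France."},
    {"category": "cot",    "output": "Step 1: ... Step 2: ..."},
]

# Categories to keep, mirroring the coding/reasoning/planning/cot filters.
KEEP_CATEGORIES = {"coding", "reasoning", "planning", "cot"}
CODE_BLOCK = re.compile(r"```")  # detects a fenced code block in the output

def keep(rec: dict) -> bool:
    """Keep a sample if its category is code/reasoning-related
    or its output contains a fenced code block."""
    return rec["category"] in KEEP_CATEGORIES or bool(CODE_BLOCK.search(rec["output"]))

filtered = [r for r in records if keep(r)]
print(len(filtered))  # the "trivia" record is dropped
```

In practice each source dataset would be loaded (e.g. via the `datasets` library), filtered with a predicate like this, and the surviving samples concatenated into the final fine-tuning mix.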

## HumanEval

| Metric | Value |
| ------ | ----- |
| humaneval-python | |
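The pass@1 metric used here can be computed with the standard unbiased estimator from the HumanEval paper (Chen et al., 2021): for n generated samples of which c pass the unit tests, pass@k = 1 - C(n-c, k) / C(n, k). A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k),
    where n generated samples contain c correct ones."""
    if n - c < k:
        return 1.0  # every size-k draw contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With one sample per task (n = 1), pass@1 reduces to the fraction
# of tasks whose single sample passes the tests.
print(pass_at_k(1, 1, 1))   # 1.0
print(pass_at_k(10, 3, 1))  # 1 - 7/10
```

The per-benchmark score is then the mean of this estimate over all HumanEval tasks.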

## Big Code Models Leaderboard

| Model | pass@1 |
| ----- | ------ |
| CodeLlama-34B-Python | 53.29 |
| CodeLlama-34B-Instruct | 50.79 |
| CodeLlama-13B-Instruct | 50.6 |
| CodeLlama-34B | 45.11 |
| CodeLlama-13B-Python | 42.89 |
| CodeLlama-13B | 35.07 |

## lm-evaluation-harness

Open LLM Leaderboard

| Metric | Value |
| ------ | ----- |
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |