---
license: other
language:
  - en
tags:
  - causal-lm
  - code
metrics:
  - code_eval
library_name: transformers
model-index:
  - name: stabilityai/stable-code-instruct-3b
    results:
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (Python)
        metrics:
          - name: pass@1
            type: pass@1
            value: 32.4
            verified: false
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (C++)
        metrics:
          - name: pass@1
            type: pass@1
            value: 30.9
            verified: false
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (Java)
        metrics:
          - name: pass@1
            type: pass@1
            value: 32.1
            verified: false
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (JavaScript)
        metrics:
          - name: pass@1
            type: pass@1
            value: 32.1
            verified: false
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (PHP)
        metrics:
          - name: pass@1
            type: pass@1
            value: 24.2
            verified: false
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (Rust)
        metrics:
          - name: pass@1
            type: pass@1
            value: 23
            verified: false
---

# stable-code-instruct-3b

## Model Description

`stable-code-instruct-3b` is a 2.7 billion parameter decoder-only language model tuned from `stable-code-3b`. It was trained on a mix of publicly available and synthetic datasets using Direct Preference Optimization (DPO).
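Because the model is instruction-tuned, prompts should follow its chat format. The sketch below illustrates ChatML-style message formatting as plain string building; this is an assumption for illustration only, and the tokenizer's `apply_chat_template` method in `transformers` is the authoritative source of the real template.

```python
def build_chatml_prompt(messages):
    """Format a list of {"role", "content"} messages in a ChatML-style
    layout (hypothetical sketch; prefer tokenizer.apply_chat_template)."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt(
    [{"role": "user", "content": "Write a Python function that reverses a string."}]
)
```

The resulting `prompt` string can then be tokenized and passed to the model for generation.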

This instruct tune demonstrates state-of-the-art performance relative to models of similar size on the MultiPL-E metrics across multiple programming languages, evaluated with BigCode's Evaluation Harness, and on the code portions of MT-Bench.
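The pass@1 numbers above measure functional correctness: samples are generated per problem, executed against unit tests, and the pass rate estimated. A minimal sketch of the standard unbiased pass@k estimator (Chen et al., 2021), as commonly used by evaluation harnesses such as BigCode's:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c pass the
    unit tests, is correct."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so every size-k draw
        # must contain at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1, pass@1 reduces to the fraction of generations that pass,
# e.g. 32 passing out of 100 generations gives roughly 0.32.
score = pass_at_k(100, 32, 1)
```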

## How to Cite

@misc{stable-code-instruct-3b,
      url={https://huggingface.co/stabilityai/stable-code-instruct-3b},
      title={Stable Code Instruct 3B},
      author={Phung, Duy and Pinnaparaju, Nikhil and Adithyan, Reshinth and Tow, Jonathan and Cooper, Nathan}
}