---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: source
      dtype: string
    - name: file_name
      dtype: string
    - name: cwe
      dtype: string
  splits:
    - name: train
      num_bytes: 87854
      num_examples: 76
  download_size: 53832
  dataset_size: 87854
---

# Dataset Card for "static-analysis-eval"

A dataset of 76 Python programs taken from real open-source Python projects (the top 1000 on GitHub). Each program is a single file containing exactly one vulnerability, as detected by the Semgrep static analyzer.
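To inspect the data, you can load it with the `datasets` library. A minimal sketch, assuming the dataset id on the Hub is `patched-codes/static-analysis-eval` (substitute the id shown on this page if it differs):

```python
from datasets import load_dataset

# Load the train split: 76 examples with fields source, file_name, cwe.
ds = load_dataset("patched-codes/static-analysis-eval", split="train")

example = ds[0]
print(example["file_name"])     # path of the file in its original project
print(example["cwe"])           # CWE id of the vulnerability found by Semgrep
print(example["source"][:200])  # first 200 characters of the vulnerable source
```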

You can run the `_script_for_eval.py` script to check the results:

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python _script_for_eval.py
```
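Conceptually, the evaluation boils down to running Semgrep on a model's fixed version of each file and checking that the original finding is gone. Below is a rough sketch of that check, not the actual logic in `_script_for_eval.py`; the `is_fixed` helper and the `--config auto` ruleset are assumptions for illustration:

```python
import json
import os
import subprocess
import tempfile

def is_fixed(fixed_source: str) -> bool:
    """Return True if Semgrep reports no findings on the fixed file (sketch)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(fixed_source)
        path = f.name
    try:
        # `semgrep scan --config auto --json` prints findings as JSON on stdout.
        result = subprocess.run(
            ["semgrep", "scan", "--config", "auto", "--json", path],
            capture_output=True, text=True,
        )
        findings = json.loads(result.stdout).get("results", [])
        return len(findings) == 0
    finally:
        os.unlink(path)
```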

## Leaderboard

The top models on the leaderboard were all fine-tuned using the synth vuln fixes dataset that we released. You can read about our experience with fine-tuning them on our blog, and you can explore the leaderboard with this interactive visualization.

| Model | StaticAnalysisEval (%) | Time (mins) | Price (USD) |
| --- | --- | --- | --- |
| gpt-4o-mini-fine-tuned | 77.63 | 21:0 | 0.21 |
| gemini-1.5-flash-fine-tuned | 73.68 | 18:0 | |
| Llama-3.1-8B-Instruct-fine-tuned | 69.74 | 23:0 | |
| gpt-4o | 69.74 | 24:0 | 0.12 |
| gpt-4o-mini | 68.42 | 20:0 | 0.07 |
| gemini-1.5-flash-latest | 68.42 | 18:2 | 0.07 |
| Llama-3.1-405B-Instruct | 65.78 | 40:12 | |
| Llama-3-70B-instruct | 65.78 | 35:2 | |
| Llama-3-8B-instruct | 65.78 | 31:34 | |
| gemini-1.5-pro-latest | 64.47 | 34:40 | |
| gpt-4-1106-preview | 64.47 | 27:56 | 3.04 |
| gpt-4 | 63.16 | 26:31 | 6.84 |
| claude-3-5-sonnet-20240620 | 59.21 | 23:59 | 0.70 |
| moa-gpt-3.5-turbo-0125 | 53.95 | 49:26 | |
| gpt-4-0125-preview | 53.94 | 34:40 | |
| patched-coder-7b | 51.31 | 45:20 | |
| patched-coder-34b | 46.05 | 33:58 | 0.87 |
| patched-mix-4x7b | 46.05 | 60:00+ | 0.80 |
| Mistral-Large | 40.80 | 60:00+ | |
| Gemini-pro | 39.47 | 16:09 | 0.23 |
| Mistral-Medium | 39.47 | 60:00+ | 0.80 |
| Mixtral-Small | 30.26 | 30:09 | |
| gpt-3.5-turbo-0125 | 28.95 | 21:50 | |
| claude-3-opus-20240229 | 25.00 | 60:00+ | |
| Llama-3-8B-instruct.Q4_K_M | 21.05 | 60:00+ | |
| Gemma-7b-it | 19.73 | 36:40 | |
| gpt-3.5-turbo-1106 | 17.11 | 13:00 | 0.23 |
| Codellama-70b-Instruct | 10.53 | 30:32 | |
| CodeLlama-34b-Instruct | 7.89 | 23:16 | |

The price is calculated by assuming 1000 input and output tokens per call, as all examples in the dataset are < 512 tokens (OpenAI cl100k_base tokenizer).
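You can verify the token-count claim yourself with the `tiktoken` library; a minimal sketch, reusing the assumed dataset id from the loading example above:

```python
import tiktoken
from datasets import load_dataset

enc = tiktoken.get_encoding("cl100k_base")
ds = load_dataset("patched-codes/static-analysis-eval", split="train")

# Longest example in the dataset, measured in cl100k_base tokens.
max_tokens = max(len(enc.encode(ex["source"])) for ex in ds)
print(max_tokens)  # expected to be < 512
```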

Some models timed out during the run or had intermittent API errors. In such cases we retry each example up to 3 times, which is why some runs are reported as longer than 1 hour (60:00+ mins).
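Retrying on timeouts and intermittent API errors can be done with a small wrapper like the sketch below; the `with_retries` helper is hypothetical and the real script's retry logic may differ:

```python
import time

def with_retries(fn, attempts=3, delay=5.0):
    """Call fn(), retrying on exceptions up to `attempts` times (sketch)."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:  # e.g. timeouts or intermittent API errors
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)  # simple linear backoff between tries
```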