---
language:
- en
license:
- apache-2.0
- cc-by-nc-4.0
tags:
- generated_from_trainer
- instruct
- instructions
- code
- instructiongen
datasets: pszemraj/fleece2instructions-codealpaca
metrics:
- rouge
widget:
- text: 'git lfs install

    huggingface-cli lfs-enable-largefiles .

    git lfs track "*.bin"

    git add .

    git commit -a -m "add fp32 chkpt"

    git push

    '
  example_title: bash
- text: "export interface DocumentParams {\n  pageContent: string;\n\n  // eslint-disable-next-line\
    \ @typescript-eslint/no-explicit-any\n  metadata: Record<string, any>;\n}\n\n\
    /**\n * Interface for interacting with a document.\n */\nexport class Document\
    \ implements DocumentParams {\n  pageContent: string;\n\n  // eslint-disable-next-line\
    \ @typescript-eslint/no-explicit-any\n  metadata: Record<string, any>;\n\n  constructor(fields?:\
    \ Partial<DocumentParams>) {\n    this.pageContent = fields?.pageContent ?? this.pageContent;\n\
    \    this.metadata = fields?.metadata ?? {};\n  }\n}\n"
  example_title: js
- text: "def merge(left, right):\n    if len(left) == 0:\n        return right\n\n\
    \    if len(right) == 0:\n        return left\n\n    result = []\n    index_left\
    \ = index_right = 0\n\n    while len(result) < len(left) + len(right):\n     \
    \   if left[index_left] <= right[index_right]:\n            result.append(left[index_left])\n\
    \            index_left += 1\n        else:\n            result.append(right[index_right])\n\
    \            index_right += 1\n\n        if index_right == len(right):\n     \
    \       result += left[index_left:]\n            break\n\n        if index_left\
    \ == len(left):\n            result += right[index_right:]\n            break\n\
    \n    return result\n"
  example_title: merge
- text: "import pandas as pd\nimport plotly.graph_objects as go\n\ndf = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/2014_apple_stock.csv')\n\
    \nfig = go.Figure(go.Scatter(x = df['AAPL_x'], y = df['AAPL_y'],\n           \
    \       name='Share Prices (in USD)'))\n\nfig.update_layout(title='Apple Share\
    \ Prices over time (2014)',\n                   plot_bgcolor='rgb(230, 230,230)',\n\
    \                   showlegend=True)\n\nfig.show()\n"
  example_title: plot
- text: "from spellchecker import SpellChecker\n\nspell = SpellChecker()\n\ndef check_word_spelling(word:\
    \ str):\n    misspelled = spell.unknown([word])\n    return len(misspelled) ==\
    \ 0\n\ndef eval_and_replace(text: str, match_token: str = \"- \"):\n    if match_token\
    \ not in text:\n        return text\n    else:\n        while True:\n        \
    \    full_before_text = text.split(match_token, maxsplit=1)[0]\n            before_text\
    \ = [\n                char for char in full_before_text.split()[-1] if char.isalpha()\n\
    \            ]\n            before_text = \"\".join(before_text)\n           \
    \ full_after_text = text.split(match_token, maxsplit=1)[-1]\n            after_text\
    \ = [char for char in full_after_text.split()[0] if char.isalpha()]\n        \
    \    after_text = \"\".join(after_text)\n            full_text = before_text +\
    \ after_text\n            if check_word_spelling(full_text):\n               \
    \ text = full_before_text + full_after_text\n            else:\n             \
    \   text = full_before_text + \" \" + full_after_text\n            if match_token\
    \ not in text:\n                break\n        return text\n\ntext = \"I- am-\
    \ a go- od- boy\"\neval_and_replace(text)\n"
  example_title: spell check
- text: 'import torch

    from transformers import AutoTokenizer, AutoModelForSequenceClassification


    checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)

    model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

    sequences = ["I''ve been waiting for a HuggingFace course my whole life.", "So
    have I!"]


    tokens = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")

    output = model(**tokens)

    '
  example_title: model inference
inference:
  parameters:
    max_length: 96
    num_beams: 4
base_model: facebook/bart-base
---


# bart-base-code-instructiongen

Use this text2text model to find out what LLM instructions might be able to generate an arbitrary piece of code!
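
For example, here is a minimal usage sketch with the `transformers` pipeline API. The hub ID is assumed to match this card's heading, and the generation settings mirror the `inference.parameters` in the metadata above:

```python
from transformers import pipeline

# load the model as a text2text pipeline (hub ID assumed from the card heading)
generator = pipeline(
    "text2text-generation",
    model="pszemraj/bart-base-code-instructiongen",
)

code = """git lfs install
huggingface-cli lfs-enable-largefiles .
git lfs track "*.bin"
git add .
git commit -a -m "add fp32 chkpt"
git push
"""

# beam-search settings taken from the widget's inference parameters
result = generator(code, max_length=96, num_beams=4)
print(result[0]["generated_text"])
```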

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the `pszemraj/fleece2instructions-codealpaca` dataset.
It achieves the following results on the evaluation set (a sketch of how such ROUGE scores are computed follows the list):
- Loss: 1.0136
- Rouge1: 59.9513
- Rouge2: 33.9118
- Rougel: 55.7815
- Rougelsum: 56.9064
- Gen Len: 29.7146
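
ROUGE scores like those above can be computed with the `evaluate` library. A minimal sketch, using a toy prediction/reference pair rather than the actual evaluation data:

```python
import evaluate

rouge = evaluate.load("rouge")

# toy example; the real evaluation compares the model's generated
# instructions against the dataset's reference instructions
predictions = ["Write a series of git commands to push a large checkpoint file."]
references = ["Create git commands that track and push an fp32 checkpoint with git lfs."]

scores = rouge.compute(predictions=predictions, references=references)
# scores are fractions in [0, 1]; scale by 100 to match the numbers above
print({k: round(v * 100, 2) for k, v in scores.items()})
```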

## Intended uses & limitations

🚨 **note:** as the authors elected to release the [original dataset](https://github.com/sahil280114/codealpaca) under `cc-by-nc`, that license carries over to this model, which **cannot be used for commercial activity**.

> This is just a `base`-size model, which does a decent job for its size but is not perfect. For better-quality instructions, check out [bart-large](https://huggingface.co/pszemraj/bart-large-code-instructiongen) or fine-tune your own larger model on the dataset :)

Intended use: research on domain adaptation and/or other improvements to LLMs by extending `instruction:text` data pairs.

## Training and evaluation data

Refer to the [dataset card](https://huggingface.co/datasets/pszemraj/fleece2instructions-codealpaca) for `pszemraj/fleece2instructions-codealpaca` or the [original dataset](https://github.com/sahil280114/codealpaca) repo.
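
A minimal sketch for loading the fine-tuning data with `datasets` (split and column names follow the dataset card and may differ):

```python
from datasets import load_dataset

# load the instruction-generation dataset used for fine-tuning
ds = load_dataset("pszemraj/fleece2instructions-codealpaca")
print(ds)               # splits and sizes
print(ds["train"][0])   # inspect one code -> instruction example
```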

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 8e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 3.0
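
A hedged sketch mapping the list above onto `transformers.Seq2SeqTrainingArguments`; the output path and any settings not listed are illustrative assumptions, not taken from the original run:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-code-instructiongen",  # assumed, not from the run
    learning_rate=8e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=16,  # batch size 4 x 16 steps -> total train batch size 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=3.0,
    predict_with_generate=True,  # needed to compute ROUGE during eval (assumed)
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults
)
```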

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.1165        | 1.0   | 281  | 1.1090          | 57.9239 | 31.9259 | 53.8737 | 54.9811   | 28.2924 |
| 1.0763        | 2.0   | 563  | 1.0267          | 59.9605 | 34.0298 | 55.7523 | 56.8021   | 29.6966 |
| 0.9595        | 2.99  | 843  | 1.0136          | 59.9513 | 33.9118 | 55.7815 | 56.9064   | 29.7146 |