HyperionHF committed on
Commit
516af98
1 Parent(s): 14ce658

Update README with links

Files changed (1)
README.md +6 -6
README.md CHANGED
@@ -15,7 +15,7 @@ tags:
 
 **Model Description**
 
-Diff-Codegen-350M is the first in a series of diff models released by CarperAI. A diff model is an autoregressive language model trained on edits to a piece of text, formatted in Unified Diff Format. These diff models can suggest, given a section of text and a description of the desired change, an intelligent change to the text that fits the description, marking the lines added, changed, and deleted in diff format. The primary use case for these models is for suggesting changes to code—as such, most models we release will be fine-tuned versions of models trained on code datasets.
+Diff-Codegen-350M-v1 is the first in a series of diff models released by CarperAI. A diff model is an autoregressive language model trained on edits to a piece of text, formatted in [Unified Diff Format](https://en.wikipedia.org/wiki/Diff#Unified_format). These diff models can suggest, given a section of text and a description of the desired change, an intelligent change to the text that fits the description, marking the lines added, changed, and deleted in diff format. The primary use case for these models is suggesting changes to code—as such, most models we release will be fine-tuned versions of models trained on code datasets.
 
 Diff-Codegen-350M-v1 is an initial preliminary release of an experimental artifact and should be treated as such. We are releasing these results and this model in the hopes that they may be useful to the greater research community, especially those interested in LMs for code.
 
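For readers unfamiliar with the format, here is a minimal illustrative unified diff (our own example, not taken from the model card), showing one line of a Python file being replaced:

```diff
--- a/add.py
+++ b/add.py
@@ -1,2 +1,2 @@
 def add(a, b):
-    return a - b
+    return a + b
```
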
@@ -23,9 +23,9 @@ CarperAI will be releasing larger diff LMs trained on larger code datasets in th
 
 **Training Data**
 
-This model is a fine-tune of Codegen-350m-mono by Salesforce. This language model was first pre-trained on The PIle, an 800Gb dataset composed of varied web corpora. The datasheet and paper for the Pile can be found here and here respectively. The model was then fine-tuned on a large corpus of code data in multiple languages, before finally being fine-tuned on a Python code dataset. The Codegen paper with full details of these datasets can be found here.
+This model is a fine-tune of [Codegen-350m-mono by Salesforce](https://huggingface.co/Salesforce/codegen-350M-mono). This language model was first pre-trained on the Pile, an 800GB dataset composed of varied web corpora. The datasheet and paper for the Pile can be found [here](https://arxiv.org/abs/2201.07311) and [here](https://arxiv.org/abs/2101.00027) respectively. The model was then fine-tuned on a large corpus of code data in multiple languages, before finally being fine-tuned on a Python code dataset. The Codegen paper, with full details of these datasets, can be found [here](https://arxiv.org/abs/2203.13474).
 
-Our diff model was trained on a dataset of commits from BigQuery, a large-scale dataset of many programming languages from GitHub repositories. We filtered the dataset by the number of stars in the repository (>100 stars), license (only open-source non-copyleft licensed code included), and length of file (files greater than 2048 tokens in length were excluded).
+Our diff model was trained on a dataset of commits from [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of many programming languages from GitHub repositories. We filtered the dataset by the number of stars in the repository (>100 stars), license (only open-source, non-copyleft licensed code included), and file length (files longer than 2048 tokens were excluded).
 
 The model was trained using the GPT-2 tokenizer.
 
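The length filter above is straightforward to reproduce. A minimal sketch, assuming the 2048-token cutoff is measured with the GPT-2 tokenizer via Hugging Face `transformers` (the helper `keep_file` is our name for illustration, not CarperAI's pipeline code):

```python
# Illustrative sketch of the file-length filter described in the model card;
# assumes the cutoff is counted in GPT-2 tokens.
from transformers import GPT2TokenizerFast

MAX_TOKENS = 2048  # cutoff from the model card

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def keep_file(source: str) -> bool:
    """Return True if the file fits within the token cutoff."""
    return len(tokenizer(source).input_ids) <= MAX_TOKENS

print(keep_file("def add(a, b):\n    return a + b\n"))  # True
```
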
@@ -44,13 +44,13 @@ Each file was formatted as follows for input to the language model:
 
 **Intended Uses and Limitations**
 
-Due to the models small size and restriction to code, one should not expect the model to generalize to domains beyond code and perform (successful) reasoning over large chunks of code. This model is intended to be used in prototyping ELM-like systems, and for solely experimental purposes. This model is provided without warranty and should not be used in commercial settings -- even though the license permits.
+Due to the model's small size and restriction to code, one should not expect the model to generalize to domains beyond code or to perform (successful) reasoning over large chunks of code. This model is intended for prototyping ELM-like systems and solely for experimental purposes. It is provided without warranty and should not be used in commercial settings, even though the license permits it.
 
 **Limitations and Biases**
 
 Due to the short context length and the exclusion of all repositories with fewer than 100 stars, we expect our diff model to underperform on underrepresented languages, for instance Lean or Coq.
 
-The output of this model should not be trusted as correct and secure code. This model should not be used in any mission critical setting where security is of importance. Similarly, when running the output of this model, it should be done in a sandbox like gVisor.
+The output of this model should not be trusted as correct and secure code. The model should not be used in any mission-critical setting where security is important, and its output should only be run in a sandbox such as [gVisor](https://gvisor.dev).
 
 **Evaluation Results**
 
@@ -63,4 +63,4 @@ This model is licensed as MIT. While it can be used in commercial settings, we d
 
 **Acknowledgements**
 
-Wed like to thank Honglu Fan, Harry Saini, Herbie Bradley, and Joel Lehman
+We'd like to thank Honglu Fan, Harry Saini, Herbie Bradley, and Joel Lehman.
 
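For reference, a minimal generation sketch with Hugging Face `transformers`, assuming the model is published under the repo id `CarperAI/diff-codegen-350m-v1` (an assumption on our part) and leaving the exact prompt format to the README section not shown in this diff:

```python
# Minimal sketch, not an official usage example. The real input format
# (file contents, commit message, etc.) is documented in the README section
# elided from this diff; the prompt below is only a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CarperAI/diff-codegen-350m-v1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "<file contents + commit message, formatted per the README>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```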