Datasets:

ArXiv:
DOI:
License:
Files changed (1)
  1. README.md +5 -27
README.md CHANGED
@@ -4,7 +4,7 @@ license: cc-by-4.0
 
 This document serves as an overview of the different mechanisms and areas of governance in the BigCode project.
 It aims to support transparency by providing relevant information about choices that were made during the project to the broader public,
- and to serve as an example of intentional governance of an open research project that future endeavors can leverage to shape their own approach.
+ and to serve as an example of intentional governance of an open research project that future endeavors can leverage to shape their own approach.
 The first section, **[Project Structure](https://huggingface.co/datasets/bigcode/governance-card#1-project-structure)**, covers the project organization, its stated goals and values, its internal decision processes, and its funding and resources.
 The second section, **[Data and Model Governance](https://huggingface.co/datasets/bigcode/governance-card#2-data-and-model-governance)**, covers decisions relating to the questions of data subject consent, privacy, and model release.
 
@@ -81,7 +81,7 @@ In general, we expect applicants to be affiliated with a research organization (
 
 BigCode has 675 participants, with 629 members across the research community (including from Hugging Face and ServiceNow), from 62 countries. The top 5 countries are the USA (222), India (60), the UK (36), Canada (35), and Germany (30). The community communicates across a total of 48 Slack channels, including Steering Committee (3 channels), Working Groups (7 channels), Task Forces (25 channels), and General Community (13 channels).
 
- Everyone who joins the project is required to follow the [BigCode Code of Conduct](https://www.bigcode-project.org/docs/about/code_of_conduct/) and to understand [how we manage intellectual property](https://www.bigcode-project.org/docs/about/ip/), and is encouraged to introduce themselves and to join any working group or task force that aligns with their interests. If a group does not cover their interests, they are encouraged to pitch their ideas and to take a leadership role for a new working group or task force with the approval of the Steering Committee. Researchers who wish to cite StarCoder are asked to use the following: TBD
+ Everyone who joins the project is required to follow the [BigCode Code of Conduct](https://www.bigcode-project.org/docs/about/code_of_conduct/) and to understand [how we manage intellectual property](https://www.bigcode-project.org/docs/about/ip/), and is encouraged to introduce themselves and to join any working group or task force that aligns with their interests. If a group does not cover their interests, they are encouraged to pitch their ideas and to take a leadership role for a new working group or task force with the approval of the Steering Committee. Researchers who wish to cite StarCoder are asked to use the DOI link from the top of this page.
 
 
 ### Project Governance
@@ -145,7 +145,7 @@ The time commitment from volunteers is harder to estimate given the large number
 
 **Community events and appreciation** ServiceNow and Hugging Face organized a community meetup that coincided with NeurIPS 2022 in New Orleans, USA. The budget for the event was approximately \$6,000 from ServiceNow Research for the venue and hospitality. Hugging Face also provided promotional items, including stickers and t-shirts, at the event, and sent named contributors to the research paper complimentary BigCode-branded t-shirts.
 
- **Crowdsourcing costs** Hugging Face funded the data annotation services from Toloka, with a total outlay of \$xxx. Since this was a research project, Toloka agreed to waive the fees for running the annotation tasks on their platform.
+ **Crowdsourcing costs** Hugging Face funded the data annotation services from Toloka.
 
 
 # 2. Data and Model Governance
@@ -176,33 +176,11 @@ The legal basis for data collection under fair use and with regards to GDPR and
 * **How can a data subject request that their data be removed:** as a derived dataset of The Stack, the PII dataset will be updated to reflect data that has been opted out of the source dataset.
 * **How often is the data updated:** similarly, following The Stack terms of use, the PII Dataset will be updated as often as The Stack if some of the files it contains have been opted out.
 
- 
- ### Technical Desiderata for the Training Dataset
- 
- Curation choices for the training datasets were shaped by both the social impact and performance goals of the project. A critical success factor for BigCode is the volume, variety, and validity of training data to support fundamental and applied scientific research within the AI community. The dataset curation working group invested considerable time in cleaning up the data by combining heuristic filtering and manual inspection.
- 
- **Volume** Generally speaking, more training data leads to better performance, as the LLM has more examples to learn from; however, the larger the dataset, the more computational power, time, and cost are needed. There were several discussions within the BigCode community about whether to up-sample or down-sample certain programming languages, as the amount of compute budget allocated to a data source in a given language can significantly affect the model's performance in that language. However, we realized that the largest amount of available data comes from the popular programming languages and would, therefore, benefit a larger group of end-users. After deduplication, we found that several high-resource programming languages, such as C, C++, C\#, Java, JavaScript, Python, and PHP, had a similar amount of data, ranging from 44–87 GB. This further reinforced our belief that we did not need to drastically re-weight the existing data distribution. Thus, in this work, we followed the natural distribution of data during training and sampled data sources proportionally to their volume.
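Sampling data sources proportionally to their volume, as described above, can be sketched as follows. This is an illustrative sketch only: the language names and gigabyte figures are hypothetical placeholders, not the project's actual post-deduplication sizes or training pipeline.

```python
import random

# Hypothetical per-language data volumes in GB (illustrative numbers only,
# not the actual sizes measured in The Stack).
volumes_gb = {"python": 60, "java": 87, "c": 44, "php": 55}

def sample_language(volumes, rng):
    """Pick a data source with probability proportional to its volume,
    i.e. follow the natural data distribution instead of re-weighting."""
    languages = list(volumes)
    weights = [volumes[lang] for lang in languages]
    return rng.choices(languages, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
counts = {lang: 0 for lang in volumes_gb}
for _ in range(10_000):
    counts[sample_language(volumes_gb, rng)] += 1
# Each language's share of draws approaches its share of the total volume,
# so the highest-volume language in this toy setup is drawn most often.
```

With enough draws, the empirical sampling frequencies converge to the volume fractions, which is exactly what "following the natural distribution" means here.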
- 
- **Variety** The more diverse the training data, the better the LLM can understand and generate code in different contexts. We ultimately selected 86 of the 358 programming languages in The Stack. Selection criteria targeted all programming languages with more than 500 MB of data, as well as languages ranked in the top 50 in programming language popularity. In addition, we included dialects of already selected programming languages (e.g., Racket and Scheme for Lisp), but excluded configuration languages (Nix, Puppet, etc.) and languages that are no longer actively supported (ActionScript). We also included data formats like JSON and YAML but limited their data volume. The full list of selected programming languages can be found in the paper. Of the languages present in MultiPL-E, only D and Swift were not included in the training set. For D, language misclassification of the files led to a very small number of files in The Stack, and Swift was omitted due to a human error. We implemented various filters to help with data cleanup, including an XML filter, an alpha filter to remove files with less than 25% alphabetic characters, a custom HTML filter to target excessive HTML boilerplate and links, as well as JSON and YAML filters to remove most of the data-heavy files.
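The alpha filter mentioned above (keep a file only if at least 25% of its characters are alphabetic) can be sketched as below. This is a minimal illustration under stated assumptions; the actual filter's character handling and threshold plumbing may differ.

```python
def passes_alpha_filter(text: str, min_alpha_fraction: float = 0.25) -> bool:
    """Keep a file only if at least `min_alpha_fraction` of its characters
    are alphabetic. Data-heavy or auto-generated files (numeric dumps,
    encoded blobs) tend to fall below this threshold."""
    if not text:
        return False
    alpha = sum(ch.isalpha() for ch in text)
    return alpha / len(text) >= min_alpha_fraction

# A short code snippet easily clears the 25% threshold...
assert passes_alpha_filter("def add(a, b):\n    return a + b\n")
# ...while a long numeric dump has no alphabetic characters and is dropped.
assert not passes_alpha_filter("0.1231, 0.4441, 0.9812, 0.3331\n" * 100)
```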
- 
- **Validity** Invalid or incorrect training data can lead to biases and errors in the model's predictions, and it is important to ensure the model is trained on data that is representative of the real-world applications and scenarios it is intended to support.
- The Stack is a highly curated and highly valuable open access contribution to the community.
- We performed a visual inspection to ensure that we only retain data of high quality.
- To achieve this, we randomly selected 30,000 files from The Stack for each programming language, categorized them by extension, and kept a maximum of 1,000 files for each extension.
- We then reached out to the BigCode community for assistance with data inspection.
- Eighteen community annotators evaluated 300 programming language extensions.
- After inspection, we excluded 36 extensions and eliminated the long-line filter for 27 extensions.
- The complete outcomes of the data inspection, including annotator remarks, can be found in [this Google sheet](https://docs.google.com/spreadsheets/d/1Lk-pTk_rXI__fCgixr7ZWSi8wR09Zzd2j_G90J80r00/edit?usp=sharing).
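The inspection sampling described above — draw a random per-language sample, bucket by extension, cap each bucket — can be sketched roughly as follows. The function name, corpus, and sizes here are hypothetical illustrations, not the project's actual tooling.

```python
import random
from collections import defaultdict

def select_for_inspection(files, per_language=30_000, per_extension=1_000, seed=0):
    """Sketch of the inspection sampling: randomly draw up to `per_language`
    files for one language, bucket them by extension, and keep at most
    `per_extension` files per bucket. `files` is a list of (path, ext) pairs."""
    rng = random.Random(seed)
    sample = rng.sample(files, min(per_language, len(files)))
    buckets = defaultdict(list)
    for path, ext in sample:
        if len(buckets[ext]) < per_extension:
            buckets[ext].append(path)
    return buckets

# Hypothetical tiny corpus: 50 files across two Lisp-dialect extensions,
# with deliberately small caps so the capping behavior is visible.
corpus = [(f"file{i}.rkt", ".rkt") for i in range(30)] + \
         [(f"file{i}.scm", ".scm") for i in range(20)]
buckets = select_for_inspection(corpus, per_language=40, per_extension=10)
# No extension bucket exceeds the cap of 10 files.
assert all(len(paths) <= 10 for paths in buckets.values())
```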
- 
- **Malicious code** An additional factor in selecting files from the source data was their impact on the dual-use potential of the trained model, and in particular its ability to help users generate malicious code. On the Hugging Face platform, where The Stack is hosted, a malicious code detection tool identified 654 files as unsafe. After an initial conversation between participants about the pros and cons of leaving these files in the training dataset, a consensus was reached that the possible improvements in LLM applications to software security did not warrant increasing the likelihood that the model may generate malicious code by including known examples. With help from the BigCode community, we removed these files ahead of the release of The Stack v1.2.
- 
- 
 ### Consent of Data Subjects
 
 **Between implicit and explicit consent** One of the goals of BigCode is to give developers agency over their source code and let them decide whether or not it can be used to develop and evaluate LLMs. Software developers typically rely on licenses to express how they want their work to be re-used; in particular, developers who choose Open Source licenses often do so because they want their code to be broadly re-used. This motivated us to start by selecting data from repositories that met the following criteria:
 
- * The repository has a license attached - most repositories on GitHub actually do not have a license, which means that the developer retains all rights to its content. As regards consent, this also means that we have no positive signal that the developer is OK with their code being broadly re-used
- * The license is an open source license - as mentioned above, open source, while chosen for very different reasons by different people, typically indicates a willingness to have one's work reused or adapted
+ * The repository has an open source license attached - open source, while chosen for very different reasons by different people, typically indicates a willingness to have one's work reused or adapted
 * The license does not have an attribution clause - attribution is a difficult technical problem for code LLMs. Since we cannot guarantee that the model will be used in a way that attributes its generations to specific training data in a way that satisfies the intent of the licensor, we chose to only keep licenses without an attribution clause
 
 Selecting repositories based on licenses is only the first step, however, as many of these licenses were chosen before the recent developments in code LLMs. Thus, we complement this initial approach by also giving repository owners the ability to **opt out** of having their repositories included in The Stack. We see this approach as a meaningful step forward in improving the agency of data subjects in the development of code LLMs, and we present both the tools we developed to support it and its known limitations in the rest of this section.
@@ -241,7 +219,7 @@ Finally, we are also releasing **StarCoderData**, the pre-processed version of T
 
 ### Model Licensing
 
- The model is released under an open and responsible AI model license agreement ([BigCode OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)), which enables royalty-free access and flexible use and sharing of it, while setting specific use restrictions for identified critical scenarios. Most importantly, the license agreement requires stakeholders wishing to share the model or a modified version of it: (i) to place the same set of use restrictions or a similar one in their legal forms; (ii) to keep the model card and provide a similar one or one of better quality when sharing a modified version of the model (FAQ for the model license agreement available here).
+ The model is released under an open and responsible AI model license agreement ([BigCode OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)), which enables royalty-free access and flexible use and sharing of it, while setting specific use restrictions for identified critical scenarios. Most importantly, the license agreement requires stakeholders wishing to share the model or a modified version of it: (i) to include the same set of use restrictions or a similar one in their legal agreements; (ii) to keep the model card and provide a similar one or one of better quality when sharing a modified version of the model (FAQ for the model license agreement [available here](https://www.bigcode-project.org/docs/pages/bigcode-openrail/)).
 
 The BigCode OpenRAIL-M license agreement (i.e., the legal document itself) is available under a CC-BY-4.0 license. Therefore, any stakeholder can freely adopt the same license agreement for their models, or modify it for their specific AI artifacts. For more information about responsible AI licensing, please visit the RAIL Initiative webpage, [The Turing Way Handbook for ML researchers](https://the-turing-way.netlify.app/reproducible-research/licensing/licensing-ml.html) (Alan Turing Institute), or OECD AI [content](https://oecd.ai/en/wonk/rails-licenses-trustworthy-ai) on RAILs and trustworthy AI principles.
 