---
language:
- zh
- en
tags:
- codeshell
- wisdomshell
- pku-kcl
- openbankai
---
# CodeShell
CodeShell is a multilingual code LLM base model jointly developed by the Knowledge Computing Lab of Peking University and WisdomShell Technology. CodeShell has 7 billion parameters, was trained on 500 billion tokens, and has a context window length of 8192. On the authoritative code evaluation benchmarks HumanEval and MBPP, CodeShell achieves the best performance among models of the same scale. We also provide deployment solutions and IDE plugins that complement CodeShell; please refer to the CodeShell code repository for details.
## Main Characteristics of CodeShell
- **Powerful performance**: CodeShell achieves the best performance among 7B code base models on HumanEval and MBPP.
- **Complete ecosystem**: In addition to the code LLM itself, open-source IDE plugins (for VS Code and JetBrains) are available, forming a complete open-source full-stack technology system.
- **Lightweight deployment**: Supports local C++ deployment, offering a lightweight and fast local software development assistant solution.
- **Comprehensive evaluation**: Provides a multi-task evaluation system that supports full project context and covers common software development activities such as code generation, code defect detection and repair, and test case generation (to be open-sourced soon).
- **Efficient training**: Built on an efficient data governance system, CodeShell achieved outstanding performance after training on only 500 billion tokens from a complete cold start.
## Quickstart

### Code Generation
CodeShell provides a model in the Hugging Face format, which developers can load and use with the following code.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codeshell", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("codeshell", trust_remote_code=True).cuda()

# The tokenizer returns a dict-like BatchEncoding; move its tensors to the model's device.
inputs = tokenizer('def print_hello_world():', return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```
### Fill in the Middle
CodeShell supports the Fill-in-the-Middle mode, better facilitating the software development process.
```python
input_text = "<fim_prefix>def print_hello_world():\n    <fim_suffix>\n    print('Hello world!')<fim_middle>"
inputs = tokenizer(input_text, return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```
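The special tokens follow a prefix-suffix-middle layout: the model is asked to generate the code that belongs between `<fim_prefix>…<fim_suffix>…` at the `<fim_middle>` marker. A small helper can assemble such prompts (`build_fim_prompt` is a hypothetical convenience function, not part of the CodeShell API):

```python
# Assemble a Fill-in-the-Middle prompt in the prefix-suffix-middle token layout.
# build_fim_prompt is a hypothetical helper, not part of the CodeShell API.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = build_fim_prompt(
    "def print_hello_world():\n    ",
    "\n    print('Hello world!')",
)
print(prompt)
```

The returned string can be tokenized and passed to `model.generate` exactly like the literal prompt above.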
## Model Details
CodeShell uses GPT-2 as its base architecture and incorporates techniques such as Grouped-Query Attention and RoPE relative position encoding.
| Hyper-parameter | Value |
|---|---|
| n_layer | 42 |
| n_embd | 4096 |
| n_inner | 16384 |
| n_head | 32 |
| num_query_groups | 8 |
| seq-length | 8192 |
| vocab_size | 70144 |
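With `n_head = 32` query heads and `num_query_groups = 8`, every four query heads share one key/value head, which shrinks the KV cache by 4x. The following is a minimal PyTorch sketch of this grouped-query attention pattern at toy dimensions; it is illustrative only, not CodeShell's actual implementation:

```python
import torch
import torch.nn.functional as F

# Grouped-query attention at toy scale (CodeShell uses n_head=32, num_query_groups=8):
# every n_head // n_groups query heads share one key/value head.
n_head, n_groups, head_dim, seq = 8, 2, 16, 10

q = torch.randn(1, n_head, seq, head_dim)
k = torch.randn(1, n_groups, seq, head_dim)   # far fewer KV heads than query heads
v = torch.randn(1, n_groups, seq, head_dim)

# Expand the KV heads so each one is shared by n_head // n_groups query heads.
repeat = n_head // n_groups
k = k.repeat_interleave(repeat, dim=1)        # (1, n_head, seq, head_dim)
v = v.repeat_interleave(repeat, dim=1)

# Standard causal attention over the expanded heads.
scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5       # (1, n_head, seq, seq)
causal = torch.triu(torch.ones(seq, seq, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(causal, float('-inf'))
out = F.softmax(scores, dim=-1) @ v                        # (1, n_head, seq, head_dim)
print(out.shape)
```

Only the number of stored K/V heads changes relative to standard multi-head attention; the output shape is identical.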
## Evaluation
We evaluated the model on the two most popular code evaluation datasets, HumanEval and MBPP. Compared with the two most advanced 7B code LLMs, CodeLlama and StarCoder, CodeShell achieved the best results. The specific evaluation results are as follows.
### Pass@1

| Task | codeshell | codellama-7B | starcoderbase-7B |
|---|---|---|---|
| humaneval | 33.48 | 29.44 | 27.80 |
| mbpp | 39.08 | 37.60 | 34.16 |
| multiple-java | 29.56 | 29.24 | 24.30 |
| multiple-js | 33.60 | 31.30 | 27.02 |
| multiple-r | 20.99 | 18.57 | 14.29 |
| multiple-rkt | 12.48 | 12.55 | 10.43 |
| multiple-cpp | 28.20 | 27.33 | 23.04 |
| multiple-cs | 22.34 | 20.38 | 18.99 |
| multiple-d | 8.59 | 11.60 | 8.08 |
| multiple-go | 71.69 | 75.91 | 73.83 |
| multiple-jl | 20.63 | 25.28 | 22.96 |
| multiple-lua | 22.92 | 30.50 | 22.92 |
| multiple-php | 30.43 | 25.96 | 22.11 |
| multiple-pl | 15.65 | 17.45 | 16.40 |
| multiple-py | 33.54 | 29.25 | 28.82 |
| multiple-rb | 25.71 | 30.06 | 18.51 |
| multiple-rs | 26.86 | 25.90 | 22.82 |
| multiple-swift | 25.00 | 25.32 | 15.70 |
| multiple-ts | 33.90 | 32.64 | 27.48 |
| multiple-sh | 8.42 | 9.75 | 7.09 |
| multiple-scala | 22.56 | 24.50 | 19.12 |
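Pass@k is the standard metric for these benchmarks: with n generated samples per problem of which c pass the tests, the unbiased estimator is pass@k = 1 - C(n-c, k)/C(n, k). The exact sampling settings behind the table above are not stated here; the sketch below (function name `pass_at_k` is ours) just shows the estimator itself:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k samples,
    drawn without replacement from n total (c of them correct), passes."""
    if n - c < k:
        return 1.0  # fewer failures than draws: some draw must succeed
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples per task and 4 correct, pass@1 reduces to c/n = 0.4.
print(pass_at_k(10, 4, 1))
```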
## License
The model open-sourced in this repository follows the Apache 2.0 license and is fully open for academic research. For commercial use, developers may apply by email; the model can be used once written authorization is obtained. Contact email: wye@pku.edu.cn.