# Ling-Coder-lite-base


## Introduction

Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI. It has 16.8 billion total parameters, of which 2.75 billion are activated per token. Ling-Coder-Lite performs impressively on coding tasks compared to existing models in the industry. Specifically, Ling-Coder-Lite was further pre-trained from an intermediate checkpoint of Ling-Lite on an additional 3 trillion tokens. This extended pre-training significantly boosts its coding abilities while preserving Ling-Lite's strong performance on general language tasks.

## Model Downloads

See the following table for the available model variants and their parameters, and choose the one that fits your use case. If you are located in mainland China, we also provide the model on modelscope.cn to speed up the download process.

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ling-Coder-lite-base | 16.8B | 2.75B | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite-base) |
| Ling-Coder-lite | 16.8B | 2.75B | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite) |
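
If you prefer downloading programmatically rather than through the browser, here is a minimal sketch using the `huggingface_hub` library; the local directory path is an illustrative assumption, and the `modelscope` package offers an analogous `snapshot_download` for users in mainland China:

```python
# Minimal download sketch (assumes `pip install huggingface_hub`).
from huggingface_hub import snapshot_download

# Download the base model repository to a local directory.
# "./Ling-Coder-lite-base" is an illustrative path, not a required location.
local_dir = snapshot_download(
    repo_id="inclusionAI/Ling-Coder-lite-base",
    local_dir="./Ling-Coder-lite-base",
)
print(local_dir)
```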
## Evaluation

Detailed evaluation results are reported in our technical report [TBD].

## Quickstart

### 🤗 Hugging Face Transformers

Here is a code snippet to show you how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-Coder-lite"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True
)

prompt = "Write a quick sort algorithm in python."
messages = [
    {"role": "user", "content": prompt}
]
# Render the chat messages into the model's prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## Deployment

Please refer to [Github](https://github.com/inclusionAI/Ling/blob/master/README.md).

## License

This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ling-Coder-lite/blob/main/LICENCE).

## Citation

[TBD]