
# deepseek-coder-7b-instruct-v1.5-RK3588-1.1.1

This version of deepseek-coder-7b-instruct-v1.5 has been converted to run on the RK3588 NPU using w8a8_g256 and w8a8_g128 quantization.

This model has been optimized with the following LoRA:

Compatible with RKLLM version: 1.1.1
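
For context, RKLLM conversions of this kind are produced with the rkllm-toolkit Python API. The sketch below is a minimal, hypothetical outline of that flow, not the exact script used for this model; the argument names and the supported group-quantized dtype strings (such as "w8a8_g256") vary between toolkit releases, so treat them as assumptions.

```python
# Minimal sketch of an RKLLM conversion (rkllm-toolkit ~1.1.x assumed).
# Argument names and supported dtypes differ between toolkit releases.
from rkllm.api import RKLLM

llm = RKLLM()

# Load the original Hugging Face model.
ret = llm.load_huggingface(model="deepseek-ai/deepseek-coder-7b-instruct-v1.5")
assert ret == 0, "failed to load model"

# Quantize for the RK3588 NPU; "w8a8_g256" is one of the group-quantized
# dtypes this card mentions (assumption: the exact string matches your toolkit).
ret = llm.build(
    do_quantization=True,
    optimization_level=1,
    quantized_dtype="w8a8_g256",
    target_platform="rk3588",
)
assert ret == 0, "failed to build/quantize model"

# Export the .rkllm artifact that the RKLLM 1.1.1 runtime loads on-device.
ret = llm.export_rkllm("./deepseek-coder-7b-instruct-v1.5-rk3588-w8a8_g256.rkllm")
assert ret == 0, "failed to export model"
```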

### Useful links

- [Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)
- [RockchipNPU Reddit](https://www.reddit.com/r/RockchipNPU/)
- [EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm)
- Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and happyme531

Original Model Card for base model, deepseek-coder-7b-instruct-v1.5, below:

DeepSeek Coder

[🏠Homepage] | [🤖 Chat with DeepSeek Coder] | [Discord] | [Wechat(微信)]


### 1. Introduction of Deepseek-Coder-7B-Instruct v1.5

Deepseek-Coder-7B-Instruct-v1.5 is further pre-trained from Deepseek-LLM 7B on 2T tokens, using a 4K window size and a next-token-prediction objective, and then fine-tuned on 2B tokens of instruction data.

### 2. Evaluation Results

(Evaluation results chart for DeepSeek Coder; see the original model card for the figure.)

### 3. How to Use

Here are some examples of how to use our model.

#### Chat Model Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True).cuda()
messages = [
    {'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
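
Optionally, for interactive use the same pipeline can print tokens as they are generated using transformers' `TextStreamer`; a minimal sketch building on the example above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated,
# skipping the prompt and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    inputs,
    streamer=streamer,
    max_new_tokens=512,
    do_sample=False,
    eos_token_id=tokenizer.eos_token_id,
)
```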

### 4. License

This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.

See the LICENSE-MODEL for more details.

### 5. Contact

If you have any questions, please raise an issue or contact us at service@deepseek.com.
