# Model Card for DeepSeekCoderCodeQnA
This is a version of the DeepSeek-Coder model that was fine-tuned on grammatically corrected texts.
## Model Details

### Model Description
- Model type: LLaMa
- Number of Parameters: 6.7B
- Supported Programming Language: Python
- Finetuned from model: DeepSeek-Coder
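
If you want to double-check the parameter count locally, here is a minimal sketch (it assumes the fine-tuned checkpoint id used in the usage example further below):

```python
from transformers import AutoModelForCausalLM

# Load the fine-tuned checkpoint and count its parameters (roughly 6.7B expected)
model = AutoModelForCausalLM.from_pretrained("datapaf/DeepSeekCoderCodeQnA")
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.1f}B parameters")
```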
### Model Sources
- Repository: GitHub Repo
- Paper: "Leveraging Large Language Models in Code Question Answering: Baselines and Issues" by Georgy Andryushchenko, Vladimir V. Ivanov, Vladimir Makharev, Elizaveta Tukhtina, and Aidar Valeev
## How to Get Started with the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base DeepSeek-Coder tokenizer and the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained('deepseek-ai/deepseek-coder-6.7b-instruct')
model = AutoModelForCausalLM.from_pretrained('datapaf/DeepSeekCoderCodeQnA', device_map="cuda")

code = ...      # Your Python code snippet here
question = ...  # Your question regarding the snippet here

# Build the prompt: the question followed by the code snippet
q = f"{question}\n{code}"

# Tokenize the prompt, generate an answer, and decode it back to text
inputs = tokenizer.encode(q, return_tensors="pt").to('cuda')
outputs = model.generate(inputs, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id)
text = tokenizer.decode(outputs[0])
print(text)
```
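
For illustration, a hypothetical prompt could look like the following; the snippet and question are placeholders, not taken from the model's training data:

```python
# Hypothetical example inputs for the snippet above
code = """
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
"""
question = "What does this function return when n is 0?"
```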