CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization
CodeV is a series of open-source, instruction-tuned large language models (LLMs) designed to generate high-quality Verilog code, addressing the challenges that existing models face in this domain. (This repo is under development.)
Models and Datasets
Test
To evaluate the Verilog generation capability of existing models, you need to set up the VerilogEval and RTLLM benchmark environments.
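A minimal setup sketch, assuming the public NVlabs/verilog-eval and hkust-zhiyao/RTLLM GitHub repositories; consult each repository's README for the authoritative installation steps:

```shell
# Clone the two benchmark repositories (locations assumed from their public GitHub homes).
git clone https://github.com/NVlabs/verilog-eval.git
git clone https://github.com/hkust-zhiyao/RTLLM.git

# VerilogEval ships as an installable Python package.
pip install -e verilog-eval

# Both benchmarks simulate generated designs with Icarus Verilog (iverilog);
# install it via your package manager, e.g. on Debian/Ubuntu:
# sudo apt-get install iverilog
```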
Quick Start
from transformers import pipeline
import torch

prompt = "FILL IN THE QUESTION"

generator = pipeline(
    model="CODEV",  # replace with the path or hub ID of a CodeV checkpoint
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Greedy decoding: with do_sample=False no temperature is needed
# (passing temperature=0.0 raises an error in recent transformers versions).
result = generator(prompt, max_length=2048, num_return_sequences=1, do_sample=False)
response = result[0]["generated_text"]
print("Response:", response)
Acknowledgements
- Magicoder: Training code, original datasets and data decontamination
- DeepSeek-Coder: Base model for CodeV-DeepSeek
- CodeLlama: Base model for CodeV-CodeLlama
- CodeQwen: Base model for CodeV-CodeQwen