zhaoyang committed b1aceaf ("Upload 3 files"), 1 parent: 8090282

Files changed (3):
1. README.md +62 -3
2. assets/logo.png +0 -0
3. assets/overview.png +0 -0
README.md CHANGED
@@ -1,3 +1,62 @@
- ---
- license: apache-2.0
- ---
<div align="center">
<img src="./assets/logo.png" style="zoom:25%;" />
</div>

# CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization

<img src="assets/overview.png" style="zoom:50%;" />

CodeV is a series of open-source, instruction-tuned Large Language Models (LLMs) designed to generate high-quality Verilog code, addressing the challenges that existing models face in this domain. **(This repo is under development.)**

## Models and Datasets

|      | Base Model                                                                                          | CodeV                                                                           |
| ---- | --------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------- |
| 6.7B | [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | [zyyy1023399127/CodeV-DS-6.7B](https://huggingface.co/zyyy1023399127/CodeV-DS-6.7B) |
| 7B   | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)         | [zyyy1023399127/CodeV-CL-7B](https://huggingface.co/zyyy1023399127/CodeV-CL-7B) |
| 7B   | [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)                         | [zyyy1023399127/CodeV-QW-7B](https://huggingface.co/zyyy1023399127/CodeV-QW-7B) |

## Test

If you want to test the generation capability of existing models on Verilog, you need to install the [VerilogEval](https://github.com/NVlabs/verilog-eval) and [RTLLM](https://github.com/hkust-zhiyao/rtllm) environments.

## Quick Start

```python
import torch
from transformers import pipeline

prompt = "FILL IN THE QUESTION"

# Replace "CODEV" with a concrete checkpoint,
# e.g. "zyyy1023399127/CodeV-DS-6.7B" from the table above.
generator = pipeline(
    model="CODEV",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Sampling is off by default, so this performs greedy decoding.
result = generator(prompt, max_length=2048, num_return_sequences=1, temperature=0.0)
response = result[0]["generated_text"]
print("Response:", response)
```
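For a concrete picture of what `prompt` and the post-processing could look like, here is a small sketch. The instruction template, the `build_prompt`/`extract_module` helpers, and the multiplexer example are all illustrative assumptions, not the format used to train CodeV:

```python
import re
from typing import Optional


def build_prompt(description: str) -> str:
    """Wrap a natural-language spec in a simple (hypothetical) instruction template."""
    return "Please implement the following design in Verilog:\n" + description + "\n"


def extract_module(response: str) -> Optional[str]:
    """Return the first `module ... endmodule` block in a response, or None."""
    match = re.search(r"\bmodule\b.*?\bendmodule\b", response, re.DOTALL)
    return match.group(0) if match else None


prompt = build_prompt("A 2-to-1 multiplexer with inputs a, b, sel and output y.")

# Stand-in for `generator(...)` output: models often echo the prompt,
# so we extract just the Verilog module from the full text.
fake_response = prompt + (
    "module mux2(input a, input b, input sel, output y);\n"
    "  assign y = sel ? b : a;\n"
    "endmodule\n"
)
print(extract_module(fake_response))
```

Pulling out the `module ... endmodule` span is also how you would hand a completion to a simulator or to the benchmark harnesses, which score compilable Verilog rather than free-form text.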

## Acknowledgements

* [Magicoder](https://github.com/ise-uiuc/magicoder): Training code, original datasets, and data decontamination
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for CodeV-DeepSeek
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for CodeV-CodeLlama
* [CodeQwen](https://github.com/QwenLM/CodeQwen1.5): Base model for CodeV-CodeQwen
assets/logo.png ADDED
assets/overview.png ADDED