Update README.md
README.md
CHANGED
@@ -12,7 +12,7 @@ language:
## CodeSage-Small-v2

### [Blogpost]
-Please
+Please check out our [blogpost](https://code-representation-learning.github.io/codesage-v2.html) for more details.

### Model description
CodeSage is a family of open code embedding models with an encoder architecture that supports a wide range of source code understanding tasks. It was initially introduced in the paper:

@@ -61,9 +61,11 @@ For this V2 model, we enhanced semantic search performance by improving the qual
### Training Data
This pretrained checkpoint is the same as the one used by our V1 model ([codesage/codesage-small](https://huggingface.co/codesage/codesage-small)), which is trained on [The Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup) data. The contrastive learning data are extracted from [The Stack V2](https://huggingface.co/datasets/bigcode/the-stack-v2). As with our V1 model, we support the following nine languages: c, c-sharp, go, java, javascript, typescript, php, python, ruby.

-### How to
+### How to Use
This checkpoint consists of an encoder (130M model), which can be used to extract 1024-dimensional code embeddings.

+1. Accessing CodeSage via HuggingFace: it can be easily loaded using the AutoModel functionality and employs the [StarCoder tokenizer](https://arxiv.org/pdf/2305.06161.pdf).
+
```
from transformers import AutoModel, AutoTokenizer

@@ -80,6 +82,12 @@ inputs = tokenizer.encode("def print_hello_world():\tprint('Hello World!')", ret
embedding = model(inputs)[0]
```

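The diff only surfaces the first and last lines of the Transformers snippet above. For reference, here is a minimal, self-contained sketch of the same flow; the checkpoint assignment, the `trust_remote_code` flag, and the final `print` are illustrative assumptions, not the README's exact code:

```
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint name; trust_remote_code is an assumption for loading the custom encoder
checkpoint = "codesage/codesage-small-v2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True)

# Same example input as in the README; the encoder yields 1024-dimensional embeddings
inputs = tokenizer.encode("def print_hello_world():\tprint('Hello World!')", return_tensors="pt")
embedding = model(inputs)[0]
print(embedding.shape)
```
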
+2. Accessing CodeSage via SentenceTransformer
+```
+from sentence_transformers import SentenceTransformer
+model = SentenceTransformer("codesage/codesage-small-v2", trust_remote_code=True)
+```
+
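As a follow-up to item 2, a hedged usage sketch: the `encode` call below is the standard SentenceTransformer API rather than part of the README snippet, and simply shows how the loaded model can embed code strings.

```
from sentence_transformers import SentenceTransformer

# Loading line taken from the README; the encode usage below is an illustrative assumption
model = SentenceTransformer("codesage/codesage-small-v2", trust_remote_code=True)
embeddings = model.encode(["def print_hello_world():\tprint('Hello World!')"])
print(embeddings.shape)  # one 1024-dimensional vector per input snippet
```
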
### BibTeX entry and citation info
```
@inproceedings{