mmukh committed on
Commit
14df95c
1 Parent(s): d9bcfc4

Upload 6 files

README.md ADDED
@@ -0,0 +1,31 @@
+ # SOBertLarge
+ 
+ ## Model Description
+ 
+ SOBertLarge is a 762M parameter BERT model trained on 27 billion tokens of StackOverflow answer and comment text using the Megatron Toolkit.
+ 
+ SOBert is pre-trained on 19 GB of data presented as 15 million samples, where each sample contains an entire post and all of its corresponding comments. We also include
+ all code in each answer, so our model is bimodal in nature. We use a SentencePiece tokenizer trained with Byte-Pair Encoding, which has the benefit over WordPiece of never labeling tokens as "unknown".
+ Additionally, SOBert is trained with a maximum sequence length of 2048, based on the empirical length distribution of StackOverflow posts, and a relatively
+ large batch size of 0.5M tokens. A smaller 109 million parameter model can also be found [here](https://huggingface.co/mmukh/SOBertBase). More details can be found in the paper
+ [Stack Over-Flowing with Results: The Case for Domain-Specific Pre-Training Over One-Size-Fits-All Models](https://arxiv.org/pdf/2306.03268).
+ 
+ #### How to use
+ 
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ 
+ model = AutoModel.from_pretrained("mmukh/SOBertLarge")
+ tokenizer = AutoTokenizer.from_pretrained("mmukh/SOBertLarge")
+ ```
+ 
+ ### BibTeX entry and citation info
+ 
+ ```bibtex
+ @article{mukherjee2023stack,
+   title={Stack Over-Flowing with Results: The Case for Domain-Specific Pre-Training Over One-Size-Fits-All Models},
+   author={Mukherjee, Manisha and Hellendoorn, Vincent J},
+   journal={arXiv preprint arXiv:2306.03268},
+   year={2023}
+ }
+ ```
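
A usage sketch extending the snippet in the README above (the example post text and the mean-pooling step are illustrative assumptions, not part of the model card): one way to embed a StackOverflow answer together with its code within the 2048-token context window.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mmukh/SOBertLarge")
model = AutoModel.from_pretrained("mmukh/SOBertLarge")
model.eval()

# Illustrative input: an answer body together with its code, mirroring the
# post-plus-code samples SOBert was pre-trained on (hypothetical example text).
post = (
    "You can reverse a list with slicing:\n"
    "items = [1, 2, 3]\n"
    "reversed_items = items[::-1]"
)

# Truncate to the model's 2048-token maximum sequence length.
inputs = tokenizer(post, truncation=True, max_length=2048, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the final hidden states over non-padding tokens to get one vector
# per input (an assumed pooling choice; the paper may use a different head).
mask = inputs["attention_mask"].unsqueeze(-1).float()
embedding = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # torch.Size([1, 1536]) for SOBertLarge
```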
config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "attention_probs_dropout_prob": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1536,
+   "initializer_range": 0.02,
+   "intermediate_size": 6144,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 2048,
+   "model_type": "megatron-bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "tokenizer_type": "SentencePieceTokenizer",
+   "transformers_version": "4.31.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 50048
+ }
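
A minimal sketch (assuming only the `transformers` AutoConfig API) of how the fields in this config map onto the loaded architecture:

```python
from transformers import AutoConfig

# model_type "megatron-bert" resolves to transformers' MegatronBertConfig.
config = AutoConfig.from_pretrained("mmukh/SOBertLarge")

print(config.num_hidden_layers)        # 24 transformer layers
print(config.hidden_size)              # 1536-dimensional hidden states
print(config.intermediate_size)        # 6144-dimensional feed-forward layers
print(config.max_position_embeddings)  # 2048-token context window
print(config.vocab_size)               # 50048 SentencePiece/BPE tokens
```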
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f9478715f1eb7f048c6d9f29a04eb4dafd543553b1102a3be1155aab4513cb52
+ size 1524894129
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "model_max_length": 2048,
+   "tokenizer_class": "PreTrainedTokenizerFast"
+ }