NeuralNovel committed
Commit fab26dd • 1 Parent(s): 8f11f70
Update README.md
Fixing typo "bellow" to below
README.md
CHANGED
@@ -161,7 +161,7 @@ Although Replete-Coder has amazing coding capabilities, its trained on vaste amo
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/-0dERC793D9XeFsJ9uHbx.png)
 
 Thank you to TensorDock for sponsoring Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b
-you can check out their website for cloud compute rental
+you can check out their website for cloud compute rental below.
 https://tensordock.com
 __________________________________________________________________________________________________
 Replete-Coder-llama3-8b is a general purpose model that is specially trained in coding in over 100 coding languages. The data used to train the model contains 25% non-code instruction data and 75% coding instruction data totaling up to 3.9 million lines, roughly 1 billion tokens, or 7.27gb of instruct data. The data used to train this model was 100% uncensored, then fully deduplicated, before training happened.