athirdpath committed
Commit 6414770 · verified · 1 Parent(s): 81c339c

Update README.md

Files changed (1)
  1. README.md +7 -1
README.md CHANGED

@@ -4,4 +4,10 @@ license: llama3.1
 
 Llama 3.1 **Base**, continually pretrained with 0.5 Epochs (2100 steps @ total batch 64) of the same 1.5gb private dataset that underpins Iambe
 
-Mostly a proof of concept, but outputs are better than expected. It'd likely be quite good with some instruction tuning.
+Mostly a proof of concept, but outputs are better than expected. It'd likely be quite good with some instruction tuning.
+
+-----
+
+Why do this? I have a niche use case where I cannot increase compute over 8b, and L3/3.1 are the only models in this size category that meet my needs for logic. However, both versions of L3/3.1 have the damn repetition/token overconfidence problem, and this is meant to disrupt that certainty without disrupting the model's ability to function.
+
+By the way, I *think* it's the lm_head that is causing the looping, but it might be the embeddings being too separated. I'm not going to pay two more times to test them separately, however :p
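
The last added paragraph guesses at whether the lm_head or the input embeddings are behind the looping. As a rough, hypothetical sketch of what testing them separately could look like (the base model ID and the choice of which module to unfreeze are assumptions for illustration, not anything from this repo), one could continue pretraining with only the component under test left trainable:

```python
# Hypothetical sketch: freeze everything, then unfreeze only the module
# whose effect on repetition/looping is being tested. Model ID and module
# choice are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",      # assumed base checkpoint
    torch_dtype=torch.bfloat16,
)

# Freeze all parameters first...
for p in model.parameters():
    p.requires_grad = False

# ...then unfreeze only the component under test:
#   model.lm_head            -> tests the output-head hypothesis
#   model.model.embed_tokens -> tests the "embeddings too separated" hypothesis
for p in model.lm_head.parameters():
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} params")
```

Swapping `model.lm_head` for `model.model.embed_tokens` would give the embeddings-only run; each would still be its own training job, which is exactly the extra cost the paragraph above declines to pay.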