DavidGF committed
Commit b30359f
1 Parent(s): 62079b2

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -45,7 +45,7 @@ Without their independent research collaboration this model release would not ha
 
 ## All FC SauerkrautLM-7b-beta Models
 
-**For function calling, we provide several branches with different versions of the model.Since Function Calling is currently still in beta status, we depend on your feedback. Please test each model extensively and let us know which model you achieved the best results with.**
+**For function calling, we provide several branches with different versions of the model. Since Function Calling is currently still in beta status, we depend on your feedback. Please test each model extensively and let us know which model you achieved the best results with.**
 | Model | HF | GPTQ | GGUF | AWQ |
 |-------|-------|-------|-------|-------|
 | FC SauerkrautLM-7b-beta - Laser 16 layers | [Link](https://huggingface.co/VAGOsolutions/FC-SauerkrautLM-7b-beta) | coming soon | coming soon | coming soon |
@@ -76,7 +76,7 @@ This process not only helps in understanding the effectiveness of Spherical Line
 Additionally, we integrated a novel training strategy on the SFT and DPO training process, where we partially freeze the model according to a laser-like analysis aiming to navigate and optimize the trade-offs highlighted by the no free lunch theorem. This innovative training method effectively prevents the significant problem of language models forgetting previously acquired knowledge.
 This aspect is particularly crucial when attempting to teach the model specific skills, such as a new language, where in general, the model might lose a considerable amount of its prior knowledge and exhibit a decline in overall intelligence.
 
-**For function calling, we provide several branches with different versions of the model.Since Function Calling is currently still in beta status, we depend on your feedback. Please test each model extensively and let us know which model you achieved the best results with.**
+**For function calling, we provide several branches with different versions of the model. Since Function Calling is currently still in beta status, we depend on your feedback. Please test each model extensively and let us know which model you achieved the best results with.**
 
 Detailed information on how the new training strategy works and the advantages it offers over conventional training methods will soon be published in a detailed paper by the LaserRMT research group.
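The README text above points testers at different branches of the model repo. A minimal sketch of how a specific branch could be selected when loading with the `transformers` library, whose `from_pretrained` methods accept a `revision` argument naming a git branch, tag, or commit of the repo. Only the default branch is visible in this diff; any other branch name you pass is a placeholder you would substitute with one of the actual FC variant branches.

```python
# Sketch: building the from_pretrained arguments for one branch of the
# FC-SauerkrautLM-7b-beta repo. The `revision` key selects the git branch
# (or tag/commit) of the model repository on the Hugging Face Hub.

def from_pretrained_kwargs(repo_id: str, branch: str) -> dict:
    """Keyword arguments for transformers' from_pretrained calls."""
    return {
        "pretrained_model_name_or_path": repo_id,
        "revision": branch,
    }

# "main" is the only branch visible in this diff; swap in the branch name
# of the laser variant you want to test.
kwargs = from_pretrained_kwargs("VAGOsolutions/FC-SauerkrautLM-7b-beta", "main")

# Usage (requires the transformers library, network access, and disk space):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(**kwargs)
# model = AutoModelForCausalLM.from_pretrained(**kwargs)
```

Keeping the repo id and branch in one place makes it easy to loop over several branches when comparing the variants, as the README asks testers to do.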