kaiokendev committed
Commit 59ae072
Parent(s): b0ecc1f

Update README.md

Files changed (1): README.md (+7, -12)
README.md CHANGED
```diff
@@ -7,24 +7,19 @@ SuperCOT is a LoRA I trained with the aim of making LLaMa follow prompts for Lan
 Trained against LLaMa 30B 4-bit for 3 epochs with cutoff length 1024, using a mixture of the following datasets:
 
 [https://huggingface.co/datasets/QingyiSi/Alpaca-CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
-
-Chain of thought QED
-
-Chain of thought Aqua
-
-CodeAlpaca
+- Chain of thought QED
+- Chain of thought Aqua
+- CodeAlpaca
 
 [https://huggingface.co/datasets/neulab/conala](https://huggingface.co/datasets/neulab/conala)
-
-Code snippets
+- Code snippets
 
 [https://huggingface.co/datasets/yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned)
-
-Alpaca GPT4
+- Alpaca GPT4
 
 ### Merged Models
-[https://huggingface.co/gozfarb/llama-30b-supercot-ggml](https://huggingface.co/gozfarb/llama-30b-supercot-ggml)
-[https://huggingface.co/ausboss/llama-30b-supercot](https://huggingface.co/ausboss/llama-30b-supercot)
+- [https://huggingface.co/gozfarb/llama-30b-supercot-ggml](https://huggingface.co/gozfarb/llama-30b-supercot-ggml)
+- [https://huggingface.co/ausboss/llama-30b-supercot](https://huggingface.co/ausboss/llama-30b-supercot)
 
 ### Compatibility
 This LoRA is compatible with any 13B or 30B 4-bit quantized LLaMa model, including ggml quantized converted bins
```
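For readers who want to map the training line above onto code, here is a minimal sketch of an equivalent LoRA fine-tune using the Hugging Face transformers, peft, and datasets libraries, keeping only the two hyperparameters the README states (3 epochs, cutoff length 1024). Everything else is an illustrative assumption rather than the actual SuperCOT recipe: the base checkpoint id, the LoRA rank/alpha/target modules, the learning rate and batch size, and the use of a single stand-in dataset in fp16 instead of the 4-bit training the author describes.

```python
# Minimal sketch only -- NOT the actual SuperCOT training script.
# Stated in the README: 3 epochs, cutoff length 1024. Everything else
# (checkpoint id, LoRA config, batch size, learning rate) is assumed.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "huggyllama/llama-30b"  # assumed base checkpoint id
CUTOFF_LEN = 1024              # cutoff length from the README

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)
model = get_peft_model(
    model,
    LoraConfig(  # rank/alpha/target modules are guesses, not confirmed
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    ),
)

# One of the listed datasets stands in for the full mixture here
data = load_dataset("yahma/alpaca-cleaned", split="train")

def tokenize(example):
    # Truncate every sample at the 1024-token cutoff; the "input"
    # field is ignored for brevity
    text = example["instruction"] + "\n" + example["output"]
    return tokenizer(text, truncation=True, max_length=CUTOFF_LEN)

tokenized = data.map(tokenize, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="supercot-lora",
        num_train_epochs=3,             # 3 epochs, per the README
        per_device_train_batch_size=4,  # assumption
        learning_rate=3e-4,             # common LoRA default, assumption
        fp16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```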
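And to make the compatibility note concrete, here is a minimal sketch of applying a LoRA like this one to a base LLaMa checkpoint with peft, using bitsandbytes 4-bit loading. The two repo ids are assumptions (as is the adapter files sitting at the repo root), and the Alpaca-style prompt is only illustrative; the ggml route instead means merging the LoRA into the weights and converting them for llama.cpp.

```python
# Minimal sketch of applying the LoRA with peft -- repo ids are assumed.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "huggyllama/llama-30b"         # assumed 30B base checkpoint id
ADAPTER = "kaiokendev/SuperCOT-LoRA"  # assumed id of this adapter repo

tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(
    BASE,
    load_in_4bit=True,  # bitsandbytes 4-bit, matching the 4-bit note;
    device_map="auto",  # ggml bins would go through llama.cpp instead
)
model = PeftModel.from_pretrained(base, ADAPTER)  # attach the LoRA

prompt = "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```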