froggeric shafire committed on
Commit
9f01a42
1 Parent(s): 0fc7320

Update README.md (#1)


- Update README.md (2f1c1499d4c7e3624c6f2c723a78d6a790de7d54)


Co-authored-by: Shafaet Brady Hussain <shafire@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -13,6 +13,8 @@ Native chain of thought approach means that Cerebrum is trained to devise a tact
 
 Zero-shot prompted Cerebrum significantly outperforms few-shot prompted Mistral 7b as well as much larger models (such as Llama 2 70b) on a range of tasks that require reasoning, including ARC Challenge, GSM8k, and Math.
 
+This model works notably better than other Mistral and Mixtral models on agent data (tested 14 March 2024).
+
 ## Benchmarking
 An overview of Cerebrum 7b performance compared to reported performance Mistral 7b and LLama 2 70b on selected benchmarks that require reasoning:
 
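For context on the zero-shot claim in the diff above, here is a minimal sketch of how a zero-shot prompt differs from a few-shot one. It illustrates prompt construction only; the task text and helper names are illustrative and not taken from the Cerebrum repository or its evaluation harness.

```python
# Illustrative sketch: zero-shot vs. few-shot prompt construction.
# The question and demonstration pair below are hypothetical examples,
# not actual prompts used to benchmark Cerebrum.

def zero_shot_prompt(question: str) -> str:
    # Zero-shot: the model sees only the task, with no worked examples.
    return f"Question: {question}\nAnswer:"

def few_shot_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: worked question/answer pairs are prepended as demonstrations.
    demos = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in examples)
    return f"{demos}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    q = "If a train travels 60 km in 1.5 hours, what is its average speed?"
    print(zero_shot_prompt(q))
    print("---")
    print(few_shot_prompt(q, [("What is 2 + 2?", "4")]))
```

The benchmark claim in the README is that Cerebrum with the first (bare) prompt style beats Mistral 7b given the second (demonstration-laden) style.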