migtissera committed on
Commit c87658a
1 Parent(s): 14b10dd

Update README.md

Files changed (1)
  1. README.md +10 -5
README.md CHANGED
@@ -9,6 +9,11 @@ library_name: transformers
 # Synthia-70B-v1.1
 SynthIA (Synthetic Intelligent Agent) is a LLama-2-70B model trained on Orca style datasets. It has been fine-tuned for instruction following as well as having long-form conversations.
 
+This model has generalized "Tree of Thought" reasoning capabilities. Evoke it with the following system message:
+```
+Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning
+```
+
 <br>
 
 ![Synthia](https://huggingface.co/migtissera/Synthia-70B-v1.1/resolve/main/Synthia.jpeg)
@@ -32,11 +37,11 @@ Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](htt
 ||||
 |:------:|:--------:|:-------:|
 |**Task**|**Metric**|**Value**|
-|*arc_challenge*|acc_norm||
-|*hellaswag*|acc_norm||
-|*mmlu*|acc_norm||
-|*truthfulqa_mc*|mc2||
-|**Total Average**|-|**<NUM>**||
+|*arc_challenge*|acc_norm|70.05|
+|*hellaswag*|acc_norm|87.12|
+|*mmlu*|acc_norm|70.34|
+|*truthfulqa_mc*|mc2|57.84|
+|**Total Average**|-|**71.34**||
 
 <br>
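
For context, below is a minimal sketch of how the system message added in this commit could be used when prompting the model with the Hugging Face transformers library. The SYSTEM/USER/ASSISTANT prompt layout, the example question, and the generation settings are assumptions for illustration only and are not specified by this commit; check the model card for the exact prompt format.

```
# Minimal sketch: prompting Synthia-70B-v1.1 with the Tree of Thoughts system message.
# NOTE: the SYSTEM/USER/ASSISTANT layout and the generation settings below are
# assumptions for illustration, not taken from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "migtissera/Synthia-70B-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",  # a 70B model needs substantial GPU memory
)

system_message = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when necessary "
    "to construct a clear, cohesive Chain of Thought reasoning"
)
user_question = "What is the relationship between entropy and information?"

# Assumed prompt layout; verify against the model card before use.
prompt = f"SYSTEM: {system_message}\nUSER: {user_question}\nASSISTANT: "

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```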