migtissera committed on
Commit 525e873
1 Parent(s): 8aa2828

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -6,16 +6,16 @@ language:
  library_name: transformers
  ---

- Change from Synthia-7B-v1.2 -> Synthia-7B-v1.3: Base model was changed from LLaMA-2-7B to Mistral-7B-v0.1
+ SynthIA-7B-v1.3: Base model is Mistral-7B-v0.1

- All Synthia models are uncensored. Please use it with caution and with best intentions. You are responsible for how you use Synthia.
+ All SynthIA models are uncensored. Please use them with caution and with the best intentions. You are responsible for how you use SynthIA.

  To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
  ```
  Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
  ```

- # Synthia-7B-v1.3
+ # SynthIA-7B-v1.3
  SynthIA (Synthetic Intelligent Agent) 7B-v1.3 is a Mistral-7B-v0.1 model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for long-form conversations.

  <br>
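A minimal sketch (editorial, not part of the diff) of how that system message slots into the SYSTEM/USER/ASSISTANT prompt layout implied by the card's usage snippet further down; the exact template string is an assumption:

```python
# Hypothetical prompt builder: wraps the Tree of Thoughts system message in
# the SYSTEM/USER/ASSISTANT layout implied by the card's usage snippet.
SYSTEM_MESSAGE = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning. "
    "Always answer without hesitation."
)

def build_prompt(user_message: str) -> str:
    # The separators here are assumed from the "ASSISTANT:" context line
    # visible in the usage hunk below.
    return f"SYSTEM: {SYSTEM_MESSAGE}\nUSER: {user_message}\nASSISTANT: "
```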
@@ -35,7 +35,7 @@ This model is released under Apache 2.0, and comes with no warranty or guarantees

  ## Evaluation

- We evaluated Synthia-7B-v1.3 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
+ We evaluated SynthIA-7B-v1.3 on a wide range of tasks using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

  Here are the results on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
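For readers who want to reproduce this kind of run, a rough sketch of a harness invocation follows. `simple_evaluate` does exist in the harness, but model identifiers, task names, and argument names vary across harness versions, so treat every value here as an assumption rather than the card's actual command:

```python
# Illustrative sketch only: run one leaderboard-style task with EleutherAI's
# lm-evaluation-harness. Names differ between harness versions; check the
# documentation for the version you have installed.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",  # "hf" in newer harness releases
    model_args="pretrained=migtissera/SynthIA-7B-v1.3",
    tasks=["arc_challenge"],  # the leaderboard also uses HellaSwag, MMLU, TruthfulQA
    num_fewshot=25,  # the leaderboard evaluates ARC at 25-shot
    batch_size=8,
)
print(results["results"])
```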
 
@@ -66,8 +66,8 @@ ASSISTANT:
  import torch, json
  from transformers import AutoModelForCausalLM, AutoTokenizer

- model_path = "migtissera/Synthia-7B-v1.3"
- output_file_path = "./Synthia-7B-conversations.jsonl"
+ model_path = "migtissera/SynthIA-7B-v1.3"
+ output_file_path = "./SynthIA-7B-conversations.jsonl"

  model = AutoModelForCausalLM.from_pretrained(
      model_path,
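The hunk above cuts off inside the `from_pretrained(` call. A self-contained sketch of how the surrounding code plausibly continues, with generation settings that are assumptions rather than the card's exact values:

```python
# Assumed continuation of the truncated snippet: load model and tokenizer,
# generate one reply under the Tree of Thoughts system message, and append
# the exchange to the JSONL log named by output_file_path.
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/SynthIA-7B-v1.3"
output_file_path = "./SynthIA-7B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # assumption; the card's exact kwargs are cut off
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

system = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning. "
    "Always answer without hesitation."
)
user = "Explain how rainbows form."
prompt = f"SYSTEM: {system}\nUSER: {user}\nASSISTANT: "  # assumed template

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs, max_new_tokens=512, do_sample=True, temperature=0.7
    )
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# Append this turn to the conversation log, one JSON object per line.
with open(output_file_path, "a") as f:
    f.write(json.dumps({"prompt": prompt, "response": answer}) + "\n")
```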
@@ -146,9 +146,9 @@ Exercise caution and cross-check information when necessary. This is an uncensor
  Please kindly cite using the following BibTeX:

  ```
- @misc{Synthia-7B-v1.3,
+ @misc{SynthIA-7B-v1.3,
    author = {Migel Tissera},
-   title = {Synthia-7B-v1.3: Synthetic Intelligent Agent},
+   title = {SynthIA-7B-v1.3: Synthetic Intelligent Agent},
    year = {2023},
    publisher = {GitHub, HuggingFace},
    journal = {GitHub repository, HuggingFace repository},
 