lvkaokao committed
Commit: 9b5f27c
1 Parent(s): 0ff1545
Files changed (1): README.md (+4 −4)
README.md CHANGED

@@ -4,7 +4,7 @@ license: apache-2.0
 
 ## Finetuning on [habana](https://habana.ai/) HPU
 
- This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we align it with DPO algorithm. For more details, you can refer our blog: [NeuralChat: Simplifying Supervised Instruction Fine-Tuning and Reinforcement Aligning](https://medium.com/intel-analytics-software/neuralchat-simplifying-supervised-instruction-fine-tuning-and-reinforcement-aligning-for-chatbots-d034bca44f69).
+ This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open-source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). We then aligned it with the DPO algorithm. For more details, refer to our blogs: [NeuralChat: Simplifying Supervised Instruction Fine-Tuning and Reinforcement Aligning](https://medium.com/intel-analytics-software/neuralchat-simplifying-supervised-instruction-fine-tuning-and-reinforcement-aligning-for-chatbots-d034bca44f69) and [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Habana Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
 
 ## Model date
 Neural-chat-7b-v3 was trained between September and October, 2023.

@@ -43,14 +43,14 @@ The following hyperparameters were used during training:
 ```shell
 import transformers
 model = transformers.AutoModelForCausalLM.from_pretrained(
-    'Intel/neural-chat-7b-v3'
+    'Intel/neural-chat-7b-v3-1'
 )
 ```
 
 ## Ethical Considerations and Limitations
- neural-chat-7b-v3 can produce factually incorrect output, and should not be relied on to produce factually accurate information. neural-chat-7b-v3 was trained on [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
+ neural-chat-7b-v3-1 can produce factually incorrect output and should not be relied on to produce factually accurate information. neural-chat-7b-v3-1 was trained on [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
 
- Therefore, before deploying any applications of neural-chat-7b-v3, developers should perform safety testing.
+ Therefore, before deploying any applications of neural-chat-7b-v3-1, developers should perform safety testing.
 
 ## Disclaimer
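For context, the corrected checkpoint name can be exercised as below. This is a minimal sketch, not part of the commit: the diff only shows the `AutoModelForCausalLM.from_pretrained` call, so the tokenizer step is an assumption based on standard Hugging Face `transformers` usage.

```python
# Minimal sketch of loading the renamed checkpoint from this commit.
# The AutoTokenizer call is an assumption; the diff shows only the model load.
MODEL_ID = "Intel/neural-chat-7b-v3-1"  # renamed from 'Intel/neural-chat-7b-v3'


def load(model_id: str = MODEL_ID):
    """Download the checkpoint (several GB) and return (tokenizer, model)."""
    import transformers  # imported lazily so the sketch is importable offline

    tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
    model = transformers.AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model
```

Note the extra `-1` suffix: the old identifier in the README pointed at a different repository than the one this model card describes, which is what the commit fixes.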