nayem-ng committed on
Commit 473f025 · verified · 1 Parent(s): 17fc029

Update README.md

Files changed (1): README.md (+3 −7)
README.md CHANGED
@@ -71,7 +71,9 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 ### Training Data
 
-The model was fine-tuned using the mlabonne/mini-platypus dataset, which consists of diverse text inputs designed to enhance the model's capabilities in conversational settings
+The model was fine-tuned using the mlabonne/mini-platypus dataset, which consists of diverse text inputs designed to enhance the model's capabilities in conversational settings.
+
+[mlabonne/mini-platypus](https://huggingface.co/datasets/mlabonne/mini-platypus)
 
 ### Training Procedure
 
@@ -91,12 +93,6 @@ The model was trained using bfloat16 (bf16) mixed precision, which allows for fa
 - Evaluation strategy: Evaluations are performed every 1000 steps to monitor the model's performance during training.
 
 
-### Testing Data
-
-Dataset Used: The evaluation was conducted using the same dataset, mlabonne/mini-platypus, used for training. This dataset is suitable for assessing the model's performance on casual language generation tasks.
-[mlabonne/mini-platypus](https://huggingface.co/datasets/mlabonne/mini-platypus)
-
-
 ## Model Examination
 
 Further interpretability studies can be conducted to understand decision-making processes within the model's responses.
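The training setup the README describes (fine-tuning on mlabonne/mini-platypus with bf16 mixed precision and evaluation every 1000 steps) can be sketched as a plain configuration dict. The learning rate, epoch count, and the `evaluation_steps` helper below are illustrative assumptions; only the dataset, bf16, and the 1000-step evaluation interval come from the commit.

```python
# Sketch of the fine-tuning configuration described in the README diff.
# Values marked "assumed" are illustrative, not taken from the commit.
training_config = {
    "dataset": "mlabonne/mini-platypus",  # stated in the diff
    "bf16": True,                         # bfloat16 mixed precision (stated)
    "eval_strategy": "steps",             # evaluate on a step schedule
    "eval_steps": 1000,                   # every 1000 steps (stated)
    "learning_rate": 2e-4,                # assumed
    "num_train_epochs": 1,                # assumed
}

def evaluation_steps(total_steps: int, eval_steps: int) -> list[int]:
    """Steps at which evaluation fires under a step-based schedule."""
    return list(range(eval_steps, total_steps + 1, eval_steps))

# For an assumed 3500-step run, evaluation would run three times.
print(evaluation_steps(3500, training_config["eval_steps"]))  # [1000, 2000, 3000]
```

The dict mirrors the argument names commonly used by Trainer-style APIs, so the stated settings are easy to map onto an actual training script.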