AgeNtX071 committed on
Commit d089816
1 Parent(s): 24564f2

Update README.md

Files changed (1):
  1. README.md +53 -63
README.md CHANGED
@@ -1,64 +1,54 @@
- ---
- base_model: ybelkada/falcon-7b-sharded-bf16
- tags:
- - generated_from_trainer
- model-index:
- - name: falcon-7b-sharded-bf16-finetuned-mental-health-conversational
-   results: []
- license: mit
- datasets:
- - heliosbrahma/mental_health_chatbot_dataset
- language:
- - en
- metrics:
- - rouge
- pipeline_tag: conversational
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # falcon-7b-sharded-bf16-finetuned-mental-health-conversational
-
- This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on a custom [heliosbrahma/mental_health_chatbot_dataset](https://huggingface.co/datasets/heliosbrahma/mental_health_chatbot_dataset) dataset.
-
- ## Model description
-
- This model is fine-tuned on custom mental health conversational dataset. The rationale behind this is to answer mental health related queries that can be factually verified without responding gibberish words.
-
- ## Intended uses & limitations
-
- The model was trained on the dataset which may contain sensitive information related to mental health. It is important to note that while mental health chatbots built using this model can be helpful, they are not a replacement for professional mental health care.
-
- ## Training and evaluation data
-
- This model was trained on custom [heliosbrahma/mental_health_chatbot_dataset](https://huggingface.co/datasets/heliosbrahma/mental_health_chatbot_dataset) dataset which 172 rows of conversational pair of questions and answers.
-
- ## Training procedure
-
- This model was trained using QLoRA technique to fine-tune on a custom dataset on free-tier GPU available in Google Colab.
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0002
- - train_batch_size: 16
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 64
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.03
- - training_steps: 320
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.31.0
- - Pytorch 2.0.1+cu118
- - Datasets 2.14.2
 
+ ---
+ base_model: ybelkada/falcon-7b-sharded-bf16
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: falcon-7b-sharded-bf16-finetuned-mental-health-conversational
+   results: []
+ license: mit
+ datasets:
+ - heliosbrahma/mental_health_chatbot_dataset
+ language:
+ - en
+ metrics:
+ - rouge
+ pipeline_tag: conversational
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # falcon-7b-sharded-bf16-finetuned-mental-health-conversational
+
+ This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the custom [heliosbrahma/mental_health_chatbot_dataset](https://huggingface.co/datasets/heliosbrahma/mental_health_chatbot_dataset) dataset.
+
+ ## Model description
+
+ This model is fine-tuned on a custom mental health conversational dataset. The rationale is to answer mental-health-related queries with responses that can be factually verified, rather than producing incoherent text.
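+
+ A minimal inference sketch is shown below. It assumes this repo hosts a PEFT (LoRA) adapter produced by the QLoRA run; the repo id is a placeholder, and the `<HUMAN>:`/`<ASSISTANT>:` prompt format is an assumption based on the training dataset.
+
+ ```python
+ import torch
+ from peft import PeftConfig, PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+ adapter_id = "<this-repo-id>"  # placeholder: the Hub id of this adapter repo
+
+ # Load the Falcon-7B base model in 4-bit so it fits on a single free-tier GPU.
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+ peft_config = PeftConfig.from_pretrained(adapter_id)
+ base_model = AutoModelForCausalLM.from_pretrained(
+     peft_config.base_model_name_or_path,  # ybelkada/falcon-7b-sharded-bf16
+     quantization_config=bnb_config,
+     device_map="auto",
+     trust_remote_code=True,
+ )
+ model = PeftModel.from_pretrained(base_model, adapter_id)
+ tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
+
+ # Prompt format assumed from the dataset's <HUMAN>/<ASSISTANT> convention.
+ prompt = "<HUMAN>: How can I cope with anxiety?\n<ASSISTANT>: "
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=256, pad_token_id=tokenizer.eos_token_id)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```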
+
+ ## Intended uses & limitations
+
+ The model was trained on a dataset that may contain sensitive mental-health-related information. It is important to note that while mental health chatbots built with this model can be helpful, they are not a replacement for professional mental health care.
+
+ ## Training and evaluation data
+
+ This model was trained on the custom [heliosbrahma/mental_health_chatbot_dataset](https://huggingface.co/datasets/heliosbrahma/mental_health_chatbot_dataset) dataset, which contains 172 conversational question-and-answer pairs.
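+
+ The raw data can be inspected with the `datasets` library (a minimal sketch; the `train` split name is an assumption):
+
+ ```python
+ from datasets import load_dataset
+
+ # Each row holds one question-answer conversation pair.
+ ds = load_dataset("heliosbrahma/mental_health_chatbot_dataset")
+ print(ds["train"].num_rows)  # expected: 172
+ print(ds["train"][0])
+ ```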
+
+ ## Training procedure
+
+ This model was fine-tuned with the QLoRA technique on a custom dataset, using the free-tier GPU available in Google Colab.
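+
+ A minimal sketch of a QLoRA setup along these lines (4-bit NF4 quantization of the frozen base model plus a small trainable LoRA adapter via `peft`); the LoRA rank, alpha, dropout, and target modules are illustrative assumptions, not values recorded in this card:
+
+ ```python
+ import torch
+ from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+
+ # Quantize the frozen base model to 4-bit NF4 (the "Q" in QLoRA).
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+ model = AutoModelForCausalLM.from_pretrained(
+     "ybelkada/falcon-7b-sharded-bf16",
+     quantization_config=bnb_config,
+     device_map="auto",
+     trust_remote_code=True,
+ )
+ model = prepare_model_for_kbit_training(model)
+
+ # Only the LoRA adapter weights are trained; the 4-bit base stays frozen.
+ lora_config = LoraConfig(
+     r=16,                                # assumption
+     lora_alpha=32,                       # assumption
+     lora_dropout=0.05,                   # assumption
+     bias="none",
+     task_type="CAUSAL_LM",
+     target_modules=["query_key_value"],  # Falcon's fused attention projection
+ )
+ model = get_peft_model(model, lora_config)
+ model.print_trainable_parameters()
+ ```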
+
+ ### Training hyperparameters
+
+ The hyperparameters used during training are summarized below.
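+
+ A minimal sketch of them expressed as `transformers.TrainingArguments` (`output_dir` is a placeholder; the optimizer was the default Adam with betas=(0.9,0.999) and epsilon=1e-08):
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="./falcon-7b-mental-health",  # placeholder
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=8,
+     gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
+     learning_rate=2e-4,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.03,
+     max_steps=320,
+     seed=42,
+ )
+ ```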
+
+ ### Training results
+
+ ### Framework versions
+
+ - Transformers 4.31.0
+ - Pytorch 2.0.1+cu118
+ - Datasets 2.14.2
  - Tokenizers 0.13.3