
falcon-7b-sharded-bf16-finetuned-mental-health-conversational

This model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the custom heliosbrahma/mental_health_chatbot_dataset dataset.

Model description

This model is fine-tuned on a custom mental health conversational dataset. The rationale is to answer mental-health-related queries with responses that can be factually verified, rather than producing incoherent text.
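A minimal inference sketch follows, assuming the adapter is published as AgeNtX071/Pefted on top of the ybelkada/falcon-7b-sharded-bf16 base; the prompt template is illustrative and may not match the dataset's exact format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "ybelkada/falcon-7b-sharded-bf16"
adapter_id = "AgeNtX071/Pefted"  # the fine-tuned adapter from this card

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # Falcon relied on custom modeling code at this Transformers version
)
# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Prompt template is an assumption; check the dataset for the exact format.
prompt = "<HUMAN>: How can I manage anxiety before an exam?\n<ASSISTANT>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```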

Intended uses & limitations

The model was trained on a dataset that may contain sensitive information related to mental health. It is important to note that while mental health chatbots built on this model can be helpful, they are not a replacement for professional mental health care.

Training and evaluation data

This model was trained on the custom heliosbrahma/mental_health_chatbot_dataset dataset, which contains 172 rows of conversational question-and-answer pairs.
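A quick way to inspect the dataset (a sketch; split name follows the usual Hub convention):

```python
from datasets import load_dataset

# Load the custom conversational dataset from the Hugging Face Hub.
dataset = load_dataset("heliosbrahma/mental_health_chatbot_dataset", split="train")
print(len(dataset))  # expected: 172 rows
print(dataset[0])    # one conversational question-and-answer pair
```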

Training procedure

This model was fine-tuned on a custom dataset using the QLoRA technique on the free-tier GPU available in Google Colab.
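A minimal QLoRA setup sketch consistent with the procedure above. The LoRA rank, alpha, and dropout values here are illustrative assumptions, not the values actually used:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "ybelkada/falcon-7b-sharded-bf16"

# QLoRA: load the base model in 4-bit NF4 so it fits on a free-tier Colab GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on Falcon's fused attention projection; r/alpha/dropout are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```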

Training hyperparameters

The following hyperparameters were used during training:

Training results

Framework versions

  • Transformers 4.31.0
  • PyTorch 2.0.1+cu118
  • Datasets 2.14.2
  • Tokenizers 0.13.3
