---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---

# Model Card for Mistral-7B-Instruct-v0.1-QLoRa-medical-QA

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/PUBFPpFxsrWRlkYzh7lwX.gif)

This is a QA model for answering medical questions.

Foundation model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1

Dataset: https://huggingface.co/datasets/Laurent1/MedQuad-MedicalQnADataset_128tokens_max
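The dataset pairs medical questions with answers capped at 128 tokens. A minimal sketch of wrapping such a pair in Mistral-7B-Instruct's `[INST]` chat format is shown below; the column names used in the example are assumptions, so check the dataset card for the actual schema.

```python
# Sketch: format a QA pair into Mistral-7B-Instruct's instruction template.
# The question/answer strings here are illustrative, not from the dataset.

def format_prompt(question: str, answer: str) -> str:
    """Wrap a QA pair in Mistral-7B-Instruct's [INST] chat format."""
    return f"<s>[INST] {question} [/INST] {answer}</s>"

example = format_prompt(
    "What are the symptoms of glaucoma?",
    "Glaucoma often has no early symptoms; vision loss develops gradually.",
)
print(example)
```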
The model was fine-tuned on 2 x NVIDIA T4 GPUs (2 x 14.8 GB GPU memory) plus a CPU (29 GB RAM).
## Model Details

The model is based on the foundation model Mistral-7B-Instruct-v0.1.
It was fine-tuned with the Supervised Fine-tuning Trainer (`SFTTrainer` from `trl`) and PEFT LoRA.
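The steps above can be sketched as a QLoRA run: load the base model in 4-bit, attach a LoRA adapter via a `peft` config, and train with `trl`'s `SFTTrainer`. All hyperparameter values below are placeholders, not the ones actually used (those are in the screenshot under Training Details), and the `trl` constructor arguments match the older `SFTTrainer` API, which has since changed across versions.

```python
# Illustrative QLoRA fine-tuning sketch (placeholder hyperparameters).
# Call main() to launch a run; it is not executed on import.

LORA_KWARGS = dict(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

def main():
    import torch
    from datasets import load_dataset
    from peft import LoraConfig
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from trl import SFTTrainer

    base = "mistralai/Mistral-7B-Instruct-v0.1"
    # 4-bit quantization (the "Q" in QLoRA) keeps the 7B model within 2 x T4 memory.
    bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
    model = AutoModelForCausalLM.from_pretrained(
        base, quantization_config=bnb, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(base)

    # Only the first 5100 rows were used (see Bias, Risks, and Limitations).
    data = load_dataset(
        "Laurent1/MedQuad-MedicalQnADataset_128tokens_max", split="train[:5100]"
    )

    trainer = SFTTrainer(
        model=model,
        train_dataset=data,
        peft_config=LoraConfig(**LORA_KWARGS),
        tokenizer=tokenizer,
        dataset_text_field="text",  # assumed column name; check the dataset schema
        max_seq_length=512,
    )
    trainer.train()
    trainer.save_model("mistral-7b-medical-qa-qlora")
```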
### Libraries

## Bias, Risks, and Limitations

To reduce training time, the model was trained on only the first 5100 rows of the dataset.
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations.
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue in large language models.
## Training Details

### Notebook used for the training

You can find it in the Files and versions tab.

### Training Data

https://huggingface.co/datasets/Laurent1/MedQuad-MedicalQnADataset_128tokens_max

#### Training Hyperparameters

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/C6XTGVrn4D1Sj2kc9Dq2O.png)

#### Times

Training duration: 6287.4 s (roughly 1 h 45 min)
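Since this repository stores a PEFT adapter rather than full model weights, inference requires loading the base model and attaching the adapter. The sketch below assumes the standard `peft` + `transformers` loading pattern; `ADAPTER_ID` is a placeholder for this repository's id.

```python
# Sketch: run inference with the LoRA adapter attached to the base model.
# generate() is not called here; it downloads the 7B base model when invoked.

BASE_ID = "mistralai/Mistral-7B-Instruct-v0.1"

def build_prompt(question: str) -> str:
    """Mistral instruct chat format for a single user turn."""
    return f"<s>[INST] {question} [/INST]"

def generate(question: str, adapter_id: str) -> str:
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    model = AutoModelForCausalLM.from_pretrained(
        BASE_ID, torch_dtype=torch.float16, device_map="auto"
    )
    model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```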