---
base_model: BioMistral/BioMistral-7B
library_name: peft
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- biology
- medical
---

# Model Card for BioMistral-7B-Finetuned

## Model Summary

**BioMistral-7B-Finetuned** is a biomedical language model derived from BioMistral-7B. It is fine-tuned for biomedical question-answering using LoRA (Low-Rank Adaptation) on a 4-bit quantized base, and is particularly suited to understanding and generating biomedical text in English.

---

## Model Details

### Model Description

This model was fine-tuned for biomedical applications, primarily focusing on enhancing accuracy in question-answering tasks within this domain.

- **Base Model**: BioMistral-7B
- **License**: apache-2.0
- **Fine-tuned for Task**: Biomedical Q&A, text generation
- **Quantization**: 4-bit precision with BitsAndBytes for efficient deployment
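The card states 4-bit BitsAndBytes quantization but does not record the exact settings. A typical configuration for loading the base model in 4 bits might look like the following; the NF4 quantization type, bfloat16 compute dtype, and double quantization are common QLoRA-style choices, not values taken from this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed settings: NF4 with bfloat16 compute and double quantization
# (a common QLoRA-style setup; not confirmed by this card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Load the base model in 4-bit precision (requires a CUDA GPU
# and the bitsandbytes package).
base_model = AutoModelForCausalLM.from_pretrained(
    "BioMistral/BioMistral-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
```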

## Uses

### Direct Use

The model is suitable for biomedical question-answering and related text-generation tasks.

### Out-of-Scope Use

Not recommended for general-purpose NLP tasks outside the biomedical domain or for clinical decision-making.

---

## How to Get Started with the Model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "BioMistral/BioMistral-7B"
adapter_id = "BeastGokul/BioMistral-7B-Finetuned"

# Load the tokenizer and base model, then apply the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Example usage
input_text = "What are the symptoms of diabetes?"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

---

## Training Details

### Training Procedure

The model was fine-tuned using the LoRA (Low-Rank Adaptation) method, with a configuration set for biomedical question-answering.

#### Training Hyperparameters

- **Precision**: 4-bit quantization with BitsAndBytes
- **Learning Rate**: 2e-5
- **Batch Size**: effective batch size of 16 (4 per device, gradient accumulation steps of 4)
- **Number of Epochs**: 3
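
The stated hyperparameters could be expressed as the following configuration sketch. The LoRA rank, alpha, dropout, and target modules are illustrative assumptions (the card only states that LoRA was used); the `TrainingArguments` values match the hyperparameters reported above:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings (r, lora_alpha, lora_dropout, target_modules) are
# assumptions for illustration; the card does not record them.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# These values match the hyperparameters reported in this card.
training_args = TrainingArguments(
    output_dir="biomistral-7b-finetuned",
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # x4 accumulation = effective 16
    gradient_accumulation_steps=4,
    num_train_epochs=3,
)
```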


## Framework versions

- PEFT 0.13.2