---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
- fr
- de
- hi
- it
- pt
- es
- th
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
datasets:
- lavita/AlpaCare-MedInstruct-52k
metrics:
- accuracy
model-index:
- name: Llama-3.1-8B-AlpaCare-MedInstruct
  results:
  - task:
      type: text-generation
    dataset:
      name: GEval
      type: GEval
    metrics:
    - name: Medical Q&A
      type: Medical Q&A 20 shots
      value: 70
pipeline_tag: text-generation
---

# Llama-3.1-8B AlpaCare MedInstruct
<img src="https://cdn-uploads.huggingface.co/production/uploads/6168218a4ed0b975c18f82a8/bIta8beT_Sii8xp9uZ2A5.png" width="250">


- **Developed by:** Svngoku
- **License:** apache-2.0
- **Finetuned from model:** `unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit`
- **Max Context Window:** `4096`
- **Function Calling:** the model supports function calling (see the sketch below)
- **Capacity:** real-time and batch inference (a batch sketch follows the usage example below)
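
Since the card lists function calling as a capability, here is a minimal sketch of building a tool-use prompt with the standard `transformers` chat template for Llama 3.1. The `get_drug_interactions` helper is hypothetical, and whether this fine-tune preserves the base model's tool-use behavior is untested:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Svngoku/Llama-3.1-8B-AlpaCare-MedInstruct")

def get_drug_interactions(drug_name: str):
    """Look up known interactions for a drug.

    Args:
        drug_name: Name of the drug to look up.
    """
    ...  # hypothetical tool, shown only for schema extraction

messages = [{"role": "user", "content": "What interacts with omeprazole?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_drug_interactions],  # transformers derives the JSON schema from the signature and docstring
    add_generation_prompt=True,
    tokenize=False,
)
```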

## Inference with Unsloth

```py
max_seq_length = 4096  # matches the model's max context window
dtype = None           # None = auto-detect (float16 on older GPUs, bfloat16 on Ampere+)
load_in_4bit = True    # use 4-bit quantization to reduce memory usage

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
```


```py
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Svngoku/Llama-3.1-8B-AlpaCare-MedInstruct",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```
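
If Unsloth is not available, the checkpoint should also load with plain `transformers` and `bitsandbytes`; a minimal sketch, untested against this exact checkpoint:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Svngoku/Llama-3.1-8B-AlpaCare-MedInstruct",
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
tokenizer = AutoTokenizer.from_pretrained("Svngoku/Llama-3.1-8B-AlpaCare-MedInstruct")
```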

```py
def generate_medical_answer(input: str = "", instruction: str = "") -> str:
    # Build the Alpaca-style prompt, leaving the response slot empty
    inputs = tokenizer(
        [alpaca_prompt.format(instruction, input, "")],
        return_tensors="pt",
    ).to("cuda")

    # Generate and decode the full sequence (prompt + completion)
    output = model.generate(**inputs, max_new_tokens=1024)
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)

    # Keep only the text after the "### Response:" marker
    response_start = generated_text.find("### Response:") + len("### Response:")
    return generated_text[response_start:].strip()
```

```py
generate_medical_answer(
    instruction = "What are the pharmacodynamics of Omeprazole?",
    input = "Write the text in plain markdown."
)
```
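
The card lists batch inference as a capability; here is a minimal batching sketch reusing `alpaca_prompt`, `model`, and `tokenizer` from above. The pad-token and left-padding settings are assumptions needed for batched generation, since Llama tokenizers ship without a pad token:

```py
questions = [
    "What are the pharmacodynamics of Omeprazole?",
    "What are the common side effects of Metformin?",
]
prompts = [alpaca_prompt.format(q, "", "") for q in questions]

# Reuse EOS as pad token and pad on the left so generated tokens
# start right after each prompt
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

batch = tokenizer(prompts, return_tensors="pt", padding=True).to("cuda")
outputs = model.generate(**batch, max_new_tokens=512)
answers = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```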


## Evaluation

The model has been evaluated with `gpt-4o-mini` via `DeepEval`.
The evaluation prompt is quite strict, which gives us confidence in the model's robustness and its ability to adapt to the newly fine-tuned data.

- Success log: [test_case_0](https://app.confident-ai.com/project/clzbc1ind05qj8cmtfa3pjho7/unit-tests/clzbmmq330d5s8cmtdtpm888m/test-cases?pageNumber=1&pageSize=50&status=all&conversational=false&testCaseId=288507)
- Failed log: [test_case_7](https://app.confident-ai.com/project/clzbc1ind05qj8cmtfa3pjho7/unit-tests/clzbmmq330d5s8cmtdtpm888m/test-cases?pageNumber=1&pageSize=50&status=all&conversational=false&testCaseId=288532)

| Dataset  |   Answer Relevancy |   Correctness (GEval) |   Bias |   Toxicity | Test Result   |   % of Passing Tests |
|:---------|-------------------:|----------------------:|-------:|-----------:|:--------------|---------------------:|
| Dataset 1 |               0.89 |                  0.8  |      0 |          0 | 22 / 28 tests |                78.57 |
| Dataset 2 |               0.85 |                  0.83 |      0 |          0 | 8 / 20 tests  |                40    |
| lavita/MedQuAD |               0.95 |                  0.81 |      0 |          0 | 14 / 20 tests |                70    |


### Evaluation Code

```py
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric, BiasMetric, ToxicityMetric, GEval
from deepeval.test_case import LLMTestCase, LLMTestCaseParams

def evaluate_llama_alpacare_gpt4(medQA):
  # Define the metrics
  answer_relevancy_metric = AnswerRelevancyMetric(
    threshold=0.7,
    model="gpt-4o-mini",
    include_reason=True
  )

  bias = BiasMetric(
    model="gpt-4o-mini",
    include_reason=True,
    threshold=0.8
  )

  toxicity = ToxicityMetric(
    model="gpt-4o-mini",
    include_reason=True
  )

  correctness_metric = GEval(
    name="Correctness",
    threshold=0.7,
    model="gpt-4o-mini",
    criteria="Determine whether the actual output is factually correct based on the expected output, focusing on medical accuracy and adherence to established guidelines.",
    evaluation_steps=[
        "Check whether the facts in 'actual output' contradict any facts in 'expected output' or established medical guidelines.",
        "Penalizes the omission of medical details, depending on their criticality and especially those that could have an impact on the care provided to the patient or on his or her understanding.",
        "Ensure that medical terminology and language used are precise and appropriate for medical context.",
        "Assess whether the response adequately addresses the specific medical question posed.",
        "Vague language or contradicting opinions are acceptable in general contexts, but factual inaccuracies, especially regarding medical data or guidelines, are not."
    ],
    # The criteria references the expected output, so pass it to the metric as well
    evaluation_params=[LLMTestCaseParams.INPUT, LLMTestCaseParams.ACTUAL_OUTPUT, LLMTestCaseParams.EXPECTED_OUTPUT]
  )

  test_cases = []

  # Loop through the dataset and evaluate each example
  for example in medQA:
    question = example['Question']
    expected_output = example['Answer']
    question_focus = example['instruction']

    # Generate the actual output
    actual_output = generate_medical_answer(
        instruction=question,
        input=question_focus,
    )

    # Define the test case
    test_case = LLMTestCase(
      input=question,
      actual_output=actual_output,
      expected_output=expected_output,
    )

    test_cases.append(test_case)

  evaluate(test_cases, [answer_relevancy_metric, correctness_metric, bias, toxicity])

```
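
A minimal usage sketch, assuming the `lavita/MedQuAD` split exposes `Question`, `Answer`, and `instruction` columns matching the keys used above (adjust the names to the actual schema):

```py
from datasets import load_dataset

# Evaluate on a small sample to keep GPT-4o-mini judging costs down
medQA = load_dataset("lavita/MedQuAD", split="train").select(range(20))
evaluate_llama_alpacare_gpt4(medQA)
```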



This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)