---
library_name: transformers
tags:
- medical
license: bsd-3-clause
language:
- en
---

# Model Card for umarigan/blip-image-captioning-base-chestxray-finetuned

<!-- Provide a quick summary of what the model is/does. -->

A BLIP image-captioning model fine-tuned on a chest X-ray dataset to generate captions for chest X-ray images. Its outputs are for research use only and are not medical advice.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Umar Igan
- **Model type:** Vision-language model (VLM) for image captioning
- **Language(s) (NLP):** English
- **License:** BSD-3-Clause
- **Finetuned from model [optional]:** Salesforce/blip-image-captioning-base

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** https://huggingface.co/umarigan/blip-image-captioning-base-chestxray-finetuned

## Uses

This is a VLM fine-tuned on a chest X-ray medical imaging dataset. Its outputs must not be used as medical advice.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Example usage:

```python
import torch
from PIL import Image
from transformers import BlipForConditionalGeneration, AutoProcessor

# Use a GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = BlipForConditionalGeneration.from_pretrained("umarigan/blip-image-captioning-base-chestxray-finetuned").to(device)
processor = AutoProcessor.from_pretrained("umarigan/blip-image-captioning-base-chestxray-finetuned")

# Load a chest X-ray image (replace the path with your own file).
image = Image.open("chest_xray.png").convert("RGB")

inputs = processor(images=image, return_tensors="pt").to(device)
pixel_values = inputs.pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
```

## Training Details

### Training Data

[Shrey-1329/cxiu_hf_dataset](https://huggingface.co/datasets/Shrey-1329/cxiu_hf_dataset)
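
The training script is not part of this card; below is a minimal sketch of inspecting the dataset before fine-tuning. The split name and column layout are assumptions — check the dataset card for the exact schema.

```python
from datasets import load_dataset

# Load the chest X-ray captioning dataset from the Hub.
# The "train" split is an assumption; check the dataset card.
dataset = load_dataset("Shrey-1329/cxiu_hf_dataset", split="train")

print(dataset)            # prints the features (column names) and row count
print(dataset[0].keys())  # e.g. an image column and a caption/findings column
```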

#### Training Hyperparameters

- Learning rate: 5e-5
- Epochs: 10
- Dataset size: 1k
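
The exact training code is not published here; the following is a minimal fine-tuning sketch consistent with the hyperparameters above, continuing from the `dataset` loaded earlier. The batch size, collate logic, and the `"image"`/`"caption"` column names are assumptions.

```python
import torch
from torch.utils.data import DataLoader
from transformers import BlipForConditionalGeneration, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
base = "Salesforce/blip-image-captioning-base"
model = BlipForConditionalGeneration.from_pretrained(base).to(device)
processor = AutoProcessor.from_pretrained(base)

def collate(batch):
    # Column names are assumptions; adapt them to the dataset schema.
    images = [item["image"].convert("RGB") for item in batch]
    captions = [item["caption"] for item in batch]
    enc = processor(images=images, text=captions, padding=True,
                    truncation=True, return_tensors="pt")
    enc["labels"] = enc["input_ids"]  # BLIP derives the captioning loss from labels
    return enc

loader = DataLoader(dataset, batch_size=8, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(10):
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print(f"epoch {epoch}: last loss {loss.item():.4f}")
```
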
#### Summary
A simple BLIP model fine-tuned on medical imaging data.

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** GPU
- **Hours used:** 1
- **Cloud Provider:** Google
- **Compute Region:** Frankfurt
- **Carbon Emitted:** 
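
For reference, a back-of-envelope estimate following the calculator's methodology (energy = power × time × PUE; emissions = energy × grid carbon intensity). Every constant below is an assumption, not a measured value.

```python
# Rough CO2e estimate for the run described above (all constants assumed).
gpu_power_kw = 0.072    # NVIDIA L4 TDP is 72 W; assumes full utilization
hours = 1.0             # "Hours used" above
pue = 1.1               # assumed datacenter power usage effectiveness
grid_kg_per_kwh = 0.35  # assumed grid carbon intensity for the Frankfurt region

emissions_kg = gpu_power_kw * hours * pue * grid_kg_per_kwh
print(f"~{emissions_kg:.2f} kg CO2e")  # on the order of 0.03 kg CO2e
```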

### Compute Infrastructure

Google Colab with a single NVIDIA L4 GPU.



## Model Card Contact

Umar Igan