---
library_name: transformers
tags:
- medical
license: bsd-3-clause
language:
- en
---
# Model Card for umarigan/blip-image-captioning-base-chestxray-finetuned
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Umar Igan
- **Model type:** Vision-language model (VLM) for image captioning
- **Language(s) (NLP):** English
- **License:** BSD-3-Clause
- **Finetuned from model [optional]:** Salesforce/blip-image-captioning-base
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/umarigan/blip-image-captioning-base-chestxray-finetuned
## Uses
This model is a vision-language model fine-tuned on a chest X-ray medical dataset. Its outputs are for research and demonstration only and must not be used as medical advice.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Example usage:
```python
import torch
from PIL import Image
from transformers import BlipForConditionalGeneration, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

model = BlipForConditionalGeneration.from_pretrained("umarigan/blip-image-captioning-base-chestxray-finetuned").to(device)
processor = AutoProcessor.from_pretrained("umarigan/blip-image-captioning-base-chestxray-finetuned")

# Load a chest X-ray image (replace the path with your own file)
image = Image.open("chest_xray.png").convert("RGB")

# Preprocess the image and generate a caption
inputs = processor(images=image, return_tensors="pt").to(device)
pixel_values = inputs.pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
```
### Training Data
The model was fine-tuned on the [Shrey-1329/cxiu_hf_dataset](https://huggingface.co/datasets/Shrey-1329/cxiu_hf_dataset) chest X-ray captioning dataset.
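The snippet below is a minimal sketch of loading this dataset with the 🤗 `datasets` library; the split name and column layout are assumptions, so inspect the loaded dataset object to confirm them.

```python
from datasets import load_dataset

# Load the chest X-ray captioning dataset from the Hub
dataset = load_dataset("Shrey-1329/cxiu_hf_dataset", split="train")

print(dataset)        # inspect the available columns
example = dataset[0]  # one image-caption example
```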
#### Training Hyperparameters
- Learning rate: 5e-5
- Epochs: 10
- Dataset size: ~1k examples
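For reference, below is a minimal sketch of a fine-tuning loop consistent with these hyperparameters, assuming the `dataset` object from the loading sketch above; the column names `image` and `text`, the batch size, and the collation logic are assumptions, not the exact training script.

```python
import torch
from torch.utils.data import DataLoader
from transformers import BlipForConditionalGeneration, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to(device)
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")

def collate(batch):
    # "image" and "text" column names are assumptions about the dataset schema
    images = [ex["image"].convert("RGB") for ex in batch]
    texts = [ex["text"] for ex in batch]
    enc = processor(images=images, text=texts, padding=True, return_tensors="pt")
    enc["labels"] = enc["input_ids"]  # BLIP computes the captioning loss from these labels
    return enc

loader = DataLoader(dataset, batch_size=8, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(10):
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```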
#### Summary
A simple BLIP model fine-tuned for captioning on a medical imaging dataset.
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** GPU (NVIDIA L4)
- **Hours used:** 1
- **Cloud Provider:** Google
- **Compute Region:** Frankfurt
- **Carbon Emitted:** [More Information Needed]
### Compute Infrastructure
#### Hardware
Google Colab L4 GPU
## Model Card Contact
Umar Igan