---
license: cc-by-nc-sa-4.0
---
|
### Introduction

This model is based on the research described in the paper "Enhancing Cervical Cancer Cytology Screening via Artificial Intelligence Innovation". The paper shows how advanced AI techniques can significantly improve the accuracy and efficiency of cervical cancer screening, offering a more scalable and cost-effective alternative to traditional methods. Specifically, this model classifies tile images from liquid-based cytology (LBC) specimens at low magnification (x10) into normal and abnormal categories, a departure from the high-magnification or single-cell approaches commonly used in computational cytology.
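Because the model operates on low-magnification tiles rather than whole specimen images, a specimen image must first be split into tiles. Below is a minimal sketch using Pillow, assuming non-overlapping square tiles; the 224-pixel tile size and the dummy image dimensions are illustrative assumptions, not values taken from the paper:

```python
from PIL import Image


def tile_image(image, tile_size=224):
    """Split an image into non-overlapping square tiles, dropping partial edges.

    tile_size=224 is an assumed example value, not a setting from the paper.
    """
    width, height = image.size
    tiles = []
    for top in range(0, height - tile_size + 1, tile_size):
        for left in range(0, width - tile_size + 1, tile_size):
            tiles.append(image.crop((left, top, left + tile_size, top + tile_size)))
    return tiles


# Example with a blank 1000x700 stand-in image: a 4x3 grid of 224-px tiles
specimen = Image.new("RGB", (1000, 700))
tiles = tile_image(specimen)
print(len(tiles))  # 12
```

Each resulting tile can then be passed to the model individually, as shown in the Usage section.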
|
|
|
### Model Description

- **Paper**: https://www.nature.com/articles/s41598-024-70670-6
- **Repository**: https://github.com/kuri54/GynAIe
- **License**: CC-BY-NC-SA-4.0
|
|
|
### Training Details

- **Total Images**: 8000
- **Normal Images**: 4000
- **Abnormal Images**: 4000
  - LSIL: 1000
  - HSIL: 1000
  - SCC: 1000
  - ADC: 1000
- **Magnification Level**: x10
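For reference, the class composition above can be restated as a small data structure; the dictionary below simply encodes the counts listed here and checks that they add up:

```python
# Restates the training-set composition from the list above
composition = {
    "normal": 4000,
    "abnormal": {"LSIL": 1000, "HSIL": 1000, "SCC": 1000, "ADC": 1000},
}

abnormal_total = sum(composition["abnormal"].values())
total = composition["normal"] + abnormal_total
print(abnormal_total, total)  # 4000 8000
```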
|
|
|
### Usage

This model is not intended to be used in isolation. To fully utilize its capabilities and the techniques developed in the paper, please refer to the accompanying code in our [GitHub repository](https://github.com/kuri54/GynAIe), which explains how to use the model effectively in your applications.

For full documentation, example scripts, and further details, visit the GitHub repository.
|
|
|
```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = CLIPModel.from_pretrained("kuri54/GynAIe-B16-8k").to(device)
processor = CLIPProcessor.from_pretrained("kuri54/GynAIe-B16-8k")

image = Image.open("path/to/image")

# Zero-shot class prompts: normal vs. abnormal
text = ["a image of a normal", "a image of a anomaly"]
labels = ["normal", "abnormal"]

inputs = processor(text=text, images=image, return_tensors="pt", padding=True).to(device)
outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=1).cpu().detach().numpy()
predicted_class_idx = probs.argmax(-1).item()

print("Class:", labels[predicted_class_idx])
print("Score:", probs)
```
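The snippet above scores a single tile. When screening a whole specimen, predictions from many tiles must be combined into a specimen-level call. The sketch below shows one simple aggregation rule — a hypothetical any-tile threshold, which is our illustrative assumption and not the method from the paper — applied to made-up per-tile probabilities:

```python
import numpy as np

# Hypothetical per-tile probabilities (column 0 = normal, column 1 = abnormal),
# as would be produced by running the snippet above over several tiles.
tile_probs = np.array([
    [0.92, 0.08],
    [0.85, 0.15],
    [0.30, 0.70],
    [0.95, 0.05],
])

# Assumed aggregation rule: flag the specimen if any tile's abnormal
# probability exceeds a threshold.
threshold = 0.5
abnormal_tiles = np.where(tile_probs[:, 1] > threshold)[0]
specimen_flagged = abnormal_tiles.size > 0

print("Abnormal tiles:", abnormal_tiles.tolist())  # [2]
print("Specimen flagged:", specimen_flagged)       # True
```

In practice the threshold and aggregation strategy should be validated against the workflow described in the paper and repository.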