# Model card for vit_base_patch16_224.owkin_pancancer_ft_lc25000_colon
A Vision Transformer (ViT) image classification model. Pretrained by Owkin on 40M pan-cancer histology tiles from TCGA, then fine-tuned on the colon subset of LC25000.
## Model Details
- Model Type: Image classification / feature backbone
- Model Stats:
  - Params (M): 85.8
  - Image size: 224 x 224 x 3
- Papers:
  - Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling: https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2
- Pretrain Dataset: TCGA: https://portal.gdc.cancer.gov/
- Fine-tuning Dataset: LC25000: https://huggingface.co/datasets/1aurent/LC25000
- Original: https://github.com/owkin/HistoSSLscaling/
- License: https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt
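The stats above can be checked directly with timm. A minimal sketch, assuming timm and torch are installed and the weights can be downloaded from the Hub:

```python
import timm

# load the fine-tuned checkpoint from the Hugging Face Hub
model = timm.create_model(
    "hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_lc25000_colon",
    pretrained=True,
)

# parameter count in millions (~85.8) and expected input size (3, 224, 224)
print(sum(p.numel() for p in model.parameters()) / 1e6)
print(timm.data.resolve_model_data_config(model)["input_size"])
```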
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm

# get example histology image
img = Image.open(
    urlopen(
        "https://datasets-server.huggingface.co/cached-assets/1aurent/LC25000/--/56a7c495692c27afd294a88b7aadaa7b79d8e270/--/default/train/24999/image/image.jpg"
    )
)

# load model from the hub
model = timm.create_model(
    model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_lc25000_colon",
    pretrained=True,
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
```
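The output is a batch of logits over the LC25000 colon classes. Continuing from the snippet above, a minimal sketch of turning it into probabilities and a predicted class; the index-to-name mapping shown is an assumption and should be verified against the checkpoint's label metadata:

```python
import torch

# convert logits to class probabilities and pick the most likely class
probabilities = torch.softmax(output, dim=-1)
predicted_index = int(probabilities.argmax(dim=-1))

# hypothetical label order -- check the model's label metadata before relying on it
class_names = ["benign colonic tissue", "colon adenocarcinoma"]
print(class_names[predicted_index], float(probabilities[0, predicted_index]))
```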
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

# get example histology image
img = Image.open(
    urlopen(
        "https://datasets-server.huggingface.co/cached-assets/1aurent/LC25000/--/56a7c495692c27afd294a88b7aadaa7b79d8e270/--/default/train/24999/image/image.jpg"
    )
)

# load model from the hub
model = timm.create_model(
    model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_lc25000_colon",
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
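timm ViT models also expose `forward_features` and `forward_head`, which can be used to keep the unpooled patch tokens instead of only the pooled embedding. A minimal sketch continuing from the snippet above; the embedding width of 768 for this ViT-B/16 backbone is stated as an assumption:

```python
# unpooled token embeddings: (batch_size, num_tokens, embed_dim), embed_dim is 768 for ViT-B
features = model.forward_features(transforms(img).unsqueeze(0))

# pooled, pre-logits embedding: (batch_size, embed_dim)
pooled = model.forward_head(features, pre_logits=True)
print(features.shape, pooled.shape)
```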
## Citation
```bibtex
@article{Filiot2023.07.21.23292757,
  author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
  title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
  elocation-id = {2023.07.21.23292757},
  year = {2023},
  doi = {10.1101/2023.07.21.23292757},
  publisher = {Cold Spring Harbor Laboratory Press},
  URL = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
  eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
  journal = {medRxiv}
}
```
## Model tree

Base model: 1aurent/vit_base_patch16_224.owkin_pancancer