Update README.md
README.md CHANGED

@@ -14,7 +14,7 @@ It was introduced in the paper [ColPali: Efficient Document Retrieval with Visio
 
 ## Model Description
 
-This model is trained with an extra
+This model is trained with an extra 150k samples from the Docmatix dataset!
 
 This model is built iteratively starting from an off-the-shelf [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) model.
 We finetuned it to create [BiSigLIP](https://huggingface.co/vidore/bisiglip) and fed the patch-embeddings output by SigLIP to an LLM, [PaliGemma-3B](https://huggingface.co/google/paligemma-3b-mix-448), to create [BiPali](https://huggingface.co/vidore/bipali).

@@ -58,7 +58,7 @@ def main() -> None:
     """Example script to run inference with ColPali"""
 
     # Load model
-    model_name = "
+    model_name = "manu/colpali-3b-mix-448-docmatix"
     model = ColPali.from_pretrained("google/paligemma-3b-mix-448", torch_dtype=torch.bfloat16, device_map="cuda").eval()
     model.load_adapter(model_name)
     processor = AutoProcessor.from_pretrained(model_name)
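The snippet in the second hunk only loads the model (base PaliGemma weights plus the fine-tuned adapter); retrieval with ColPali then compares multi-vector query and document embeddings with a ColBERT-style late-interaction (MaxSim) operator. A minimal NumPy sketch of that scoring step, for intuition only — `maxsim_score` is a hypothetical helper written here, not part of the `colpali_engine` API, and the toy embeddings stand in for the real model outputs:

```python
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """ColBERT-style late-interaction score.

    For each query-token embedding, take the maximum dot-product
    similarity over all document patch embeddings, then sum over
    query tokens.
    """
    sim = query_emb @ doc_emb.T          # (n_query_tokens, n_patches)
    return float(sim.max(axis=1).sum())  # best patch per token, summed

# Toy example: a document whose patches align with the query tokens
# should outscore one whose patches point away from them.
query = np.eye(2)                        # 2 query tokens, dim 2
doc_aligned = np.eye(2)                  # patches match the query tokens
doc_orthogonal = np.array([[0.0, -1.0], [-1.0, 0.0]])
print(maxsim_score(query, doc_aligned) > maxsim_score(query, doc_orthogonal))
```

In practice the embeddings come from the processor/model pair loaded above, and the score is computed for every query against every indexed page image.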