---
license: apache-2.0
datasets:
- Ejafa/ye-pop
---

A ViT-B/32 CLIP model trained for 4 epochs on the [ye-pop](https://huggingface.co/datasets/Ejafa/ye-pop) dataset (491,520 images and their alt-texts). Research artifact of [clip-synthetic-captions](https://github.com/nopperl/clip-synthetic-captions).
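For context, CLIP models like this one score image-text pairs with a scaled cosine similarity between encoder outputs. The sketch below illustrates that scoring step using random NumPy arrays as stand-ins for the real encoder embeddings; the embedding dimension (512 for ViT-B/32) and the logit scale value are standard CLIP choices, not specifics of this checkpoint.

```python
import numpy as np

# Random stand-ins for the model's image/text encoder outputs.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=(1, 512))  # ViT-B/32 embeds into 512 dims
text_emb = rng.normal(size=(3, 512))   # one row per candidate caption

# CLIP L2-normalizes both sides, then takes a scaled dot product.
image_emb /= np.linalg.norm(image_emb, axis=-1, keepdims=True)
text_emb /= np.linalg.norm(text_emb, axis=-1, keepdims=True)
logit_scale = 100.0  # exp of the learned temperature, ~100 in trained CLIP
logits = logit_scale * image_emb @ text_emb.T

# Softmax over captions yields per-caption match probabilities.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # (1, 3)
```

Swapping the random arrays for the checkpoint's actual encoder outputs (e.g. via the `open_clip` library it was presumably trained with) gives zero-shot classification scores.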

Note: this checkpoint is likely not directly useful, as it is severely undertrained.