{"paper_url": "https://huggingface.co/papers/2103.00020", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Modeling Caption Diversity in Contrastive Vision-Language Pretraining](https://huggingface.co/papers/2405.00740) (2024)\n* [CatLIP: CLIP-level Visual Recognition Accuracy with 2.7x Faster Pre-training on Web-scale Image-Text Data](https://huggingface.co/papers/2404.15653) (2024)\n* [Adapting Dual-encoder Vision-language Models for Paraphrased Retrieval](https://huggingface.co/papers/2405.03190) (2024)\n* [RankCLIP: Ranking-Consistent Language-Image Pretraining](https://huggingface.co/papers/2404.09387) (2024)\n* [CLIP with Quality Captions: A Strong Pretraining for Vision Tasks](https://huggingface.co/papers/2405.08911) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"} |