
Is this model similar to OpenCLIP?

#1
by sannin - opened

Hi,

I don't understand how this model works. Is it similar to the OpenCLIP models?

Beijing Academy of Artificial Intelligence org

Thank you for your attention!

For the usage of our model, please refer to: https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/visual.

The model architecture is quite different from CLIP's; we will release our paper in a few days.
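
For orientation, here is a minimal sketch of what usage might look like, based on the examples in the linked repository. The import path, the `Visualized_BGE` class name, the `encode` signature, and the checkpoint filename are assumptions to verify against that code:

```python
import torch
# Import path and class name assumed from the linked FlagEmbedding/visual code.
from FlagEmbedding.visual.modeling import Visualized_BGE

# Hypothetical checkpoint path; the Visualized-BGE weights are downloaded separately.
model = Visualized_BGE(
    model_name_bge="BAAI/bge-base-en-v1.5",
    model_weight="./Visualized_base_en_v1.5.pth",
)
model.eval()

with torch.no_grad():
    # A composed query: an image plus a text instruction.
    query_emb = model.encode(image="./query.png", text="a red running shoe")
    # Image-only and text-only candidates.
    cand_img_emb = model.encode(image="./candidate.png")
    cand_txt_emb = model.encode(text="lightweight trail running shoes")

# The repository's examples compare embeddings with a plain dot product,
# which implies the returned vectors are already normalized.
print((query_emb @ cand_img_emb.T).item())
print((query_emb @ cand_txt_emb.T).item())
```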

What about the use cases? I was wondering whether the vectors can be used in a vector database to calculate similarity between documents. For example, in e-commerce, can it be used to calculate similarity between products?

Beijing Academy of Artificial Intelligence org

> What about the use cases? I was wondering whether the vectors can be used in a vector database to calculate similarity between documents. For example, in e-commerce, can it be used to calculate similarity between products?

Yes, I think you can give it a try.
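
As a sketch of that use case, assuming embeddings produced by the model above: encode each product (text, image, or both) once, store the vectors in a vector database or index, and rank products by cosine similarity. The SKUs, the 768 dimension, and the random placeholder vectors below are purely illustrative:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in product vectors; in practice these come from model.encode(...)
# and would be stored in a vector database rather than an in-memory dict.
catalog = {
    "sku-001": np.random.rand(768),
    "sku-002": np.random.rand(768),
    "sku-003": np.random.rand(768),
}
query_vec = np.random.rand(768)  # embedding of the query product, text, or image

# Rank catalog products by similarity to the query.
ranked = sorted(
    ((sku, cosine_similarity(query_vec, vec)) for sku, vec in catalog.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked[:3])
```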

> Thank you for your attention!
>
> For the usage of our model, please refer to: https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/visual.
>
> The model architecture is quite different from CLIP's; we will release our paper in a few days.

The GitHub page returns a 404, and FlagEmbedding has no module named visual.
Has the latest code not been committed yet?

Beijing Academy of Artificial Intelligence org
edited Nov 7, 2024

> > Thank you for your attention!
> >
> > For the usage of our model, please refer to: https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/visual.
> >
> > The model architecture is quite different from CLIP's; we will release our paper in a few days.
>
> The GitHub page returns a 404, and FlagEmbedding has no module named visual.
> Has the latest code not been committed yet?

Hello, the FlagEmbedding repository has just been refactored. Here is the new link: https://github.com/FlagOpen/FlagEmbedding/tree/master/research/visual_bge
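
For reference, a rough sketch of how setup might look under the refactored layout; the install steps and the import path are assumptions inferred from the new folder location and should be checked against its README:

```python
# Assumed setup for the refactored layout (verify against the repository's README):
#   git clone https://github.com/FlagOpen/FlagEmbedding.git
#   cd FlagEmbedding/research/visual_bge
#   pip install -e .
#
# After installation, the import path is assumed to change from
# FlagEmbedding.visual to a standalone visual_bge package:
from visual_bge.modeling import Visualized_BGE  # adjust if the module name differs
```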
