Is this model similar to OpenClip?

#1
by sannin - opened

Hi,

I don't understand how this model works. Is it similar to the OpenClip models?

Thank you for your attention!

For usage of our model, please refer to: https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/visual.

The model architecture is quite different from the CLIP model; we will release our paper in a few days.
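
For a quick picture before the paper is out, here is a minimal usage sketch based on the README at the link above. The `Visualized_BGE` class, its constructor arguments, and the `encode(image=..., text=...)` call follow that README; the checkpoint file name and image paths are placeholders, and the import path is the one from the link (as noted further down in this thread, the repository has since been refactored, so it may have moved). Check the repo for the exact, current interface.

```python
import torch
from FlagEmbedding.visual.modeling import Visualized_BGE  # path per the link above; may differ after the refactor

# Checkpoint name and image paths are placeholders -- download the released
# weights and point these at your own files.
model = Visualized_BGE(
    model_name_bge="BAAI/bge-base-en-v1.5",
    model_weight="Visualized_base_en_v1.5.pth",
)
model.eval()

with torch.no_grad():
    # Per the README, encode() accepts an image, a text, or both, so a composed
    # (image + instruction) query maps to a single embedding.
    query_emb = model.encode(image="./imgs/query.png", text="same product in a different colour")
    cand_emb = model.encode(image="./imgs/candidate.png")

# A dot product scores the pair (cosine similarity if the embeddings are
# L2-normalized, as BGE embeddings typically are).
similarity = (query_emb @ cand_emb.T).item()
print(similarity)
```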

What about the use cases? I was wondering if the vectors can be used in a vector database to calculate similarity between documents. For example in ecommerce, can it be used to calculate similarity between products?

Yes, I think you can give it a try.
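
To make the ecommerce idea concrete, here is a small, self-contained sketch of the retrieval step. It assumes you have already encoded each product into a fixed-size vector (e.g. with the model above) and simply brute-forces cosine similarity in PyTorch; a real deployment would store the same vectors in a vector database (Faiss, Milvus, pgvector, etc.) and let it do this search at scale. The dimensions and random vectors below are placeholders.

```python
import torch
import torch.nn.functional as F

# Placeholder catalogue: one embedding per product (in practice these come from
# encoding each product's image and/or description).
num_products, dim = 1000, 768
product_embs = F.normalize(torch.randn(num_products, dim), dim=-1)

# Embedding of the product we want to find look-alikes for.
query_emb = F.normalize(torch.randn(1, dim), dim=-1)

# On L2-normalized vectors, cosine similarity is just a dot product.
scores = query_emb @ product_embs.T             # shape: (1, num_products)
top_scores, top_idx = scores.topk(k=5, dim=-1)  # 5 most similar products

print(top_idx.tolist())     # indices of the nearest products
print(top_scores.tolist())  # their cosine similarities
```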

The GitHub page returns a 404, and FlagEmbedding has no module named visual.
Has the latest code not been committed yet?

Hello, the FlagEmbedding repository has just been refactored. Here is the new link: https://github.com/FlagOpen/FlagEmbedding/tree/master/research/visual_bge
