Using gte-large locally to embed PDF documents for a llama.cpp model with LangChain

#18
by namantjeaswi - opened

Hello

I am building an open-source RAG pipeline to run locally with llama.cpp GGUF models. I am able to embed documents with gte-large and query them using LlamaIndex, but I am having trouble doing the same with LangChain. I have been able to use LangChain with OpenAI models but not with open-source models. Is there any documentation on this?

Owner

from langchain_community.embeddings import HuggingFaceInferenceAPIEmbeddings

# Hugging Face API token, created at https://huggingface.co/settings/tokens
inference_api_key = "hf_..."

embeddings = HuggingFaceInferenceAPIEmbeddings(
    api_key=inference_api_key, model_name="thenlper/gte-large"
)

# Embed a single query string and inspect the first few dimensions
text = "This is a test document."
query_result = embeddings.embed_query(text)
query_result[:3]
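
Since the goal here is embedding PDF documents rather than single queries, note that the same embeddings object also exposes embed_documents for batches of texts; a minimal sketch (the chunk strings are placeholders standing in for your parsed PDF text):

# Embed a batch of document chunks (placeholder strings for illustration)
docs = ["First chunk of a PDF.", "Second chunk of a PDF."]
doc_vectors = embeddings.embed_documents(docs)
len(doc_vectors), len(doc_vectors[0])  # (2, 1024): gte-large returns 1024-dimensional vectors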

Thank you for your response; I am now able to use them.

How can I use it with LangChain locally instead of through the API?
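
For fully local use you can swap the inference-API wrapper for LangChain's HuggingFaceEmbeddings, which runs the model on your own machine through sentence-transformers (pip install sentence-transformers). A minimal sketch, assuming you want normalized embeddings for cosine similarity:

from langchain_community.embeddings import HuggingFaceEmbeddings

# Downloads and runs thenlper/gte-large locally; no API key needed.
embeddings = HuggingFaceEmbeddings(
    model_name="thenlper/gte-large",
    model_kwargs={"device": "cpu"},  # use "cuda" if a GPU is available
    encode_kwargs={"normalize_embeddings": True},
)

query_result = embeddings.embed_query("How do I embed documents locally?")
query_result[:3]

The resulting embeddings object can then be passed to any LangChain vector store (e.g. FAISS or Chroma) alongside your llama.cpp LLM.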
