Prasant committed
Commit
f05cdcd
1 Parent(s): b0dfe10

Upload article_6824809.json with huggingface_hub

+ {"text": "How can I tell how many tokens a string will have before I try to embed it?\n===========================================================================\n\n\nFor V2 embedding models, as of Dec 2022, there is not yet a way to split a string into tokens. The only way to get total token counts is to submit an API request.\n\n\n* If the request succeeds, you can extract the number of tokens from the response: `response[\u201cusage\u201d][\u201ctotal\\_tokens\u201d]`\n* If the request fails for having too many tokens, you can extract the number of tokens from the error message: `This model's maximum context length is 8191 tokens, however you requested 10000 tokens (10000 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.`\n\n\nFor V1 embedding models, which are based on GPT-2/GPT-3 tokenization, you can count tokens in a few ways:\n\n\n* For one-off checks, the [OpenAI tokenizer](https://beta.openai.com/tokenizer) page is convenient\n* In Python, [transformers.GPT2TokenizerFast](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast) (the GPT-2 tokenizer is the same as GPT-3)\n* In JavaScript, [gpt-3-encoder](https://www.npmjs.com/package/gpt-3-encoder)\n\n\nHow can I retrieve K nearest embedding vectors quickly?\n=======================================================\n\n\nFor searching over many vectors quickly, we recommend using a vector database.\n\n\n\nVector database options include:\n\n\n* [Pinecone](https://www.pinecone.io/), a fully managed vector database\n* [Weaviate](https://weaviate.io/), an open-source vector search engine\n* [Faiss](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/), a vector search algorithm by Facebook\n\nWhich distance function should I use?\n=====================================\n\n\nWe recommend [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity). The choice of distance function typically doesn\u2019t matter much.\n\n\n\nOpenAI embeddings are normalized to length 1, which means that:\n\n\n* Cosine similarity can be computed slightly faster using just a dot product\n* Cosine similarity and Euclidean distance will result in the identical rankings\n", "title": "Embeddings - Frequently Asked Questions", "article_id": "6824809", "url": "https://help.openai.com/en/articles/6824809-embeddings-frequently-asked-questions"}