openai-dataset / article_6272952.json
Prasant's picture
Upload article_6272952.json with huggingface_hub
532b968 verified
raw
history blame
4.74 kB
{"text": "Introduction\n============\n\n\n\u200bSince releasing the Search endpoint, we\u2019ve developed new methods that achieve better results for this task. As a result, we\u2019ll be removing the Search endpoint from our documentation and removing access to this endpoint for all organizations on December 3, 2022. New accounts created after June 3rd will not have access to this endpoint.\n\n\n\nWe strongly encourage developers to switch over to newer techniques which produce better results, outlined below.\n\n\n\nCurrent documentation\n---------------------\n\n\n<https://beta.openai.com/docs/guides/search> \n\n\n<https://beta.openai.com/docs/api-reference/searches> \n\n\n\nOptions\n=======\n\n\nThis options are also outlined [here](https://github.com/openai/openai-cookbook/tree/main/transition_guides_for_deprecated_API_endpoints).\n\n\n\nOption 1: Transition to Embeddings-based search (recommended)\n-------------------------------------------------------------\n\n\nWe believe that most use cases will be better served by moving the underlying search system to use a vector-based embedding search. The major reason for this is that our current system used a bigram filter to narrow down the scope of candidates whereas our embeddings system has much more contextual awareness. Also, in general, using embeddings will be considerably lower cost in the long run. If you\u2019re not familiar with this, you can learn more by visiting our [guide to embeddings](https://beta.openai.com/docs/guides/embeddings/use-cases).\n\n\n\nIf you have a larger dataset (>10,000 documents), consider using a vector search engine like [Pinecone](https://www.pinecone.io) or [Weaviate](https://weaviate.io/developers/weaviate/current/retriever-vectorizer-modules/text2vec-openai.html) to power that search.\n\n\n\nOption 2: Reimplement existing functionality\n--------------------------------------------\n\n\nIf you\u2019re using the document parameter\n--------------------------------------\n\n\nThe current openai.Search.create and openai.Engine.search code can be replaced with this [snippet](https://github.com/openai/openai-cookbook/blob/main/transition_guides_for_deprecated_API_endpoints/search_functionality_example.py) (note this will only work with non-Codex engines since they use a different tokenizer.)\n\n\n\nWe plan to move this snippet into the openai-python repo under openai.Search.create\\_legacy.\n\n\n\nIf you\u2019re using the file parameter\n----------------------------------\n\n\nAs a quick review, here are the high level steps of the current Search endpoint with a file:\n\n\n\n\n![](https://openai.intercom-attachments-7.com/i/o/524222854/57382ab799ebe9bb988c0a1f/_y63ycSmtiFAS3slJdbfW0Mz-0nx2DP4gNAjyknMAmTT1fQUE9d7nha5yfsXJLkWRFmM41uvjPxi2ToSW4vrF7EcasiQDG51CrKPNOpXPVG4WZXI8jC8orWSmuGhAGGC4KoUYucwJOh0bH9Nzw)\n\n\nStep 1: upload a jsonl file\n\n\n\nBehind the scenes, we upload new files meant for file search to an Elastic search. Each line of the jsonl is then submitted as a document.\n\n\n\nEach line is required to have a \u201ctext\u201d field and an optional \u201cmetadata\u201d field.\n\n\n\nThese are the Elastic search settings and mappings for our index:\n\n\n\n[Elastic searching mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html): \n\n\n\n```\n{ \n \"properties\": { \n \"document\": {\"type\": \"text\", \"analyzer\": \"standard_bigram_analyzer\"}, -> the \u201ctext\u201d field \n \"metadata\": {\"type\": \"object\", \"enabled\": False}, -> the \u201cmetadata\u201d field \n } \n}\n```\n\n\n[Elastic search analyzer](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html):\n\n\n\n```\n{ \n \"analysis\": { \n \"analyzer\": { \n \"standard_bigram_analyzer\": { \n \"type\": \"custom\", \n \"tokenizer\": \"standard\", \n \"filter\": [\"lowercase\", \"english_stop\", \"shingle\"], \n } \n }, \n \"filter\": {\"english_stop\": {\"type\": \"stop\", \"stopwords\": \"_english_\"}}, \n } \n}\n```\n\n\nAfter that, we performed [standard Elastic search search calls](https://elasticsearch-py.readthedocs.io/en/v8.2.0/api.html#elasticsearch.Elasticsearch.search) and used `max\\_rerank` to determine the number of documents to return from Elastic search.\n\n\n\nStep 2: Search\n\n\nOnce you have the candidate documents from step 1, you could just make a standard openai.Search.create or openai.Engine.search call to rerank the candidates. See [Document](#h_f6ab294756)\n\n", "title": "Search Transition Guide", "article_id": "6272952", "url": "https://help.openai.com/en/articles/6272952-search-transition-guide"}