Chunk ID | Chunk | Source
---|---|---|
Retrieval augmented generation using Elasticsearch and OpenAI - Part 22 | Use Chat Completions API for retrieval augmented generation
Now we can send the question and the text to OpenAI's chat completion API.
Using an LLM together with a retrieval model is known as retrieval augmented generation (RAG). We're using Elasticsearch to do what it does best: retrieve relevant documents. Then we use the LLM to do what it does best: tasks like generating summaries and answering questions, using the retrieved documents as context. | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
Retrieval augmented generation using Elasticsearch and OpenAI - Part 23 | The model will generate a response to the question, using the top kNN hit as context. Use the messages list to shape your prompt to the model. In this example, we're using the gpt-3.5-turbo model. | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation |
Retrieval augmented generation using Elasticsearch and OpenAI - Part 24 | summary = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Answer the following question:"
+ question
+ "by using the following text:"
+ top_hit_summary},
]
)
choices = summary.choices
for choice in choices:
print("------------------------------------------------------------")
print(choice.message.content)
print("------------------------------------------------------------") | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation |
Retrieval augmented generation using Elasticsearch and OpenAI - Part 25 | Here's what that code does: it uses OpenAI's model to generate a response, sending a conversation containing a system message and a user message to the model. The system message sets the assistant's role as a 'helpful assistant'. The user message contains the question specified in the original kNN query and some input text. The response from the model is stored in the summary.choices variable. Next steps: that was just one example of how to combine Elasticsearch with the power of OpenAI's models to enable retrieval augmented generation. RAG allows you to avoid the costly and complex process of training or fine-tuning models by leveraging out-of-the-box models enhanced with additional context. Use this as a blueprint for your own experiments. To adapt the conversation for different use cases, customize the system message to define the assistant's behavior or persona, and adjust the user message to specify the task, such as summarization or question answering, along with the desired format of the response. | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-retrieval-augmented-generation
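For instance, the same pattern could be repurposed for summarization by changing only the messages. A minimal sketch, assuming the same top_hit_summary variable and the legacy openai library used above (the persona and instruction wording here are illustrative, not taken from the notebook):
# Hypothetical variation: summarize the retrieved passage instead of answering a question
summary = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise technical writer."},
        {"role": "user", "content": "Summarize the following text in three bullet points:\n\n" + top_hit_summary},
    ],
)
print(summary.choices[0].message.content)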
Semantic search using Elasticsearch and OpenAI (Part 1) | This notebook demonstrates how to:
Index the OpenAI Wikipedia vector dataset into Elasticsearch
Embed a question with the OpenAI embeddings endpoint
Perform semantic search on the Elasticsearch index using the encoded question
Install packages and import modules
# install packages
!python3 -m pip install -qU openai pandas wget elasticsearch
# import modules
from getpass import getpass
from elasticsearch import Elasticsearch, helpers
import wget
import zipfile
import pandas as pd
import json
import openai | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Semantic search using Elasticsearch and OpenAI (Part 2) | Connect to Elasticsearch
ℹ️ We're using an Elastic Cloud deployment of Elasticsearch for this notebook. If you don't already have an Elastic deployment, you can sign up for a free Elastic Cloud trial.
To connect to Elasticsearch, you need to create a client instance with the Cloud ID and password for your deployment.
Find the Cloud ID for your deployment by going to https://cloud.elastic.co/deployments and selecting your deployment.
CLOUD_ID = getpass("Elastic deployment Cloud ID")
CLOUD_PASSWORD = getpass("Elastic deployment Password")
client = Elasticsearch(
cloud_id = CLOUD_ID,
basic_auth=("elastic", CLOUD_PASSWORD) # Alternatively use `api_key` instead of `basic_auth`
) | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Semantic search using Elasticsearch and OpenAI (Part 3) | # Test connection to Elasticsearch
print(client.info())
Download the dataset
In this step we download the OpenAI Wikipedia embeddings dataset, and extract the zip file.
embeddings_url = 'https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip'
wget.download(embeddings_url)
with zipfile.ZipFile("vector_database_wikipedia_articles_embedded.zip",
"r") as zip_ref:
zip_ref.extractall("data") | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Semantic search using Elasticsearch and OpenAI (Part 4) | Read CSV file into a Pandas DataFrame
Next we use the Pandas library to read the unzipped CSV file into a DataFrame. This step makes it easier to index the data into Elasticsearch in bulk.
wikipedia_dataframe = pd.read_csv("data/vector_database_wikipedia_articles_embedded.csv") | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Semantic search using Elasticsearch and OpenAI (Part 5) | Create index with mapping
Now we need to create an Elasticsearch index with the necessary mappings. This will enable us to index the data into Elasticsearch.
We use the dense_vector field type for the title_vector and content_vector fields. This is a special field type that allows us to store dense vectors in Elasticsearch.
Later, we'll need to target the dense_vector field for kNN search.
index_mapping= {
"properties": {
"title_vector": {
"type": "dense_vector",
"dims": 1536,
"index": "true",
"similarity": "cosine"
}, | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Semantic search using Elasticsearch and OpenAI (Part 6) | "content_vector": {
"type": "dense_vector",
"dims": 1536,
"index": "true",
"similarity": "cosine"
},
"text": {"type": "text"},
"title": {"type": "text"},
"url": { "type": "keyword"},
"vector_id": {"type": "long"}
}
}
client.indices.create(index="wikipedia_vector_index", mappings=index_mapping)
Index data into Elasticsearch
The following function generates the required bulk actions that can be passed to Elasticsearch's Bulk API, so we can index multiple documents efficiently in a single request.
For each row in the DataFrame, the function yields a dictionary representing a single document to be indexed. | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Semantic search using Elasticsearch and OpenAI (Part 7) | def dataframe_to_bulk_actions(df):
for index, row in df.iterrows():
yield {
"_index": 'wikipedia_vector_index',
"_id": row['id'],
"_source": {
'url' : row["url"],
'title' : row["title"],
'text' : row["text"],
'title_vector' : json.loads(row["title_vector"]),
'content_vector' : json.loads(row["content_vector"]),
'vector_id' : row["vector_id"]
}
}
As the dataframe is large, we will index data in batches of 100. We index the data into Elasticsearch using the Python client's helpers for the bulk API. | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Semantic search using Elasticsearch and OpenAI (Part 8) | start = 0
end = len(wikipedia_dataframe)
batch_size = 100
for batch_start in range(start, end, batch_size):
batch_end = min(batch_start + batch_size, end)
batch_dataframe = wikipedia_dataframe.iloc[batch_start:batch_end]
actions = dataframe_to_bulk_actions(batch_dataframe)
helpers.bulk(client, actions)
Let's test the index with a simple match query.
print(client.search(index="wikipedia_vector_index", body={
"_source": {
"excludes": ["title_vector", "content_vector"]
},
"query": {
"match": {
"text": {
"query": "Hummingbird"
}
}
}
})) | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Semantic search using Elasticsearch and OpenAI (Part 9) | Encode a question with OpenAI embedding model
To perform semantic search, we need to encode queries with the same embedding model used to encode the documents at index time. In this example, we need to use the text-embedding-ada-002 model.
You'll need your OpenAI API key to generate the embeddings.
# Get OpenAI API key
OPENAI_API_KEY = getpass("Enter OpenAI API key")
# Set API key
openai.api_key = OPENAI_API_KEY
# Define model
EMBEDDING_MODEL = "text-embedding-ada-002"
# Define question
question = 'Is the Atlantic the biggest ocean in the world?'
# Create embedding
question_embedding = openai.Embedding.create(input=question, model=EMBEDDING_MODEL) | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Semantic search using Elasticsearch and OpenAI (Part 10) | Run semantic search queries
Now we're ready to run queries against our Elasticsearch index using our encoded question. We'll be doing a k-nearest neighbors search, using the Elasticsearch kNN query option.
First, we define a small function to pretty print the results.
# Function to pretty print Elasticsearch results
def pretty_response(response):
for hit in response['hits']['hits']:
id = hit['_id']
score = hit['_score']
title = hit['_source']['title']
text = hit['_source']['text']
pretty_output = (f"\nID: {id}\nTitle: {title}\nSummary: {text}\nScore: {score}")
print(pretty_output)
Now let's run our kNN query.
response = client.search(
index = "wikipedia_vector_index",
knn={
"field": "content_vector",
"query_vector": question_embedding["data"][0]["embedding"],
"k": 10,
"num_candidates": 100
}
)
pretty_response(response) | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Semantic search using Elasticsearch and OpenAI (Part 11) | Next steps
Success! Now you know how to use Elasticsearch as a vector database to store embeddings, encode queries by calling the OpenAI embeddings endpoint, and run semantic search.
Play around with different queries, and if you want to try with your own data, you can experiment with different embedding models.
ℹ️ Check out our other notebook Retrieval augmented generation using Elasticsearch and OpenAI. That notebook builds on this example to demonstrate how to use Elasticsearch together with the OpenAI chat completions API for retrieval augmented generation (RAG). | https://cookbook.openai.com/examples/vector_databases/elasticsearch/elasticsearch-semantic-search |
Data preparation and analysis for chat model fine-tuning | This notebook serves as a tool to preprocess and analyze the chat dataset used for fine-tuning a chat model. It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs. The method shown here corresponds to the current fine-tuning method for gpt-3.5-turbo; for legacy fine-tuning of models like babbage-002 and davinci-002, see the legacy fine-tuning page. | https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data loading | We first load the chat dataset from an example JSONL file.
import json

data_path = "data/toy_chat_fine_tuning.jsonl"

# Load the dataset
with open(data_path, 'r', encoding='utf-8') as f:
    dataset = [json.loads(line) for line in f]

# Initial dataset stats
print("Num examples:", len(dataset))
print("First example:")
for message in dataset[0]["messages"]:
    print(message) | https://cookbook.openai.com/examples/chat_finetuning_data_prep
Format validation | We can perform a variety of error checks to validate that each conversation in the dataset adheres to the format expected by the fine-tuning API. Errors are categorized based on their nature for easier debugging.
Data Type Check: Checks whether each entry in the dataset is a dictionary (dict). Error type: data_type.
Presence of Message List: Checks if a messages list is present in each entry. Error type: missing_messages_list.
Message Keys Check: Validates that each message in the messages list contains the keys role and content. Error type: message_missing_key.
Unrecognized Keys in Messages: Logs if a message has keys other than role, content, and name. Error type: message_unrecognized_key.
Role Validation: Ensures the role is one of "system", "user", or "assistant". Error type: unrecognized_role.
Content Validation: Verifies that content has textual data and is a string. Error type: missing_content.
Assistant Message Presence: Checks that each conversation has at least one message from the assistant. Error type: example_missing_assistant_message.
The code performing these checks prints a count for each type of error found, which is useful for debugging and ensuring the dataset is ready for the next steps. | https://cookbook.openai.com/examples/chat_finetuning_data_prep
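That validation code is not included in this excerpt; a condensed sketch of checks along these lines might look like the following (dictionary keys match the error types listed above):
from collections import defaultdict

format_errors = defaultdict(int)

for ex in dataset:
    if not isinstance(ex, dict):
        format_errors["data_type"] += 1
        continue
    messages = ex.get("messages", None)
    if not messages:
        format_errors["missing_messages_list"] += 1
        continue
    for message in messages:
        if "role" not in message or "content" not in message:
            format_errors["message_missing_key"] += 1
        if any(k not in ("role", "content", "name") for k in message):
            format_errors["message_unrecognized_key"] += 1
        if message.get("role") not in ("system", "user", "assistant"):
            format_errors["unrecognized_role"] += 1
        content = message.get("content")
        if not content or not isinstance(content, str):
            format_errors["missing_content"] += 1
    if not any(message.get("role") == "assistant" for message in messages):
        format_errors["example_missing_assistant_message"] += 1

# Print a count for each error type found
if format_errors:
    print("Found errors:")
    for k, v in format_errors.items():
        print(f"{k}: {v}")
else:
    print("No errors found")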
Token Counting Utilities | Let's define a few helpful utilities to be used in the rest of the notebook.
import numpy as np
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

# not exact!
# simplified from https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
def num_tokens_from_messages(messages, tokens_per_message=3, tokens_per_name=1):
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3
    return num_tokens

def num_assistant_tokens_from_messages(messages):
    num_tokens = 0
    for message in messages:
        if message["role"] == "assistant":
            num_tokens += len(encoding.encode(message["content"]))
    return num_tokens

def print_distribution(values, name):
    print(f"\n#### Distribution of {name}:")
    print(f"min / max: {min(values)}, {max(values)}")
    print(f"mean / median: {np.mean(values)}, {np.median(values)}")
    print(f"p5 / p95: {np.quantile(values, 0.05)}, {np.quantile(values, 0.95)}") | https://cookbook.openai.com/examples/chat_finetuning_data_prep
Data Warnings and Token Counts | With some lightweight analysis we can identify potential issues in the dataset, like missing messages, and provide statistical insights into message and token counts. Missing System/User Messages: Counts the number of conversations missing a "system" or "user" message. Such messages are critical for defining the assistant's behavior and initiating the conversation. Number of Messages Per Example: Summarizes the distribution of the number of messages in each conversation, providing insight into dialogue complexity. Total Tokens Per Example: Calculates and summarizes the distribution of the total number of tokens in each conversation. Important for understanding fine-tuning costs. Tokens in Assistant's Messages: Calculates the number of tokens in the assistant's messages per conversation and summarizes this distribution. Useful for understanding the assistant's verbosity. Token Limit Warnings: Checks if any examples exceed the maximum token limit (4096 tokens), as such examples will be truncated during fine-tuning, potentially resulting in data loss. | https://cookbook.openai.com/examples/chat_finetuning_data_prep |
Warnings and token counts | n_missing_system = 0
n_missing_user = 0
n_messages = []
convo_lens = []
assistant_message_lens = []

for ex in dataset:
    messages = ex["messages"]
    if not any(message["role"] == "system" for message in messages):
        n_missing_system += 1
    if not any(message["role"] == "user" for message in messages):
        n_missing_user += 1
    n_messages.append(len(messages))
    convo_lens.append(num_tokens_from_messages(messages))
    assistant_message_lens.append(num_assistant_tokens_from_messages(messages))

print("Num examples missing system message:", n_missing_system)
print("Num examples missing user message:", n_missing_user)
print_distribution(n_messages, "num_messages_per_example")
print_distribution(convo_lens, "num_total_tokens_per_example")
print_distribution(assistant_message_lens, "num_assistant_tokens_per_example")
n_too_long = sum(l > 4096 for l in convo_lens)
print(f"\n{n_too_long} examples may be over the 4096 token limit, they will be truncated during fine-tuning") | https://cookbook.openai.com/examples/chat_finetuning_data_prep
Cost Estimation | In this final section, we estimate the total number of tokens that will be used for fine-tuning, which allows us to approximate the cost. It is worth noting that the duration of the fine-tuning jobs will also increase with the token count.
# Pricing and default n_epochs estimate
MAX_TOKENS_PER_EXAMPLE = 4096

TARGET_EPOCHS = 3
MIN_TARGET_EXAMPLES = 100
MAX_TARGET_EXAMPLES = 25000
MIN_DEFAULT_EPOCHS = 1
MAX_DEFAULT_EPOCHS = 25

n_epochs = TARGET_EPOCHS
n_train_examples = len(dataset)
if n_train_examples * TARGET_EPOCHS < MIN_TARGET_EXAMPLES:
    n_epochs = min(MAX_DEFAULT_EPOCHS, MIN_TARGET_EXAMPLES // n_train_examples)
elif n_train_examples * TARGET_EPOCHS > MAX_TARGET_EXAMPLES:
    n_epochs = max(MIN_DEFAULT_EPOCHS, MAX_TARGET_EXAMPLES // n_train_examples)

n_billing_tokens_in_dataset = sum(min(MAX_TOKENS_PER_EXAMPLE, length) for length in convo_lens)
print(f"Dataset has ~{n_billing_tokens_in_dataset} tokens that will be charged for during training")
print(f"By default, you'll train for {n_epochs} epochs on this dataset")
print(f"By default, you'll be charged for ~{n_epochs * n_billing_tokens_in_dataset} tokens") | https://cookbook.openai.com/examples/chat_finetuning_data_prep
How to fine-tune chat models | This notebook provides a step-by-step guide for our new gpt-3.5-turbo fine-tuning. We'll perform entity extraction using the RecipeNLG dataset, which provides various recipes and a list of extracted generic ingredients for each. This is a common dataset for named entity recognition (NER) tasks. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
Data preparation | We'll begin by preparing our data. When fine-tuning with the ChatCompletion format, each training example is a simple list of messages. For example, an entry could look like:
[{'role': 'system', 'content': 'You are a helpful recipe assistant. You are to extract the generic ingredients from each of the recipes provided.'},
{'role': 'user', 'content': 'Title: No-Bake Nut Cookies\n\nIngredients: ["1 c. firmly packed brown sugar", "1/2 c. evaporated milk", "1/2 tsp. vanilla", "1/2 c. broken nuts (pecans)", "2 Tbsp. butter or margarine", "3 1/2 c. bite size shredded rice biscuits"]\n\nGeneric ingredients: '},
{'role': 'assistant', 'content': '["brown sugar", "milk", "vanilla", "nuts", "butter", "bite size shredded rice biscuits"]'}]
During the training process this conversation will be split, with the final entry being the completion that the model will produce, and the remainder of the messages acting as the prompt. Consider this when building your training examples - if your model will act on multi-turn conversations, then please provide representative examples so it doesn't perform poorly when the conversation starts to expand. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
Upload files | You can now upload the files to our Files endpoint to be used by the fine-tuned model.
training_response = openai.File.create(
file=open(training_file_name, "rb"), purpose="fine-tune"
)
training_file_id = training_response["id"]
validation_response = openai.File.create(
file=open(validation_file_name, "rb"), purpose="fine-tune"
)
validation_file_id = validation_response["id"]
print("Training file ID:", training_file_id)
print("Validation file ID:", validation_file_id) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
Fine-tuning | Now we can create our fine-tuning job with the generated files and an optional suffix to identify the model. The response will contain an id which you can use to retrieve updates on the job.
Note: The files have to first be processed by our system, so you might get a File not ready error. In that case, simply retry a few minutes later. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
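The job-creation call itself is not shown in this excerpt; with the pre-1.0 openai Python library used throughout these examples, it would look roughly like this (the suffix value is illustrative):
response = openai.FineTuningJob.create(
    training_file=training_file_id,
    validation_file=validation_file_id,
    model="gpt-3.5-turbo",
    suffix="recipe-ner",  # optional identifier appended to the fine-tuned model name
)
job_id = response["id"]
print("Job ID:", job_id)
print("Status:", response["status"])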
Check job status | You can make a GET request to the https://api.openai.com/v1/fine_tuning/jobs endpoint to list your fine-tuning jobs. In this instance you'll want to check that the ID you got from the previous step ends up as status: succeeded.
Once it is completed, you can use the result_files to sample the results from the validation set (if you uploaded one), and use the ID from the fine_tuned_model parameter to invoke your trained model. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
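Using the Python client rather than a raw GET request, polling the job and retrieving the trained model name could look like this sketch (job_id comes from the creation response above):
response = openai.FineTuningJob.retrieve(job_id)
print("Status:", response["status"])

# Once the status is "succeeded", the trained model name and result files are available
fine_tuned_model_id = response["fine_tuned_model"]
print("Fine-tuned model ID:", fine_tuned_model_id)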
Inference | The last step is to use your fine-tuned model for inference. Similar to classic fine-tuning, you simply call ChatCompletion, passing your new fine-tuned model name as the model parameter.
test_df = recipe_df.loc[201:300]
test_row = test_df.iloc[0]
test_messages = []
test_messages.append({"role": "system", "content": system_message})
user_message = create_user_message(test_row)
test_messages.append({"role": "user", "content": user_message})
pprint(test_messages)
response = openai.ChatCompletion.create(
model=fine_tuned_model_id, messages=test_messages, temperature=0, max_tokens=500
)
print(response["choices"][0]["message"]["content"]) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
Conclusion | Congratulations, you are now ready to fine-tune your own models using the ChatCompletion format! We look forward to seeing what you build. | https://cookbook.openai.com/examples/how_to_finetune_chat_models
Evaluating Abstractive Summarization (Part 1) | In this notebook we delve into the evaluation techniques for abstractive summarization tasks using a simple example. We explore traditional evaluation methods like ROUGE and BERTScore, in addition to showcasing a more novel approach using LLMs as evaluators. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization (Part 2) | Evaluating the quality of summaries is a time-consuming process, as it involves different quality dimensions such as coherence, conciseness, readability and content. Traditional automatic evaluation metrics such as ROUGE and BERTScore are concrete and reliable, but they may not correlate well with the actual quality of summaries: they show relatively low correlation with human judgments, especially for open-ended generation tasks (Liu et al., 2023). There's a growing need to lean on human evaluations, user feedback, or model-based metrics while being vigilant about potential biases. While human judgment provides invaluable insights, it is often not scalable and can be cost-prohibitive. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization
Evaluating Abstractive Summarization (Part 3) | In addition to these traditional metrics, we showcase a method (G-Eval) that leverages Large Language Models (LLMs) as a novel, reference-free metric for assessing abstractive summaries. In this case, we use gpt-4 to score candidate outputs. gpt-4 has effectively learned an internal model of language quality that allows it to differentiate between fluent, coherent text and low-quality text. Harnessing this internal scoring mechanism allows auto-evaluation of new candidate outputs generated by an LLM. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Setup (Part 1) | For the purposes of this notebook we'll use the example summarization below. Notice that we provide two generated summaries to compare, and a reference human-written summary, which evaluation metrics like ROUGE and BERTScore require. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Setup (Part 2) | Excerpt (excerpt):
OpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. OpenAI will build safe and beneficial AGI directly, but will also consider its mission fulfilled if its work aids others to achieve this outcome. OpenAI follows several key principles for this purpose. First, broadly distributed benefits - any influence over AGI's deployment will be used for the benefit of all, and to avoid harmful uses or undue concentration of power. Second, long-term safety - OpenAI is committed to doing the research to make AGI safe, and to promote the adoption of such research across the AI community. Third, technical leadership - OpenAI aims to be at the forefront of AI capabilities. Fourth, a cooperative orientation - OpenAI actively cooperates with other research and policy institutions, and seeks to create a global community working together to address AGI's global challenges. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Summaries (Part 1) | Reference Summary / ref_summary (human generated); Eval Summary 1 / eval_summary_1 (system generated); Eval Summary 2 / eval_summary_2 (system generated) | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization
Summaries (Part 2) | ref_summary (human generated): OpenAI aims to ensure artificial general intelligence (AGI) is used for everyone's benefit, avoiding harmful uses or undue power concentration. It is committed to researching AGI safety, promoting such studies among the AI community. OpenAI seeks to lead in AI capabilities and cooperates with global research and policy institutions to address AGI's challenges.
eval_summary_1 (system generated): OpenAI aims to AGI benefits all humanity, avoiding harmful uses and power concentration. It pioneers research into safe and beneficial AGI and promotes adoption globally. OpenAI maintains technical leadership in AI while cooperating with global institutions to address AGI challenges. It seeks to lead a collaborative worldwide effort developing AGI for collective good.
eval_summary_2 (system generated): OpenAI aims to ensure AGI is for everyone's use, totally avoiding harmful stuff or big power concentration. Committed to researching AGI's safe side, promoting these studies in AI folks. OpenAI wants to be top in AI things and works with worldwide research, policy groups to figure AGI's stuff. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization
Evaluating using ROUGE | ROUGE, which stands for Recall-Oriented Understudy for Gisting Evaluation, primarily gauges the overlap of words between a generated output and a reference text. It's a prevalent metric for evaluating automatic summarization tasks. Among its variants, ROUGE-L offers insights into the longest contiguous match between system-generated and reference summaries, gauging how well the system retains the original summary's essence. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
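As an illustration (not part of the notebook's code shown here), the rouge_score package can compute ROUGE-1 and ROUGE-L for a candidate summary against the human reference, assuming the ref_summary and eval_summary_1 variables introduced above:
from rouge_score import rouge_scorer

# Compare a generated summary against the human-written reference
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(ref_summary, eval_summary_1)
for metric, result in scores.items():
    print(f"{metric}: precision={result.precision:.3f}, recall={result.recall:.3f}, f1={result.fmeasure:.3f}")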
Evaluating using BERTScore | ROUGE relies on the exact presence of words in both the predicted and reference texts, failing to interpret the underlying semantics. This is where BERTScore comes in and leverages the contextual embeddings from the BERT model, aiming to evaluate the similarity between a predicted and a reference sentence in the context of machine-generated text. By comparing embeddings from both sentences, BERTScore captures semantic similarities that might be missed by traditional n-gram based metrics. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
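A similarly minimal sketch with the bert_score package (again illustrative, and it downloads a BERT model on first use), assuming the same summary variables:
from bert_score import score

# BERTScore compares contextual embeddings of candidate and reference sentences
P, R, F1 = score([eval_summary_1, eval_summary_2], [ref_summary, ref_summary], lang="en", verbose=False)
for name, f1 in zip(["Summary 1", "Summary 2"], F1.tolist()):
    print(f"{name} BERTScore F1: {f1:.3f}")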
Evaluating using GPT-4 (Part 1) | Here we implement an example reference-free text evaluator using gpt-4, inspired by the G-Eval framework which evaluates the quality of generated text using large language models. Unlike metrics like ROUGE or BERTScore that rely on comparison to reference summaries, the gpt-4 based evaluator assesses the quality of generated content based solely on the input prompt and text, without any ground truth references. This makes it applicable to new datasets and tasks where human references are sparse or unavailable. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 2) | We craft prompts for each of these criteria, taking the original document and the summary as inputs, and leveraging chain-of-thought generation to guide the model to output a numeric score from 1 to 5 for each criterion. We generate scores from gpt-4 with the defined prompts, comparing them across summaries. In this demonstration, we're using a direct scoring function where gpt-4 generates a discrete score (1-5) for each metric. Normalizing the scores and taking a weighted sum could result in more robust, continuous scores that better reflect the quality and diversity of the summaries. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization
Evaluating using GPT-4 (Part 3) | Evaluation prompt template based on G-Eval
EVALUATION_PROMPT_TEMPLATE = """
You will be given one summary written for an article. Your task is to rate the summary on one metric.
Please make sure you read and understand these instructions very carefully.
Please keep this document open while reviewing, and refer to it as needed. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 4) | Metric 1: Relevance
RELEVANCY_SCORE_CRITERIA = """
Relevance(1-5) - selection of important content from the source. The summary should include only important information from the source document. Annotators were instructed to penalize summaries which contained redundancies and excess information.""" | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 5) | Metric 2: Coherence
COHERENCE_SCORE_CRITERIA = """
Coherence(1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence, whereby "the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to sentence to a coherent body of information about a topic." """ | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization
Evaluating using GPT-4 (Part 6) | Metric 3: Consistency
CONSISTENCY_SCORE_CRITERIA = """
Consistency(1-5) - the factual alignment between the summary and the summarized source. A factually consistent summary contains only statements that are entailed by the source document. Annotators were also asked to penalize summaries that contained hallucinated facts.""" | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 7) | Metric 4: Fluency
FLUENCY_SCORE_CRITERIA = """
Fluency(1-3): the quality of the summary in terms of grammar, spelling, punctuation, word choice, and sentence structure.
1: Poor. The summary has many errors that make it hard to understand or sound unnatural.
2: Fair. The summary has some errors that affect the clarity or smoothness of the text, but the main points are still comprehensible.
3: Good. The summary has few or no errors and is easy to read and follow.""" | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
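The chain-of-thought evaluation steps referenced later (RELEVANCY_SCORE_STEPS and the other *_SCORE_STEPS constants) are omitted from these chunks. As an illustration of their shape only, the relevance steps might be written along these lines:
RELEVANCY_SCORE_STEPS = """
1. Read the summary and the source document carefully.
2. Compare the summary to the source document and identify the main points of the article.
3. Assess how well the summary covers the main points, and how much irrelevant or redundant information it contains.
4. Assign a relevance score from 1 to 5.
"""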
Evaluating using GPT-4 (Part 8) | def get_geval_score(
criteria: str, steps: str, document: str, summary: str, metric_name: str
):
prompt = EVALUATION_PROMPT_TEMPLATE.format(
criteria=criteria,
steps=steps,
metric_name=metric_name,
document=document,
summary=summary,
) | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 9) | response = openai.ChatCompletion.create(
model="gpt-4",
messages=[{"role": "user", "content": prompt}],
temperature=0,
max_tokens=5,
top_p=1,
frequency_penalty=0,
presence_penalty=0,
) | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 10) | return response.choices[0].message.content | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 11) | evaluation_metrics = {
"Relevance": (RELEVANCY_SCORE_CRITERIA, RELEVANCY_SCORE_STEPS),
"Coherence": (COHERENCE_SCORE_CRITERIA, COHERENCE_SCORE_STEPS),
"Consistency": (CONSISTENCY_SCORE_CRITERIA, CONSISTENCY_SCORE_STEPS),
"Fluency": (FLUENCY_SCORE_CRITERIA, FLUENCY_SCORE_STEPS),
} | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 12) | summaries = {"Summary 1": eval_summary_1, "Summary 2": eval_summary_2} | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 13) | data = {"Evaluation Type": [], "Summary Type": [], "Score": []} | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 14) | for eval_type, (criteria, steps) in evaluation_metrics.items():
for summ_type, summary in summaries.items():
data["Evaluation Type"].append(eval_type)
data["Summary Type"].append(summ_type)
result = get_geval_score(criteria, steps, excerpt, summary, eval_type)
score_num = int(result.strip())
data["Score"].append(score_num) | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 15) | pivot_df = pd.DataFrame(data, index=None).pivot(
index="Evaluation Type", columns="Summary Type", values="Score"
) | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating using GPT-4 (Part 16) | styled_pivot_df = pivot_df.style.apply(highlight_max, axis=1)
display(styled_pivot_df) | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Limitations | Note that LLM-based metrics could have a bias towards preferring LLM-generated texts over human-written texts. Additionally LLM based metrics are sensitive to system messages/prompts. We recommend experimenting with other techniques that can help improve performance and/or get consistent scores, striking the right balance between high-quality expensive evaluation and automated evaluations. It is also worth noting that this scoring methodology is currently limited by gpt-4's context window. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Conclusion | Conclusion
Evaluating abstractive summarization remains an open area for further improvement. Traditional metrics like ROUGE, BLEU, and BERTScore provide useful automatic evaluation but have limitations in capturing semantic similarity and nuanced aspects of summarization quality. Moreover, they require reference outputs which can be expensive to collect/label. LLM-based metrics offer promise as a reference-free method of evaluating coherence, fluency, and relevance. However, they too have potential biases favoring text generated by LLMs. Ultimately, a combination of automatic metrics and human evaluation is ideal for reliably assessing abstractive summarization systems. While human evaluation is indispensable for gaining a comprehensive understanding of summary quality, it should be complemented with automated evaluation to enable efficient, large-scale testing. The field will continue to evolve more robust evaluation techniques, balancing quality, scalability, and fairness. Advancing evaluation methods is crucial for driving progress in production applications. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Function calling for nearby places: Leveraging the Google Places API and customer profiles | This notebook is centered around the integration of the Google Places API and custom user profiles to enhance location-based searches. Our approach involves using the Google Places API in combination with user preferences, aiming to make location discovery more personal and relevant. Please note that while we focus on the Google Places API in this instance, there are numerous other APIs you could explore and apply in a similar fashion. | https://cookbook.openai.com/examples/function_calling_finding_nearby_places |
Customer profile: This mock profile captures individual preferences for types of places (e.g., restaurants, parks, museums), budget, preferred ratings, and other specific requirements. | We'll explore the application of three main components:
Customer profile: This mock profile captures individual preferences for types of places (e.g., restaurants, parks, museums), budget, preferred ratings, and other specific requirements.
Google Places API: This API provides real-time data about nearby places. It factors in various data points such as ratings, types of venues, costs, and more from the locations around you.
Function calling: A single command such as 'I'm hungry' or 'I want to visit a museum' activates the function which combines the user profile data and Google Places API to identify suitable venues. | https://cookbook.openai.com/examples/function_calling_finding_nearby_places |
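As a sketch of that third component only (the exact schema and function name in the notebook may differ), a function definition passed to the Chat Completions API for function calling might look like:
import json
import openai

# Hypothetical function schema describing the nearby-places lookup
functions = [
    {
        "name": "call_google_places_api",
        "description": "Find nearby places of a given type, based on the user's profile and location",
        "parameters": {
            "type": "object",
            "properties": {
                "place_type": {
                    "type": "string",
                    "description": "The type of place to search for, e.g. restaurant, park, museum",
                }
            },
            "required": ["place_type"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "I'm hungry"}],
    functions=functions,
    function_call="auto",
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    print("Model requested:", message["function_call"]["name"], "with arguments", args)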
Profile-based recommendations: Learn how to create a user profile and make place recommendations based on individual preferences. | This notebook introduces two primary use cases:
Profile-based recommendations: Learn how to create a user profile and make place recommendations based on individual preferences.
API integration with function calling: Understand how to integrate and call Google Places API effectively to source real-time data of various places using function calling.
Please note that while this system is highly versatile, its effectiveness may vary based on user preferences and available place data. For the purposes of this notebook, the customer data is fake and the location is hardcoded. | https://cookbook.openai.com/examples/function_calling_finding_nearby_places |
Setup Google Places API | To use the Google Places API, you'll need two things:
Google Account: If you don't already have one, you will need to create a Google account.
Google Places API Key: The API key is a unique identifier that is used to authenticate requests associated with your project for usage and billing purposes. You can get your API key from the Google Cloud Console.
Please note that Google Places API is a paid service, and the cost is associated with the number of API calls made. Keep track of your usage to avoid any unexpected charges.
The requests library is also needed; you can install it with the following command: | https://cookbook.openai.com/examples/function_calling_finding_nearby_places
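The command itself is not shown in this excerpt; presumably it is the standard pip install:
!python3 -m pip install requests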
In this code snippet, we are defining a function fetch_customer_profile that accepts a user_id and returns a mock user profile. | In this code snippet, we are defining a function fetch_customer_profile that accepts a user_id and returns a mock user profile.
This function simulates an API call that fetches user data from a database. For this demo, we're using hard-coded data. The user profile contains various details such as the user's location (set to the coordinates of the Golden Gate Bridge for this example), preferences in food and activities, app usage metrics, recent interactions, and user rank.
In a production environment, you would replace this hard-coded data with a real API call to your user database. | https://cookbook.openai.com/examples/function_calling_finding_nearby_places |
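The function body is not included in these chunks; a condensed mock along the lines described above might look like this (field names and values are illustrative, not the notebook's actual data):
def fetch_customer_profile(user_id):
    # Simulated database lookup that returns hard-coded preferences for the demo user
    if user_id == "user1234":
        return {
            "name": "John Doe",
            "location": {
                "latitude": 37.8199,    # approximate Golden Gate Bridge coordinates
                "longitude": -122.4783,
            },
            "preferences": {
                "food": ["Italian", "Sushi"],
                "activities": ["Museums", "Outdoor"],
            },
            "behavioral_metrics": {
                "app_usage_hours_per_week": 10,
                "recent_interactions": ["viewed restaurant reviews", "searched for parks"],
            },
            "user_rank": "Gold",
        }
    return None  # unknown user

profile = fetch_customer_profile("user1234")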
Using Qdrant for Embeddings Search | This notebook takes you through a simple flow to download some data, embed it, and then index and search it using a selection of vector databases. This is a common requirement for customers who want to store and search our embeddings with their own data in a secure environment to support production use cases such as chatbots, topic modelling and more. | https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search |
What is a Vector Database | A vector database is a database made to store, manage and search embedding vectors. The use of embeddings to encode unstructured data (text, audio, video and more) as vectors for consumption by machine-learning models has exploded in recent years, due to the increasing effectiveness of AI in solving use cases involving natural language, image recognition and other unstructured forms of data. Vector databases have emerged as an effective solution for enterprises to deliver and scale these use cases. | https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search |
Why use a Vector Database | Vector databases enable enterprises to take many of the embeddings use cases we've shared in this repo (question and answering, chatbot and recommendation services, for example), and make use of them in a secure, scalable environment. Many of our customers have used embeddings to solve their problems at small scale, but performance and security hold them back from going into production - we see vector databases as a key component in solving that, and in this guide we'll walk through the basics of embedding text data, storing it in a vector database and using it for semantic search. | https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search
Demo Flow | The demo flow is:
Setup: Import packages and set any required variables
Load data: Load a dataset and embed it using OpenAI embeddings
Qdrant
Setup: Here we'll set up the Python client for Qdrant. For more details go here
Index Data: We'll create a collection with vectors for titles and content
Search Data: We'll run a few searches to confirm it works
Once you've run through this notebook you should have a basic understanding of how to setup and use vector databases, and can move on to more complex use cases making use of our embeddings. | https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search |
Setup | Import the required libraries and set the embedding model that we'd like to use.
# We'll need to install Qdrant client
!pip install qdrant-client
#Install wget to pull zip file
!pip install wget
import openai
from typing:... | https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search |
Load data | In this section we'll load embedded data that we've prepared previous to this session.
embeddings_url = 'https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip'
# The file is ~700 MB so this will take some time
wget.download(embeddings_url)... | https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search |
Qdrant | Qdrant is a high-performance vector search database written in Rust. It offers both on-premise and cloud versions, but for the purposes of this example we're going to use the local deployment mode.
Setting everything up will require:
Spinning up a local instance of Qdrant
Configuring the collection and storing the data in it
Trying out with some queries
Setup
For the local deployment, we are going to use Docker, according to the Qdrant documentation: https://qdrant.tech/documentation/quick_start/. Qdrant requires just a single container, but an example of the docker-compose.yaml file is available at ./qdrant/docker-compose.yaml in this repo.
You can start a Qdrant instance locally by navigating to this directory and running docker-compose up -d
import qdrant_client

qdrant = qdrant_client.QdrantClient(host='localhost', prefer_grpc=True)
qdrant.get_collections() | https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search
Index data | Qdrant stores data in collections where each object is described by at least one vector and may contain an additional metadata called payload. Our collection will be called Articles and each object will be described by both title and content vectors.
We'll be using an official qdrant-client package that has all the utility methods already built-in.
from qdrant_client.http import models as rest
vector_size = len(article_df['content_vector'][0])
qdrant.recreate_collection(
collection_name='Articles',
vectors_config={
'title': rest.VectorParams(
distance=rest.Distance.COSINE,
size=vector_size,
),
'content': rest.VectorParams(
distance=rest.Distance.COSINE,
size=vector_size,
),
}
)
qdrant.upsert(
collection_name='Articles',
points=[
rest.PointStruct(
id=k,
vector={
'title': v['title_vector'],
'content': v['content_vector'],
},
payload=v.to_dict(),
)
for k, v in article_df.iterrows()
],
)
# Check the collection size to make sure all the points have been stored
qdrant.count(collection_name='Articles') | https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search |
Search Data | Once the data is put into Qdrant we will start querying the collection for the closest vectors. We may provide an additional parameter vector_name to switch from title to content based search.
def query_qdrant(query, collection_name, vector_name='title', top_k=20):
# Creates embedding vector from user query
embedded_query = openai.Embedding.create(
input=query,
model=EMBEDDING_MODEL,
)['data'][0]['embedding']
query_results = qdrant.search(
collection_name=collection_name,
query_vector=(
vector_name, embedded_query
),
limit=top_k,
)
return query_results
query_results = query_qdrant('modern art in Europe', 'Articles')
for i, article in enumerate(query_results):
print(f'{i + 1}. {article.payload["title"]} (Score: {round(article.score, 3)})')
# This time we'll query using content vector
query_results = query_qdrant('Famous battles in Scottish history', 'Articles', 'content')
for i, article in enumerate(query_results):
print(f'{i + 1}. {article.payload["title"]} (Score: {round(article.score, 3)})') | https://cookbook.openai.com/examples/vector_databases/qdrant/using_qdrant_for_embeddings_search