Prasant committed
Commit
28ff1a5
1 Parent(s): 7acfd35

Upload article_6811186.json with huggingface_hub

Files changed (1)
  1. article_6811186.json +6 -0
article_6811186.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "text": "Fine-tuning with GPT-3.5\n========================\n\n\nFine-tuning data provides the model with examples of how it should respond to a given conversation. We'll want these examples to match the input the model will see in production as closely as possible. \n\n\n\n#### First, system instructions.\n\n\nThese tell the model how to act and supply any contextual information. When calling the fine-tuned model, you should use the same system prompt that appears in the training dataset.\n\n\n\n\n```\n{\"role\": \"system\", \"content\": \"Marv is a factual chatbot that is also sarcastic.\"}\n```\n\n#### Second, conversation data.\n\n\nWe'll want to provide varied examples of conversations the model may run into, such as \"What's the capital of France?\" and \"Who wrote 'Romeo and Juliet'?\"\n\n\n\n\n```\n{\"role\": \"user\", \"content\": \"What's the capital of France?\"}\n```\n\n#### Next, the assistant response.\n\n\nHere, we present the model with an example of how to respond to the previous message, given the system instruction. For our snarky assistant, we may choose a response like this:\n\n\n\n\n```\n{\"role\": \"assistant\", \"content\": \"Paris, as if everyone doesn't know that already.\"}\n```\n\n\n#### Finally, putting it all together.\n\n\nOnce we have many examples, we can put them all together and begin training. Our dataset should look as follows:\n\n\n\n\n```\n{\"messages\": [{\"role\": \"system\", \"content\": \"Marv is a factual chatbot that is also sarcastic.\"}, \n{\"role\": \"user\", \"content\": \"What's the capital of France?\"}, \n{\"role\": \"assistant\", \"content\": \"Paris, as if everyone doesn't know that already.\"}]} \n \n{\"messages\": [{\"role\": \"system\", \"content\": \"Marv is a factual chatbot that is also sarcastic.\"}, \n{\"role\": \"user\", \"content\": \"Who wrote 'Romeo and Juliet'?\"}, \n{\"role\": \"assistant\", \"content\": \"Oh, just some guy named William Shakespeare. Ever heard of him?\"}]} \n \n{\"messages\": [{\"role\": \"system\", \"content\": \"Marv is a factual chatbot that is also sarcastic.\"}, \n{\"role\": \"user\", \"content\": \"How far is the Moon from Earth?\"}, \n{\"role\": \"assistant\", \"content\": \"Around 384,400 kilometers. Give or take a few, like that really matters.\"}]}\n```\n\n\n\nFine-tuning with babbage and davinci\n====================================\n\n\nTo fine-tune effectively without ChatCompletions, you need to format your data properly to give the model clues about where to start and stop generating text. \n\n\n\n**Indicator String** \n\n\nThe indicator string is a symbol or sequence of symbols that you append to the end of your prompt to tell the model that you want it to start generating text after this string. \n\n\n\nFor example, if you want the model to categorize items as colors, you can use an indicator string like '->'. The prompts in your dataset would look like this:\n\n\n* 'banana ->'\n* 'lime ->'\n* 'tomato ->'\n\nYou can use any string as an indicator string as long as it doesn't appear anywhere else in the dataset. We recommend using '\\n###\\n'.\n\n\n\n**Stop Sequence**\n\n\nThe stop sequence is another special symbol or sequence of symbols that tells the model to stop generating text after that point. \n\n\n\nFor example, if you want the model to generate one word as a completion, you can use a stop sequence such as \"\\n\" (newline) or \".\" (period) to mark the end of the completion, like this: \n\n\n* 'prompt' : 'banana ->', 'completion' : ' yellow \\n'\n* 'prompt' : 'lime ->', 'completion' : ' green \\n'\n* 'prompt' : 'tomato ->', 'completion' : ' red \\n'\n\n\n**Calling the model**\n\n\nYou should use the same symbols used in your dataset when calling the model. If you used the dataset above, you should use '\\n' as a stop sequence. You should also append '->' to your prompts as an indicator string (e.g. prompt: 'lemon -> ')\n\n\n\nIt is important that you use consistent and unique symbols for the indicator string and the stop sequence, and that they don't appear anywhere else in your data. Otherwise, the model might get confused and generate unwanted or incorrect text. \n\n\n\n**Extra Recommendations**\n\n\nWe also recommend adding a single space character at the beginning of your completions. \n\n\n\nYou can also use our [command line tool](https://beta.openai.com/docs/guides/fine-tuning/cli-data-preparation-tool) to help format your dataset once you have prepared it.\n\n",
+ "title": "How do I format my fine-tuning data?",
+ "article_id": "6811186",
+ "url": "https://help.openai.com/en/articles/6811186-how-do-i-format-my-fine-tuning-data"
+ }
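The chat-format dataset described in the article's text can be assembled and sanity-checked with a few lines of standard-library Python. This is a minimal sketch, not part of the uploaded file: it builds the article's three example conversations into JSONL (one JSON object per line), using the `assistant` role name that the chat fine-tuning format expects.

```python
import json

# The system prompt and (user, assistant) pairs from the article's examples.
SYSTEM = "Marv is a factual chatbot that is also sarcastic."
examples = [
    ("What's the capital of France?",
     "Paris, as if everyone doesn't know that already."),
    ("Who wrote 'Romeo and Juliet'?",
     "Oh, just some guy named William Shakespeare. Ever heard of him?"),
    ("How far is the Moon from Earth?",
     "Around 384,400 kilometers. Give or take a few, like that really matters."),
]

def to_jsonl(pairs, system=SYSTEM):
    """Serialize (user, assistant) pairs as JSONL: one training example per line."""
    lines = []
    for user, assistant in pairs:
        record = {"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_jsonl(examples)

# Every line should parse back into a record with the three expected roles.
for line in jsonl.splitlines():
    roles = [m["role"] for m in json.loads(line)["messages"]]
    assert roles == ["system", "user", "assistant"]
```

Writing `jsonl` to a `.jsonl` file then gives a dataset in the shape shown in the article's final example.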
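The legacy prompt/completion format for babbage and davinci can be sketched the same way. The `make_record` helper below is hypothetical, but the data follows the article's conventions: `'->'` as the indicator string appended to each prompt, `'\n'` as the stop sequence ending each completion, and a single leading space on each completion.

```python
import json

# The color-labelling examples from the article.
items = [("banana", "yellow"), ("lime", "green"), ("tomato", "red")]

def make_record(item, label, indicator=" ->", stop="\n"):
    """Build one prompt/completion pair: indicator string at the end of the
    prompt, leading space and stop sequence around the completion."""
    return {"prompt": f"{item}{indicator}", "completion": f" {label}{stop}"}

records = [make_record(item, label) for item, label in items]

# json.dumps escapes the stop sequence, so each record stays on one line.
jsonl = "\n".join(json.dumps(r) for r in records)
```

At inference time, per the article, the same conventions apply: send `'lemon ->'` as the prompt and pass `'\n'` as the stop sequence.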