bitext-innovations committed on
Commit
3e36210
1 Parent(s): 8d98cdb

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -24,7 +24,7 @@ size_categories:
 
 ## Overview
 
-This dataset is designed to train Large Language Models such as GPT, Llama3, and Mistral, aimed at both Fine Tuning and Domain Adaptation specific to the Retail Banking sector.
+This hybrid synthetic dataset is designed to be used to fine-tune Large Language Models such as GPT, Mistral and OpenELM, and has been generated using our NLP/NLG technology and our automated Data Labeling (DAL) tools. The goal is to demonstrate how Verticalization/Domain Adaptation for the [Retail Banking] sector can be easily achieved using our two-step approach to LLM Fine-Tuning. For example, if you are [ACME Bank], you can create your own customized LLM by first training a fine-tuned model using this dataset, and then further fine-tuning it with a small amount of your own data. An overview of this approach can be found at: [From General-Purpose LLMs to Verticalized Enterprise Models](https://www.bitext.com/blog/general-purpose-models-verticalized-enterprise-genai/)
 
 The dataset has the following specifications:
 
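
To make the two-step approach described in the added paragraph concrete, here is a minimal sketch of how such a dataset could be loaded and flattened into prompt/response text for the first fine-tuning pass. The Hub repo id and the `instruction`/`response` column names below are assumptions for illustration only and are not confirmed by this commit; check the dataset card for the actual identifiers.

```python
# Minimal sketch of step one of the two-step approach: pull the dataset and
# format it for supervised fine-tuning. Repo id and column names are assumed.
from datasets import load_dataset

REPO_ID = "bitext/retail-banking-chatbot"  # hypothetical Hub id for this dataset

def to_text(example):
    # Flatten an assumed instruction/response pair into a single training string.
    return {
        "text": f"### Instruction:\n{example['instruction']}\n\n"
                f"### Response:\n{example['response']}"
    }

train = load_dataset(REPO_ID, split="train")  # download from the Hugging Face Hub
train = train.map(to_text)                    # add a "text" column for SFT trainers
print(train[0]["text"][:300])                 # inspect one formatted example
```

The resulting `text` column can then be passed to any standard supervised fine-tuning setup; the second step of the approach repeats the same recipe on a small amount of the bank's own data.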