**Model Description:** GPT-2 Medium is the **355M-parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. It is pretrained on English text using a causal language modeling (CLM) objective.
## Parameter-Efficient Fine-tuning (PEFT)
Parameter-Efficient Fine-tuning (PEFT) is a technique for adapting pretrained large language models (LLMs) to specific downstream tasks without updating all of the model's parameters. Most of the pretrained weights are frozen, and only a small number of task-specific parameters are trained.
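The freeze-most, train-few idea can be illustrated with a minimal NumPy sketch of a LoRA-style adapter. The dimensions and initialization here are illustrative assumptions (only the hidden size 1024 matches GPT-2 Medium), and a real fine-tune would use a library such as Hugging Face `peft` rather than raw matrices:

```python
import numpy as np

# Illustrative PEFT sketch: the large pretrained weight matrix W is frozen,
# and only two small low-rank matrices A and B are trained (LoRA-style).
# All names and values here are made up for the example.
d_in, d_out, rank = 1024, 1024, 8  # GPT-2 Medium's hidden size is 1024

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))        # frozen pretrained weights
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection (zero init)

def forward(x):
    # Adapted layer: W is never modified; the learned update is B @ A,
    # which starts at zero, so training begins from the pretrained behavior.
    return W @ x + B @ (A @ x)

trainable = A.size + B.size
print(f"trainable fraction: {trainable / (W.size + trainable):.4f}")
```

With rank 8, the trainable adapter holds roughly 1.5% as many parameters as the frozen matrix, which is what makes this kind of fine-tuning cheap.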
## Training Data
The model is fine-tuned on the first 5,000 rows of the `b-mc2/sql-create-context` dataset.
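Each row in `b-mc2/sql-create-context` pairs a natural-language question and a `CREATE TABLE` schema context with the target SQL query. A training prompt might be assembled from a row like this; the sample values and the prompt template below are illustrative assumptions, not the repository's actual preprocessing:

```python
# A row in the style of b-mc2/sql-create-context (sample values for illustration).
row = {
    "question": "How many heads of the departments are older than 56?",
    "context": "CREATE TABLE head (age INTEGER)",
    "answer": "SELECT COUNT(*) FROM head WHERE age > 56",
}

def build_prompt(row):
    # Hypothetical template: schema context first, then the question,
    # then the SQL the model is trained to generate.
    return (
        f"Context: {row['context']}\n"
        f"Question: {row['question']}\n"
        f"Answer: {row['answer']}"
    )

print(build_prompt(row))
```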