Upload README.md with huggingface_hub
README.md CHANGED
@@ -1,11 +1,12 @@
 ---
 license: mit
+library_name: prompt-templates
 tags:
--
+- prompts
 ---
 
 ## Sharing prompts linked to datasets
-This repo illustrates how you can use the `
+This repo illustrates how you can use the `prompt_templates` library to load prompts from YAML files in dataset repositories.
 
 LLMs are increasingly used to help create datasets, for example for quality filtering or synthetic text generation.
 The prompts used for creating a dataset are currently unsystematically shared on GitHub ([example](https://github.com/huggingface/cosmopedia/tree/main/prompts)),
@@ -18,11 +19,9 @@ To facilitate reproduction, these dataset prompts can be shared in YAML files in
 
 ### Example: FineWeb-Edu
 
-The FineWeb-Edu dataset was created by prompting `Meta-Llama-3-70B-Instruct` to score the educational value of web texts.
-The authors <a href="https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu#annotation">provide the prompt</a> in a .txt file.
+The FineWeb-Edu dataset was created by prompting `Meta-Llama-3-70B-Instruct` to score the educational value of web texts. The authors <a href="https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu#annotation">provide the prompt</a> in a .txt file.
 
-When provided in a YAML file in the dataset repo, the prompt can easily be loaded and supplemented with metadata
-like the model_id or generation parameters for easy reproducibility.
+When provided in a YAML file in the dataset repo, the prompt can easily be loaded and supplemented with metadata like the model_id or generation parameters for easy reproducibility.
 
 ```python
 #!pip install hf_hub_prompts
@@ -34,7 +33,7 @@ prompt_template = download_prompt(repo_id="MoritzLaurer/dataset_prompts", filena
 
 # populate the prompt
 text_to_score = "The quick brown fox jumps over the lazy dog"
-messages = prompt_template.
+messages = prompt_template.populate_template(text_to_score=text_to_score)
 
 # test prompt with local llama
 model_id = "meta-llama/Llama-3.2-1B-Instruct" # prompt was original created for meta-llama/Meta-Llama-3-70B-Instruct
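The diff only shows fragments of the loading code, so here is a minimal self-contained sketch of the idea it describes: a prompt stored as YAML in a dataset repo parses into chat messages plus reproducibility metadata, and a `populate_template`-style step fills the placeholders at use time. The dict schema and the `populate_template` implementation below are assumptions for illustration, not the actual `hf_hub_prompts` API or the actual file in `MoritzLaurer/dataset_prompts`:

```python
# Sketch only: this dict stands in for what yaml.safe_load() might return for a
# prompt YAML file in the dataset repo. The schema ("prompt" messages plus a
# "metadata" block) is an assumption, not the library's actual format.
prompt_file = {
    "prompt": {
        "messages": [
            {"role": "system", "content": "You are an expert rater of educational content."},
            {"role": "user", "content": "Score the educational value of this text:\n{text_to_score}"},
        ],
    },
    # Metadata stored alongside the prompt for reproducibility:
    "metadata": {
        "model_id": "meta-llama/Meta-Llama-3-70B-Instruct",
        "generation_parameters": {"temperature": 0.0, "max_new_tokens": 10},
    },
}

def populate_template(messages, **variables):
    """Fill {placeholder} slots in every message's content via str.format."""
    return [
        {"role": m["role"], "content": m["content"].format(**variables)}
        for m in messages
    ]

text_to_score = "The quick brown fox jumps over the lazy dog"
messages = populate_template(prompt_file["prompt"]["messages"], text_to_score=text_to_score)
```

The resulting `messages` list is in the chat format expected by `transformers` tokenizers' `apply_chat_template`, and the metadata block records which model and generation parameters the prompt was originally used with.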