Commit badff7b
Parent(s): 685270e
Create repo card with prompt-templates library
README.md CHANGED
@@ -1,56 +1,9 @@
 ---
-license: mit
 library_name: prompt-templates
 tags:
 - prompts
+- prompt-templates
 ---
-
-
-
-
-LLMs are increasingly used to help create datasets, for example for quality filtering or synthetic text generation.
-The prompts used for creating a dataset are currently shared unsystematically on GitHub ([example](https://github.com/huggingface/cosmopedia/tree/main/prompts)),
-referenced in dataset cards ([example](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu#annotation)), stored in .txt files ([example](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/blob/main/utils/prompt.txt)),
-hidden in paper appendices, or not shared at all.
-This makes reproducibility unnecessarily difficult.
-
-To facilitate reproduction, these dataset prompts can be shared in YAML files in HF dataset repositories together with metadata on generation parameters, model IDs, etc.
-
-
-### Example: FineWeb-Edu
-
-The FineWeb-Edu dataset was created by prompting `Meta-Llama-3-70B-Instruct` to score the educational value of web texts. The authors [provide the prompt](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu#annotation) in a .txt file.
-
-When provided in a YAML file in the dataset repo, the prompt can easily be loaded and supplemented with metadata like the model ID or generation parameters for easy reproducibility.
-
-```python
-#!pip install hf_hub_prompts
-from hf_hub_prompts import download_prompt
-import torch
-from transformers import pipeline
-
-prompt_template = download_prompt(repo_id="MoritzLaurer/dataset_prompts", filename="fineweb-edu-prompt.yaml", repo_type="dataset")
-
-# populate the prompt template
-text_to_score = "The quick brown fox jumps over the lazy dog"
-messages = prompt_template.populate_template(text_to_score=text_to_score)
-
-# test the prompt with a small local Llama model
-model_id = "meta-llama/Llama-3.2-1B-Instruct"  # the prompt was originally created for meta-llama/Meta-Llama-3-70B-Instruct
-
-pipe = pipeline(
-    "text-generation",
-    model=model_id,
-    torch_dtype=torch.bfloat16,
-    device_map="auto",
-)
-
-outputs = pipe(
-    messages,
-    max_new_tokens=512,
-)
-
-print(outputs[0]["generated_text"][-1])
-```
-
-
+This repository was created with the `prompt-templates` library and contains
+prompt templates in the `Files` tab.
+For easily reusing these templates, see the [prompt-templates documentation](https://github.com/MoritzLaurer/prompt-templates).
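The removed README's core idea is that a prompt YAML file can carry the template itself together with reproducibility metadata such as the model ID and generation parameters. As a minimal sketch of what authoring and uploading such a file could look like: the schema below (`messages`, `template_variables`, `metadata`) and the prompt wording are illustrative assumptions for this sketch, not a format defined by the library.

```python
# Sketch: author a prompt YAML with reproducibility metadata and upload it
# to a Hub dataset repo. The schema (messages / template_variables / metadata)
# and the prompt wording are assumed for illustration, not the exact format
# that hf_hub_prompts defines.
import yaml
from huggingface_hub import HfApi

prompt_file = {
    "prompt": {
        "messages": [
            {
                "role": "user",
                # illustrative wording, not the actual FineWeb-Edu prompt
                "content": "Score the educational value of the following web text "
                           "from 0 to 5: {text_to_score}",
            }
        ],
        "template_variables": ["text_to_score"],
        "metadata": {
            "model_id": "meta-llama/Meta-Llama-3-70B-Instruct",
            "generation_parameters": {"max_new_tokens": 512},
        },
    }
}

# write the YAML file locally
with open("fineweb-edu-prompt.yaml", "w") as f:
    yaml.safe_dump(prompt_file, f, sort_keys=False)

# upload it so that download_prompt() can fetch it from the dataset repo
HfApi().upload_file(
    path_or_fileobj="fineweb-edu-prompt.yaml",
    path_in_repo="fineweb-edu-prompt.yaml",
    repo_id="MoritzLaurer/dataset_prompts",
    repo_type="dataset",
)
```

Keeping the metadata next to the template means a reader can re-run the exact annotation setup without hunting through paper appendices or loose .txt files.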
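Because the populated template is a standard list of chat messages, it can be sent to a hosted endpoint just as well as to a local `transformers` pipeline. Below is a minimal sketch using `huggingface_hub.InferenceClient`; note that `download_prompt` follows the predecessor package `hf_hub_prompts` shown in the removed README, and the renamed `prompt-templates` library may expose a different entry point, so check the linked documentation. The model choice here is illustrative.

```python
# Sketch: reuse a shared prompt template against a hosted model instead of a
# local pipeline. download_prompt follows the predecessor hf_hub_prompts API
# shown in the removed README; the prompt-templates API may differ.
from hf_hub_prompts import download_prompt
from huggingface_hub import InferenceClient

# load the template plus its metadata from the dataset repo
prompt_template = download_prompt(
    repo_id="MoritzLaurer/dataset_prompts",
    filename="fineweb-edu-prompt.yaml",
    repo_type="dataset",
)

# fill the template's variables to get standard chat messages
messages = prompt_template.populate_template(
    text_to_score="The quick brown fox jumps over the lazy dog"
)

# send the messages to a hosted chat-completion endpoint
client = InferenceClient()
response = client.chat_completion(
    messages,
    model="meta-llama/Llama-3.2-1B-Instruct",  # illustrative model choice
    max_tokens=512,
)
print(response.choices[0].message.content)
```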