Dataset card: Modalities: Text · Formats: text · Languages: English · Size: < 1K · Tags: Not-For-All-Audiences · Libraries: Datasets
Commit 0859e62 (verified) · 1 Parent(s): 1ea1eb3
SicariusSicariiStuff committed: Update README.md

Files changed (1): README.md (+3 −0)
README.md CHANGED
@@ -17,6 +17,9 @@ This is the Reddit_Dirty_Writing_Prompts dataset, which I further cleaned and so
  - ShareGPT Format
  - Each entry contains the number of tokens in both LLAMA1 and LLAMA3 tokenizers
  - Each entry contains the number of characters
+ - Longest entry in tokens: TOKENS_LLAMA1: 7545 \ TOKENS_LLAMA3: 6397
+ - Shortest entry in tokens: TOKENS_LLAMA1: 98 \ TOKENS_LLAMA3: 83
+ - Total_TOKENS_LLAMA1: 12874614 (12M), Total_TOKENS_LLAMA3: 10892913 (10M)

  I hope this helps as many people as possible, let's make AI with less slop, and make AI accessible for everyone 🤗
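The longest/shortest/total figures added by this commit can be derived directly from the per-entry token-count fields the README describes. A minimal sketch of that aggregation, assuming the field names `TOKENS_LLAMA1` / `TOKENS_LLAMA3` as named in the README (the sample entries below are illustrative, not real dataset rows):

```python
# Sketch: derive the summary stats (longest, shortest, total tokens)
# from per-entry token counts. Field names follow the README; the
# sample values are made up for illustration.
entries = [
    {"TOKENS_LLAMA1": 7545, "TOKENS_LLAMA3": 6397},
    {"TOKENS_LLAMA1": 98,   "TOKENS_LLAMA3": 83},
    {"TOKENS_LLAMA1": 1200, "TOKENS_LLAMA3": 1050},
]

def summarize(entries, key):
    """Return longest, shortest, and total token counts for one tokenizer field."""
    counts = [e[key] for e in entries]
    return {"longest": max(counts), "shortest": min(counts), "total": sum(counts)}

print(summarize(entries, "TOKENS_LLAMA1"))
print(summarize(entries, "TOKENS_LLAMA3"))
```

Running the same aggregation over the full dataset is what would yield the committed figures (e.g. Total_TOKENS_LLAMA1: 12874614).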