Updated
From the far corners of the interwebs, I summon you three!
Anywho... New dataset is out that should perform better than the last when doing your quantization on models. Unlike last time, where I just skimmed the dataset for out-of-place stuff at the top of columns, like editors' notes and weird symbols that could mess up the model's performance, this time I actually did the unthinkable... I sat down and read this crap to a degree during the week. Fixing spelling errors, spacing out words that were slammed togetherlikeso, some basic grammar tweaks, spacing franchises out more between them (though the proper spacing is mostly limited to the first 500 rows), and just more cleanup overall, plus even deleting sections that were either too samey in structure or just flat-out useless in my eyes, like a long trivia section about someone's OCs. Now, I doubt I got everything, and I'm not insane enough to go through all the theirs/there's to make sure each is correctly placed and such, but this should give your models at least an extra 0.00002% boost when using it!
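(If anyone wants to run a similar pass on their own data, here's a rough sketch of the kind of check I mean. The file name and `text` column are just placeholders, not the actual dataset schema: it normalizes whitespace and flags rows with long unbroken letter runs so you can read those by hand.)

```python
import re

import pandas as pd

# Placeholder file/column names; the real dataset schema may differ.
df = pd.read_parquet("dataset.parquet")

def has_slammed_words(text: str, max_word_len: int = 25) -> bool:
    """Flag rows with unusually long unbroken letter runs, which
    usually means words got slammed together somewhere."""
    return any(len(w) > max_word_len for w in re.findall(r"[A-Za-z]+", text))

# Cheap first pass: collapse runs of spaces/tabs and trim the edges.
df["text"] = df["text"].str.replace(r"[ \t]+", " ", regex=True).str.strip()

# Surface suspects for manual review instead of auto-"fixing" them.
suspects = df[df["text"].map(has_slammed_words)]
print(f"{len(suspects)} rows flagged for a manual read-through")
```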
I doubt I'll do any more updates for this particular dataset, so consider this one to be the final version. Peace out!
Barely even 24 hours later and I've made a small update pruning 2 broken rows. Should have checked them with the viewer beforehand. Eh... Now I'd say it's (mostly) finished, for those who see this.
Pruned 3 extra broken rows at the tail end. Triple-checked now to make sure I didn't miss anything this time. God damn...
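(For the curious, pruning rows like these doesn't need anything fancy. A rough sketch, with placeholder file and column names, assuming "broken" means null or whitespace-only text:)

```python
import pandas as pd

# Placeholder names again; adjust to the real parquet and column.
df = pd.read_parquet("dataset.parquet")

# Catch rows that are null, empty, or whitespace-only.
mask = df["text"].isna() | (df["text"].str.strip() == "")
print(f"Pruning {mask.sum()} broken rows out of {len(df)}")

df[~mask].to_parquet("dataset_pruned.parquet", index=False)
```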
Hey @ParasiticRogue, thank you for your hard work! I will definitely be using this set for quants!
I’m in the middle of running some tests on your RPStew model and came up with some cool samplers and prompts for it (for ST), building on what you created for it initially.
Mind if I hit you up on Discord about them and my overall thoughts about the model?
https://discord.gg/2QFt358H
Wish you an amazing day!
New update, hopefully the last. And yes, this one is worth the ping. The parquets now work with quants at 4-bit and under without being lobotomized. Main changes? Extra stop tokens on both sides now, and each row is under (or just about at) 2k tokens. I'm not sure the 2k cap was mandatory for the fix, but the stop tokens on both sides were. Gods, I hope I never look at this dataset again...
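(If you want to prep rows the same way for your own calibration set, here's a rough sketch of the idea. The tokenizer and the file/column names are stand-ins for illustration, not what I actually used; swap in whatever matches your target model.)

```python
import pandas as pd
from transformers import AutoTokenizer

# Stand-in tokenizer; use the one matching the model you're quantizing.
tok = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
MAX_TOKENS = 2048  # the rough 2k cap per row

df = pd.read_parquet("dataset.parquet")

def wrap_row(text: str):
    """Wrap a row in stop tokens on both sides, and drop it if it
    blows past the cap (returning None marks it for removal)."""
    wrapped = f"{tok.bos_token}{text}{tok.eos_token}"
    n = len(tok(wrapped, add_special_tokens=False)["input_ids"])
    return wrapped if n <= MAX_TOKENS else None

df["text"] = df["text"].map(wrap_row)
df = df.dropna(subset=["text"])
df.to_parquet("dataset_calib.parquet", index=False)
```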