---
license: apache-2.0
task_categories:
  - text-generation
language:
  - ru
---

This is a synthetic text-corruption dataset based on the text of open Russian official/government documents.

This dataset is intended for training an LLM to perform a text-recovery task.

All errors are introduced only into Russian sentences; English sentences are left untouched. The texts contain complex formatting, which is common for documents.

Each line contains a JSON object with a `messages` array holding a role-based conversation. The first message is a randomly chosen system prompt that describes the task.

Each such object covers the content of a single document, so each message contains text that continues the previous one.

Each user message begins with `%% grammar_fix` or `%% grammar_fix_table` and contains corrupted text or a corrupted markdown table; the corresponding assistant response is the original, correct text.
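A minimal sketch of what one dataset line looks like under the schema described above; the system prompt and the corrupted/corrected strings here are invented for illustration, not taken from the dataset:

```python
import json

# Hypothetical record matching the described schema: a "messages" array
# of system / user / assistant turns, with the user turn prefixed by
# "%% grammar_fix" (or "%% grammar_fix_table" for tables).
record = {
    "messages": [
        {"role": "system", "content": "Исправь ошибки в тексте."},   # randomly chosen system prompt
        {"role": "user", "content": "%% grammar_fix\nПревет, мир!"},  # corrupted text (invented)
        {"role": "assistant", "content": "Привет, мир!"},             # original correct text
    ]
}

# Each dataset line is one such object serialized as a single JSON line (JSONL):
line = json.dumps(record, ensure_ascii=False)
parsed = json.loads(line)
```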

I advise training the model on each document (one `messages` object) in a single pass, if you have enough memory: this teaches the model to connect previously corrected text with the next chunk, which makes it more stable at reconstructing abbreviations and the names of objects and persons.

I also advise training on the first 1/5 of the dataset with the provided prompts and on the remaining 4/5 without a prompt. The provided prompt is just a hint I wrote for a base LLM so that it starts learning from a higher accuracy; once the model has learnt what it needs to do, drop the prompt altogether to speed up training and let the model actually learn to reconstruct the original text from the corrupted one.

The `concat_*.json` files are separate train and test files containing only text contents or only table contents.

`test.json` and `train.json` are the combined text and table datasets.