---
annotations_creators:
  - no-annotation
language_creators:
  - crowdsourced
license:
  - cc-by-sa-3.0
task_categories:
  - text-generation
  - fill-mask
task_ids:
  - language-modeling
  - masked-language-modeling
source_datasets:
  - original
language:
  - en
configs:
  - config_name: default
    data_files:
      - split: final
        path: data/*.jsonl
pretty_name: Stacking Exchange
---

# Dataset Card for StackingExchange

(Image: waifu to catch your attention.)

## Dataset Description

StackingExchange is a ~31.17B-token (llama-2-7b-chat tokenizer) / ~27.17B-token (RWKV tokenizer) dataset derived from Stack Exchange. It serves as a training resource for large language models and other NLP tasks. This card details the dataset's origin, content, and limitations.

- Curated by: KaraKaraWitch
- Funded by [optional]: Recursal.ai (I work there lol)
- Shared by [optional]: KaraKaraWitch
- Language(s) (NLP): English
- License: cc-by-sa-4.0

Stacking Exchange was created under time constraints for the release of EagleX v1, and may contain biases in selection.

## Supported Tasks and Leaderboards

Primarily used for language modeling.

## Languages

While the dataset focuses on English, keep in mind that other languages are present as well.

## Filtering

The filtering process is documented in code but not well-organized. We recommend reviewing the code directly for details.

The filtering code is split into two files:

- `stack_parser.py`
- `stack_parser_large.py`

The first parser handles the smaller Stack Exchange sites, which don't take much time to process, while the latter is run separately for the large Stack Exchange sites.

In general, the processing chain has three steps, excluding the initial download of the 7z dumps from the Internet Archive and their extraction into per-site folders.

We annotate the flow below based on the `stack_parser_large.py` variant of the filtering.

1. Reconstruct posts from PostHistory.xml into a JSON-compatible format stored within SqliteDict. (`decadence_fp` inside `_large.py`; see the sketch after this list.)
   - For caching, we use `SqliteDict` from the sqlitedict library.
   - This preserves the original Markdown and avoids parsing the HTML found in Posts.xml.
   - Some posts are updated without any creation history stored in PostHistory.xml. The same goes for migration post-history types: they sometimes point to posts that did not exist (at the time).
2. Compare against Posts.xml and build a list of posts with their answers paired to them. (`decadence`)
   - This step took the longest time.
   - We first gather a list of main posts before attaching all the answers to them.
   - Similar to step 1, a post ID sometimes appears in Posts.xml while it doesn't exist in PostHistory.xml.
   - After this step, we export into `_raw.jsonl` files.
3. Process into QA and Conversation formats. (`qa_processor`, `convo_processor`)
   - We process the data into two formats: QA (`Question: {question}\nAnswer: {answer}\n\n`; see `staccato_format.json` for the full list of formats) and conversational (similar to a conversation with sender and message fields).
   - An answer must have a score >= 0; otherwise it is not added to the data.
   - We recommend the Conversation format for ease of use.
   - EDIT: On further inspection, the conversational format is not fully OpenAI-compatible. Read the notes under Data Instances. KaraKaraWitch apologizes for any inconvenience this caused ahead of time.
4. For Hugging Face, we have compressed the files with gzip.
   - We have noted an issue where 10 of the compressed files can trigger a false-positive virus detection.
   - We have used the dataset in its uncompressed form and did not encounter any issues.
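
As a rough illustration of step 1, here is a minimal sketch of pulling the latest Markdown body of each post out of PostHistory.xml into SqliteDict. This is not the actual `decadence_fp` implementation; the function and parameter names are hypothetical, and the attribute names follow the Stack Exchange data-dump schema:

```python
from lxml import etree
from sqlitedict import SqliteDict

def reconstruct_posts(posthistory_path: str, cache_path: str) -> None:
    """Cache the latest Markdown body of every post into an on-disk dict."""
    with SqliteDict(cache_path, autocommit=False) as cache:
        for _, elem in etree.iterparse(posthistory_path, tag="row"):
            # PostHistoryTypeId 2 = "Initial Body", 5 = "Edit Body" in the dump schema.
            if elem.get("PostHistoryTypeId") in ("2", "5"):
                # Later rows overwrite earlier ones, leaving the newest Markdown.
                cache[elem.get("PostId")] = elem.get("Text")
            elem.clear()  # free memory as we stream through the file
        cache.commit()
```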

As we are dealing with many files at once, we also have the following helper functions to run the steps above in parallel over all Stack Exchange sites (see the sketch after this list):

- `fp-stackstack` (runs `decadence_fp`)
- `stackstack` (runs `decadence`)
- `convo_stackstack` (runs `convo_processor`)
- `qa_stackstack` (runs `qa_processor`)
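
A hedged sketch of how such a helper might fan the work out over per-site folders; the helper names above come from the repository, but everything in this snippet is illustrative:

```python
import multiprocessing as mp
import pathlib

def process_site(site_dir: pathlib.Path) -> str:
    # Stand-in for decadence / convo_processor / qa_processor on one site.
    ...
    return site_dir.name

def run_over_all_sites(dump_root: str) -> None:
    """Run process_site over every extracted site folder in parallel."""
    sites = [p for p in pathlib.Path(dump_root).iterdir() if p.is_dir()]
    with mp.Pool() as pool:
        for done in pool.imap_unordered(process_site, sites):
            print(f"finished {done}")
```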

## Data Instances

Refer to this sample to see all the fields for the Conversation format.

```json
{
    "conversation": [
        {
            "sender": "user",
            "message": "Were there two Black Neuroi Hives?\n\nAt the end of *Strike Witches*, not only did the Warlock destroy all the Neuroi (including the humanoid one), but after the Warlock's destruction, the Black Neuroi Hive disappeared.\n\nHowever, at the start of *Strike Witches 2* which begins slightly before the final scene of the previous season (where Yoshika receives a second letter from her father), the 504th is sent to a Black Neuroi Hive where a Humanoid Neuroi was waiting in a hope to strike peace with the Neuroi. However, the Humanoid Neuroi was destroyed by an attack from the Green Neuroi Hive and subsequently, the Black Hive is never seen again.\n\nNow the second season affirms the ending of the first season (Gallia being freed, the Warlock turning into a Neuroi and losing control), so I am wondering was there two Black Neuroi Hives and if so, where was the second one?"
        },
        {
            "sender": "assistant",
            "message": {
                "body": "**Yes, there's another Black Neuroi Hive that was destroyed at the beginning of *Strike Witches 2.***\n\nAccording to a more general question asking the total number of hives on [Yahoo! Chiebukuro (Japanese)](https://detail.chiebukuro.yahoo.co.jp/qa/question_detail/q12100997307), which was answered by mr_hungry01:\n\n> The new hive that occurred at the beginning of season 2, episode 1 is right above Venezia facing the Adriatic Sea.\n>\n> At the same time, the destroyed hive was displayed on the screen as if it was on the city facing the sea, **but in setting it is said to be \"Hive of the South Karlsland\"**. Together with the fact that the 504th use this strategy against Neuroi crossing the Alps mountains until it was destroyed, **it seems it's around in the inland north of the Alps mountains**.\n>\n> <sup> (partial answer focusing on the 2nd season only, emphasis mine)</sup>\n\nThe [Wikia](http://strikewitches.wikia.com/wiki/Neuroi) also mentions this:\n\n> As of 1945, Neuroi hives have been confirmed in the following regions.\n>\n> - [...]\n> - **South Karlsland** - apparently located near the Rhine river, destroyed by a stronger hive that later established itself in Venezia;\n> - [...]",
                "Id": 42727,
                "Score": 1,
                "Counts": {
                    "Views": 0,
                    "Comments": 0
                }
            }
        }
    ],
    "meta": {
        "q_score": 2,
        "a_score": 1,
        "s_score": 3
    }
}
```

The format has the following keys:

- "conversation" (list) [The actual question and answer pair.]
  - Each value in the list is a dict with the following keys:
    - "sender" (str) [Either "user" or "assistant".]
    - "message" (str OR dict) [Markdown content. It's a string for questions and a dictionary for answers; see the reader sketch below.]
      - "body" (str) the answer's Markdown content
      - "Id" (int) the ID of the answer
      - "Score" (int) the score of the answer
      - "Counts" (dict) view and comment counts for the answer
- "meta" (dict) [Contains additional metadata about the question and answer pair.]
  - "q_score" (int) the question's score
  - "a_score" (int) the answer's score
  - "s_score" (int) the summed score of the question and answer

Refer to this sample to see all the fields for the QA format.

```json
{
    "text": "Q: What animals are represented by the members of the 501st Joint Fighter Wing?\n\nIn the ending credits of the second season of *Strike Witches*, it shows that each member of the 501st has a coat of arms which an image of the animal they represent when they use their magic.\n\nSome are easy to tell while for others I'm not 100% sure (apart from 2), half are cats and half are dogs, the 2 odd ones are a rabbit and what I think is supposed to be a tanuki.\n\nI am wondering, what animals the 501st JFW members represent.\n\nA: **Commanding Officer**                          \n\nMinna-Dietlinde Wilcke - Gray Wolf\n\n**Commanding Officer in battle**\n\nSakamoto Mio - Doberman\n\n**Members**\n\nGertrud Barkhorn - German Pointer\n\nErica Hartmann - Dachshund\n\nPerrine H. Clostermann - Chartreux\n\nFrancesca Lucchini - Black panther\n\nEila Ilmatar Juutilainen - Black Fox\n\nCharlotte E. Yeager - White Rabbit\n\nSanya V. Litvyak - Black cat\n\nLynette Bishop - Scottish Fold\n\nMiyafuji Yoshika - Mameshiba\n\n",
    "meta": {
        "q_score": 4,
        "a_score": 1
    }
}
```

The format has the following keys:

- "text" (str) [The Question and answer pair.]
- "meta" (dict) [Contains additional metadata about the question and answer pair]
    - "q_score" (int) the question's score
    - "a_score" (int) the answer's score

## Suggested improvements for Stack Exchange

1. We would like to request that Stack Exchange upload their data as jsonl (JSON Lines) instead of XML. With XML, parser state needs to be tracked, while jsonl files can be loaded line by line. When state needs to be tracked, there is potential for a memory leak (lxml, for example, doesn't free memory unless you clear elements yourself; see the sketch after this list).
2. Store Markdown alongside HTML. We think this would be reasonable to implement, and it would allow us to skip step 1 entirely.
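
To make the contrast concrete, here is the bookkeeping that streaming XML forces versus a line-based reader (an illustrative sketch, not code from this repository):

```python
import json
from lxml import etree

def iter_rows_xml(xml_path: str):
    # iterparse keeps parsed elements alive unless we clear them as we go.
    for _, elem in etree.iterparse(xml_path, tag="row"):
        yield dict(elem.attrib)
        elem.clear()
        while elem.getprevious() is not None:
            del elem.getparent()[0]  # drop already-processed siblings

def iter_rows_jsonl(path: str):
    # No parser state at all: each line is an independent record.
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)
```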

## Dataset Curators

KaraKaraWitch. (I typically hang out in the PygmalionAI Discord, sometimes EleutherAI. If something is wrong, ping @karakarawitch on Discord.)

I'd be happy if you could spread the word and recommend this dataset.

## Licensing Information

Stack Exchange lists their license as CC-BY-SA. While the final data does not contain question IDs, each answer does contain its answer ID on Stack Exchange... So...

We decided not to include author names in the final dataset. However, we have provided the results from step 2, which include all the data needed to identify a post and its author respectively.

## Citation Information

```bibtex
@misc{StackingExchange,
  title         = {Stacking Exchange},
  author        = {KaraKaraWitch and recursal.ai},
  year          = {2023},
  howpublished  = {\url{https://huggingface.co/datasets/recursal/StackingExchange}},
}
```