
Dataset Card for StackingExchange

Waifu to catch your attention.

Dataset Description

StackingExchange is a ~31.17B-token (llama-2-7b-chat tokenizer) / ~27.17B-token (RWKV tokenizer) dataset derived from Stack Exchange. It serves as a training resource for large language models and other NLP tasks. This card details the dataset's origin, content, and limitations.

  • Curated by: KaraKaraWitch
  • Funded by: Recursal.ai (I work there lol)
  • Shared by: KaraKaraWitch
  • Language(s) (NLP): English
  • License: cc-by-sa-4.0

Stacking Exchange was created under time constraints for the release of EagleX v1, and may contain biases in selection.

Supported Tasks and Leaderboards

Primarily used for language modeling.

Languages

While the dataset focuses on English, keep in mind that other languages are present as well.

Filtering

The filtering process is documented in code but not well-organized. We recommend reviewing the code directly for details.

The code is split into two files:

  • stack_parser.py
  • stack_parser_large.py

The first parser handles smaller Stack Exchange sites, which don't take much time to process; the latter is run separately for the large Stack Exchange sites.

In general, the processing chain has three main steps (plus a final compression pass), excluding the initial download from the Internet Archive and the extraction of the 7z files into their own respective folders.

We will annotate the flow based on the stack_parser_large.py variant of the filtering.

  1. Reconstruction of posts from PostHistory.xml into a JSON-compatible format stored within SqliteDict. (decadence_fp inside _large.py; see the caching sketch after this list.)
  • For caching, we use SqliteDict from the sqlitedict library.
  • This preserves the original Markdown and avoids parsing the HTML found in Posts.xml.
  • Some posts are updated without any creation history stored in PostHistory.xml. The same goes for Migration post history types, which sometimes point to posts that did not exist (at the time).
  2. We compare against Posts.xml and build a list of posts with their answers paired to them. (decadence)
  • This step took the longest time.
  • We first gather the list of main posts before adding all the answers to them.
  • Similar to step 1, a post ID sometimes appears in Posts.xml while it doesn't exist in PostHistory.xml.
  • After this step, we export into _raw.jsonl files.
  3. Process into QA and Conversation formats. (qa_processor, convo_processor)
  • We process the data into two formats: QA (Question: {question}\nAnswer: {answer}\n\n; see staccato_format.json for the list of formats) and conversational (a list of turns with sender and message keys).
  • The post must have an answer with a score >= 0; otherwise the answer is not added to the data.
  • We recommend using the Conversation format for ease of use.
  • EDIT: On further inspection, the conversational format is not fully OpenAI-compatible. Read the notes under Data Instances. KaraKaraWitch apologizes ahead of time for any inconvenience this causes.
  4. For Hugging Face, we have compressed the files with gzip.
  • We have noted an issue where 10 of the compressed files can trigger a false-positive virus detection.
  • We have used the dataset in its uncompressed form and didn't encounter any issues.
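
As an illustration of the caching in step 1, here is a minimal sketch (not the actual decadence_fp code; the file names, record layout, and history-type handling are assumptions) of folding PostHistory.xml rows into JSON-compatible records inside SqliteDict:

```python
import xml.etree.ElementTree as ET

from sqlitedict import SqliteDict

# Hypothetical sketch: keep the latest Markdown body per post so we never
# have to parse the HTML stored in Posts.xml.
with SqliteDict("posthistory_cache.sqlite") as cache:
    for _, row in ET.iterparse("PostHistory.xml", events=("end",)):
        if row.tag != "row":
            continue
        post_id = row.get("PostId")
        history_type = row.get("PostHistoryTypeId")
        # In the Stack Exchange dumps, types 2 (Initial Body) and 5 (Edit Body)
        # carry the post's Markdown text.
        if post_id is not None and history_type in ("2", "5"):
            record = cache.get(post_id, {})
            record["body_markdown"] = row.get("Text", "")
            cache[post_id] = record
        row.clear()  # free the parsed element; these dumps are large
    cache.commit()
```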

As we are dealing with many files at once, we also have the following helper functions to run the various processing functions in parallel over all Stack Exchange sites (a rough sketch of such a runner follows the list):

  • fp-stackstack (Runs decadence_fp)
  • stackstack (Runs decadence)
  • convo_stackstack (Runs convo_processor)
  • qa_stackstack (Runs qa_processor)
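
As a rough sketch of what such a parallel runner might look like (the function body, folder layout, and worker count below are assumptions, not the actual code):

```python
import multiprocessing as mp
import pathlib

def run_for_site(site_dir: pathlib.Path) -> None:
    # Placeholder for one of the per-site processors
    # (decadence_fp, decadence, convo_processor, qa_processor).
    print(f"processing {site_dir.name}")

def stackstack(root: str, workers: int = 8) -> None:
    # Fan the per-site processor out over every extracted dump folder.
    sites = [p for p in pathlib.Path(root).iterdir() if p.is_dir()]
    with mp.Pool(workers) as pool:
        pool.map(run_for_site, sites)

if __name__ == "__main__":
    stackstack("extracted_dumps")
```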

Data Instances

Refer to this sample to see all the fields for the Conversation format.

{
    "conversation": [
        {
            "sender": "user",
            "message": "Were there two Black Neuroi Hives?\n\nAt the end of *Strike Witches*, not only did the Warlock destroy all the Neuroi (including the humanoid one), but after the Warlock's destruction, the Black Neuroi Hive disappeared.\n\nHowever, at the start of *Strike Witches 2* which begins slightly before the final scene of the previous season (where Yoshika receives a second letter from her father), the 504th is sent to a Black Neuroi Hive where a Humanoid Neuroi was waiting in a hope to strike peace with the Neuroi. However, the Humanoid Neuroi was destroyed by an attack from the Green Neuroi Hive and subsequently, the Black Hive is never seen again.\n\nNow the second season affirms the ending of the first season (Gallia being freed, the Warlock turning into a Neuroi and losing control), so I am wondering was there two Black Neuroi Hives and if so, where was the second one?"
        },
        {
            "sender": "assistant",
            "message": {
                "body": "**Yes, there's another Black Neuroi Hive that was destroyed at the beginning of *Strike Witches 2.***\n\nAccording to a more general question asking the total number of hives on [Yahoo! Chiebukuro (Japanese)](https://detail.chiebukuro.yahoo.co.jp/qa/question_detail/q12100997307), which was answered by mr_hungry01:\n\n> The new hive that occurred at the beginning of season 2, episode 1 is right above Venezia facing the Adriatic Sea.\n>\n> At the same time, the destroyed hive was displayed on the screen as if it was on the city facing the sea, **but in setting it is said to be \"Hive of the South Karlsland\"**. Together with the fact that the 504th use this strategy against Neuroi crossing the Alps mountains until it was destroyed, **it seems it's around in the inland north of the Alps mountains**.\n>\n> <sup> (partial answer focusing on the 2nd season only, emphasis mine)</sup>\n\nThe [Wikia](http://strikewitches.wikia.com/wiki/Neuroi) also mentions this:\n\n> As of 1945, Neuroi hives have been confirmed in the following regions.\n>\n> - [...]\n> - **South Karlsland** - apparently located near the Rhine river, destroyed by a stronger hive that later established itself in Venezia;\n> - [...]",
                "Id": 42727,
                "Score": 1,
                "Counts": {
                    "Views": 0,
                    "Comments": 0
                }
            }
        }
    ],
    "meta": {
        "q_score": 2,
        "a_score": 1,
        "s_score": 3
    }
}

The format has the following keys:

- "conversation" (list) [The actual Question and answer pair.]
  - Each value in the list is a dict with the following keys:
    - sender (str) [Either "user" or "assistant"]
    - message (str OR dict) [Markdown content. It is a string for questions and a dictionary for answers; see the normalization sketch below.]
      - "body" (str) the answer's markdown content
      - "Id" (int) the ID of the answer
      - "Score" (int) the score of the answer
      - "Counts" (View and Comment counts for the answer.)
- "meta" (dict) [Contains additional metadata about the question and answer pair]
    - "q_score" (int) the question's score
    - "a_score" (int) the answer's score
    - "s_score" (int) the summed up score of the question and answer respectively.

Refer to this sample to see all the fields for the QA format.

{
    "text": "Q: What animals are represented by the members of the 501st Joint Fighter Wing?\n\nIn the ending credits of the second season of *Strike Witches*, it shows that each member of the 501st has a coat of arms which an image of the animal they represent when they use their magic.\n\nSome are easy to tell while for others I'm not 100% sure (apart from 2), half are cats and half are dogs, the 2 odd ones are a rabbit and what I think is supposed to be a tanuki.\n\nI am wondering, what animals the 501st JFW members represent.\n\nA: **Commanding Officer**                          \n\nMinna-Dietlinde Wilcke - Gray Wolf\n\n**Commanding Officer in battle**\n\nSakamoto Mio - Doberman\n\n**Members**\n\nGertrud Barkhorn - German Pointer\n\nErica Hartmann - Dachshund\n\nPerrine H. Clostermann - Chartreux\n\nFrancesca Lucchini - Black panther\n\nEila Ilmatar Juutilainen - Black Fox\n\nCharlotte E. Yeager - White Rabbit\n\nSanya V. Litvyak - Black cat\n\nLynette Bishop - Scottish Fold\n\nMiyafuji Yoshika - Mameshiba\n\n",
    "meta": {
        "q_score": 4,
        "a_score": 1
    }
}

The format has the following keys:

- "text" (str) [The Question and answer pair.]
- "meta" (dict) [Contains additional metadata about the question and answer pair]
    - "q_score" (int) the question's score
    - "a_score" (int) the answer's score

Suggested improvements for Stack Exchange

  1. We would like to request that Stack Exchange upload their data as JSONL (JSON Lines) instead of XML. With XML, parser state needs to be tracked, while JSONL files can be loaded on a line-by-line basis. When state needs to be tracked, there is potential for memory leaks (lxml, for example, doesn't free parsed elements on its own; see the sketch below).
  2. Store Markdown alongside HTML. I think this would be reasonable enough to implement, and it would allow us to skip step 1 entirely.
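
For reference, the usual workaround when streaming a huge XML dump with lxml is to clear each element after use and drop references to already-processed siblings; a sketch of that well-known idiom:

```python
from lxml import etree

def stream_rows(path: str):
    # Bound lxml's memory use on huge dumps: clear each <row> after use
    # and delete already-processed siblings so the tree doesn't grow.
    for _, elem in etree.iterparse(path, events=("end",), tag="row"):
        yield dict(elem.attrib)
        elem.clear()
        while elem.getprevious() is not None:
            del elem.getparent()[0]
```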

Recursal's Vision

To make AI accessible to everyone, regardless of language or economic status.

This is the collective goal of the RWKV Open Source foundation and Recursal AI, the commercial entity who backs it.

We believe that AI should not be controlled by a select few organizations, and that it should be made accessible to everyone regardless of whether they are rich or poor, or a native speaker of English.

About RWKV

RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.

The RWKV architecture scales efficiently and economically. As an RNN & Transformer hybrid, it is able to provide performance similar to leading transformer models while having the compute and energy efficiency of an RNN-based architecture.

You can find out more about the project and the latest models at the following:

About Recursal AI

Recursal AI is the commercial entity built to provide support for RWKV model development and users, while providing commercial services via its public cloud or private-cloud / on-premise offerings.

As part of our vision, our commitment is to ensure open-source development and access to the best foundational AI models and datasets.

The datasets and models provided here are part of that commitment.

You can find out more about Recursal AI here.

Dataset Curators

KaraKaraWitch. (I typically hang out in PygmalionAI discord, sometimes EleutherAI. If something is wrong, @karakarawitch on discord.)

I'd be happy if you could spread the word and recommend this dataset.

Licensing Information

Stack Exchange lists their license as CC-BY-SA. While the final data does not contain question IDs, each answer does contain the ID of that answer on Stack Exchange... So...

We decided not to include author names inside the final dataset. However, we have provided the results from step 2 (the _raw.jsonl files), which include all the data that can be used to identify a post and its author.

Recursal Waifus (the banner image) are licensed under CC-BY-SA. They do not represent the related websites in any official capacity unless otherwise stated or announced by the website. You may use them as a banner image; however, you must always link back to the dataset.

Citation Information

@misc{StackingExchange,
  title         = {Stacking Exchange},
  author        = {KaraKaraWitch and recursal.ai},
  year          = {2024},
  howpublished  = {\url{https://huggingface.co/datasets/recursal/StackingExchange}},
}