---
language:
- en
tags:
- retrieval
---

# Stack Exchange Dataset for the MTEB Arena

## Overview

The `mteb/arena-stackexchange` dataset is a curated collection of Stack Exchange questions and answers, designed for use in the MTEB (Massive Text Embedding Benchmark) Arena. It allows embedding models to compete and be ranked based on their performance on Stack Exchange content.

## What is Stack Exchange?

Stack Exchange is a network of question-and-answer (Q&A) websites, each covering a specific topic, where questions, answers, and users are subject to a reputation award process. The best-known site in the network is Stack Overflow, which focuses on computer programming questions.

## Dataset Structure

Each instance in the dataset represents a question-answer pair from Stack Exchange and contains the following fields:

1. **id** (string): A unique identifier for the question-answer pair.
2. **text** (string): The processed content, including the question and the top-scoring answer.
3. **original_text** (string): The original, unprocessed content of the question.
4. **subdomain** (string): The specific Stack Exchange site the question came from (e.g., "apple" for Apple Stack Exchange).
5. **metadata** (dict): Additional information about the post, including language, length, provenance, and question score.

## Dataset Creation Process

1. The dataset is derived from the Stack Exchange data dump available on the Internet Archive.
2. Only posts from the 25 largest Stack Exchange sites are included.
3. HTML tags are removed from the content.
4. Questions and answers are grouped into pairs.
5. Only questions with a score of 3 or higher are retained.
6. Only the top-scoring answer for each question is included.
7. Non-English Stack Exchange sites are excluded.
8. The subdomain (Stack Exchange site name) is added to the beginning of each document.
9. Questions and answers longer than 200 words or 2,000 characters are excluded.

(A minimal sketch of these filters appears below, after the Updates and Maintenance section.)

## Example Instance

Here's an example of what a single instance in the dataset might look like:

```json
{
  "id": "69fa4eabe8a1513845e0d82f945947dedba685d0",
  "text": "Apple Stackexchange Q: Why doesn't Microsoft Office/2008(& later) support RTL languages? I have Microsoft Office/2008 on my...",
  "original_text": "Q: Why doesn't Microsoft Office/2008(& later) support RTL languages? I have Microsoft Office/2008 on my...",
  "subdomain": "apple",
  "metadata": {
    "language": "en",
    "length": 304,
    "provenance": "stackexchange_00000.jsonl.gz:3",
    "question_score": 5
  }
}
```

## Ethical Considerations

When using this dataset, please be aware of potential biases, including:

1. Selection bias due to the inclusion criteria (score ≥ 3, English-only).
2. Domain bias, as only the 25 largest Stack Exchange sites are represented.
3. Temporal bias, as the dataset reflects Stack Exchange content only up to the snapshot date of the RedPajama/Dolma release.
4. Possible biases in the original Stack Exchange communities themselves.

## Updates and Maintenance

This dataset is based on a specific snapshot of Stack Exchange data. To rebuild it with newer data, see the [create_index_chunks.py script](https://github.com/embeddings-benchmark/arena/blob/main/retrieval/create_index_chunks.py#L107) in the embeddings-benchmark/arena repository.
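As referenced under Dataset Creation Process, here is a minimal, hypothetical sketch of the filtering rules. The field names follow the example instance above; the `keep` helper and record layout are assumptions for illustration, not the actual pipeline code:

```python
# Hypothetical sketch of the filters listed under "Dataset Creation Process".
# Field names follow the example instance above; this is not the real pipeline.
def keep(record: dict) -> bool:
    text = record["text"]
    return (
        record["metadata"]["question_score"] >= 3   # score of 3 or higher
        and len(text.split()) <= 200                # no more than 200 words
        and len(text) <= 2000                       # no more than 2,000 characters
        and record["metadata"]["language"] == "en"  # English sites only
    )
```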
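The fields described under Dataset Structure can be inspected by loading the dataset with the Hugging Face `datasets` library. A minimal sketch, assuming a default configuration with a `train` split (adjust if the hosted layout differs):

```python
from datasets import load_dataset

# Assumes the default configuration exposes a "train" split.
ds = load_dataset("mteb/arena-stackexchange", split="train")

example = ds[0]
print(example["subdomain"])                   # e.g. "apple"
print(example["metadata"]["question_score"])  # e.g. 5
print(example["text"][:200])                  # processed question + top answer
```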
## License and Citation

The dataset is subject to Stack Exchange's licensing terms. Users should comply with these terms when using the dataset.

This dataset is derived from the RedPajama dataset. To cite RedPajama, please use:

```
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = apr,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```

This dataset was also included in Dolma. To cite Dolma, please use:

```
@article{dolma,
  title = {{Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research}},
  author = {Luca Soldaini and Rodney Kinney and Akshita Bhagia and Dustin Schwenk and
            David Atkinson and Russell Authur and Ben Bogin and Khyathi Chandu and
            Jennifer Dumas and Yanai Elazar and Valentin Hofmann and Ananya Harsh Jha and
            Sachin Kumar and Li Lucy and Xinxi Lyu and Nathan Lambert and Ian Magnusson and
            Jacob Morrison and Niklas Muennighoff and Aakanksha Naik and Crystal Nam and
            Matthew E. Peters and Abhilasha Ravichander and Kyle Richardson and Zejiang Shen and
            Emma Strubell and Nishant Subramani and Oyvind Tafjord and Pete Walsh and
            Luke Zettlemoyer and Noah A. Smith and Hannaneh Hajishirzi and Iz Beltagy and
            Dirk Groeneveld and Jesse Dodge and Kyle Lo},
  year = {2024},
  journal = {arXiv preprint}
}
```