---
size_categories: n<1K
tags:
- rlhf
- argilla
- human-feedback
---

# Dataset Card for matlab-dataset

This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, it can be loaded into your Argilla server as explained in [Using this dataset with Argilla](#using-this-dataset-with-argilla), or used directly with the `datasets` library as explained in [Using this dataset with `datasets`](#using-this-dataset-with-datasets).

## Using this dataset with Argilla

To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:

```python
import argilla as rg

ds = rg.Dataset.from_hub("lecheuklun/matlab-dataset")
```

This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.

## Using this dataset with `datasets`

To load the records of this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:

```python
from datasets import load_dataset

ds = load_dataset("lecheuklun/matlab-dataset")
```

This will only load the records of the dataset, but not the Argilla settings.

## Dataset Structure

This dataset repo contains:

* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
* A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.

The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.

### Fields

The **fields** are the features or text of a dataset's records.
For example, the 'text' column of a text classification dataset, or the 'prompt' column of an instruction-following dataset.

| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| code | Code for autocompletion | text | True | True |

### Questions

The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.

| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| span_label | Select lines of text to be removed | span | True | N/A | N/A |

### Data Instances

An example of a dataset instance in Argilla looks as follows:

```json
{
    "_server_id": "bf8e3b61-a7c2-4ac9-bb2e-18f886ac2acd",
    "fields": {
        "code": "function [paragraphs, wordCounts, maxWordCount, maxParagraph] = processTextDocument(str)\n % Counts words in each paragraph and identifies the paragraph with the maximum number of words.\n\n % Split the document into paragraphs\n paragraphs = splitParagraphs(str);\n \n % Initialize an array to store word counts\n wordCounts = zeros(1, length(paragraphs));\n \n % Loop through each paragraph to count the number of words\n for i = 1:length(paragraphs)\n words = strsplit(paragraphs{i});\n wordCounts(i) = length(words);\n end\n \n [maxWordCount, idx] = max(wordCounts);\n \n % Identify the paragraph with the maximum word count\n if isempty(idx)\n maxParagraph = \"\";\n wordCounts = 0;\n maxWordCount = 0;\n else\n maxParagraph = paragraphs(idx);\n end\nend"
    },
    "id": "599f6f88-38e3-4275-b2ca-a75827a1b8ae",
    "metadata": {},
    "responses": {
        "span_label": [
            {
                "user_id": "344fe7e0-81e4-443c-a7de-a35222018436",
                "value": [
                    {
                        "end": 46,
                        "label": "REMOVE",
                        "start": 9
                    },
                    {
                        "end": 825,
                        "label": "REMOVE",
                        "start": 723
                    }
                ]
            }
        ]
    },
    "status": "completed",
    "suggestions": {},
    "vectors": {}
}
```

While the same record in HuggingFace
`datasets` looks as follows:

```json
{
    "_server_id": "bf8e3b61-a7c2-4ac9-bb2e-18f886ac2acd",
    "code": "function [paragraphs, wordCounts, maxWordCount, maxParagraph] = processTextDocument(str)\n % Counts words in each paragraph and identifies the paragraph with the maximum number of words.\n\n % Split the document into paragraphs\n paragraphs = splitParagraphs(str);\n \n % Initialize an array to store word counts\n wordCounts = zeros(1, length(paragraphs));\n \n % Loop through each paragraph to count the number of words\n for i = 1:length(paragraphs)\n words = strsplit(paragraphs{i});\n wordCounts(i) = length(words);\n end\n \n [maxWordCount, idx] = max(wordCounts);\n \n % Identify the paragraph with the maximum word count\n if isempty(idx)\n maxParagraph = \"\";\n wordCounts = 0;\n maxWordCount = 0;\n else\n maxParagraph = paragraphs(idx);\n end\nend",
    "id": "599f6f88-38e3-4275-b2ca-a75827a1b8ae",
    "span_label.responses": [
        [
            {
                "end": 46,
                "label": "REMOVE",
                "start": 9
            },
            {
                "end": 825,
                "label": "REMOVE",
                "start": 723
            }
        ]
    ],
    "span_label.responses.status": [
        "submitted"
    ],
    "span_label.responses.users": [
        "344fe7e0-81e4-443c-a7de-a35222018436"
    ],
    "status": "completed"
}
```

### Data Splits

The dataset contains a single split, which is `train`.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation guidelines

Highlight the lines of code you wish to remove.

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
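As a usage note on the record layout shown in [Data Instances](#data-instances): each `span_label` response stores character offsets (`start`/`end`) into the `code` field, so the `REMOVE` selections can be applied with plain string slicing. The sketch below is illustrative only — `apply_remove_spans` is a hypothetical helper, the record is a toy example rather than a real row, and the offsets are assumed to be end-exclusive (consistent with the example record above, where the span 9–46 covers a 37-character selection):

```python
def apply_remove_spans(code: str, spans: list[dict]) -> str:
    """Delete every span labelled REMOVE from `code`.

    Spans are processed from the highest start offset down, so deleting
    one span never invalidates the offsets of the spans still pending.
    Offsets are assumed end-exclusive, as in Python slicing.
    """
    for span in sorted(spans, key=lambda s: s["start"], reverse=True):
        if span["label"] == "REMOVE":
            code = code[: span["start"]] + code[span["end"] :]
    return code


# Toy record mimicking the flattened `datasets` layout (not an actual row).
record = {
    "code": "x = 1;\ny = 2;\nz = x + y;\n",
    "span_label.responses": [[{"start": 7, "end": 14, "label": "REMOVE"}]],
}

# The single REMOVE span covers the line "y = 2;\n".
cleaned = apply_remove_spans(record["code"], record["span_label.responses"][0])
# cleaned == "x = 1;\nz = x + y;\n"
```

Processing spans in reverse offset order matters whenever a response contains more than one span, as in the example record above.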