Column schema of this dump (from the dataset-viewer statistics):

- datasetId: string, length 5–121
- author: string, length 2–42
- last_modified: unknown
- downloads: int64, 0–2.63M
- likes: int64, 0–6.49k
- tags: sequence, length 1–7.92k
- task_categories: sequence, length 0–47
- createdAt: unknown
- card: string, length 15–1M
jerseyjey/iva_1
jerseyjey
"2024-12-03T13:58:50Z"
7
0
[ "language:it", "license:unknown", "size_categories:10K<n<100K", "region:us" ]
null
"2024-12-03T13:56:13Z"
--- license: unknown language: - it pretty_name: iva_it size_categories: - 10K<n<100K ---
nschulze1/tagging_df
nschulze1
"2024-12-03T14:47:06Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T14:46:45Z"
--- dataset_info: features: - name: text dtype: string - name: tags sequence: string - name: tag_ids sequence: int64 - name: tag_names sequence: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 339323480 num_examples: 2584 download_size: 96655297 dataset_size: 339323480 configs: - config_name: default data_files: - split: train path: data/train-* ---
jmgauzan/persoJMG
jmgauzan
"2024-12-03T14:48:09Z"
7
0
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
"2024-12-03T14:48:09Z"
--- license: bigscience-bloom-rail-1.0 ---
kelbrown20/barbite
kelbrown20
"2024-12-03T15:00:56Z"
7
0
[ "license:apache-2.0", "region:us" ]
null
"2024-12-03T15:00:56Z"
--- license: apache-2.0 ---
syedkhalid076/Sentiment-Analysis
syedkhalid076
"2024-12-03T15:20:52Z"
7
0
[ "task_categories:text-classification", "task_ids:sentiment-classification", "language:en", "size_categories:100K<n<1M", "format:csv", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "sentiment-analysis", "text-classification", "balanced-dataset", "oversampling", "csv" ]
[ "text-classification" ]
"2024-12-03T15:07:52Z"
---
datasets:
- sentiment-analysis-dataset
language:
- en
task_categories:
- text-classification
task_ids:
- sentiment-classification
tags:
- sentiment-analysis
- text-classification
- balanced-dataset
- oversampling
- csv
pretty_name: Sentiment Analysis Dataset (Imbalanced)
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_examples: 83989
  - name: validation
    num_examples: 10499
  - name: test
    num_examples: 10499
  format: csv
---

# Sentiment Analysis Dataset

## Overview

This dataset is designed for sentiment analysis tasks, providing labeled examples across three sentiment categories:

- **0**: Negative
- **1**: Neutral
- **2**: Positive

It is suitable for training, validating, and testing text classification models in tasks such as social media sentiment analysis, customer feedback evaluation, and opinion mining.

---

## Dataset Details

### Key Features

- **Type**: CSV
- **Language**: English
- **Labels**:
  - `0`: Negative
  - `1`: Neutral
  - `2`: Positive
- **Pre-processing**:
  - Duplicates removed
  - Null values removed
  - Cleaned for consistency

### Dataset Split

| Split          | Rows   |
|----------------|--------|
| **Train**      | 83,989 |
| **Validation** | 10,499 |
| **Test**       | 10,499 |

### Format

Each row in the dataset consists of the following columns:

- `text`: The input text data (e.g., sentences, comments, or tweets).
- `label`: The corresponding sentiment label (`0`, `1`, or `2`).

---

## Usage

### Installation

Download the dataset from the [Hugging Face Hub](https://huggingface.co/datasets/your-dataset-path) or your preferred storage location.

### Loading the Dataset

#### Using Pandas

```python
import pandas as pd

# Load the train dataset
train_df = pd.read_csv("path_to_train.csv")
print(train_df.head())  # Columns: text, label
```

#### Using Hugging Face's `datasets` Library

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("your-dataset-path")

# Access splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]

# Example: Printing a sample
print(train_data[0])
```

---

## Example Usage

Here’s an example of using the dataset to fine-tune a sentiment analysis model with the [Hugging Face Transformers](https://huggingface.co/transformers) library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset

# Load dataset
dataset = load_dataset("your-dataset-path")

# Load model and tokenizer
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Tokenize dataset
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized_datasets = dataset.map(tokenize_function, batched=True)

# Prepare training arguments
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    load_best_model_at_end=True,
)

# Initialize Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
)

# Train model
trainer.train()
```

---

## Applications

This dataset can be used for:

1. **Social Media Sentiment Analysis**: Understand the sentiment of posts or tweets.
2. **Customer Feedback Analysis**: Evaluate reviews or feedback.
3. **Product Sentiment Trends**: Monitor public sentiment about products or services.

---

## License

This dataset is released under the **[Insert Your Chosen License Here]**. Ensure proper attribution if used in academic or commercial projects.

---

## Citation

If you use this dataset, please cite it as follows:

```
@dataset{your_name_2024,
  title = {Sentiment Analysis Dataset},
  author = {Syed Khalid Hussain},
  year = {2024},
  url = {https://huggingface.co/datasets/syedkhalid076/Sentiment-Analysis}
}
```

---

## Acknowledgments

This dataset was curated and processed by **Syed Khalid Hussain**. The author takes care to ensure high-quality data, enabling better model performance and reproducibility.

---

**Author**: Syed Khalid Hussain
laudite-ufg/preview-dialogos-sinteticos-toucantts
laudite-ufg
"2024-12-03T15:44:42Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T15:44:29Z"
--- dataset_info: features: - name: audio dtype: audio - name: Dialog dtype: int64 - name: Turn dtype: int64 - name: Speaker dtype: string - name: Voice dtype: float64 - name: Sentence dtype: string splits: - name: train num_bytes: 40183930.0 num_examples: 204 download_size: 39053433 dataset_size: 40183930.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
laudite-ufg/preview-dialogos-sinteticos-yourtts
laudite-ufg
"2024-12-03T15:46:03Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T15:45:49Z"
--- dataset_info: features: - name: audio dtype: audio - name: Dialog dtype: int64 - name: Turn dtype: int64 - name: Speaker dtype: string - name: Voice dtype: float64 - name: Sentence dtype: string splits: - name: train num_bytes: 33497754.0 num_examples: 204 download_size: 28546030 dataset_size: 33497754.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
zccneil/codelingua_dataset
zccneil
"2024-12-03T16:13:50Z"
7
0
[ "license:apache-2.0", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T16:02:54Z"
--- license: apache-2.0 ---
lazarerd/ClassicInstruments
lazarerd
"2024-12-03T16:17:56Z"
7
0
[ "license:mit", "region:us" ]
null
"2024-12-03T16:17:56Z"
--- license: mit ---
dgambettaphd/D_gen10_run2_llama2-7b_wiki_doc1000_real32_synt96
dgambettaphd
"2024-12-03T16:21:49Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T16:21:46Z"
--- dataset_info: features: - name: id dtype: int64 - name: doc dtype: string splits: - name: train num_bytes: 523819 num_examples: 1000 download_size: 288479 dataset_size: 523819 configs: - config_name: default data_files: - split: train path: data/train-* ---
tel4vn/data
tel4vn
"2024-12-03T16:33:44Z"
7
0
[ "license:apache-2.0", "region:us" ]
null
"2024-12-03T16:33:44Z"
--- license: apache-2.0 ---
juliadollis/teste3_personal_gpt
juliadollis
"2024-12-03T17:03:31Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T17:03:28Z"
--- dataset_info: features: - name: Texto dtype: string - name: Estrategia de Prompt dtype: string splits: - name: train num_bytes: 107011 num_examples: 225 download_size: 46854 dataset_size: 107011 configs: - config_name: default data_files: - split: train path: data/train-* ---
iamwillferguson/StockSensei_Ground_Truth
iamwillferguson
"2024-12-03T18:02:11Z"
7
1
[ "size_categories:1K<n<10K", "format:csv", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "financial", "stock", "prototype", "ground_truth" ]
null
"2024-12-03T17:44:32Z"
---
tags:
- financial
- stock
- prototype
- ground_truth
pretty_name: 'StockSensei: AI-Powered Stock Analysis'
---

# Financial Advice Finetuning Ground Truth Dataset

Georgia Institute of Technology, College of Computing

Authors: Hersh Dhillon, Mathan Mahendran, Will Ferguson, Ayushi Mathur, Dorsa Ajami

December 2024

## Motivation

Given the unprecedented rise of day trading, social-media-based financial advice, and trading apps, more people than ever are buying and selling stocks without proper financial literacy. Oftentimes, people make high-risk trades with little more quantitative evidence than a social media endorsement or an online trend. Access to factual, accurate, and easy-to-understand financial information is therefore more important than ever. However, there are significant ethical concerns in providing financial advice over the internet, leaving many consumers without access to clear information.

AI and Large Language Models provide an interesting opportunity to optimize a consumer's information access without providing direct financial advice. Unfortunately, developers have little to no open-source resources to train models towards this specific task. It is for this reason that we are publishing this preliminary ground-truth dataset as a resource for future developers. This was made in conjunction with our project [StockSensei: AI-Powered Stock Analysis](https://www.stocksenseiapp.xyz), and is intended to serve as a benchmark evaluation suite for LLMs fine-tuned on financial contexts.

## Overview

This dataset currently contains two files:

- The ground truth dataset of question and answer pairs
  - Created using Gemini-1.5-Flash and Langchain
- Our evaluation results after fine-tuning the following LLMs on a collection of SEC 10-K filings:
  - LLaMa 3.2 (3B, 1B)
  - Phi3-Mini (0.5B Parameters)
  - Mistral-7B
  - Gemini 1.5-Flash

This dataset can be used as a starting point for projects of larger scope or more depth, or serve as a readymade ground truth source for similar applications.

## Future Work

This field would immediately benefit from the following improvements on our work:

- A larger, more in-depth ground truth dataset that is less bound by budget constraints and rate limits
- More research into a bespoke numerical metric for applications related to financial advice
- Published results against this ground truth for significantly larger models

## Note

The source code of our project is currently unavailable for release, as it is relevant to an ongoing course at Georgia Institute of Technology. If it is approved for release at any point, a link to the open-source code will be included on this model card.
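Not from the original card: a minimal loading sketch. The two CSV file names and their column schemas are not documented above, so the snippet simply lists whatever CSV files the repository holds and prints their shapes and columns rather than assuming any schema.

```python
# Sketch only: download each CSV in the dataset repo and inspect it, assuming
# nothing about file names or columns beyond what the card states (two CSV files).
import pandas as pd
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "iamwillferguson/StockSensei_Ground_Truth"
csv_files = [f for f in list_repo_files(repo_id, repo_type="dataset") if f.endswith(".csv")]

for name in csv_files:
    path = hf_hub_download(repo_id, name, repo_type="dataset")
    df = pd.read_csv(path)
    print(name, df.shape, list(df.columns))
```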
davidfred/data_stress.csv
davidfred
"2024-12-03T18:43:01Z"
7
0
[ "license:mit", "size_categories:n<1K", "format:csv", "modality:tabular", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T18:39:44Z"
---
license: mit
---

```python
input_features = {
    "snoring range": "60",
    "respiration rate": "20",
    "body temperature": "96",
    "limb movement": "10",
    "blood oxygen ": "95",
    "eye movement": "85",
    "hours of sleep": "7",
    "heart rate ": "60",
}
```

These are the physiological parameters for which you want to predict the stress level.

**Creating a Text Representation:** The features are converted into a text string that the model can process:

```python
input_text = ", ".join([f"{key}: {value}" for key, value in input_features.items()])
```

This results in:

```text
snoring range: 60, respiration rate: 20, body temperature: 96, limb movement: 10, blood oxygen : 95, eye movement: 85, hours of sleep: 7, heart rate : 60
```

**Generating the Prediction:** The `generate_prediction` function uses the trained model to predict the stress level:

```python
prediction = generate_prediction(model, tokenizer, input_text)
```

Inside this function:

- The input text is tokenized and converted into tensors.
- The model processes the input and outputs a numerical value representing the predicted stress level.

**Displaying the Output:** The script then prints:

```python
print("Input Features:")
print(input_text)
print("\nGenerated Prediction:")
print(prediction)
```

For example, the output might be:

```text
Input Features:
snoring range: 60, respiration rate: 20, body temperature: 96, limb movement: 10, blood oxygen : 95, eye movement: 85, hours of sleep: 7, heart rate : 60

Generated Prediction:
2.5
```

**Interpretation**

- Generated Prediction (2.5 in the example): This numerical value is the stress level predicted by your model for the given input features.
- Scale of Stress Levels: The exact meaning depends on how stress levels are defined in your dataset. Assuming your stress levels are on a scale from 0 to 4:
  - 0 could represent no stress.
  - 4 could represent high stress.
- Predicted Stress Level 2.5: This would indicate a moderate stress level according to your model's prediction.

**What This Means**

- Personalized Assessment: The model provides an automated assessment of stress level based on physiological data.
- Potential Applications:
  - Health Monitoring: Can be used in wellness apps to monitor user stress levels.
  - Stress Management: Helps in identifying stress triggers and managing stress proactively.

**Important Considerations**

- Model Accuracy: The reliability of the prediction depends on the quality and size of your training data. If the dataset is small or not representative, predictions may not be accurate.
- Data Consistency: Ensure that the input features are in the same format and units as those used during training. Any discrepancy can affect the prediction accuracy.
- Model Evaluation: Evaluate your model's performance using metrics like Mean Squared Error (MSE) or R-squared on a test dataset. Cross-validation can help in assessing how well the model generalizes to unseen data.
- Ethical and Privacy Concerns: Be mindful of user privacy when collecting physiological data. Ensure compliance with data protection regulations.
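The card calls `generate_prediction(model, tokenizer, input_text)` without defining the helper or naming the model. Purely as an illustration, here is a minimal sketch of what such a helper could look like, assuming a Hugging Face sequence-regression model (`num_labels=1`); the base checkpoint below is a placeholder, not the model actually used for this dataset.

```python
# Illustrative sketch only: the actual stress model and helper are not published here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def generate_prediction(model, tokenizer, input_text):
    # Tokenize the feature string and run a forward pass without gradients.
    inputs = tokenizer(input_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # With a single regression label, the lone logit is the predicted stress level.
    return outputs.logits.squeeze().item()

# Placeholder checkpoint; a real setup would load the fine-tuned stress model instead.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

input_features = {
    "snoring range": "60", "respiration rate": "20", "body temperature": "96",
    "limb movement": "10", "blood oxygen ": "95", "eye movement": "85",
    "hours of sleep": "7", "heart rate ": "60",
}
input_text = ", ".join([f"{key}: {value}" for key, value in input_features.items()])
print(generate_prediction(model, tokenizer, input_text))
```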
k4d3/lindong
k4d3
"2024-12-05T10:31:02Z"
7
1
[ "license:wtfpl", "size_categories:n<1K", "format:json", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T19:13:48Z"
--- license: wtfpl ---
taufiqsyed/salami_neural_demo_enriched
taufiqsyed
"2024-12-03T19:19:33Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T19:19:20Z"
--- dataset_info: features: - name: audio dtype: audio - name: song_id dtype: string - name: structure dtype: string - name: start_time dtype: float64 - name: end_time dtype: float64 - name: tempos dtype: string - name: keys dtype: string - name: instruments dtype: string - name: genres dtype: string - name: moods dtype: string - name: metadata dtype: string splits: - name: train num_bytes: 26462307.0 num_examples: 10 - name: eval num_bytes: 84679315.0 num_examples: 32 download_size: 108368116 dataset_size: 111141622.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: eval path: data/eval-* ---
weqweasdas/new_8b_self_corr_standard
weqweasdas
"2024-12-03T19:54:48Z"
7
0
[ "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T19:46:05Z"
--- dataset_info: features: - name: idx dtype: int64 - name: gt dtype: string - name: my_solu dtype: string - name: messages list: - name: content dtype: string - name: role dtype: string - name: conversations list: - name: content dtype: string - name: role dtype: string - name: turn dtype: int64 - name: self_correct dtype: bool - name: ans_correct dtype: bool splits: - name: train num_bytes: 33092312445 num_examples: 2573667 download_size: 12815699361 dataset_size: 33092312445 configs: - config_name: default data_files: - split: train path: data/train-* ---
mlgawd/final_dpo_nemo_v5
mlgawd
"2024-12-03T19:48:50Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T19:48:47Z"
--- dataset_info: features: - name: questions dtype: string - name: accepted dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 28102869 num_examples: 5877 download_size: 15924726 dataset_size: 28102869 configs: - config_name: default data_files: - split: train path: data/train-* ---
hayleecs/PhySO-expressions-100K
hayleecs
"2024-12-03T19:54:52Z"
7
0
[ "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T19:54:51Z"
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 5508882 num_examples: 104289 download_size: 1352922 dataset_size: 5508882 configs: - config_name: default data_files: - split: train path: data/train-* ---
aamixsh/GENEEG
aamixsh
"2024-12-03T20:42:43Z"
7
0
[ "license:mit", "region:us" ]
null
"2024-12-03T20:15:38Z"
---
license: mit
---

The raw files contain raw EEG values (`.eeg` files) for patients, with corresponding:

1. `.art` files that denote the level of artifact at a given time position (0 -- no noise; 1 and above -- noise).
2. `.evt` files that denote external stimulus at a given time position.

The `.pkl` files contain filtered data from patients as a dictionary of numpy arrays that have been artifact-filtered (only noise level 0) and chopped into contigs of length 200. The dictionary has two keys, "wmci" and "wctrl", representing the two classes MCI and Control for the WAVi dataset. Each element in the dictionary is a list of patients' data, each stored as a list of contig arrays.

```
-wmci
|-patient 1
||-contig 1 (a [17 x 200] array)
||-contig 2
||...
||-contig N_1 (number of contigs for patient 1)
|-patient 2
||-contig 1 (a [17 x 200] array)
||-contig 2
||...
||-contig N_2
|...
-wctrl
|-patient 1
|...
```
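Not part of the card: a minimal sketch of inspecting one of the filtered `.pkl` files, assuming the dictionary layout described above ("wmci"/"wctrl" → list of patients → list of [17 x 200] contig arrays); the file name is hypothetical.

```python
# Sketch only: the file name below is a placeholder for one of the provided .pkl files.
import pickle
import numpy as np

with open("geneeg_filtered.pkl", "rb") as f:
    data = pickle.load(f)

print(data.keys())                  # expected: dict_keys(['wmci', 'wctrl'])
first_patient = data["wmci"][0]     # list of contig arrays for the first MCI patient
first_contig = np.asarray(first_patient[0])
print(first_contig.shape)           # expected: (17, 200)
```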
all-oj-gen/ds_chat_pos_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-bin
all-oj-gen
"2024-12-03T21:17:56Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T21:17:54Z"
--- dataset_info: features: - name: prompt dtype: string - name: chosen dtype: string - name: rejected dtype: string - name: rejected_traceback dtype: string - name: chosen_probs dtype: float64 - name: chosen_probs_win dtype: float64 - name: chosen_probs_lose dtype: float64 splits: - name: train num_bytes: 15411027 num_examples: 5281 download_size: 6477945 dataset_size: 15411027 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ds_chat_pos_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-bin" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
all-oj-gen/ds_chat_pos_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-full_resp_trace
all-oj-gen
"2024-12-03T21:17:58Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T21:17:56Z"
--- dataset_info: features: - name: prompt dtype: string - name: test dtype: string - name: tag dtype: string - name: id dtype: string - name: chosen list: - name: content dtype: string - name: role dtype: string - name: rejected list: - name: content dtype: string - name: role dtype: string - name: text_prompt dtype: string - name: text_chosen dtype: string - name: text_rejected dtype: string - name: generate_0 dtype: string - name: generate_0_score dtype: int64 - name: traceback_0 dtype: string - name: generate_1 dtype: string - name: generate_1_score dtype: int64 - name: traceback_1 dtype: string - name: generate_2 dtype: string - name: generate_2_score dtype: int64 - name: traceback_2 dtype: string - name: generate_3 dtype: string - name: generate_3_score dtype: int64 - name: traceback_3 dtype: string - name: probability sequence: sequence: float64 - name: rm_scores sequence: int64 splits: - name: train num_bytes: 38877377 num_examples: 5281 download_size: 15014539 dataset_size: 38877377 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ds_chat_pos_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-full_resp_trace" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
all-oj-gen/ds_chat_pos_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-bin_all_pairs
all-oj-gen
"2024-12-03T21:18:00Z"
7
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T21:17:59Z"
--- dataset_info: features: - name: prompt dtype: string - name: chosen dtype: string - name: rejected dtype: string - name: rejected_traceback dtype: string - name: test dtype: string splits: - name: train num_bytes: 31899339 num_examples: 10535 download_size: 9045135 dataset_size: 31899339 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ds_chat_pos_reflct_rmsprop_iter2_sppo_hard_new_all_oj_iter2-bin_all_pairs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
miguelribeirokk/crime_tweets_in_portuguese
miguelribeirokk
"2024-12-06T12:47:03Z"
7
0
[ "language:pt", "license:cc-by-4.0", "size_categories:10K<n<100K", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T21:20:26Z"
---
license: cc-by-4.0
language:
- pt
pretty_name: ' CrimeTrack: A Comprehensive Dataset of Crime-Related Tweets in Portuguese with Sentiment, Toxicity, and Location Information'
size_categories:
- 10K<n<100K
---

## CrimeTrack: A Comprehensive Dataset of Crime-Related Tweets in Portuguese with Sentiment, Toxicity, and Location Information

This dataset contains 61.715 tweets related to possible crime reports, labeled with categories such as "Assalto", "Roubo", "Furto", "Assédio", "Segurança Pública", "Homicídio", and "Outros", along with sentiment analysis, toxicity analysis, and location identification. A particular feature of the Portuguese language is that many words potentially related to crimes are used in non-criminal contexts, such as "O árbitro assaltou meu time ontem" (which translates to "The referee attacked my team yesterday" but here refers to a sports result, not an actual crime). Therefore, the dataset includes a KEYWORD column: if the KEYWORD column is marked as 1 and the CRIME column is 0, this suggests the phrase may be ambiguous. The dataset contains 5.000 tweets with actual crime reports, 5.000 with ambiguous expressions, and the remaining tweets are unrelated.

- **Sentiment Analysis 🙂**: The sentiment analysis was performed using PYsentimiento, which categorizes sentiments as Negative, Neutral, or Positive. More information can be found here: [Pysentimiento](https://pypi.org/project/pysentimiento/0.5.2rc3/)
- **Toxicity Analysis ☠️**: The toxicity analysis covers categories such as toxicity, severe toxicity, insult, profanity, threat, and identity attack, performed using the [Perspective API](https://perspectiveapi.com/).
- **Local ⛱️**: The entity recognition model is focused on identifying named entities, specifically locations mentioned in the tweets. The approach used for this can be found here: [Named Entity Recognition for Legal Domain](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb).

All tweets were preprocessed by removing special characters, converting to lowercase, and eliminating links.

### About this file

This dataset contains 61.715 tweets, organized into 21 columns:

1. **id**: A unique identifier for each tweet, ranging from 1 to 61.715.
2. **text**: The preprocessed text of the tweet.
3. **Perspective API Columns**:
   - INSULT: A score from 0 to 1 indicating how insulting the tweet is.
   - IDENTITY_ATTACK: A score from 0 to 1 indicating the presence of identity attacks.
   - SEVERE_TOXICITY: A score from 0 to 1 indicating the severity of toxicity in the tweet.
   - THREAT: A score from 0 to 1 indicating the presence of threats in the tweet.
   - PROFANITY: A score from 0 to 1 indicating the presence of profanity in the tweet.
   - TOXICITY: A score from 0 to 1 indicating the overall toxicity of the tweet.
4. **Sentiment Analysis Columns**:
   - POSITIVE: A score from 0 to 1 indicating how positive the tweet is.
   - NEUTRAL: A score from 0 to 1 indicating how neutral the tweet is.
   - NEGATIVE: A score from 0 to 1 indicating how negative the tweet is.
5. **Crime Labels (1 or 0, indicating whether the tweet relates to a specific crime)**:
   - ASSALTO (Assault)
   - ROUBO (Robbery)
   - FURTO (Theft)
   - ASSEDIO (Harassment)
   - SPUBLICA (Public Security)
   - HOMICIDIO (Homicide)
   - OUTRO (Other)
6. **CRIME**: A binary value (1 or 0) indicating whether the tweet contains a reference to a crime.
7. **KEYWORD**: A binary value (1 or 0) indicating the presence of a keyword potentially related to a crime.
8. **LOCAL**: A binary value (1 or 0) indicating whether a location is mentioned in the tweet.

### Sources

NTSScraper, Twitter API, [Kaggle public dataset](https://www.kaggle.com/datasets/augustop/portuguese-tweets-for-sentiment-analysis)

### Collection Methodology

- **5000** tweets from [NTSScraper](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://pypi.org/project/ntscraper/&ved=2ahUKEwiWwMmrlpOKAxWhQ7gEHYPyEt4QFnoECA0QAQ&usg=AOvVaw31NZ3phyA6R_9I324h6UQd): crime-related keywords (robbery, assault, etc.) and locations (streets, squares, etc.);
- **5000** tweets from [Twitter API](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://developer.x.com/en/docs/x-api&ved=2ahUKEwiPzsa8lpOKAxWmr5UCHdVwMLgQFnoECA0QAQ&usg=AOvVaw3JjDYY5eHRQYvC_sxMBI-2): crime-related keywords (when it was freely available for academic research);
- **51715** tweets from [Portuguese Tweets for Sentiment Analysis](https://www.kaggle.com/datasets/augustop/portuguese-tweets-for-sentiment-analysis): crime-related keywords and other tweets without context
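Not part of the original card: a minimal loading sketch, assuming the column names match the description above (lowercase `text`, upper-case `KEYWORD` and `CRIME`) and that the CSV loads as a single `train` split.

```python
# Sketch only: isolate the ambiguous tweets (crime keyword present, no actual crime).
from datasets import load_dataset

ds = load_dataset("miguelribeirokk/crime_tweets_in_portuguese", split="train")
ambiguous = ds.filter(lambda row: row["KEYWORD"] == 1 and row["CRIME"] == 0)
print(len(ambiguous), "ambiguous tweets")
```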
jvandomburgh/study_skills
jvandomburgh
"2024-12-03T21:52:36Z"
7
0
[ "license:apache-2.0", "region:us" ]
null
"2024-12-03T21:52:36Z"
--- license: apache-2.0 ---
mathreward/new_llama3_8b_3epoch
mathreward
"2024-12-03T22:28:26Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T22:28:24Z"
--- dataset_info: features: - name: idx dtype: int64 - name: gt dtype: string - name: my_solu dtype: string splits: - name: train num_bytes: 23357379 num_examples: 5000 download_size: 8366148 dataset_size: 23357379 configs: - config_name: default data_files: - split: train path: data/train-* ---
theojiang/CIVETv2_key_idea_retrieval_dataset_v3.2_gtebase_msmarco
theojiang
"2024-12-03T22:48:01Z"
7
0
[ "size_categories:10K<n<100K", "format:parquet", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T22:45:25Z"
--- dataset_info: config_name: data features: - name: passage_input_ids sequence: sequence: int64 - name: passage_attention_mask sequence: sequence: int64 - name: question_embeddings sequence: sequence: sequence: float32 splits: - name: train num_bytes: 291107008 num_examples: 55491 - name: validation num_bytes: 2681698.951096581 num_examples: 500 download_size: 222612190 dataset_size: 293788706.9510966 configs: - config_name: data data_files: - split: train path: data/train-* - split: validation path: data/validation-* ---
julianamarques/tabela-taco
julianamarques
"2024-12-03T23:31:26Z"
7
0
[ "language:pt", "license:cc0-1.0", "size_categories:n<1K", "format:csv", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "food", "brazil" ]
null
"2024-12-03T22:47:28Z"
--- license: cc0-1.0 language: - pt tags: - food - brazil pretty_name: tabela-taco --- # Dataset: TACO - Tabela Brasileira de Composição de Alimentos The TACO Table is the reference nutritional table for foods consumed in Brazil. The information contained in this dataset was taken from the Excel file made available by NEPA - Center for Studies and Research in Food at UNICAMP, available at: https://nepa.unicamp.br/publicacoes/tabela-taco-excel/
dakies/verilog_dataset_near_dedup
dakies
"2024-12-03T23:24:45Z"
7
0
[ "task_categories:text-generation", "license:mit", "region:us" ]
[ "text-generation" ]
"2024-12-03T23:03:54Z"
---
license: mit
task_categories:
- text-generation
---

- Original dataset size: 21725
- Number of duplicate clusters: 2951
- Files in duplicate cluster: 8054
- Unique files in duplicate cluster: 3781
- Filtered dataset size: 17452
- Time to deduplicate dataset: 7.37
- Size of deduplicated dataset: 17452 (old dataset size: 21725)
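As a sanity check on these counts (an inference from the numbers above, not a statement by the dataset author): near-deduplication keeps one representative per duplicate cluster, so the filtered size should be 21725 - (8054 - 3781) = 17452, which matches the reported value.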
mathreward/new_8b_llama31_3epoch_selfcorr_horizon2_tmp07
mathreward
"2024-12-03T23:17:13Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T23:17:01Z"
--- dataset_info: features: - name: idx dtype: int64 - name: gt dtype: string - name: level dtype: string - name: type dtype: string - name: my_solu dtype: string - name: pred sequence: string splits: - name: train num_bytes: 23759204 num_examples: 5000 download_size: 7318365 dataset_size: 23759204 --- # Dataset Card for "new_8b_llama31_3epoch_selfcorr_horizon2_tmp0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dakies/OSS_Verilog_Near_Dedup
dakies
"2024-12-03T23:47:05Z"
7
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-03T23:35:45Z"
--- dataset_info: features: - name: module dtype: string splits: - name: train num_bytes: 87198656 num_examples: 17473 download_size: 23240516 dataset_size: 87198656 configs: - config_name: default data_files: - split: train path: data/train-* ---
CambioMoney/ami-speaker-analysis_deepgram_run_validation
CambioMoney
"2024-12-04T00:32:55Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T00:32:53Z"
--- dataset_info: features: - name: meeting_id dtype: string - name: audio_id dtype: string - name: text dtype: string - name: audio struct: - name: array sequence: float64 - name: path dtype: string - name: sampling_rate dtype: int64 - name: begin_time dtype: float64 - name: end_time dtype: float64 - name: microphone_id dtype: string - name: speaker_id dtype: string - name: is_complete dtype: bool - name: original_segment dtype: bool - name: confidence dtype: float64 splits: - name: train num_bytes: 364165 num_examples: 2 download_size: 73560 dataset_size: 364165 configs: - config_name: default data_files: - split: train path: data/train-* ---
ramdaZ/HoshinoAI
ramdaZ
"2024-12-04T01:03:53Z"
7
0
[ "license:llama3.2", "region:us" ]
null
"2024-12-04T01:03:53Z"
--- license: llama3.2 ---
juliadollis/mistral_gptv3_2fewshot_limiar2
juliadollis
"2024-12-04T01:24:32Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T01:24:30Z"
--- dataset_info: features: - name: text dtype: string - name: target_group dtype: string - name: factual? dtype: string - name: ingroup_effect dtype: string - name: lewd dtype: string - name: framing dtype: string - name: predicted_group dtype: string - name: stereotyping dtype: string - name: intent dtype: float64 - name: toxicity_ai dtype: float64 - name: toxicity_human dtype: float64 - name: predicted_author dtype: string - name: actual_method dtype: string - name: is_toxic dtype: int64 - name: predicted_is_toxic dtype: int64 - name: y_true dtype: int64 splits: - name: train num_bytes: 393176 num_examples: 940 download_size: 83147 dataset_size: 393176 configs: - config_name: default data_files: - split: train path: data/train-* ---
infinite-dataset-hub/DocNER_German
infinite-dataset-hub
"2024-12-04T01:26:51Z"
7
0
[ "license:mit", "size_categories:n<1K", "format:csv", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "infinite-dataset-hub", "synthetic" ]
null
"2024-12-04T01:26:50Z"
--- license: mit tags: - infinite-dataset-hub - synthetic --- # DocNER_German tags: Named Entity Recognition, German, Corporate, Legal Documents _Note: This is an AI-generated dataset so its content may be inaccurate or false_ **Dataset Description:** The 'DocNER_German' dataset is a collection of German corporate and legal documents annotated for Named Entity Recognition (NER) purposes. It contains texts extracted from real-world scenarios including contracts, agreements, and corporate communications. Each document has been labeled with entities such as 'Person', 'Organization', 'Location', 'Date', 'Money', 'LegalTerm', and 'Event'. **CSV Content Preview:** ```csv Document ID,Text,Labels 001,Dies ist eine Referenzzahlung an die Beteiligten des Kartells. Die Zahlung soll im Juni erfolgen. Labels: Person, Event, Money 002,Unser neues Büro befindet sich in der Berliner Straße 123. Es wird am 1. Juli bezogen. Labels: Location, Date 003,Der Vertrag wurde unterzeichnet von Herrn Müller und Frau Schmidt am 5. April. Labels: Person 004,Die GmbH hat kürzlich einen Vertrag mit dem Partnerunternehmen für den Vertrieb abgeschlossen. Labels: Organization, Event 005,Das Kapitalanteil wurde letztes Jahr im Mai aufgestockt. Der neue Anteil beträgt 35%. Labels: Money, Event ``` **Source of the data:** The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'German Doc': - **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=German+Doc&dataset=DocNER_German&tags=Named+Entity+Recognition,+German,+Corporate,+Legal+Documents - **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct - **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
Cyberz/DatensatzTextErkennung
Cyberz
"2024-12-04T01:54:01Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "library:distilabel", "region:us", "synthetic", "distilabel", "rlaif", "datacraft" ]
null
"2024-12-04T01:53:59Z"
--- size_categories: n<1K dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': persönlich '1': e-mail '2': interne-mitteilung '3': technischer-bericht '4': protokoll '5': marketingmaterial '6': wichtig '7': ausarbeit '8': auftrag '9': kundenbeschwerde '10': geschäftsbrief '11': information '12': behörden '13': pressemitteilung '14': projektplan '15': amt '16': vertrag '17': rechnung splits: - name: train num_bytes: 4108 num_examples: 10 download_size: 6199 dataset_size: 4108 configs: - config_name: default data_files: - split: train path: data/train-* tags: - synthetic - distilabel - rlaif - datacraft --- <p align="left"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> # Dataset Card for DatensatzTextErkennung This dataset has been created with [distilabel](https://distilabel.argilla.io/). ## Dataset Summary This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI: ```console distilabel pipeline run --config "https://huggingface.co/datasets/Cyberz/DatensatzTextErkennung/raw/main/pipeline.yaml" ``` or explore the configuration: ```console distilabel pipeline info --config "https://huggingface.co/datasets/Cyberz/DatensatzTextErkennung/raw/main/pipeline.yaml" ``` ## Dataset structure The examples have the following structure per configuration: <details><summary> Configuration: default </summary><hr> ```json { "label": 10, "text": "Dear Sir/Madam, I am writing to inform you that the delivery of goods has been postponed due to unforeseen circumstances. The new estimated date of delivery is now set for the 15th of next month. Please note that we will provide an updated delivery schedule in due course. Thank you for your understanding and cooperation." } ``` This subset can be loaded as: ```python from datasets import load_dataset ds = load_dataset("Cyberz/DatensatzTextErkennung", "default") ``` Or simply as it follows, since there's only one configuration and is named `default`: ```python from datasets import load_dataset ds = load_dataset("Cyberz/DatensatzTextErkennung") ``` </details>
dgambettaphd/D_gen3_run2_llama2-7b_wiki_doc1000_real64_synt64
dgambettaphd
"2024-12-04T02:01:27Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T02:01:24Z"
--- dataset_info: features: - name: id dtype: int64 - name: doc dtype: string splits: - name: train num_bytes: 586437 num_examples: 1000 download_size: 355187 dataset_size: 586437 configs: - config_name: default data_files: - split: train path: data/train-* ---
julia-se/tracka_qwen_fewshot_disgust
julia-se
"2024-12-04T03:00:21Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T02:42:57Z"
--- dataset_info: features: - name: id dtype: string - name: text dtype: string - name: Anger dtype: int64 - name: Disgust dtype: int64 - name: Fear dtype: int64 - name: Joy dtype: int64 - name: Sadness dtype: int64 - name: Surprise dtype: int64 - name: predicted_is_disgust dtype: int64 - name: y_disgust dtype: int64 splits: - name: train num_bytes: 472807 num_examples: 2226 download_size: 216687 dataset_size: 472807 configs: - config_name: default data_files: - split: train path: data/train-* ---
dgambettaphd/D_gen4_run2_llama2-7b_wiki_doc1000_real64_synt64
dgambettaphd
"2024-12-04T03:08:18Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T03:08:16Z"
--- dataset_info: features: - name: id dtype: int64 - name: doc dtype: string splits: - name: train num_bytes: 586820 num_examples: 1000 download_size: 355310 dataset_size: 586820 configs: - config_name: default data_files: - split: train path: data/train-* ---
iammytoo/vl-mia
iammytoo
"2024-12-04T03:17:32Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T03:17:11Z"
--- dataset_info: features: - name: image dtype: image - name: label dtype: int64 splits: - name: train num_bytes: 46806497.0 num_examples: 596 download_size: 46384186 dataset_size: 46806497.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
slbimp/sdkdateset
slbimp
"2024-12-04T03:49:58Z"
7
0
[ "license:mit", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "region:us" ]
null
"2024-12-04T03:20:34Z"
--- license: mit ---
julia-se/tracka_qwen_zeroshot_anger
julia-se
"2024-12-04T03:30:05Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T03:30:04Z"
--- dataset_info: features: - name: id dtype: string - name: text dtype: string - name: Anger dtype: int64 - name: Disgust dtype: int64 - name: Fear dtype: int64 - name: Joy dtype: int64 - name: Sadness dtype: int64 - name: Surprise dtype: int64 - name: predicted_is_anger dtype: int64 - name: y_anger dtype: int64 splits: - name: train num_bytes: 472807 num_examples: 2226 download_size: 220741 dataset_size: 472807 configs: - config_name: default data_files: - split: train path: data/train-* ---
dgambettaphd/D_gen5_run2_llama2-7b_wiki_doc1000_real64_synt64
dgambettaphd
"2024-12-04T04:16:15Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T04:16:12Z"
--- dataset_info: features: - name: id dtype: int64 - name: doc dtype: string splits: - name: train num_bytes: 586818 num_examples: 1000 download_size: 355110 dataset_size: 586818 configs: - config_name: default data_files: - split: train path: data/train-* ---
all-oj-gen/ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_all_oj_iter4-full_resp_trace
all-oj-gen
"2024-12-04T04:28:04Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T04:28:03Z"
--- dataset_info: features: - name: prompt dtype: string - name: test dtype: string - name: tag dtype: string - name: id dtype: string - name: chosen list: - name: content dtype: string - name: role dtype: string - name: rejected list: - name: content dtype: string - name: role dtype: string - name: text_prompt dtype: string - name: text_chosen dtype: string - name: text_rejected dtype: string - name: generate_0 dtype: string - name: generate_0_score dtype: int64 - name: traceback_0 dtype: string - name: generate_1 dtype: string - name: generate_1_score dtype: int64 - name: traceback_1 dtype: string - name: generate_2 dtype: string - name: generate_2_score dtype: int64 - name: traceback_2 dtype: string - name: generate_3 dtype: string - name: generate_3_score dtype: int64 - name: traceback_3 dtype: string - name: probability sequence: sequence: float64 - name: rm_scores sequence: int64 splits: - name: train num_bytes: 36570548 num_examples: 4898 download_size: 14321011 dataset_size: 36570548 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_all_oj_iter4-full_resp_trace" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
all-oj-gen/ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_all_oj_iter4-bin_all_pairs
all-oj-gen
"2024-12-04T04:28:07Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T04:28:05Z"
--- dataset_info: features: - name: prompt dtype: string - name: chosen dtype: string - name: rejected dtype: string - name: rejected_traceback dtype: string - name: test dtype: string splits: - name: train num_bytes: 29905524 num_examples: 9774 download_size: 8573243 dataset_size: 29905524 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_all_oj_iter4-bin_all_pairs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lomit/B3M5d3_Fundamentals_of_Financial_Management
lomit
"2024-12-04T04:43:44Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T04:43:39Z"
--- dataset_info: features: - name: input dtype: string - name: actual_output dtype: 'null' - name: expected_output dtype: string - name: context sequence: string - name: retrieval_context dtype: 'null' - name: n_chunks_per_context dtype: int64 - name: context_length dtype: int64 - name: evolutions sequence: string - name: context_quality dtype: 'null' - name: synthetic_input_quality dtype: float64 - name: source_file dtype: 'null' splits: - name: train num_bytes: 2008107 num_examples: 1144 download_size: 818026 dataset_size: 2008107 configs: - config_name: default data_files: - split: train path: data/train-* ---
1231czx/orm_5e6_bz128_head_test_mistral_math
1231czx
"2024-12-04T05:15:55Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T05:15:51Z"
--- dataset_info: features: - name: prompt dtype: string - name: answers sequence: string - name: rewards sequence: float64 - name: label sequence: int64 splits: - name: train num_bytes: 318457011 num_examples: 500 download_size: 71900914 dataset_size: 318457011 configs: - config_name: default data_files: - split: train path: data/train-* ---
dgambettaphd/D_gen6_run2_llama2-7b_wiki_doc1000_real64_synt64
dgambettaphd
"2024-12-04T05:23:04Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T05:23:01Z"
--- dataset_info: features: - name: id dtype: int64 - name: doc dtype: string splits: - name: train num_bytes: 586785 num_examples: 1000 download_size: 354942 dataset_size: 586785 configs: - config_name: default data_files: - split: train path: data/train-* ---
dgambettaphd/D_gen7_run2_llama2-7b_wiki_doc1000_real64_synt64
dgambettaphd
"2024-12-04T06:30:02Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T06:29:59Z"
--- dataset_info: features: - name: id dtype: int64 - name: doc dtype: string splits: - name: train num_bytes: 586792 num_examples: 1000 download_size: 354965 dataset_size: 586792 configs: - config_name: default data_files: - split: train path: data/train-* ---
dgambettaphd/D_gen8_run2_llama2-7b_wiki_doc1000_real64_synt64
dgambettaphd
"2024-12-04T07:36:11Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T07:36:08Z"
--- dataset_info: features: - name: id dtype: int64 - name: doc dtype: string splits: - name: train num_bytes: 586816 num_examples: 1000 download_size: 354920 dataset_size: 586816 configs: - config_name: default data_files: - split: train path: data/train-* ---
helper2424/eval_koch_move_obj
helper2424
"2024-12-04T07:59:38Z"
7
0
[ "task_categories:robotics", "region:us", "LeRobot", "tutorial", "eval" ]
[ "robotics" ]
"2024-12-04T07:44:55Z"
--- task_categories: - robotics tags: - LeRobot - tutorial - eval --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
dgambettaphd/D_gen9_run2_llama2-7b_wiki_doc1000_real64_synt64
dgambettaphd
"2024-12-04T08:43:40Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T08:43:37Z"
--- dataset_info: features: - name: id dtype: int64 - name: doc dtype: string splits: - name: train num_bytes: 586813 num_examples: 1000 download_size: 354940 dataset_size: 586813 configs: - config_name: default data_files: - split: train path: data/train-* ---
mlgawd/final_dpo_nemo_v16
mlgawd
"2024-12-04T08:50:54Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T08:50:51Z"
--- dataset_info: features: - name: questions dtype: string - name: accepted dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 27178334 num_examples: 5845 download_size: 15360617 dataset_size: 27178334 configs: - config_name: default data_files: - split: train path: data/train-* ---
davidberenstein1957/my-distiset-35e8144f
davidberenstein1957
"2024-12-04T09:48:37Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "library:distilabel", "region:us", "synthetic", "distilabel", "rlaif", "datacraft" ]
null
"2024-12-04T09:48:32Z"
--- size_categories: n<1K dataset_info: features: - name: prompt dtype: string - name: completion dtype: string - name: system_prompt dtype: string splits: - name: train num_bytes: 26054 num_examples: 10 download_size: 21240 dataset_size: 26054 configs: - config_name: default data_files: - split: train path: data/train-* tags: - synthetic - distilabel - rlaif - datacraft --- <p align="left"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> # Dataset Card for my-distiset-35e8144f This dataset has been created with [distilabel](https://distilabel.argilla.io/). ## Dataset Summary This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI: ```console distilabel pipeline run --config "https://huggingface.co/datasets/davidberenstein1957/my-distiset-35e8144f/raw/main/pipeline.yaml" ``` or explore the configuration: ```console distilabel pipeline info --config "https://huggingface.co/datasets/davidberenstein1957/my-distiset-35e8144f/raw/main/pipeline.yaml" ``` ## Dataset structure The examples have the following structure per configuration: <details><summary> Configuration: default </summary><hr> ```json { "completion": "To find the value of the given expression, we need to follow the order of operations (PEMDAS):\n\n1. Evaluate the exponent `x^2`\n2. Perform the subtraction `2x^2 - 5y`\n3. Square the result of the subtraction\n4. Evaluate the addition `3x` and division `2y / (x + y)`\n5. Add the squared result with the outcome of the addition and division\n\nHere\u0027s how you can implement it using Python:\n\n```python\ndef calculate_expression(x, y):\n # Follow the order of operations\n squared_term = (2 * x ** 2 - 5 * y) ** 2\n division_term = (2 * y) / (x + y)\n addition_term = 3 * x\n final_result = squared_term + addition_term + division_term\n \n return final_result\n\n# Example usage:\nx = 2\ny = 1\nresult = calculate_expression(x, y)\nprint(\"The final result is: \", result)\n```\n\nHowever, we should handle potential division by zero errors by adding a condition to the code:\n\n```python\ndef calculate_expression(x, y):\n # Check for potential division by zero error\n if (x + y) == 0:\n return \"Error: Division by zero\"\n \n # Follow the order of operations\n squared_term = (2 * x ** 2 - 5 * y) ** 2\n division_term = (2 * y) / (x + y)\n addition_term = 3 * x\n final_result = squared_term + addition_term + division_term\n \n return final_result\n\n# Example usage:\nx = 2\ny = 1\nresult = calculate_expression(x, y)\nprint(\"The final result is: \", result)\n```", "prompt": "Let\u0027s start with a simple math problem. If I have two numbers x and y and some operations performed on them, how can I find the value of the expression: (2x^2 - 5y)^2 + 3x + 2y / (x + y)?", "system_prompt": "You are a Python-based AI assistant designed to help users solve various mathematical puzzles and problems. Your purpose is to assist in breaking down complex mathematical operations into manageable steps, applying relevant Python coding principles, and providing solutions to a wide range of mathematics-based challenges. Ensure your responses are clear, concise, and accurate in explaining mathematical concepts and Python code. User questions are direct and concise." 
} ``` This subset can be loaded as: ```python from datasets import load_dataset ds = load_dataset("davidberenstein1957/my-distiset-35e8144f", "default") ``` Or simply as it follows, since there's only one configuration and is named `default`: ```python from datasets import load_dataset ds = load_dataset("davidberenstein1957/my-distiset-35e8144f") ``` </details>
mlgawd/final_dpo_nemo_v22
mlgawd
"2024-12-04T10:00:01Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T09:59:59Z"
--- dataset_info: features: - name: questions dtype: string - name: accepted dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 27177865 num_examples: 5845 download_size: 15359684 dataset_size: 27177865 configs: - config_name: default data_files: - split: train path: data/train-* ---
kowndinya23/flan2022-zeroshot-wrong-instr-inpt-outp-3200000
kowndinya23
"2024-12-04T10:03:24Z"
7
0
[ "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T10:02:19Z"
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string splits: - name: train num_bytes: 3080231214 num_examples: 3200000 - name: validation num_bytes: 23794055 num_examples: 24202 download_size: 1700291301 dataset_size: 3104025269 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* ---
davidberenstein1957/my-distiset-a77821df
davidberenstein1957
"2024-12-04T10:03:35Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "library:distilabel", "region:us", "synthetic", "distilabel", "rlaif", "datacraft" ]
null
"2024-12-04T10:03:33Z"
--- size_categories: n<1K dataset_info: features: - name: prompt dtype: string - name: completion dtype: string - name: system_prompt dtype: string splits: - name: train num_bytes: 30888 num_examples: 10 download_size: 21590 dataset_size: 30888 configs: - config_name: default data_files: - split: train path: data/train-* tags: - synthetic - distilabel - rlaif - datacraft --- <p align="left"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> # Dataset Card for my-distiset-a77821df This dataset has been created with [distilabel](https://distilabel.argilla.io/). ## Dataset Summary This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI: ```console distilabel pipeline run --config "https://huggingface.co/datasets/davidberenstein1957/my-distiset-a77821df/raw/main/pipeline.yaml" ``` or explore the configuration: ```console distilabel pipeline info --config "https://huggingface.co/datasets/davidberenstein1957/my-distiset-a77821df/raw/main/pipeline.yaml" ``` ## Dataset structure The examples have the following structure per configuration: <details><summary> Configuration: default </summary><hr> ```json { "completion": "### Calculating the Average of an Array of Numbers\n\nTo calculate the average of an array of numbers, we can use the following Python function:\n\n```python\ndef calculate_average(numbers):\n \"\"\"\n Calculate the average of a list of numbers.\n\n Args:\n numbers (list): A list of numbers.\n\n Returns:\n float: The average of the numbers in the list.\n \"\"\"\n if len(numbers) == 0:\n return 0 # or raise an exception, depending on your requirements\n\n return sum(numbers) / len(numbers)\n```\n\nHere\u0027s a breakdown of the function:\n\n* We first check if the input list is empty. If it is, we return 0. You can also choose to raise a `ValueError` or `ZeroDivisionError` instead, depending on your requirements.\n* If the list is not empty, we use the built-in `sum()` function to calculate the sum of all numbers in the list.\n* We then divide the sum by the length of the list using the `len()` function to get the average.\n\nHere\u0027s an example usage of the function:\n\n```python\nnumbers = [1, 2, 3, 4, 5]\naverage = calculate_average(numbers)\nprint(\"Average:\", average) # Output: 3.0\n```\n\nAlternatively, you can use the `statistics` module, which provides a `mean()` function to calculate the average:\n\n```python\nimport statistics\n\nnumbers = [1, 2, 3, 4, 5]\naverage = statistics.mean(numbers)\nprint(\"Average:\", average) # Output: 3.0\n```\n\nBoth methods will produce the same result, but using the `statistics` module is generally more concise and Pythonic.", "prompt": "What is the average of an array of numbers?\n### Define a function in Python that takes an array of numbers as an input and returns the average of the numbers in the array. \n\n", "system_prompt": "You are a Python-based AI assistant designed to assist users in solving a wide range of mathematical puzzles and problems. Your purpose is to help users work through complex mathematical concepts, analyze logical relationships, and arrive at accurate solutions using Python code. Provide step-by-step solutions, explain coding concepts, and offer helpful hints and clarification when needed. 
Offer a variety of Python libraries and techniques for problem-solving, from basic arithmetic to advanced calculus and numerical methods. User questions are direct and concise." } ``` This subset can be loaded as: ```python from datasets import load_dataset ds = load_dataset("davidberenstein1957/my-distiset-a77821df", "default") ``` Or simply as it follows, since there's only one configuration and is named `default`: ```python from datasets import load_dataset ds = load_dataset("davidberenstein1957/my-distiset-a77821df") ``` </details>
kowndinya23/flan2022-zeroshot-task-token-inpt-outp-3200000
kowndinya23
"2024-12-04T10:05:31Z"
7
0
[ "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T10:04:43Z"
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string splits: - name: train num_bytes: 2160393197 num_examples: 3200000 - name: validation num_bytes: 17110871 num_examples: 24202 download_size: 1197003393 dataset_size: 2177504068 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* ---
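The YAML above only declares the schema (string `prompt` and `response` columns over a train and a validation split); as a quick illustration, here is a minimal loading sketch with the `datasets` library, assuming the parquet files resolve through the default configuration:

```python
from datasets import load_dataset

# Load both splits declared in the card: "train" (3,200,000 rows) and "validation" (24,202 rows).
ds = load_dataset("kowndinya23/flan2022-zeroshot-task-token-inpt-outp-3200000")

# Every example carries the two string features from the schema.
example = ds["validation"][0]
print(example["prompt"])
print(example["response"])
```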
1231czx/dstrain_orm_2e6_bz128_head_test_ds_math
1231czx
"2024-12-04T10:16:00Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T10:15:53Z"
--- dataset_info: features: - name: prompt dtype: string - name: answers sequence: string - name: rewards sequence: float64 - name: label sequence: int64 splits: - name: train num_bytes: 521535090 num_examples: 500 download_size: 146150994 dataset_size: 521535090 configs: - config_name: default data_files: - split: train path: data/train-* ---
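Since this card stops at the schema, the sketch below shows one way to read the `answers`, `rewards`, and `label` sequences for a prompt; it assumes the three sequences are index-aligned per candidate answer, which the card does not state explicitly:

```python
from datasets import load_dataset

ds = load_dataset("1231czx/dstrain_orm_2e6_bz128_head_test_ds_math", split="train")

# Each of the 500 rows pairs one prompt with sequences of candidate answers,
# reward scores, and integer labels (assumed here to be aligned by index).
row = ds[0]
for answer, reward, label in zip(row["answers"], row["rewards"], row["label"]):
    print(f"label={label} reward={reward:.3f} answer={answer[:80]!r}")
```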
1231czx/dstrain_orm_2e6_bz128_head_test_ms_math
1231czx
"2024-12-04T10:17:01Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T10:16:57Z"
--- dataset_info: features: - name: prompt dtype: string - name: answers sequence: string - name: rewards sequence: float64 - name: label sequence: int64 splits: - name: train num_bytes: 318457011 num_examples: 500 download_size: 71900257 dataset_size: 318457011 configs: - config_name: default data_files: - split: train path: data/train-* ---
davidberenstein1957/my-distiset-61eea3bb
davidberenstein1957
"2024-12-04T10:32:22Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "library:distilabel", "region:us", "synthetic", "distilabel", "rlaif", "datacraft" ]
null
"2024-12-04T10:32:19Z"
--- size_categories: n<1K dataset_info: features: - name: prompt dtype: string - name: completion dtype: string - name: system_prompt dtype: string splits: - name: train num_bytes: 32289 num_examples: 10 download_size: 28849 dataset_size: 32289 configs: - config_name: default data_files: - split: train path: data/train-* tags: - synthetic - distilabel - rlaif - datacraft --- <p align="left"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> # Dataset Card for my-distiset-61eea3bb This dataset has been created with [distilabel](https://distilabel.argilla.io/). ## Dataset Summary This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI: ```console distilabel pipeline run --config "https://huggingface.co/datasets/davidberenstein1957/my-distiset-61eea3bb/raw/main/pipeline.yaml" ``` or explore the configuration: ```console distilabel pipeline info --config "https://huggingface.co/datasets/davidberenstein1957/my-distiset-61eea3bb/raw/main/pipeline.yaml" ``` ## Dataset structure The examples have the following structure per configuration: <details><summary> Configuration: default </summary><hr> ```json { "completion": "The equation for the sum of the series 1 + 2 + 3 +... + n is known as the arithmetic series sum. The equation is given by:\n\nS = n * (n + 1) / 2\n\nwhere n is the number of terms in the series, and S is the sum of the series.\n\nHere\u0027s a Python function that calculates the sum of an arithmetic series:\n\n```python\ndef arithmetic_series_sum(n):\n \"\"\"\n Calculate the sum of the first n terms of the arithmetic series 1 + 2 + 3 +... + n.\n\n Args:\n n (int): The number of terms in the series.\n\n Returns:\n int: The sum of the series.\n \"\"\"\n return n * (n + 1) // 2\n\n# Example usage:\nn = 10\nprint(arithmetic_series_sum(n)) # Output: 55\n```\n\nIn this code, the function `arithmetic_series_sum(n)` takes an integer `n` as input and returns the sum of the first `n` terms of the arithmetic series. The `//` operator is used for integer division, which returns the quotient in which the digits after the decimal point are not taken into account.\n\nThis equation can be proven using the formula for the sum of an arithmetic series. If you\u0027re interested, I can provide more information on the proof.", "prompt": "What is the equation for the sum of the series 1 + 2 + 3 +... + n?", "system_prompt": "You are a Python-based AI assistant designed to solve a wide range of mathematical puzzles and problems. Your purpose is to assist users in solving equation-based, logic-based, and number theory-based puzzles using Python programming language. Provide step-by-step solutions, share relevant code snippets, and offer explanations for the underlying mathematical concepts. Utilize popular Python libraries and frameworks such as SymPy, NumPy, and Matplotlib as needed. Guide users through the problem-solving process, break down complex concepts into manageable parts, and offer tips for improving their math problem-solving skills in Python. User questions are direct and concise." 
} ``` This subset can be loaded as: ```python from datasets import load_dataset ds = load_dataset("davidberenstein1957/my-distiset-61eea3bb", "default") ``` Or simply as it follows, since there's only one configuration and is named `default`: ```python from datasets import load_dataset ds = load_dataset("davidberenstein1957/my-distiset-61eea3bb") ``` </details>
davidberenstein1957/my-distiset-d61f0a8f
davidberenstein1957
"2024-12-04T10:38:21Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "library:distilabel", "region:us", "synthetic", "distilabel", "rlaif", "datacraft" ]
null
"2024-12-04T10:38:18Z"
--- size_categories: n<1K dataset_info: features: - name: prompt dtype: string - name: completion dtype: string - name: system_prompt dtype: string splits: - name: train num_bytes: 29835 num_examples: 10 download_size: 26650 dataset_size: 29835 configs: - config_name: default data_files: - split: train path: data/train-* tags: - synthetic - distilabel - rlaif - datacraft --- <p align="left"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> # Dataset Card for my-distiset-d61f0a8f This dataset has been created with [distilabel](https://distilabel.argilla.io/). ## Dataset Summary This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI: ```console distilabel pipeline run --config "https://huggingface.co/datasets/davidberenstein1957/my-distiset-d61f0a8f/raw/main/pipeline.yaml" ``` or explore the configuration: ```console distilabel pipeline info --config "https://huggingface.co/datasets/davidberenstein1957/my-distiset-d61f0a8f/raw/main/pipeline.yaml" ``` ## Dataset structure The examples have the following structure per configuration: <details><summary> Configuration: default </summary><hr> ```json { "completion": "You haven\u0027t provided the equation yet. Please provide the equation you\u0027d like help solving, and I\u0027ll be happy to assist you in finding the value of x.", "prompt": "What is x in the following equation: \n\n", "system_prompt": "You are an AI assistant designed to assist with solving a wide range of mathematical puzzles and problems using Python programming language. Your purpose is to help users break down complex mathematical concepts into executable code, providing step-by-step solutions and clear explanations for various mathematical puzzles and problems. Guide users through the process of translating mathematical concepts into Python code, offer suggestions for different approaches and techniques, and provide code snippets and examples to illustrate key concepts. User questions are direct and concise." } ``` This subset can be loaded as: ```python from datasets import load_dataset ds = load_dataset("davidberenstein1957/my-distiset-d61f0a8f", "default") ``` Or simply as it follows, since there's only one configuration and is named `default`: ```python from datasets import load_dataset ds = load_dataset("davidberenstein1957/my-distiset-d61f0a8f") ``` </details>
siqi00/llama3_gsm8k_question_gsmlike_unhelpful2_0.6_0.9_50_256
siqi00
"2024-12-04T11:02:14Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T11:02:13Z"
--- dataset_info: features: - name: real list: - name: content dtype: string - name: role dtype: string - name: generated_0 list: - name: content dtype: string - name: role dtype: string - name: generated_1 list: - name: content dtype: string - name: role dtype: string - name: generated_2 list: - name: content dtype: string - name: role dtype: string - name: generated_3 list: - name: content dtype: string - name: role dtype: string - name: generated_4 list: - name: content dtype: string - name: role dtype: string - name: generated_5 list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 35644746 num_examples: 7473 download_size: 14237532 dataset_size: 35644746 configs: - config_name: default data_files: - split: train path: data/train-* ---
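The nested features above are message lists with `role` and `content` fields; the short sketch below inspects one real conversation next to its first generated counterpart (treating these lists as chat transcripts is an assumption based on those field names alone):

```python
from datasets import load_dataset

ds = load_dataset(
    "siqi00/llama3_gsm8k_question_gsmlike_unhelpful2_0.6_0.9_50_256", split="train"
)

row = ds[0]
# "real" and "generated_0" ... "generated_5" share the same message structure.
for message in row["real"]:
    print(f"[real] {message['role']}: {message['content'][:120]}")
for message in row["generated_0"]:
    print(f"[generated_0] {message['role']}: {message['content'][:120]}")
```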
engrodawood/segment_rand
engrodawood
"2024-12-04T11:48:46Z"
7
0
[ "license:mit", "size_categories:n<1K", "format:parquet", "modality:image", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T11:47:08Z"
--- license: mit ---
AsmaaMahmoudSaeddd/testdataset1
AsmaaMahmoudSaeddd
"2024-12-04T11:57:18Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T11:57:13Z"
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 90910.0 num_examples: 3 download_size: 90456 dataset_size: 90910.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
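With only an `image` feature and three rows, the quickest way to check the contents is to decode them directly; a minimal sketch, assuming the `datasets` image feature decodes to PIL images as usual:

```python
from datasets import load_dataset

ds = load_dataset("AsmaaMahmoudSaeddd/testdataset1", split="train")

# The "image" column is decoded lazily into PIL images on access.
for i, row in enumerate(ds):
    print(i, row["image"].size, row["image"].mode)
```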
1231czx/ds_data_trained_orm_test_on_mistral_math
1231czx
"2024-12-04T12:10:02Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T12:09:58Z"
--- dataset_info: features: - name: prompt dtype: string - name: answers sequence: string - name: label sequence: int64 - name: step_scores sequence: float64 splits: - name: train num_bytes: 318457011 num_examples: 500 download_size: 71568686 dataset_size: 318457011 configs: - config_name: default data_files: - split: train path: data/train-* ---
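This card again lists only the schema; the hedged sketch below just peeks at the per-row `label` and `step_scores` sequences (whether the scores are per answer or per reasoning step is not stated, so the code only reports lengths):

```python
from datasets import load_dataset

ds = load_dataset("1231czx/ds_data_trained_orm_test_on_mistral_math", split="train")

row = ds[0]
print(len(row["answers"]), "candidate answers for the first prompt")
print(len(row["label"]), "labels,", len(row["step_scores"]), "step scores")
```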
FacuNdito/MPRAM
FacuNdito
"2024-12-04T20:56:20Z"
7
0
[ "license:apache-2.0", "region:us" ]
null
"2024-12-04T12:21:20Z"
--- license: apache-2.0 configs: - config_name: default data_files: - split: train path: "data.csv" ---

not-lain/distiset-arabic
not-lain
"2024-12-06T15:01:12Z"
7
0
[ "language:en", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "library:distilabel", "region:us", "synthetic", "distilabel", "rlaif", "datacraft" ]
null
"2024-12-04T12:31:20Z"
--- language: - en size_categories: n<1K dataset_info: features: - name: prompt dtype: string - name: completion dtype: string - name: system_prompt dtype: string splits: - name: train num_bytes: 326188 num_examples: 100 download_size: 143758 dataset_size: 326188 configs: - config_name: default data_files: - split: train path: data/train-* tags: - synthetic - distilabel - rlaif - datacraft --- <p align="left"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> # Dataset Card for my-distiset-8f4c564d This dataset has been created with [distilabel](https://distilabel.argilla.io/). ## Dataset Summary This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI: ```console distilabel pipeline run --config "https://huggingface.co/datasets/not-lain/my-distiset-8f4c564d/raw/main/pipeline.yaml" ``` or explore the configuration: ```console distilabel pipeline info --config "https://huggingface.co/datasets/not-lain/my-distiset-8f4c564d/raw/main/pipeline.yaml" ``` ## Dataset structure The examples have the following structure per configuration: <details><summary> Configuration: default </summary><hr> ```json { "completion": "Ahmed ibn Abu Jafar Muhammad ibn Jarir al-Tabari (839-923 CE), commonly known as al-Tabari, was a renowned Persian historian, scholar, and theologian from the Abbasid Caliphate, based in present-day Baghdad. He is considered one of the most important historians of the Islamic Golden Age. Al-Tabari\u0027s works had a profound impact on the field of history, particularly in the Middle East and North Africa.\n\n**Significance of Al-Tabari\u0027s Works**\n\nAl-Tabari wrote extensively on various subjects, including history, Quranic exegesis, theology, and hadith (sayings and actions of the Prophet Muhammad). His magnum opus, \"The History of the Prophets and Kings\" (Tarikh al-Rusul wa\u0027l-Muluk), is a comprehensive 32-volume history of the world, from the creation of Adam to the year 915 CE. This work provides a detailed account of the history of the Middle East, North Africa, and Central Asia, covering the rise and fall of empires, dynasties, and important events.\n\n**Impact on History**\n\nAl-Tabari\u0027s works had a significant impact on the development of Islamic historiography and the recording of historical events. His voluminous writings:\n\n1. **Standardized Islamic historiography**: Al-Tabari\u0027s approach to documenting historical events helped establish a standardized methodology for recording and interpreting the past in the Islamic world. His works became a model for later historians, influencing the development of Islamic historiography.\n2. **Provided a comprehensive timeline**: Al-Tabari\u0027s \"History of the Prophets and Kings\" offered a detailed chronological framework of the history of the Middle East and North Africa, stretching back to ancient times. This comprehensive timeline helped establish a shared understanding of the region\u0027s past among scholars and historians.\n3. **Interpreted the Quran and hadith**: Al-Tabari\u0027s exegesis of the Quran and collection of hadith (Prophetic traditions) provided valuable insights into the Islamic faith, its principles, and the lives of the Prophet Muhammad and his companions.\n4. 
**Documented the rise of Islam**: Al-Tabari\u0027s work shed light on the early days of Islam, providing a detailed account of the Prophet Muhammad\u0027s life, the life of the Companions, and the early Muslim community.\n\n**Legacy**\n\nAl-Tabari\u0027s works have been translated into various languages, including Arabic, Persian, Turkish, and European languages. His impact on Islamic historiography has been profound, and his writings continue to be studied by scholars, historians, and researchers around the world.\n\nThe significance of al-Tabari\u0027s works is not limited to the Islamic world. His comprehensive historical accounts and chronological framework have contributed to a broader understanding of the history of the Middle East, North Africa, and Central Asia, making him an essential figure in the study of world history.\n\nIn the words of the renowned historian, Albert Hourani, \"Al-Tabari was the greatest of all the medieval Islamic historians, and perhaps the greatest of all the medieval historians of the East.\"", "prompt": "What was the significance of the Tabari, a Persian historian from the ninth century CE, and his impact on history? \n\n", "system_prompt": "You are an AI assistant dedicated to providing accurate historical information and context about the Middle East and North Africa, with a focus on Arabic history and culture, from the earliest civilizations to the present day. Your purpose is to assist users in understanding key events, figures, and social movements that have shaped the Arab world, as well as the complex interactions and influences between Arab and non-Arab cultures. Provide clear, concise, and well-researched information, and offer additional context or insights that might help users deepen their understanding of these topics. User questions are direct and concise." } ``` This subset can be loaded as: ```python from datasets import load_dataset ds = load_dataset("not-lain/my-distiset-8f4c564d", "default") ``` Or simply as it follows, since there's only one configuration and is named `default`: ```python from datasets import load_dataset ds = load_dataset("not-lain/my-distiset-8f4c564d") ``` </details>
AsmaaMahmoudSaeddd/testdataset5
AsmaaMahmoudSaeddd
"2024-12-04T13:08:04Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T13:08:00Z"
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 90910.0 num_examples: 3 download_size: 90474 dataset_size: 90910.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
julia-se/tracka_qwen_fewshot_fear
julia-se
"2024-12-04T14:26:30Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T14:26:28Z"
--- dataset_info: features: - name: id dtype: string - name: text dtype: string - name: Anger dtype: int64 - name: Disgust dtype: int64 - name: Fear dtype: int64 - name: Joy dtype: int64 - name: Sadness dtype: int64 - name: Surprise dtype: int64 - name: predicted_is_fear dtype: int64 - name: y_fear dtype: int64 splits: - name: train num_bytes: 472807 num_examples: 2226 download_size: 220728 dataset_size: 472807 configs: - config_name: default data_files: - split: train path: data/train-* ---
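Given the `predicted_is_fear` and `y_fear` columns, a natural sanity check is to compare them; the sketch below assumes, from the column names alone, that `y_fear` is the reference label and `predicted_is_fear` the model output:

```python
from datasets import load_dataset

ds = load_dataset("julia-se/tracka_qwen_fewshot_fear", split="train")

# Simple agreement rate between the two integer columns over all 2,226 rows.
matches = sum(int(row["predicted_is_fear"] == row["y_fear"]) for row in ds)
print(f"agreement: {matches / len(ds):.3f}")
```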
open-llm-leaderboard/meta-llama__Llama-3.3-70B-Instruct-details
open-llm-leaderboard
"2024-12-06T14:57:49Z"
7
0
[ "size_categories:10K<n<100K", "format:json", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T14:39:18Z"
--- pretty_name: Evaluation run of meta-llama/Llama-3.3-70B-Instruct dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct)\n\ The dataset is composed of 38 configuration(s), each one corresponding to one of\ \ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\ \ be found as a specific split in each configuration, the split being named using\ \ the timestamp of the run.The \"train\" split is always pointing to the latest\ \ results.\n\nAn additional configuration \"results\" store all the aggregated results\ \ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\ ```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/meta-llama__Llama-3.3-70B-Instruct-details\"\ ,\n\tname=\"meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_boolean_expressions\"\ ,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\ \ from run 2024-12-04T14-39-18.066641](https://huggingface.co/datasets/open-llm-leaderboard/meta-llama__Llama-3.3-70B-Instruct-details/blob/main/meta-llama__Llama-3.3-70B-Instruct/results_2024-12-04T14-39-18.066641.json)\ \ (note that there might be results for other tasks in the repos if successive evals\ \ didn't cover the same tasks. You find each in the results and the \"latest\" split\ \ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\ \ \"acc,none\": 0.5331615691489362,\n \"acc_stderr,none\"\ : 0.004548433564124718,\n \"prompt_level_loose_acc,none\": 0.8927911275415896,\n\ \ \"prompt_level_loose_acc_stderr,none\": 0.01331353600556568,\n \ \ \"acc_norm,none\": 0.6112336230380075,\n \"acc_norm_stderr,none\"\ : 0.004909869995586035,\n \"prompt_level_strict_acc,none\": 0.8798521256931608,\n\ \ \"prompt_level_strict_acc_stderr,none\": 0.01399155582686473,\n \ \ \"exact_match,none\": 0.0022658610271903325,\n \"exact_match_stderr,none\"\ : 0.001305703553131165,\n \"inst_level_strict_acc,none\": 0.919664268585132,\n\ \ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"inst_level_loose_acc,none\"\ : 0.9280575539568345,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\ ,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\ : {\n \"acc_norm,none\": 0.6915466064919285,\n \"acc_norm_stderr,none\"\ : 0.00548646607671526,\n \"alias\": \" - leaderboard_bbh\"\n },\n\ \ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" \ \ - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.916,\n\ \ \"acc_norm_stderr,none\": 0.017578738526776348\n },\n \ \ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\ ,\n \"acc_norm,none\": 0.679144385026738,\n \"acc_norm_stderr,none\"\ : 0.03422783320926161\n },\n \"leaderboard_bbh_date_understanding\"\ : {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \ \ \"acc_norm,none\": 0.704,\n \"acc_norm_stderr,none\": 0.028928939388379697\n\ \ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\ : \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.664,\n\ \ \"acc_norm_stderr,none\": 0.029933259094191533\n },\n \ \ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\ ,\n \"acc_norm,none\": 0.82,\n \"acc_norm_stderr,none\": 0.02434689065029351\n\ \ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\"\ : \" - leaderboard_bbh_geometric_shapes\",\n \"acc_norm,none\": 0.352,\n\ \ 
\"acc_norm_stderr,none\": 0.030266288057359866\n },\n \ \ \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\ ,\n \"acc_norm,none\": 0.708,\n \"acc_norm_stderr,none\":\ \ 0.028814320402205634\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\ : {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\ ,\n \"acc_norm,none\": 0.656,\n \"acc_norm_stderr,none\":\ \ 0.03010450339231644\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\ : {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\ ,\n \"acc_norm,none\": 0.624,\n \"acc_norm_stderr,none\":\ \ 0.03069633626739458\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\ : {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\ ,\n \"acc_norm,none\": 0.92,\n \"acc_norm_stderr,none\": 0.017192507941463025\n\ \ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\ alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\ : 0.784,\n \"acc_norm_stderr,none\": 0.02607865766373279\n },\n\ \ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\ ,\n \"acc_norm,none\": 0.696,\n \"acc_norm_stderr,none\":\ \ 0.029150213374159652\n },\n \"leaderboard_bbh_object_counting\"\ : {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \ \ \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\ \ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\ alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\ : 0.6438356164383562,\n \"acc_norm_stderr,none\": 0.03976754138601307\n\ \ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \ \ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\ \ \"acc_norm,none\": 0.892,\n \"acc_norm_stderr,none\": 0.019669559381568776\n\ \ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\ \ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.852,\n \ \ \"acc_norm_stderr,none\": 0.022503547243806186\n },\n \"\ leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\ : \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\ : 0.672,\n \"acc_norm_stderr,none\": 0.029752391824475363\n },\n\ \ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\ ,\n \"acc_norm,none\": 0.8033707865168539,\n \"acc_norm_stderr,none\"\ : 0.029874139553421764\n },\n \"leaderboard_bbh_sports_understanding\"\ : {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \ \ \"acc_norm,none\": 0.952,\n \"acc_norm_stderr,none\": 0.013546884228085683\n\ \ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\ : \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 1.0,\n\ \ \"acc_norm_stderr,none\": 0.0\n },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\ : {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\ ,\n \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\":\ \ 0.029658294924545567\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\ : {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\ ,\n \"acc_norm,none\": 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\ \ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\ : {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\ ,\n \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\ \ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\ \ \" - leaderboard_bbh_web_of_lies\",\n 
\"acc_norm,none\": 0.744,\n\ \ \"acc_norm_stderr,none\": 0.027657108718204846\n },\n \ \ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3288590604026846,\n\ \ \"acc_norm_stderr,none\": 0.013615253350667551,\n \"alias\"\ : \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\ \ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\ : 0.29292929292929293,\n \"acc_norm_stderr,none\": 0.032424979581788145\n\ \ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\ \ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3424908424908425,\n\ \ \"acc_norm_stderr,none\": 0.02032718003704072\n },\n \ \ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\ ,\n \"acc_norm,none\": 0.328125,\n \"acc_norm_stderr,none\"\ : 0.0222080353262888\n },\n \"leaderboard_ifeval\": {\n \ \ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\ : 0.8798521256931608,\n \"prompt_level_strict_acc_stderr,none\": 0.013991555826864732,\n\ \ \"inst_level_strict_acc,none\": 0.919664268585132,\n \"\ inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\ : 0.8927911275415896,\n \"prompt_level_loose_acc_stderr,none\": 0.01331353600556568,\n\ \ \"inst_level_loose_acc,none\": 0.9280575539568345,\n \"\ inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\ : {\n \"exact_match,none\": 0.0022658610271903325,\n \"exact_match_stderr,none\"\ : 0.001305703553131165,\n \"alias\": \" - leaderboard_math_hard\"\n \ \ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\ : \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.0,\n\ \ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_counting_and_prob_hard\"\ : {\n \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \ \ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\ : 0.0\n },\n \"leaderboard_math_geometry_hard\": {\n \"\ alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\ : 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\ : {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\ \ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\ : 0.0\n },\n \"leaderboard_math_num_theory_hard\": {\n \ \ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\ : 0.006493506493506494,\n \"exact_match_stderr,none\": 0.006493506493506494\n\ \ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\"\ : \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\": 0.010362694300518135,\n\ \ \"exact_match_stderr,none\": 0.007308424386792209\n },\n \ \ \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" - leaderboard_math_precalculus_hard\"\ ,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\ : 0.0\n },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \"\ \ - leaderboard_mmlu_pro\",\n \"acc,none\": 0.5331615691489362,\n \ \ \"acc_stderr,none\": 0.004548433564124718\n },\n \"leaderboard_musr\"\ : {\n \"acc_norm,none\": 0.4444444444444444,\n \"acc_norm_stderr,none\"\ : 0.017253293554553326,\n \"alias\": \" - leaderboard_musr\"\n \ \ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\ \ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.536,\n\ \ \"acc_norm_stderr,none\": 0.031603975145223735\n },\n \ \ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\ ,\n \"acc_norm,none\": 0.234375,\n \"acc_norm_stderr,none\"\ : 0.02652733398834892\n },\n \"leaderboard_musr_team_allocation\"\ : {\n 
\"alias\": \" - leaderboard_musr_team_allocation\",\n \ \ \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n\ \ }\n },\n \"leaderboard\": {\n \"acc,none\": 0.5331615691489362,\n\ \ \"acc_stderr,none\": 0.004548433564124718,\n \"prompt_level_loose_acc,none\"\ : 0.8927911275415896,\n \"prompt_level_loose_acc_stderr,none\": 0.01331353600556568,\n\ \ \"acc_norm,none\": 0.6112336230380075,\n \"acc_norm_stderr,none\"\ : 0.004909869995586035,\n \"prompt_level_strict_acc,none\": 0.8798521256931608,\n\ \ \"prompt_level_strict_acc_stderr,none\": 0.01399155582686473,\n \ \ \"exact_match,none\": 0.0022658610271903325,\n \"exact_match_stderr,none\"\ : 0.001305703553131165,\n \"inst_level_strict_acc,none\": 0.919664268585132,\n\ \ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"inst_level_loose_acc,none\"\ : 0.9280575539568345,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\ \ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \ \ \"acc_norm,none\": 0.6915466064919285,\n \"acc_norm_stderr,none\": 0.00548646607671526,\n\ \ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\ : {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"\ acc_norm,none\": 0.916,\n \"acc_norm_stderr,none\": 0.017578738526776348\n\ \ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\ ,\n \"acc_norm,none\": 0.679144385026738,\n \"acc_norm_stderr,none\"\ : 0.03422783320926161\n },\n \"leaderboard_bbh_date_understanding\": {\n \ \ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\ : 0.704,\n \"acc_norm_stderr,none\": 0.028928939388379697\n },\n \"\ leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\ ,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\": 0.029933259094191533\n\ \ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\ ,\n \"acc_norm,none\": 0.82,\n \"acc_norm_stderr,none\": 0.02434689065029351\n\ \ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\ ,\n \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\ \ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\ ,\n \"acc_norm,none\": 0.708,\n \"acc_norm_stderr,none\": 0.028814320402205634\n\ \ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\ : \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\ : 0.656,\n \"acc_norm_stderr,none\": 0.03010450339231644\n },\n \"\ leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\ ,\n \"acc_norm,none\": 0.624,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\ \ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\ alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\ : 0.92,\n \"acc_norm_stderr,none\": 0.017192507941463025\n },\n \"\ leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\ ,\n \"acc_norm,none\": 0.784,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\ \ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\ ,\n \"acc_norm,none\": 0.696,\n \"acc_norm_stderr,none\": 0.029150213374159652\n\ \ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\ ,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 
0.03127799950463661\n\ \ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \ \ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.6438356164383562,\n\ \ \"acc_norm_stderr,none\": 0.03976754138601307\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\ : {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\ ,\n \"acc_norm,none\": 0.892,\n \"acc_norm_stderr,none\": 0.019669559381568776\n\ \ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\ ,\n \"acc_norm,none\": 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n\ \ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \ \ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \ \ \"acc_norm,none\": 0.672,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\ \ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\ ,\n \"acc_norm,none\": 0.8033707865168539,\n \"acc_norm_stderr,none\"\ : 0.029874139553421764\n },\n \"leaderboard_bbh_sports_understanding\": {\n\ \ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\ : 0.952,\n \"acc_norm_stderr,none\": 0.013546884228085683\n },\n \"\ leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\ ,\n \"acc_norm,none\": 1.0,\n \"acc_norm_stderr,none\": 0.0\n },\n\ \ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"alias\"\ : \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \"acc_norm,none\"\ : 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n },\n \"\ leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \"alias\"\ : \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\",\n \"acc_norm,none\"\ : 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n },\n \"\ leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \"alias\"\ : \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\",\n \"acc_norm,none\"\ : 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n },\n \"leaderboard_bbh_web_of_lies\"\ : {\n \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\ : 0.744,\n \"acc_norm_stderr,none\": 0.027657108718204846\n },\n \"\ leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3288590604026846,\n \"\ acc_norm_stderr,none\": 0.013615253350667551,\n \"alias\": \" - leaderboard_gpqa\"\ \n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\": \" - leaderboard_gpqa_diamond\"\ ,\n \"acc_norm,none\": 0.29292929292929293,\n \"acc_norm_stderr,none\"\ : 0.032424979581788145\n },\n \"leaderboard_gpqa_extended\": {\n \"\ alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3424908424908425,\n\ \ \"acc_norm_stderr,none\": 0.02032718003704072\n },\n \"leaderboard_gpqa_main\"\ : {\n \"alias\": \" - leaderboard_gpqa_main\",\n \"acc_norm,none\"\ : 0.328125,\n \"acc_norm_stderr,none\": 0.0222080353262888\n },\n \"\ leaderboard_ifeval\": {\n \"alias\": \" - leaderboard_ifeval\",\n \ \ \"prompt_level_strict_acc,none\": 0.8798521256931608,\n \"prompt_level_strict_acc_stderr,none\"\ : 0.013991555826864732,\n \"inst_level_strict_acc,none\": 0.919664268585132,\n\ \ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\ : 0.8927911275415896,\n \"prompt_level_loose_acc_stderr,none\": 0.01331353600556568,\n\ \ \"inst_level_loose_acc,none\": 0.9280575539568345,\n \"inst_level_loose_acc_stderr,none\"\ : \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\ : 0.0022658610271903325,\n 
\"exact_match_stderr,none\": 0.001305703553131165,\n\ \ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\ : {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\ : 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_counting_and_prob_hard\"\ : {\n \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \ \ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n\ \ \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\ ,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\ \ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\"\ : \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\ : 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_num_theory_hard\"\ : {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\ : 0.006493506493506494,\n \"exact_match_stderr,none\": 0.006493506493506494\n\ \ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\ ,\n \"exact_match,none\": 0.010362694300518135,\n \"exact_match_stderr,none\"\ : 0.007308424386792209\n },\n \"leaderboard_math_precalculus_hard\": {\n \ \ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\ : 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\"\ : {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.5331615691489362,\n\ \ \"acc_stderr,none\": 0.004548433564124718\n },\n \"leaderboard_musr\"\ : {\n \"acc_norm,none\": 0.4444444444444444,\n \"acc_norm_stderr,none\"\ : 0.017253293554553326,\n \"alias\": \" - leaderboard_musr\"\n },\n \ \ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\ ,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\ \ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\ \ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.234375,\n\ \ \"acc_norm_stderr,none\": 0.02652733398834892\n },\n \"leaderboard_musr_team_allocation\"\ : {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\ : 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n }\n}\n```" repo_url: https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct leaderboard_url: '' point_of_contact: '' configs: - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_boolean_expressions data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_boolean_expressions_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_boolean_expressions_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_causal_judgement data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_causal_judgement_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_causal_judgement_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_date_understanding data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_date_understanding_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_date_understanding_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_disambiguation_qa data_files: - split: 2024_12_04T14_39_18.066641 path: - 
'**/samples_leaderboard_bbh_disambiguation_qa_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_formal_fallacies data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_formal_fallacies_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_formal_fallacies_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_geometric_shapes data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_geometric_shapes_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_geometric_shapes_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_hyperbaton data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_hyperbaton_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_hyperbaton_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_logical_deduction_five_objects data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_logical_deduction_seven_objects data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_logical_deduction_three_objects data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_movie_recommendation data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_movie_recommendation_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_movie_recommendation_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_navigate data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_navigate_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_navigate_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_object_counting data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_object_counting_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_object_counting_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_penguins_in_a_table data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-04T14-39-18.066641.jsonl' - config_name: 
meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_reasoning_about_colored_objects data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_ruin_names data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_ruin_names_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_ruin_names_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_salient_translation_error_detection data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_snarks data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_snarks_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_snarks_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_sports_understanding data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_sports_understanding_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_sports_understanding_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_temporal_sequences data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_temporal_sequences_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_temporal_sequences_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_tracking_shuffled_objects_five_objects data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_tracking_shuffled_objects_seven_objects data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_tracking_shuffled_objects_three_objects data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_web_of_lies data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_bbh_web_of_lies_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_bbh_web_of_lies_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_gpqa_diamond data_files: 
- split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_gpqa_diamond_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_gpqa_diamond_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_gpqa_extended data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_gpqa_extended_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_gpqa_extended_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_gpqa_main data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_gpqa_main_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_gpqa_main_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_ifeval data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_ifeval_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_ifeval_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_math_algebra_hard data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_math_algebra_hard_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_math_algebra_hard_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_math_counting_and_prob_hard data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_math_geometry_hard data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_math_geometry_hard_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_math_geometry_hard_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_math_intermediate_algebra_hard data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_math_num_theory_hard data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_math_num_theory_hard_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_math_num_theory_hard_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_math_prealgebra_hard data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_math_prealgebra_hard_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_math_prealgebra_hard_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_math_precalculus_hard data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_math_precalculus_hard_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_math_precalculus_hard_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_mmlu_pro data_files: - split: 2024_12_04T14_39_18.066641 path: - 
'**/samples_leaderboard_mmlu_pro_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_mmlu_pro_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_musr_murder_mysteries data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_musr_murder_mysteries_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_musr_murder_mysteries_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_musr_object_placements data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_musr_object_placements_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_musr_object_placements_2024-12-04T14-39-18.066641.jsonl' - config_name: meta-llama__Llama-3.3-70B-Instruct__leaderboard_musr_team_allocation data_files: - split: 2024_12_04T14_39_18.066641 path: - '**/samples_leaderboard_musr_team_allocation_2024-12-04T14-39-18.066641.jsonl' - split: latest path: - '**/samples_leaderboard_musr_team_allocation_2024-12-04T14-39-18.066641.jsonl' --- # Dataset Card for Evaluation run of meta-llama/Llama-3.3-70B-Instruct <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run. To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset( "open-llm-leaderboard/meta-llama__Llama-3.3-70B-Instruct-details", name="meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_boolean_expressions", split="latest" ) ``` ## Latest results These are the [latest results from run 2024-12-04T14-39-18.066641](https://huggingface.co/datasets/open-llm-leaderboard/meta-llama__Llama-3.3-70B-Instruct-details/blob/main/meta-llama__Llama-3.3-70B-Instruct/results_2024-12-04T14-39-18.066641.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "leaderboard": { "acc,none": 0.5331615691489362, "acc_stderr,none": 0.004548433564124718, "prompt_level_loose_acc,none": 0.8927911275415896, "prompt_level_loose_acc_stderr,none": 0.01331353600556568, "acc_norm,none": 0.6112336230380075, "acc_norm_stderr,none": 0.004909869995586035, "prompt_level_strict_acc,none": 0.8798521256931608, "prompt_level_strict_acc_stderr,none": 0.01399155582686473, "exact_match,none": 0.0022658610271903325, "exact_match_stderr,none": 0.001305703553131165, "inst_level_strict_acc,none": 0.919664268585132, "inst_level_strict_acc_stderr,none": "N/A", "inst_level_loose_acc,none": 0.9280575539568345, "inst_level_loose_acc_stderr,none": "N/A", "alias": "leaderboard" }, "leaderboard_bbh": { "acc_norm,none": 0.6915466064919285, "acc_norm_stderr,none": 0.00548646607671526, "alias": " - leaderboard_bbh" }, "leaderboard_bbh_boolean_expressions": { "alias": " - leaderboard_bbh_boolean_expressions", "acc_norm,none": 0.916, "acc_norm_stderr,none": 0.017578738526776348 }, "leaderboard_bbh_causal_judgement": { "alias": " - leaderboard_bbh_causal_judgement", "acc_norm,none": 0.679144385026738, "acc_norm_stderr,none": 0.03422783320926161 }, "leaderboard_bbh_date_understanding": { "alias": " - leaderboard_bbh_date_understanding", "acc_norm,none": 0.704, "acc_norm_stderr,none": 0.028928939388379697 }, "leaderboard_bbh_disambiguation_qa": { "alias": " - leaderboard_bbh_disambiguation_qa", "acc_norm,none": 0.664, "acc_norm_stderr,none": 0.029933259094191533 }, "leaderboard_bbh_formal_fallacies": { "alias": " - leaderboard_bbh_formal_fallacies", "acc_norm,none": 0.82, "acc_norm_stderr,none": 0.02434689065029351 }, "leaderboard_bbh_geometric_shapes": { "alias": " - leaderboard_bbh_geometric_shapes", "acc_norm,none": 0.352, "acc_norm_stderr,none": 0.030266288057359866 }, "leaderboard_bbh_hyperbaton": { "alias": " - leaderboard_bbh_hyperbaton", "acc_norm,none": 0.708, "acc_norm_stderr,none": 0.028814320402205634 }, "leaderboard_bbh_logical_deduction_five_objects": { "alias": " - leaderboard_bbh_logical_deduction_five_objects", "acc_norm,none": 0.656, "acc_norm_stderr,none": 0.03010450339231644 }, "leaderboard_bbh_logical_deduction_seven_objects": { "alias": " - leaderboard_bbh_logical_deduction_seven_objects", "acc_norm,none": 0.624, "acc_norm_stderr,none": 0.03069633626739458 }, "leaderboard_bbh_logical_deduction_three_objects": { "alias": " - leaderboard_bbh_logical_deduction_three_objects", "acc_norm,none": 0.92, "acc_norm_stderr,none": 0.017192507941463025 }, "leaderboard_bbh_movie_recommendation": { "alias": " - leaderboard_bbh_movie_recommendation", "acc_norm,none": 0.784, "acc_norm_stderr,none": 0.02607865766373279 }, "leaderboard_bbh_navigate": { "alias": " - leaderboard_bbh_navigate", "acc_norm,none": 0.696, "acc_norm_stderr,none": 0.029150213374159652 }, "leaderboard_bbh_object_counting": { "alias": " - leaderboard_bbh_object_counting", "acc_norm,none": 0.58, "acc_norm_stderr,none": 0.03127799950463661 }, "leaderboard_bbh_penguins_in_a_table": { "alias": " - leaderboard_bbh_penguins_in_a_table", "acc_norm,none": 0.6438356164383562, "acc_norm_stderr,none": 0.03976754138601307 }, "leaderboard_bbh_reasoning_about_colored_objects": { "alias": " - leaderboard_bbh_reasoning_about_colored_objects", "acc_norm,none": 0.892, "acc_norm_stderr,none": 0.019669559381568776 }, "leaderboard_bbh_ruin_names": { "alias": " - leaderboard_bbh_ruin_names", "acc_norm,none": 0.852, "acc_norm_stderr,none": 
0.022503547243806186 }, "leaderboard_bbh_salient_translation_error_detection": { "alias": " - leaderboard_bbh_salient_translation_error_detection", "acc_norm,none": 0.672, "acc_norm_stderr,none": 0.029752391824475363 }, "leaderboard_bbh_snarks": { "alias": " - leaderboard_bbh_snarks", "acc_norm,none": 0.8033707865168539, "acc_norm_stderr,none": 0.029874139553421764 }, "leaderboard_bbh_sports_understanding": { "alias": " - leaderboard_bbh_sports_understanding", "acc_norm,none": 0.952, "acc_norm_stderr,none": 0.013546884228085683 }, "leaderboard_bbh_temporal_sequences": { "alias": " - leaderboard_bbh_temporal_sequences", "acc_norm,none": 1.0, "acc_norm_stderr,none": 0.0 }, "leaderboard_bbh_tracking_shuffled_objects_five_objects": { "alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects", "acc_norm,none": 0.324, "acc_norm_stderr,none": 0.029658294924545567 }, "leaderboard_bbh_tracking_shuffled_objects_seven_objects": { "alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects", "acc_norm,none": 0.26, "acc_norm_stderr,none": 0.027797315752644335 }, "leaderboard_bbh_tracking_shuffled_objects_three_objects": { "alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects", "acc_norm,none": 0.36, "acc_norm_stderr,none": 0.03041876402517494 }, "leaderboard_bbh_web_of_lies": { "alias": " - leaderboard_bbh_web_of_lies", "acc_norm,none": 0.744, "acc_norm_stderr,none": 0.027657108718204846 }, "leaderboard_gpqa": { "acc_norm,none": 0.3288590604026846, "acc_norm_stderr,none": 0.013615253350667551, "alias": " - leaderboard_gpqa" }, "leaderboard_gpqa_diamond": { "alias": " - leaderboard_gpqa_diamond", "acc_norm,none": 0.29292929292929293, "acc_norm_stderr,none": 0.032424979581788145 }, "leaderboard_gpqa_extended": { "alias": " - leaderboard_gpqa_extended", "acc_norm,none": 0.3424908424908425, "acc_norm_stderr,none": 0.02032718003704072 }, "leaderboard_gpqa_main": { "alias": " - leaderboard_gpqa_main", "acc_norm,none": 0.328125, "acc_norm_stderr,none": 0.0222080353262888 }, "leaderboard_ifeval": { "alias": " - leaderboard_ifeval", "prompt_level_strict_acc,none": 0.8798521256931608, "prompt_level_strict_acc_stderr,none": 0.013991555826864732, "inst_level_strict_acc,none": 0.919664268585132, "inst_level_strict_acc_stderr,none": "N/A", "prompt_level_loose_acc,none": 0.8927911275415896, "prompt_level_loose_acc_stderr,none": 0.01331353600556568, "inst_level_loose_acc,none": 0.9280575539568345, "inst_level_loose_acc_stderr,none": "N/A" }, "leaderboard_math_hard": { "exact_match,none": 0.0022658610271903325, "exact_match_stderr,none": 0.001305703553131165, "alias": " - leaderboard_math_hard" }, "leaderboard_math_algebra_hard": { "alias": " - leaderboard_math_algebra_hard", "exact_match,none": 0.0, "exact_match_stderr,none": 0.0 }, "leaderboard_math_counting_and_prob_hard": { "alias": " - leaderboard_math_counting_and_prob_hard", "exact_match,none": 0.0, "exact_match_stderr,none": 0.0 }, "leaderboard_math_geometry_hard": { "alias": " - leaderboard_math_geometry_hard", "exact_match,none": 0.0, "exact_match_stderr,none": 0.0 }, "leaderboard_math_intermediate_algebra_hard": { "alias": " - leaderboard_math_intermediate_algebra_hard", "exact_match,none": 0.0, "exact_match_stderr,none": 0.0 }, "leaderboard_math_num_theory_hard": { "alias": " - leaderboard_math_num_theory_hard", "exact_match,none": 0.006493506493506494, "exact_match_stderr,none": 0.006493506493506494 }, "leaderboard_math_prealgebra_hard": { "alias": " - leaderboard_math_prealgebra_hard", "exact_match,none": 
0.010362694300518135, "exact_match_stderr,none": 0.007308424386792209 }, "leaderboard_math_precalculus_hard": { "alias": " - leaderboard_math_precalculus_hard", "exact_match,none": 0.0, "exact_match_stderr,none": 0.0 }, "leaderboard_mmlu_pro": { "alias": " - leaderboard_mmlu_pro", "acc,none": 0.5331615691489362, "acc_stderr,none": 0.004548433564124718 }, "leaderboard_musr": { "acc_norm,none": 0.4444444444444444, "acc_norm_stderr,none": 0.017253293554553326, "alias": " - leaderboard_musr" }, "leaderboard_musr_murder_mysteries": { "alias": " - leaderboard_musr_murder_mysteries", "acc_norm,none": 0.536, "acc_norm_stderr,none": 0.031603975145223735 }, "leaderboard_musr_object_placements": { "alias": " - leaderboard_musr_object_placements", "acc_norm,none": 0.234375, "acc_norm_stderr,none": 0.02652733398834892 }, "leaderboard_musr_team_allocation": { "alias": " - leaderboard_musr_team_allocation", "acc_norm,none": 0.568, "acc_norm_stderr,none": 0.03139181076542941 } }, "leaderboard": { "acc,none": 0.5331615691489362, "acc_stderr,none": 0.004548433564124718, "prompt_level_loose_acc,none": 0.8927911275415896, "prompt_level_loose_acc_stderr,none": 0.01331353600556568, "acc_norm,none": 0.6112336230380075, "acc_norm_stderr,none": 0.004909869995586035, "prompt_level_strict_acc,none": 0.8798521256931608, "prompt_level_strict_acc_stderr,none": 0.01399155582686473, "exact_match,none": 0.0022658610271903325, "exact_match_stderr,none": 0.001305703553131165, "inst_level_strict_acc,none": 0.919664268585132, "inst_level_strict_acc_stderr,none": "N/A", "inst_level_loose_acc,none": 0.9280575539568345, "inst_level_loose_acc_stderr,none": "N/A", "alias": "leaderboard" }, "leaderboard_bbh": { "acc_norm,none": 0.6915466064919285, "acc_norm_stderr,none": 0.00548646607671526, "alias": " - leaderboard_bbh" }, "leaderboard_bbh_boolean_expressions": { "alias": " - leaderboard_bbh_boolean_expressions", "acc_norm,none": 0.916, "acc_norm_stderr,none": 0.017578738526776348 }, "leaderboard_bbh_causal_judgement": { "alias": " - leaderboard_bbh_causal_judgement", "acc_norm,none": 0.679144385026738, "acc_norm_stderr,none": 0.03422783320926161 }, "leaderboard_bbh_date_understanding": { "alias": " - leaderboard_bbh_date_understanding", "acc_norm,none": 0.704, "acc_norm_stderr,none": 0.028928939388379697 }, "leaderboard_bbh_disambiguation_qa": { "alias": " - leaderboard_bbh_disambiguation_qa", "acc_norm,none": 0.664, "acc_norm_stderr,none": 0.029933259094191533 }, "leaderboard_bbh_formal_fallacies": { "alias": " - leaderboard_bbh_formal_fallacies", "acc_norm,none": 0.82, "acc_norm_stderr,none": 0.02434689065029351 }, "leaderboard_bbh_geometric_shapes": { "alias": " - leaderboard_bbh_geometric_shapes", "acc_norm,none": 0.352, "acc_norm_stderr,none": 0.030266288057359866 }, "leaderboard_bbh_hyperbaton": { "alias": " - leaderboard_bbh_hyperbaton", "acc_norm,none": 0.708, "acc_norm_stderr,none": 0.028814320402205634 }, "leaderboard_bbh_logical_deduction_five_objects": { "alias": " - leaderboard_bbh_logical_deduction_five_objects", "acc_norm,none": 0.656, "acc_norm_stderr,none": 0.03010450339231644 }, "leaderboard_bbh_logical_deduction_seven_objects": { "alias": " - leaderboard_bbh_logical_deduction_seven_objects", "acc_norm,none": 0.624, "acc_norm_stderr,none": 0.03069633626739458 }, "leaderboard_bbh_logical_deduction_three_objects": { "alias": " - leaderboard_bbh_logical_deduction_three_objects", "acc_norm,none": 0.92, "acc_norm_stderr,none": 0.017192507941463025 }, "leaderboard_bbh_movie_recommendation": { "alias": " - 
leaderboard_bbh_movie_recommendation", "acc_norm,none": 0.784, "acc_norm_stderr,none": 0.02607865766373279 }, "leaderboard_bbh_navigate": { "alias": " - leaderboard_bbh_navigate", "acc_norm,none": 0.696, "acc_norm_stderr,none": 0.029150213374159652 }, "leaderboard_bbh_object_counting": { "alias": " - leaderboard_bbh_object_counting", "acc_norm,none": 0.58, "acc_norm_stderr,none": 0.03127799950463661 }, "leaderboard_bbh_penguins_in_a_table": { "alias": " - leaderboard_bbh_penguins_in_a_table", "acc_norm,none": 0.6438356164383562, "acc_norm_stderr,none": 0.03976754138601307 }, "leaderboard_bbh_reasoning_about_colored_objects": { "alias": " - leaderboard_bbh_reasoning_about_colored_objects", "acc_norm,none": 0.892, "acc_norm_stderr,none": 0.019669559381568776 }, "leaderboard_bbh_ruin_names": { "alias": " - leaderboard_bbh_ruin_names", "acc_norm,none": 0.852, "acc_norm_stderr,none": 0.022503547243806186 }, "leaderboard_bbh_salient_translation_error_detection": { "alias": " - leaderboard_bbh_salient_translation_error_detection", "acc_norm,none": 0.672, "acc_norm_stderr,none": 0.029752391824475363 }, "leaderboard_bbh_snarks": { "alias": " - leaderboard_bbh_snarks", "acc_norm,none": 0.8033707865168539, "acc_norm_stderr,none": 0.029874139553421764 }, "leaderboard_bbh_sports_understanding": { "alias": " - leaderboard_bbh_sports_understanding", "acc_norm,none": 0.952, "acc_norm_stderr,none": 0.013546884228085683 }, "leaderboard_bbh_temporal_sequences": { "alias": " - leaderboard_bbh_temporal_sequences", "acc_norm,none": 1.0, "acc_norm_stderr,none": 0.0 }, "leaderboard_bbh_tracking_shuffled_objects_five_objects": { "alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects", "acc_norm,none": 0.324, "acc_norm_stderr,none": 0.029658294924545567 }, "leaderboard_bbh_tracking_shuffled_objects_seven_objects": { "alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects", "acc_norm,none": 0.26, "acc_norm_stderr,none": 0.027797315752644335 }, "leaderboard_bbh_tracking_shuffled_objects_three_objects": { "alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects", "acc_norm,none": 0.36, "acc_norm_stderr,none": 0.03041876402517494 }, "leaderboard_bbh_web_of_lies": { "alias": " - leaderboard_bbh_web_of_lies", "acc_norm,none": 0.744, "acc_norm_stderr,none": 0.027657108718204846 }, "leaderboard_gpqa": { "acc_norm,none": 0.3288590604026846, "acc_norm_stderr,none": 0.013615253350667551, "alias": " - leaderboard_gpqa" }, "leaderboard_gpqa_diamond": { "alias": " - leaderboard_gpqa_diamond", "acc_norm,none": 0.29292929292929293, "acc_norm_stderr,none": 0.032424979581788145 }, "leaderboard_gpqa_extended": { "alias": " - leaderboard_gpqa_extended", "acc_norm,none": 0.3424908424908425, "acc_norm_stderr,none": 0.02032718003704072 }, "leaderboard_gpqa_main": { "alias": " - leaderboard_gpqa_main", "acc_norm,none": 0.328125, "acc_norm_stderr,none": 0.0222080353262888 }, "leaderboard_ifeval": { "alias": " - leaderboard_ifeval", "prompt_level_strict_acc,none": 0.8798521256931608, "prompt_level_strict_acc_stderr,none": 0.013991555826864732, "inst_level_strict_acc,none": 0.919664268585132, "inst_level_strict_acc_stderr,none": "N/A", "prompt_level_loose_acc,none": 0.8927911275415896, "prompt_level_loose_acc_stderr,none": 0.01331353600556568, "inst_level_loose_acc,none": 0.9280575539568345, "inst_level_loose_acc_stderr,none": "N/A" }, "leaderboard_math_hard": { "exact_match,none": 0.0022658610271903325, "exact_match_stderr,none": 0.001305703553131165, "alias": " - leaderboard_math_hard" }, 
"leaderboard_math_algebra_hard": { "alias": " - leaderboard_math_algebra_hard", "exact_match,none": 0.0, "exact_match_stderr,none": 0.0 }, "leaderboard_math_counting_and_prob_hard": { "alias": " - leaderboard_math_counting_and_prob_hard", "exact_match,none": 0.0, "exact_match_stderr,none": 0.0 }, "leaderboard_math_geometry_hard": { "alias": " - leaderboard_math_geometry_hard", "exact_match,none": 0.0, "exact_match_stderr,none": 0.0 }, "leaderboard_math_intermediate_algebra_hard": { "alias": " - leaderboard_math_intermediate_algebra_hard", "exact_match,none": 0.0, "exact_match_stderr,none": 0.0 }, "leaderboard_math_num_theory_hard": { "alias": " - leaderboard_math_num_theory_hard", "exact_match,none": 0.006493506493506494, "exact_match_stderr,none": 0.006493506493506494 }, "leaderboard_math_prealgebra_hard": { "alias": " - leaderboard_math_prealgebra_hard", "exact_match,none": 0.010362694300518135, "exact_match_stderr,none": 0.007308424386792209 }, "leaderboard_math_precalculus_hard": { "alias": " - leaderboard_math_precalculus_hard", "exact_match,none": 0.0, "exact_match_stderr,none": 0.0 }, "leaderboard_mmlu_pro": { "alias": " - leaderboard_mmlu_pro", "acc,none": 0.5331615691489362, "acc_stderr,none": 0.004548433564124718 }, "leaderboard_musr": { "acc_norm,none": 0.4444444444444444, "acc_norm_stderr,none": 0.017253293554553326, "alias": " - leaderboard_musr" }, "leaderboard_musr_murder_mysteries": { "alias": " - leaderboard_musr_murder_mysteries", "acc_norm,none": 0.536, "acc_norm_stderr,none": 0.031603975145223735 }, "leaderboard_musr_object_placements": { "alias": " - leaderboard_musr_object_placements", "acc_norm,none": 0.234375, "acc_norm_stderr,none": 0.02652733398834892 }, "leaderboard_musr_team_allocation": { "alias": " - leaderboard_musr_team_allocation", "acc_norm,none": 0.568, "acc_norm_stderr,none": 0.03139181076542941 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
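As promised above, here is a minimal usage sketch for inspecting the per-sample records of the "latest" split. It assumes only the `datasets` library and reuses the config name from the loading example earlier in this card; the exact per-record fields depend on the task.

```python
from datasets import load_dataset

# Load one task configuration of the evaluation details, exactly as in the
# example earlier in this card.
data = load_dataset(
    "open-llm-leaderboard/meta-llama__Llama-3.3-70B-Instruct-details",
    name="meta-llama__Llama-3.3-70B-Instruct__leaderboard_bbh_boolean_expressions",
    split="latest",
)

# Inspect the split: number of evaluated samples, the fields each record
# carries (these vary by task), and the first record of the run.
print(len(data))
print(data.column_names)
print(data[0])
```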
julia-se/tracka_qwen_fewshot_joy
julia-se
"2024-12-04T14:45:53Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T14:45:52Z"
--- dataset_info: features: - name: id dtype: string - name: text dtype: string - name: Anger dtype: int64 - name: Disgust dtype: int64 - name: Fear dtype: int64 - name: Joy dtype: int64 - name: Sadness dtype: int64 - name: Surprise dtype: int64 - name: predicted_is_joy dtype: int64 - name: y_joy dtype: int64 splits: - name: train num_bytes: 472807 num_examples: 2226 download_size: 220764 dataset_size: 472807 configs: - config_name: default data_files: - split: train path: data/train-* ---
uiuioy/tuga
uiuioy
"2024-12-04T15:46:31Z"
7
0
[ "license:openrail", "size_categories:n<1K", "format:audiofolder", "modality:audio", "library:datasets", "library:mlcroissant", "region:us" ]
null
"2024-12-04T15:45:29Z"
--- license: openrail ---
argilla-internal-testing/argilla-server-dataset-test-9df031af-d8ae-4afb-a8bb-c5630699bdee
argilla-internal-testing
"2024-12-04T15:53:34Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "library:argilla", "region:us", "rlfh", "argilla", "human-feedback" ]
null
"2024-12-04T15:53:31Z"
--- tags: - rlfh - argilla - human-feedback --- # Dataset Card for argilla-server-dataset-test-9df031af-d8ae-4afb-a8bb-c5630699bdee This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets). ## Using this dataset with Argilla To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code: ```python import argilla as rg ds = rg.Dataset.from_hub("argilla-internal-testing/argilla-server-dataset-test-9df031af-d8ae-4afb-a8bb-c5630699bdee", settings="auto") ``` This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation. ## Using this dataset with `datasets` To load the records of this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code: ```python from datasets import load_dataset ds = load_dataset("argilla-internal-testing/argilla-server-dataset-test-9df031af-d8ae-4afb-a8bb-c5630699bdee") ``` This will only load the records of the dataset, but not the Argilla settings; a short sketch for inspecting the loaded records follows at the end of this card. ## Dataset Structure This dataset repo contains: * Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`. * The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla. * A dataset configuration folder conforming to the Argilla dataset format in `.argilla`. The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**. ### Fields The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset. | Field Name | Title | Type | Required | | ---------- | ----- | ---- | -------- | | text | Field Title | text | False | ### Questions The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking. | Question Name | Title | Type | Required | Description | Values/Labels | | ------------- | ----- | ---- | -------- | ----------- | ------------- | <!-- check length of metadata properties --> ### Data Splits The dataset contains a single split, which is `train`. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation guidelines [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
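Building on the `datasets` example above, a minimal sketch for inspecting the single `train` split and the `text` field listed in the structure section; this is only a sketch and assumes the records expose that field as a plain `text` column.

```python
from datasets import load_dataset

# Load only the records (no Argilla settings), as in the card's second example.
ds = load_dataset(
    "argilla-internal-testing/argilla-server-dataset-test-9df031af-d8ae-4afb-a8bb-c5630699bdee"
)

# The card lists a single "train" split and one text field named "text".
records = ds["train"]
print(records.column_names)

# Print up to three records' text values.
for record in records.select(range(min(3, len(records)))):
    print(record.get("text"))
```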
mlfoundations-dev/evol_instruct_gpt-4o-mini_scale_x.125
mlfoundations-dev
"2024-12-11T17:00:05Z"
7
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T16:17:02Z"
--- dataset_info: features: - name: evolved_instruction dtype: string - name: completion dtype: string - name: conversations list: - name: from dtype: string - name: value dtype: string splits: - name: train num_bytes: 173643384 num_examples: 28022 download_size: 97314979 dataset_size: 173643384 configs: - config_name: default data_files: - split: train path: data/train-* ---
mlgawd/final_dpo_nemo_v25
mlgawd
"2024-12-04T16:24:57Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T16:24:55Z"
--- dataset_info: features: - name: questions dtype: string - name: accepted list: - name: content dtype: string - name: role dtype: string - name: rejected list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 25832769 num_examples: 5845 download_size: 14817580 dataset_size: 25832769 configs: - config_name: default data_files: - split: train path: data/train-* ---
Asteriks/chars74k-eng-good
Asteriks
"2024-12-04T21:38:39Z"
7
0
[ "task_categories:image-classification", "language:en", "size_categories:1K<n<10K", "format:parquet", "modality:image", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[ "image-classification" ]
"2024-12-04T16:39:28Z"
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': Sample001 '1': Sample002 '2': Sample003 '3': Sample004 '4': Sample005 '5': Sample006 '6': Sample007 '7': Sample008 '8': Sample009 '9': Sample010 '10': Sample011 '11': Sample012 '12': Sample013 '13': Sample014 '14': Sample015 '15': Sample016 '16': Sample017 '17': Sample018 '18': Sample019 '19': Sample020 '20': Sample021 '21': Sample022 '22': Sample023 '23': Sample024 '24': Sample025 '25': Sample026 '26': Sample027 '27': Sample028 '28': Sample029 '29': Sample030 '30': Sample031 '31': Sample032 '32': Sample033 '33': Sample034 '34': Sample035 '35': Sample036 '36': Sample037 '37': Sample038 '38': Sample039 '39': Sample040 '40': Sample041 '41': Sample042 '42': Sample043 '43': Sample044 '44': Sample045 '45': Sample046 '46': Sample047 '47': Sample048 '48': Sample049 '49': Sample050 '50': Sample051 '51': Sample052 '52': Sample053 '53': Sample054 '54': Sample055 '55': Sample056 '56': Sample057 '57': Sample058 '58': Sample059 '59': Sample060 '60': Sample061 '61': Sample062 splits: - name: train num_bytes: 73956435.184 num_examples: 6136 - name: validation num_bytes: 18902272.208 num_examples: 1564 download_size: 95517369 dataset_size: 92858707.392 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* task_categories: - image-classification language: - en pretty_name: Chars74k size_categories: - 1K<n<10K --- ## Chars74k The "Good" subset of the "English" subset of the Chars74k dataset, split into training and validation sets. The validation set was created to match the label distribution of the training set. 62 classes (0-9, A-Z, a-z). Dataset page: https://teodecampos.github.io/chars74k/ Paper describing the dataset: https://www.semanticscholar.org/paper/Character-Recognition-in-Natural-Images-Campos-Babu/dbbd5fdc09349bbfdee7aa7365a9d37716852b32 5 images were removed due to poor quality. Label distribution: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62e26d05fb4a692673b3569a/ZvllFAQgS09s8pSrIvf4N.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62e26d05fb4a692673b3569a/TtC_gAK12lUMsNWdHRel7.png)
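For reference, a minimal sketch for loading this dataset with the `datasets` library and checking the split sizes and 62 class labels described above:

```python
from datasets import load_dataset

# Load the "Good"/"English" Chars74k subset described above.
ds = load_dataset("Asteriks/chars74k-eng-good")

# The card lists 6136 training and 1564 validation images over 62 classes.
print({split: ds[split].num_rows for split in ds})
print(ds["train"].features["label"].names[:3])  # e.g. ['Sample001', 'Sample002', 'Sample003']

# Each record is an image plus an integer class label in 0..61.
example = ds["train"][0]
print(example["label"], example["image"].size)
```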
salma-remyx/vqasynth_sample_spatial_new_test
salma-remyx
"2024-12-04T16:50:25Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "vqasynth", "remyx" ]
null
"2024-12-04T16:50:18Z"
--- dataset_info: features: - name: image dtype: image - name: messages dtype: 'null' splits: - name: train num_bytes: 1363335.0 num_examples: 10 download_size: 1364618 dataset_size: 1363335.0 configs: - config_name: default data_files: - split: train path: data/train-* tags: - vqasynth - remyx ---
lgumpina/akamai-etp-samples
lgumpina
"2024-12-04T17:07:51Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T17:07:50Z"
--- dataset_info: features: - name: transformed_text dtype: string splits: - name: train num_bytes: 69153 num_examples: 13 download_size: 14411 dataset_size: 69153 configs: - config_name: default data_files: - split: train path: data/train-* ---
all-oj-gen/ds_coder6.7b_pos_reflct_rmsprop_iter3_sppo_hard_new_all_oj_iter3-bin
all-oj-gen
"2024-12-04T17:21:50Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T17:21:36Z"
--- dataset_info: features: - name: prompt dtype: string - name: chosen dtype: string - name: rejected dtype: string - name: rejected_traceback dtype: string - name: chosen_probs dtype: float64 - name: chosen_probs_win dtype: float64 - name: chosen_probs_lose dtype: float64 splits: - name: train num_bytes: 17100841 num_examples: 4967 download_size: 7534526 dataset_size: 17100841 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ds_coder6.7b_pos_reflct_rmsprop_iter3_sppo_hard_new_all_oj_iter3-bin" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maniro-ai/2024-12-04-rod-nav-tape-small-s
maniro-ai
"2024-12-04T17:36:47Z"
7
0
[ "task_categories:robotics", "region:us", "LeRobot" ]
[ "robotics" ]
"2024-12-04T17:36:43Z"
--- task_categories: - robotics tags: - LeRobot --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
maniro-ai/2024-12-04-rod-nav-tape-parallel-offset
maniro-ai
"2024-12-04T17:38:04Z"
7
0
[ "task_categories:robotics", "region:us", "LeRobot" ]
[ "robotics" ]
"2024-12-04T17:37:58Z"
--- task_categories: - robotics tags: - LeRobot --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
skdrx/python-dpo-dataset-varname-formatted-NOSYSTEMPROMPT
skdrx
"2024-12-04T18:50:28Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T18:50:27Z"
--- dataset_info: features: - name: rejected list: - name: content dtype: string - name: role dtype: string - name: chosen list: - name: content dtype: string - name: role dtype: string - name: prompt list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 911822 num_examples: 1000 download_size: 403823 dataset_size: 911822 configs: - config_name: default data_files: - split: train path: data/train-* ---
skdrx/python-dpo-dataset-varname-formatted-ONLYSYSTEMPROMPT
skdrx
"2024-12-04T18:51:20Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T18:51:18Z"
--- dataset_info: features: - name: rejected list: - name: content dtype: string - name: role dtype: string - name: chosen list: - name: content dtype: string - name: role dtype: string - name: prompt list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 1222822 num_examples: 1000 download_size: 405338 dataset_size: 1222822 configs: - config_name: default data_files: - split: train path: data/train-* ---
skdrx/python-dpo-dataset-varname-formatted-combined-ONLYSYSTEMPROMPT
skdrx
"2024-12-04T18:56:04Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T18:56:02Z"
--- dataset_info: features: - name: rejected list: - name: content dtype: string - name: role dtype: string - name: chosen list: - name: content dtype: string - name: role dtype: string - name: prompt list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 1406035 num_examples: 1000 download_size: 433764 dataset_size: 1406035 configs: - config_name: default data_files: - split: train path: data/train-* ---
julia-se/tracka_qwen_zeroshot_disgust
julia-se
"2024-12-04T20:00:49Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T20:00:47Z"
--- dataset_info: features: - name: id dtype: string - name: text dtype: string - name: Anger dtype: int64 - name: Disgust dtype: int64 - name: Fear dtype: int64 - name: Joy dtype: int64 - name: Sadness dtype: int64 - name: Surprise dtype: int64 - name: predicted_is_disgust dtype: int64 - name: y_disgust dtype: int64 splits: - name: train num_bytes: 472807 num_examples: 2226 download_size: 216656 dataset_size: 472807 configs: - config_name: default data_files: - split: train path: data/train-* ---
kawsarahmd/english_bangla_nmt_datasets_v1
kawsarahmd
"2024-12-04T20:21:53Z"
7
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T20:21:48Z"
--- dataset_info: features: - name: id dtype: string - name: news_id dtype: string - name: bn_text dtype: string - name: en_text dtype: string splits: - name: train num_bytes: 162726175 num_examples: 52800 - name: validation num_bytes: 14614772 num_examples: 4693 - name: test num_bytes: 3603573 num_examples: 1174 download_size: 81719637 dataset_size: 180944520 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
MarkLeeeee/catsData
MarkLeeeee
"2024-12-04T21:18:09Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T21:18:08Z"
--- dataset_info: features: - name: image dtype: string - name: text dtype: string splits: - name: train num_bytes: 1845 num_examples: 20 download_size: 2134 dataset_size: 1845 configs: - config_name: default data_files: - split: train path: data/train-* ---
iimaginary/fineweb-edu-10B
iimaginary
"2024-12-06T13:26:55Z"
7
0
[ "language:en", "license:apache-2.0", "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T21:18:44Z"
--- language: - en license: apache-2.0 ---
magnifi/Phi3_intent_v45_1_w_unknown
magnifi
"2024-12-04T22:01:40Z"
7
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T22:01:39Z"
--- dataset_info: features: - name: Query dtype: string - name: true_intent dtype: string splits: - name: train num_bytes: 697708 num_examples: 9729 - name: validation num_bytes: 8109 num_examples: 113 download_size: 205543 dataset_size: 705817 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* ---
IanLi233/Toxic-Chat-V2
IanLi233
"2024-12-04T22:46:10Z"
7
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "doi:10.57967/hf/3749", "region:us" ]
null
"2024-12-04T22:45:07Z"
--- dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: toxic dtype: int64 splits: - name: train num_bytes: 11169460.0 num_examples: 8132 - name: validation num_bytes: 1395495.7402852927 num_examples: 1016 - name: test num_bytes: 1396869.2597147073 num_examples: 1017 download_size: 6144718 dataset_size: 13961825.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
pclucas14/nqa-RAG-64_3_24
pclucas14
"2024-12-04T22:47:17Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T22:47:16Z"
--- dataset_info: features: - name: text sequence: sequence: string - name: questions sequence: string - name: answers sequence: sequence: string - name: document_id dtype: string - name: split dtype: string splits: - name: train num_bytes: 5793428 num_examples: 66 download_size: 1988498 dataset_size: 5793428 configs: - config_name: default data_files: - split: train path: data/train-* ---
pclucas14/nqa-RAG-64_8_24
pclucas14
"2024-12-04T22:49:27Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T22:49:26Z"
--- dataset_info: features: - name: text sequence: sequence: string - name: questions sequence: string - name: answers sequence: sequence: string - name: document_id dtype: string - name: split dtype: string splits: - name: train num_bytes: 5849479 num_examples: 66 download_size: 1882049 dataset_size: 5849479 configs: - config_name: default data_files: - split: train path: data/train-* ---
pclucas14/nqa-RAG-64_10_24
pclucas14
"2024-12-04T22:53:14Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T22:53:12Z"
--- dataset_info: features: - name: text sequence: sequence: string - name: questions sequence: string - name: answers sequence: sequence: string - name: document_id dtype: string - name: split dtype: string splits: - name: train num_bytes: 5733767 num_examples: 66 download_size: 1990656 dataset_size: 5733767 configs: - config_name: default data_files: - split: train path: data/train-* ---
pclucas14/nqa-RAG-64_4_24
pclucas14
"2024-12-04T22:54:53Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T22:54:52Z"
--- dataset_info: features: - name: text sequence: sequence: string - name: questions sequence: string - name: answers sequence: sequence: string - name: document_id dtype: string - name: split dtype: string splits: - name: train num_bytes: 5931033 num_examples: 66 download_size: 2055010 dataset_size: 5931033 configs: - config_name: default data_files: - split: train path: data/train-* ---
pclucas14/nqa-RAG-64_5_24
pclucas14
"2024-12-04T22:56:50Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T22:56:45Z"
--- dataset_info: features: - name: text sequence: sequence: string - name: questions sequence: string - name: answers sequence: sequence: string - name: document_id dtype: string - name: split dtype: string splits: - name: train num_bytes: 5862504 num_examples: 66 download_size: 2001873 dataset_size: 5862504 configs: - config_name: default data_files: - split: train path: data/train-* ---
pclucas14/nqa-RAG-64_6_24
pclucas14
"2024-12-04T22:57:02Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T22:57:00Z"
--- dataset_info: features: - name: text sequence: sequence: string - name: questions sequence: string - name: answers sequence: sequence: string - name: document_id dtype: string - name: split dtype: string splits: - name: train num_bytes: 6074705 num_examples: 66 download_size: 2094367 dataset_size: 6074705 configs: - config_name: default data_files: - split: train path: data/train-* ---
pclucas14/nqa-RAG-64_7_24
pclucas14
"2024-12-04T22:59:58Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T22:59:56Z"
--- dataset_info: features: - name: text sequence: sequence: string - name: questions sequence: string - name: answers sequence: sequence: string - name: document_id dtype: string - name: split dtype: string splits: - name: train num_bytes: 5937735 num_examples: 66 download_size: 2040001 dataset_size: 5937735 configs: - config_name: default data_files: - split: train path: data/train-* ---
pclucas14/nqa-RAG-64_9_24
pclucas14
"2024-12-04T23:04:07Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T23:04:05Z"
--- dataset_info: features: - name: text sequence: sequence: string - name: questions sequence: string - name: answers sequence: sequence: string - name: document_id dtype: string - name: split dtype: string splits: - name: train num_bytes: 5771358 num_examples: 66 download_size: 2084168 dataset_size: 5771358 configs: - config_name: default data_files: - split: train path: data/train-* ---
pclucas14/nqa-RAG-64_2_24
pclucas14
"2024-12-04T23:07:01Z"
7
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2024-12-04T23:06:59Z"
--- dataset_info: features: - name: text sequence: sequence: string - name: questions sequence: string - name: answers sequence: sequence: string - name: document_id dtype: string - name: split dtype: string splits: - name: train num_bytes: 6034839 num_examples: 66 download_size: 2118452 dataset_size: 6034839 configs: - config_name: default data_files: - split: train path: data/train-* ---