---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 65343587
    num_examples: 65594
  - name: test
    num_bytes: 16441554
    num_examples: 16399
  download_size: 50866268
  dataset_size: 81785141
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
language:
- tr
---

The dataset consists of nearly 82K {Context, Question, Answer} triplets in Turkish. Since most answers are only a few words long and are taken directly from the provided context, the dataset is best suited to finetuning encoder-only models such as BERT for extractive question answering, or to training embedding models for retrieval.

The dataset is a filtered and combined version of multiple Turkish QA datasets. Please see [ucsahin/TR-Extractive-QA-5K](https://huggingface.co/datasets/ucsahin/TR-Extractive-QA-5K) for a more detailed and sampled version of this dataset.
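Because the answers are extractive (substrings of the context), a common preprocessing step for BERT-style QA training is to locate each answer's character offset in its context. A minimal sketch, assuming the column names listed in the features above (`question`, `context`, `answer`); the `add_answer_spans` helper name is hypothetical:

```python
def add_answer_spans(example):
    """Locate the answer's character offset in the context (SQuAD-style).

    Assumes the card's schema: 'question', 'context', 'answer' columns.
    Sets answer_start to -1 when the answer is not an exact substring
    of the context, so such rows can be filtered out before training.
    """
    example["answer_start"] = example["context"].find(example["answer"])
    return example

# Toy record mirroring the dataset's {Context, Question, Answer} schema.
record = {
    "question": "Türkiye'nin başkenti neresidir?",
    "context": "Türkiye'nin başkenti Ankara'dır.",
    "answer": "Ankara",
}
record = add_answer_spans(record)
print(record["answer_start"])  # → 21
```

The same function could be applied to the full dataset with `datasets.Dataset.map`, after which rows with `answer_start == -1` can be dropped.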