---
language:
- ro
license: apache-2.0
tags:
- biology
- medical
---
# Dataset Overview
### Contents:
- Total Questions: **14,109**
- Single Choice: **6,021**
- Group Choice: **3,918**
- Multiple Choice: **4,170**
**Sources**: Romanian biology Olympiads and college admission exams.
### Column Roles in the Dataset
- `question_number`: An integer, stored as a string, identifying the question within its respective source, with different numbering systems for different types of exams.
- `question`: The text of the question itself.
- `type`: Indicates the type of question, such as single-choice, group-choice, or multiple-choice.
- `options`: A list of potential answers provided alongside the question.
- `grade`: The educational level of the question, from grade VII through XII, plus university-level questions.
- `stage`: The stage of the exam (local, state, or national for olympiads; admission for college exams).
- `year`: The year in which the question was included in an exam.
- `right_answer`: The correct answer(s) to the question, formatted as a single letter for single and group choices, or a combination of letters for multiple choices.
- `source`: The origin of the question, such as specific Olympiads or universities.
- `id_in_source`: A unique identifier for the question within its source, which helps handle cases of ambiguous identification.
- `dupe_id`: A UUID assigned to a group of questions that are identified as duplicates within the dataset, helping ensure question uniqueness.
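A minimal sketch of loading the dataset and inspecting these fields (the `train` split name matches the deduping example below):
```python
from datasets import load_dataset
from collections import Counter

ds = load_dataset("RoBiology/RoBiologyDataChoiceQA")

print(ds["train"].features)  # column names and types
print(ds["train"][0])        # one full instance

# Distribution of question types across the train split.
print(Counter(ds["train"]["type"]))
```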
### Deduping
We consider questions to be duplicates even if the options are presented in a different order. If you wish to retain such entries, verify them manually beforehand.
```python
# Example: keep one representative question from each duplicate group.
from datasets import load_dataset
import pandas as pd

ds = load_dataset("RoBiology/RoBiologyDataChoiceQA")
df = pd.DataFrame(ds['train'])

final_df = pd.concat([
    df[df['dupe_id'].isnull()],                                       # questions with no duplicates
    df[df['dupe_id'].notnull()].drop_duplicates(subset=['dupe_id']),  # one row per duplicate group
])
```
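This keeps all rows without a `dupe_id`, plus the first occurrence from each `dupe_id` group.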
# Dataset Datasheet
Inspiration: [Datasheets for Datasets (Microsoft Research)](https://www.microsoft.com/en-us/research/uploads/prod/2019/01/1803.09010.pdf)
## Motivation for Dataset Creation
Why was the dataset created?
The dataset was developed to assess and enhance the performance of large language models (LLMs) on domain-specific tasks, specifically Romanian biology tests. It offers choice-based questions to evaluate LLM accuracy and can also be used for fine-tuning LLMs to understand specialized Romanian biology terminology.
## Dataset Composition
What are the instances?
The instances consist of (single, group, or multiple) choice questions sourced from Romanian biology olympiads and college admission exam books. Each question is paired with its correct answer(s), extracted from the corresponding answer keys. Additional identifying information is also appended to each instance, as detailed in the following paragraphs.
Are relationships between instances made explicit in the data?
Yes, relationships between instances are explicitly marked. Using question identification metadata, instances can be grouped by attributes such as test, year, grade, and stage. When identical questions with identical answer options (not necessarily in the same order) appear across different tests or problem sets, they are assigned a shared `dupe_id`.
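As a sketch, these groupings can be inspected directly, reusing the pandas conversion from the deduping example above:
```python
from datasets import load_dataset
import pandas as pd

df = pd.DataFrame(load_dataset("RoBiology/RoBiologyDataChoiceQA")["train"])

# Group instances by identification metadata.
print(df.groupby(["source", "grade", "stage", "year"]).size().head())

# Inspect a few duplicate groups: each dupe_id is shared by 2+ instances.
for dupe_id, group in list(df[df["dupe_id"].notnull()].groupby("dupe_id"))[:3]:
    print(dupe_id, group["question"].tolist())
```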
How many instances of each type are there?
The dataset contains a total of 14,109 extracted questions:
- Single choice: 6,021
- Group choice: 3,918
- Multiple choice: 4,170
What data does each instance consist of?
Each field is explained below:
- `question_number` = an integer stored as a string. For olympiads it ranges from 1 to 80; most tests have at most 60 questions, but the very old ones (2004) do not quite follow the format. For college admissions it ranges from 1 to 800 (not uniformly; tests/chapters have varying numbers of questions, with no general rule).
- `question` = the question text
- `type`
  - single-choice (exactly one of the options is the right choice for the given question)
  - group-choice (the answer is a single letter corresponding to a combination of numbered options being true together; a decoding sketch follows this field list):
    - A - if ONLY the options numbered 1, 2 and 3 are correct
    - B - if ONLY the options numbered 1 and 3 are correct
    - C - if ONLY the options numbered 2 and 4 are correct
    - D - if ONLY the option numbered 4 is correct
    - E - if ALL of the numbered options are correct
  - multiple-choice (the answer is any alphabetically ordered combination of the given options; even though it is "multiple", the answer can still be a single letter)
- `options` = a list of texts (usually statements or lists of items) that, in combination with the question text, can be judged true or false. Olympiad tests have 4 options, while college admission ones have 5.
- `grade` = the educational level the test/problem set was extracted from; it takes 6 values: facultate (college), XII, XI, X, IX (high school), and VII.
- `stage` = for college it is fixed to admitere (admission). For olympiads it represents the chain of theoretical importance and difficulty: `locala -> judeteana -> nationala` (local -> state -> national).
- `year` = the year (as a string) in which the problem set/test was given as a competition
- `right_answer` = a single letter for single-choice and group-choice (see the explanations above), or, for multiple-choice, non-repeating letters concatenated into a string with no other characters, in alphabetical order.
- `source` = olimpiada (the Romanian biology olympiad) or, for college questions, the university the test was taken from (currently 3 possible values: UMF Cluj, UMF Brasov, UMF Timisoara).
- `id_in_source` = a string used to further identify the question within the problem set it came from, in case of ambiguity. It ensures uniqueness when combined with the other fields recommended for identifying questions. Note that it may contain spaces.
- `dupe_id` = a UUID that uniquely identifies a group of duplicated questions; a group contains 2 or more instances. An instance is considered a duplicate if and only if both the question and the options are the same (the options do not need to appear in the same order). Two texts are considered the same if they are identical, use synonyms for common words, or are obviously rephrased versions of each other. If a text adds extra words but is otherwise identical to another text, it is not marked as a duplicate.
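The group-choice encoding and the multiple-choice answer format can be made concrete with a small helper (a sketch; the function names are illustrative, and the mapping follows the definitions above):
```python
# Decode a group-choice answer letter into the set of true numbered options,
# following the A-E mapping defined above.
GROUP_CHOICE_MAP = {
    "A": {1, 2, 3},     # only options 1, 2 and 3 are correct
    "B": {1, 3},        # only options 1 and 3 are correct
    "C": {2, 4},        # only options 2 and 4 are correct
    "D": {4},           # only option 4 is correct
    "E": {1, 2, 3, 4},  # all numbered options are correct
}

def decode_group_choice(letter: str) -> set[int]:
    return GROUP_CHOICE_MAP[letter]

def normalize_multiple_choice(letters: str) -> str:
    """Multiple-choice answers are non-repeating letters in alphabetical order."""
    return "".join(sorted(set(letters.upper())))

assert decode_group_choice("B") == {1, 3}
assert normalize_multiple_choice("CAD") == "ACD"
```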
Is everything included or does the data rely on external resources?
Everything is included.
Are there recommended data splits or evaluation measures?
The data is currently split into three parts (train, valid, test). We aimed for a uniform distribution across the splits, based on both the quantity and the quality of the data.
Both the test and valid splits were sampled via the following recipe:
#### Grade-Based Separation:
- **Grade XII:** 175 questions
- 75 national level
- 100 state level
- **Grade XI:** 175 questions
- 75 national level
- 100 state level
- **Grade X:** 200 questions
- 55 national level
- 125 state level
- 20 local level
- **Grade IX:** 250 questions
- 115 national level
- 115 state level
- 20 local level
- **Grade VII:** 200 questions
- 85 national level
- 85 state level
- 30 local level
- **University Level (Facultate):** 400 questions (detailed division below)
- UMF Timișoara: 200 questions
- 11 chapters total, 18 questions per chapter, except for the nervous system, which has 20 questions due to higher coverage.
- UMF Brașov: 75 questions
- 15 questions from each synthesis test.
- UMF Cluj: 125 questions
- Physiology (for assistants): 8 questions
- 1 question per chapter for 5 chapters, plus 3 random questions.
- Anatomy (for assistants): 8 questions
- Same structure as Physiology.
- Physiology (for medicine students): 55 questions
- 4 questions from each of the first 13 chapters, plus 3 questions from Chapter 14.
- Anatomy (for medicine students): 54 questions
- Similar to Physiology, but only 2 questions from Chapter 14.
#### Grade-Stage Yearly Distribution
The table below presents the yearly distribution of how many questions were selected for each grade, per stage.
| Year | National VII | National IX | National X | National XI | National XII | | State VII | State IX | State X | State XI | State XII | | Local VII | Local IX | Local X | Local XI | Local XII |
|------|--------------|-------------|------------|-------------|--------------|-|-----------|----------|---------|----------|-----------|-|-----------|----------|---------|----------|-----------|
| 2004 | - | 2 | - | - | - | | - | 1 | - | - | - | | X | X | X | - | - |
| 2005 | - | 2 | - | - | - | | - | 1 | - | - | - | | - | - | - | - | - |
| 2006 | - | - | - | - | - | | - | - | - | - | - | | - | - | - | - | - |
| 2007 | - | - | - | - | - | | - | - | - | - | - | | - | - | - | - | - |
| 2008 | - | 4 | - | - | - | | - | 1 | - | - | - | | - | - | - | - | - |
| 2009 | 5 | 4 | - | - | - | | 5 | 2 | - | - | - | | X | X | X | - | - |
| 2010 | 5 | - | - | - | - | | 5 | 2 | - | - | - | | X | - | - | - | - |
| 2011 | 7 | 5 | - | - | - | | 7 | 3 | - | - | - | | - | - | - | - | - |
| 2012 | 8 | 5 | - | - | - | | 8 | 3 | - | - | - | | - | - | - | - | - |
| 2013 | 8 | 5 | - | - | - | | 12 | 3 | - | - | - | | X | X | X | - | - |
| 2014 | 12 | 8 | 3 | 5 | 5 | | 13 | 4 | 5 | 4 | 4 | | X | X | - | - | - |
| 2015 | 15 | 8 | 3 | 5 | 5 | | 15 | 4 | 5 | 4 | 4 | | 15 | 15 | 10 | - | - |
| 2016 | 15 | 8 | 4 | 7 | 7 | | 20 | 6 | 6 | 6 | 6 | | 15 | 15 | 10 | - | - |
| 2017 | - | - | - | - | - | | - | 8 | 8 | 8 | 8 | | - | - | - | - | - |
| 2018 | - | 10 | 5 | 8 | 8 | | - | 10 | 10 | 8 | 8 | | - | - | - | - | - |
| 2019 | - | 12 | 7 | 8 | 8 | | - | 12 | 12 | 12 | 12 | | - | - | - | - | - |
| 2020 | - | - | - | - | - | | - | 12 | 14 | 14 | 14 | | - | - | - | - | - |
| 2021 | - | 12 | 8 | 12 | 12 | | - | 13 | 20 | 14 | 14 | | - | - | - | - | - |
| 2022 | - | 15 | 10 | 15 | 15 | | - | 15 | 20 | 14 | 14 | | - | - | - | - | - |
| 2023 | - | 15 | 15 | 15 | 15 | | - | 15 | 25 | 15 | 15 | | - | - | - | - | - |
| 2024 | - | 15 | 15 | 15 | 15 | | - | 15 | 25 | 15 | 15 | | - | - | - | - | - |
- "-" means no data was available for that year while "X" means nothing was selected.
## Data Collection Process
How was the data collected?
- Olympiad data: Sourced from public online archives, primarily from [olimpiade.ro](https://www.olimpiade.ro/). Additional data was retrieved through separate online searches when needed.
- College admission books: Obtained from private sources. The collected data consists of PDFs, with some containing parsable text and others consisting of images that required additional processing.
Who was involved in the data collection process?
The PDF data was collected by us together with several medicine students.
Over what time-frame was the data collected?
It took roughly one month to collect the data.
How was the data associated with each instance acquired?
The data was initially collected as PDF files. To standardize the format, a Word-to-PDF converter was sometimes used. The PDFs either contained parsable text or had text embedded in images. While the quality of some images was questionable, most of the information was successfully recognized.
For PDFs with parsable text, Python libraries were used for data extraction, with occasional manual verification and refactoring. For PDFs containing images, Gemini 1.5 Flash was employed to extract the data. Random sampling was performed to verify the accuracy of the extracted data.
Does the dataset contain all possible instances?
No. Some olympiad tests that we know existed could not be found online. Additionally, there is more data collected in PDF format that has not yet been parsed into actual instances.
If the dataset is a sample, then what is the population?
The population consists of all Romanian college admission exams and olympiads that can be found and parsed. It may also include closely related national contests that feature choice-based questions.
## Data Preprocessing
What preprocessing/cleaning was done?
After extraction, several preprocessing and cleaning steps were applied to standardize and structure the data:
1. Extracted the question number from the question text and placed it in a separate field.
2. Standardized option identifiers to uppercase letters.
3. Ensured all options followed the structure `"[identifier]. [text]"`, where `[identifier]` is either a letter (A-D, plus E for five-option lists) or a number (1-4 for group-choice questions).
4. Replaced multiple spaces with a single space.
5. Replaced newline characters with spaces.
6. Standardized quotes by replacing `,,` and `’’` with `""`.
7. Normalized diacritics to proper Romanian characters (e.g., `ș, ț, â, ă`).
8. Manually corrected grammar issues and typos.
9. Removed trailing characters such as commas, dots, spaces, and semicolons from option texts.
10. Used Gemini 1.5 Flash as a grammar-correction tool to find further typos. Its output was checked manually, as the LLM tends to replace words beyond the typos. (Gemma 2 9B was used when Gemini 1.5 Flash was unavailable.)
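A minimal sketch of the mechanical normalization steps (4-6 and 9); the patterns are illustrative, not the exact ones used, and diacritic normalization and manual corrections are not shown:
```python
import re

def normalize_text(text: str) -> str:
    text = text.replace("\n", " ")                     # step 5: newlines -> spaces
    text = re.sub(r" {2,}", " ", text).strip()         # step 4: collapse multiple spaces
    text = text.replace(",,", '"').replace("’’", '"')  # step 6: standardize quotes
    return text

def clean_option(text: str) -> str:
    return normalize_text(text).rstrip(" ,.;")         # step 9: trailing characters

print(clean_option("mitocondria este  ,,uzina energetică’’ a celulei ;"))
```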
Was the "raw" data saved in addition to the preprocessed/cleaned data?
The PDF files are stored privately.
Is the preprocessing software available?
No.
Does this dataset collection/processing procedure achieve the motivation for creating the dataset stated in the first section of this datasheet?
Yes. The dataset provides specialized Romanian biology terminology that can be used for training or for knowledge evaluation.