flashcardsGPT-Mistral-7B-v0.1
- This model is a fine-tuned version of unsloth/mistral-7b on a dataset created by Valerio Job from real university lecture data.
- Version 0.1 of flashcardsGPT has been trained only on the module "Time Series Analysis with R" (TSAR), which is part of the BSc Business-IT programme offered by FHNW.
- This repo includes the model in its default format as well as its LoRA adapters. There is a separate repo, valeriojob/flashcardsGPT-Mistral-7B-v0.1-GGUF, that contains the quantized versions of this model in GGUF format.
- This model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Model description
This model takes OCR-extracted text from a university lecture slide as input, generates high-quality flashcards from it, and returns them as a JSON object. It uses the following prompt template:
""" Your task is to process the below OCR-extracted text from university lecture slides and create a set of flashcards with the key information about the topic. Format the flashcards as a JSON object, with each card having a 'front' field for the question or term, and a 'back' field for the corresponding answer or definition, which may include a short example. Ensure the 'back' field contains no line breaks. No additional text or explanation should be provided—only respond with the JSON object.
Here is the OCR-extracted text: """
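For illustration, here is a minimal inference sketch using the standard transformers API. The prompt wrapper mirrors the template above; `ocr_text` and the generation settings are illustrative assumptions, not part of this card:

```python
# A minimal inference sketch, assuming the standard transformers API.
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "valeriojob/flashcardsGPT-Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative OCR output from a lecture slide (assumption).
ocr_text = "Autocorrelation measures the correlation of a time series with a lagged copy of itself ..."
prompt = (
    "Your task is to process the below OCR-extracted text from university lecture slides "
    "and create a set of flashcards with the key information about the topic. Format the "
    "flashcards as a JSON object, with each card having a 'front' field for the question "
    "or term, and a 'back' field for the corresponding answer or definition, which may "
    "include a short example. Ensure the 'back' field contains no line breaks. No "
    "additional text or explanation should be provided - only respond with the JSON "
    "object.\n"
    f"Here is the OCR-extracted text: {ocr_text}"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, then parse the JSON flashcards.
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
flashcards = json.loads(completion)  # may raise if the model emits extra text
```

A well-formed completion contains only the JSON object, with one 'front'/'back' pair per flashcard.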
Intended uses & limitations
The fine-tuned model can be used to generate high-quality flashcards for the TSAR lectures from the BSc Business-IT programme offered by FHNW. Since version 0.1 was trained only on this one module, output quality on other courses or domains is not guaranteed.
Training and evaluation data
The dataset (train and test) used for fine-tuning this model can be found here: datasets/valeriojob/FHNW-Flashcards-Data-v0.1
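To inspect the data, a short sketch using the Hugging Face datasets library (the "train"/"test" split names are assumed from the description above):

```python
from datasets import load_dataset

# Load both splits of the flashcards dataset from the Hub.
ds = load_dataset("valeriojob/FHNW-Flashcards-Data-v0.1")
print(ds)              # available splits and columns
print(ds["train"][0])  # first training example
```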
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a sketch of the full trainer setup follows the list):
- per_device_train_batch_size = 2
- gradient_accumulation_steps = 4
- warmup_steps = 5
- max_steps = 55 # increase this to train for longer; when set, max_steps overrides num_train_epochs
- num_train_epochs = 4
- learning_rate = 2e-4
- fp16 = not torch.cuda.is_bf16_supported()
- bf16 = torch.cuda.is_bf16_supported()
- logging_steps = 1
- optim = "adamw_8bit"
- weight_decay = 0.01
- lr_scheduler_type = "linear"
- seed = 3407
- output_dir = "outputs"
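As referenced above, here is a sketch of how these values plug into the Unsloth + TRL fine-tuning loop. The model-loading arguments (max_seq_length, 4-bit loading, LoRA rank and target modules) and the dataset text field are assumptions not stated in this card:

```python
import torch
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model with Unsloth (4-bit QLoRA-style loading is an assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b",
    max_seq_length = 2048,  # assumption: not stated in this card
    load_in_4bit = True,    # assumption: typical Unsloth setup
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                 # assumption: LoRA rank not stated in this card
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
)

dataset = load_dataset("valeriojob/FHNW-Flashcards-Data-v0.1", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",  # assumption: depends on the dataset schema
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 55,
        num_train_epochs = 4,
        learning_rate = 2e-4,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)
trainer.train()
```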
Training results
| Training Loss | Step |
|---|---|
| 1.454800 | 1 |
| 1.222900 | 2 |
| 1.236600 | 3 |
| 1.116600 | 5 |
| 1.134500 | 10 |
| 0.974100 | 15 |
| 0.951800 | 20 |
| 0.608600 | 30 |
| 0.554900 | 40 |
| 0.391000 | 55 |
License
- License: apache-2.0