---
base_model: unsloth/mistral-7b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---

# flashcardsGPT-Mistral-7B-v0.1-GGUF

- This model is a fine-tuned version of [unsloth/mistral-7b](https://huggingface.co/unsloth/mistral-7b) on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) based on real university lecture data.
- Version 0.1 of flashcardsGPT has only been trained on the module "Time Series Analysis with R", which is part of the BSc Business-IT programme offered by the FHNW university ([more info](https://www.fhnw.ch/en/degree-programmes/business/bsc-in-business-information-technology)).
- This repo contains the quantized models in the GGUF format. A separate repo, [valeriojob/flashcardsGPT-Mistral-7B-v0.1](https://huggingface.co/valeriojob/flashcardsGPT-Mistral-7B-v0.1), contains the model in its default format as well as its LoRA adapters.
- This model was quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Model description

This model takes the OCR-extracted text from a university lecture slide as input. It then generates high-quality flashcards and returns them as a JSON object. It uses the following prompt template:

"""
Your task is to process the below OCR-extracted text from university lecture slides and create a set of flashcards with the key information about the topic. Format the flashcards as a JSON object, with each card having a 'front' field for the question or term, and a 'back' field for the corresponding answer or definition, which may include a short example. Ensure the 'back' field contains no line breaks. No additional text or explanation should be provided; only respond with the JSON object. Here is the OCR-extracted text:
"""

## Intended uses & limitations

The fine-tuned model can be used to generate high-quality flashcards based on the "Time Series Analysis with R" (TSAR) lectures from the BSc Business-IT programme offered by the FHNW university.

## Training and evaluation data

The dataset (train and test splits) used for fine-tuning this model can be found here: [datasets/valeriojob/FHNW-Flashcards-Data-v0.1](https://huggingface.co/datasets/valeriojob/FHNW-Flashcards-Data-v0.1)

## Licenses

- **License:** apache-2.0
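
## Example usage

The sketch below shows one way to run a GGUF file from this repo locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), using the prompt template from the model description. It is not part of the original training or evaluation setup: the quantized file name, context size, sampling settings, and the sample OCR text are assumptions; adjust them to the variant you download and to your hardware.

```python
# Minimal usage sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

# Hypothetical file name for one of the quantized variants in this repo;
# replace it with the GGUF file you actually downloaded.
MODEL_PATH = "flashcardsGPT-Mistral-7B-v0.1.Q4_K_M.gguf"

# Prompt template taken from the model description above.
PROMPT_TEMPLATE = (
    "Your task is to process the below OCR-extracted text from university lecture slides "
    "and create a set of flashcards with the key information about the topic. "
    "Format the flashcards as a JSON object, with each card having a 'front' field for the "
    "question or term, and a 'back' field for the corresponding answer or definition, "
    "which may include a short example. Ensure the 'back' field contains no line breaks. "
    "No additional text or explanation should be provided; only respond with the JSON object. "
    "Here is the OCR-extracted text:\n{ocr_text}"
)

# Example OCR-extracted slide text (illustrative only).
ocr_text = (
    "Autocorrelation measures the linear dependence between a time series "
    "and its lagged values. The ACF plot is used to identify MA(q) orders."
)

# Load the quantized model; n_ctx is an assumed context window.
llm = Llama(model_path=MODEL_PATH, n_ctx=4096)

output = llm(
    PROMPT_TEMPLATE.format(ocr_text=ocr_text),
    max_tokens=512,    # room for a small set of flashcards
    temperature=0.2,   # low temperature to keep the JSON output stable
)

# The model is expected to return a JSON object with 'front'/'back' pairs.
print(output["choices"][0]["text"])
```

Parsing the returned text with `json.loads` is a reasonable follow-up step, but keep in mind that the model may occasionally produce malformed JSON, so wrap the parse in error handling.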