---
license: mit
datasets:
  - head_qa
language:
  - en
library_name: transformers
---

# ibleducation/ibl-multiple-choice-7B

ibleducation/ibl-multiple-choice-7B is a model fine-tuned on top of mistralai/Mistral-7B-Instruct-v0.1.

The model is fine-tuned to generate multiple-choice questions. The output of the model is a JSON object with the following fields:

  1. `category`: the topic area of the question
  2. `qtext`: the question text
  3. `ra`: the `aid` of the correct answer
  4. `answers`: a list of possible answer choices, each with an `aid` (answer id) and `atext` (answer text); see the parsing sketch below
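
Since the model returns this object as plain text, you will usually want to parse it back into Python structures. A minimal sketch, assuming generation succeeded and `generated` holds the model's raw output (the sample string here is illustrative, not real model output):

```python
import json

# `generated` stands in for the text produced by the model.
generated = '{"category": "Algebra", "qtext": "What is 2 + 2?", "ra": 2, "answers": [{"aid": 1, "atext": "3"}, {"aid": 2, "atext": "4"}]}'

question = json.loads(generated)
# Look up the answer text whose aid matches the correct-answer id (ra).
correct = next(a["atext"] for a in question["answers"] if a["aid"] == question["ra"])
print(question["qtext"])
print("Correct answer:", correct)
```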

## Example Conversations

1. **Question:** Photosynthesis

   **Answer:**

   ```json
   {
     "category": "Photosynthesis",
     "qtext": "The chlorophyll fluorescence measurement technique is based on the emission of fluorescence by the chlorophylls present in the photosynthetic pigmentation:",
     "ra": 4,
     "answers": [
       {"aid": 1, "atext": "It is used to determine the light absorption characteristics of the pigments."},
       {"aid": 2, "atext": "It is used to determine the light emission characteristics of the pigments."},
       {"aid": 3, "atext": "It is used to determine the kinetics of light absorption by the pigments."},
       {"aid": 4, "atext": "It is used to determine the kinetics of light emission by the pigments."},
       {"aid": 5, "atext": "It is used to determine the energy that the pigments emit when they absorb light."}
     ]
   }
   ```

## Model Details

### How to Get Started with the Model

Install the necessary packages.

Requires: `transformers > 4.35.0`

```shell
pip install transformers
pip install accelerate
```
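
If you want to confirm that the installed version meets the requirement, you can check it from Python:

```python
import transformers

# The model card requires a version above 4.35.0.
print(transformers.__version__)
```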

You can then try the following example code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch

model_id = "ibleducation/ibl-multiple-choice-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)
prompt = "<s>[INST] Algebra [/INST] "

# The pipeline returns a list of dicts; allow enough new tokens for the full JSON object.
response = pipeline(prompt, max_new_tokens=512)
print(response[0]["generated_text"])
```

**Important:** use the prompt template below:

```
<s>[INST] {prompt} [/INST]
```
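
For example, a small helper (hypothetical, not part of the released code) can wrap any topic in this template before calling the pipeline built above:

```python
def build_prompt(topic: str) -> str:
    """Wrap a topic in the instruction template the model was fine-tuned on."""
    return f"<s>[INST] {topic} [/INST] "

# Reuses the `pipeline` object from the example above.
response = pipeline(build_prompt("World War II"), max_new_tokens=512)
print(response[0]["generated_text"])
```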