
A question generation model trained on the resolved subset of the alinet/balanced_qg dataset.

Example usage:

from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "alinet/bart-base-balanced-resolved-qg"

# Load the tokenizer and the fine-tuned BART model from the Hugging Face Hub.
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

def run_model(input_string, **generator_args):
    # Tokenize the input passage, generate a question, and print the decoded output.
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    print(output)

run_model("Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.", max_length=32, num_beams=4)
# ['What is the term for a reading comprehension dataset consisting of questions posed by crowdworkers?']
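The checkpoint can also be driven through the Transformers text2text-generation pipeline instead of calling generate directly. The following is a minimal sketch assuming a standard transformers installation; the passage and generation arguments are illustrative and mirror the example above.

from transformers import pipeline

# Sketch: the same checkpoint behind the text2text-generation pipeline.
generator = pipeline("text2text-generation", model="alinet/bart-base-balanced-resolved-qg")

passage = (
    "Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, "
    "consisting of questions posed by crowdworkers on a set of Wikipedia articles."
)

# Generation keyword arguments are forwarded to model.generate().
result = generator(passage, max_length=32, num_beams=4)
print(result[0]["generated_text"])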