---
language: en
license: mit
tags:
- xlnet
- automatic-short-answer-grading
- regression
- education
- short-answer
- assessment
- grading
datasets:
- Meyerger/ASAG2024
metrics:
- mse
- rmse
- mae
- pearsonr
model-index:
- name: xlnet-regression
  results:
  - task:
      type: regression
      name: automatic short answer grading
    dataset:
      name: ASAG2024
      type: Meyerger/ASAG2024
    metrics:
    - type: mse
      value: 0.058389
    - type: rmse
      value: 0.241639
    - type: mae
      value: 0.153142
    - type: pearsonr
      value: 0.80115
pipeline_tag: text-classification
---
# ASAG XLNet Regression Model

This model grades student answers by comparing them to reference answers and predicting a score on a normalized 0-1 scale (regression).
## Model Details
- Model Type: XLNet for Regression
- Task: Automatic Short Answer Grading (ASAG)
- Framework: PyTorch/Transformers
- Base Model: xlnet-base-cased
## Usage

```python
from transformers import XLNetTokenizer, XLNetForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = XLNetTokenizer.from_pretrained("kenzykhaled/xlnet-regression")
model = XLNetForSequenceClassification.from_pretrained("kenzykhaled/xlnet-regression")
model.eval()

# Prepare inputs: the student answer is paired with the reference answer
student_answer = "It is vision."
reference_answer = "The stimulus is seeing or hearing the cup fall."
inputs = tokenizer(
    text=student_answer,
    text_pair=reference_answer,
    return_tensors="pt",
    padding=True,
    truncation=True,
)

# Get prediction
with torch.no_grad():
    outputs = model(**inputs)

# Predicted grade, clamped to the normalized 0-1 range
predicted_grade = outputs.logits.item()
predicted_grade = max(0.0, min(1.0, predicted_grade))
print(f"Predicted grade: {predicted_grade:.4f}")
```
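The model outputs a grade normalized to the 0-1 range. To report it on a concrete point scale, a small helper can clamp and rescale the prediction. This is a sketch, not part of the model: `to_points` and the linear 5-point rubric are assumptions you should adapt to your own grading scheme.

```python
def to_points(normalized_grade: float, max_points: float = 5.0) -> float:
    """Map a normalized 0-1 grade onto a point scale (hypothetical linear rubric)."""
    # Clamp to [0, 1] first, mirroring the clamping in the snippet above.
    g = max(0.0, min(1.0, normalized_grade))
    return round(g * max_points, 2)

print(to_points(0.83))  # 4.15
print(to_points(1.27))  # out-of-range prediction is clamped, giving 5.0
```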
## Training Data
This model was trained on the Meyerger/ASAG2024 dataset.
## Use Cases
- Automated grading of student short-answer responses
- Educational technology platforms
- Learning management systems
- Assessment tools
- Teacher assistance for grading
## Limitations
- The model is trained on specific educational domains and may not generalize well to all subjects
- Performance depends on the similarity of input data to the training data
- Should be used as an assistive tool for grading rather than a complete replacement for human evaluation
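One simple way to keep a human in the loop is to route ambiguous predictions to a reviewer. The sketch below flags mid-range scores for manual grading; the `needs_human_review` helper and its 0.4-0.6 band are illustrative assumptions, not part of this model, and the band should be tuned on your own data.

```python
def needs_human_review(grade: float, band: tuple = (0.4, 0.6)) -> bool:
    """Flag mid-range (ambiguous) predictions for manual grading (hypothetical policy)."""
    low, high = band
    return low <= grade <= high

print(needs_human_review(0.50))  # True: ambiguous, send to a reviewer
print(needs_human_review(0.92))  # False: confidently high, auto-accept
```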
## Ethical Considerations
When using this model for automated grading:
- Be transparent with students about the use of AI for grading
- Consider potential biases in evaluation
- Provide human review of edge cases
- Allow students to appeal automated grades