---
inference: false
license: cc-by-4.0
datasets:
  - taesiri/video-game-question-answering
language:
  - en
pipeline_tag: visual-question-answering
---


# LLaVA-VideoGameVQA - Work In Progress - Model Card

## Model details

**Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

**Model date:** LLaVA-v1.5-13B-LoRA was trained in December 2023.

## LoRA Weights

- **Checkpoint 1:** trained on 28K question-answering pairs. Base model: `liuhaotian/llava-v1.5-13b`