---
inference: false
license: cc-by-4.0
datasets:
  - taesiri/video-game-question-answering
  - taesiri/video-game-question-answering-mixtral-8x7b-instruct-v0-1
language:
  - en
pipeline_tag: visual-question-answering
---


# LLaVA-VideoGameVQA - Work In Progress - Model Card

## Model details

**Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

**Model date:** LLaVA-v1.5-13B-LoRA was trained in December 2023.

## LoRA Weights

- **Checkpoint 1**: trained on 28K question-answering pairs. Base model: [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b)
- **Checkpoint 5**: trained on 74K question-answering pairs. Base model: [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b)