---
language:
- en
license: cc-by-4.0
tags:
- llava
datasets:
- taesiri/video-game-question-answering
- taesiri/glitch-llava-game-qa-dataset-wip
- taesiri/GameplayCaptions-GPT-4V-V2
- taesiri/video-game-question-answering-mixtral-8x7b-instruct-v0-1
inference: false
pipeline_tag: image-text-to-text
---

<br>
<br>

# LLaVA-VideoGameVQA - Work In Progress - Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
LLaVA-v1.5-13B-LoRA was trained in December 2023.

**LoRA Weights**
 - [Checkpoint 1](https://huggingface.co/taesiri/llava-videogame-qa-lora-wip/tree/main/lora-checkpoints-1) trained on `28K` question-answering pairs. Base Model: `liuhaotian/llava-v1.5-13b`
 - [Checkpoint 5](https://huggingface.co/taesiri/llava-videogame-qa-lora-wip/tree/main/lora-checkpoints-5) trained on `74K` question-answering pairs. Base Model: `liuhaotian/llava-v1.5-13b`
 - [Checkpoint 8](https://huggingface.co/taesiri/llava-videogame-qa-lora-wip/tree/main/lora-checkpoints-8) trained on `185K` question-answering pairs. Base Model: `liuhaotian/llava-v1.5-13b`
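
To grab a single checkpoint locally, one option is `huggingface-cli`; the sketch below assumes checkpoint 8 and a recent `huggingface_hub` release that supports the `--include` and `--local-dir` flags.

```bash
# Download only the checkpoint-8 LoRA weights from this repository.
# (--include / --local-dir require a recent huggingface_hub release.)
huggingface-cli download taesiri/llava-videogame-qa-lora-wip \
    --include "lora-checkpoints-8/*" \
    --local-dir .
```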


**How to run**

```bash
python -m llava.serve.model_worker \
    --host 0.0.0.0 \
    --controller http://localhost:10000 \
    --port 40000 \
    --worker http://localhost:40000 \
    --model-path ./lora-checkpoints-8 \
    --model-base liuhaotian/llava-v1.5-13b
```
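
Note that `llava.serve.model_worker` registers itself with a LLaVA controller, so a controller must already be listening on the `--controller` address, and a Gradio web server is needed to chat with the model. A typical setup, following the upstream LLaVA serving instructions (ports here are just examples), looks like:

```bash
# 1. Start the controller that model workers register with.
python -m llava.serve.controller --host 0.0.0.0 --port 10000

# 2. Start the model worker using the command above.

# 3. Launch the Gradio web UI, pointed at the controller.
python -m llava.serve.gradio_web_server \
    --controller http://localhost:10000 \
    --model-list-mode reload
```

For a quick single-image test without the web UI, LLaVA also ships a CLI entry point; the image path below is a placeholder:

```bash
python -m llava.serve.cli \
    --model-path ./lora-checkpoints-8 \
    --model-base liuhaotian/llava-v1.5-13b \
    --image-file gameplay-screenshot.png
```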