---
license: apache-2.0
---


Model Card for SpaceLLaVA

SpaceLLaVA uses LoRA to fine-tune LLaVA on a dataset built with VQASynth, enhancing spatial reasoning as in SpatialVLM.
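
The exact LoRA configuration and base checkpoint are not stated in this card; the sketch below is only an illustration of a typical PEFT setup for adapting a LLaVA-style model. The repo id, rank, and target modules are placeholders, not the values used for SpaceLLaVA.

import torch
from peft import LoraConfig, get_peft_model
from transformers import LlavaForConditionalGeneration

# Placeholder base model; SpaceLLaVA's actual base checkpoint may differ.
base = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-13b-hf", torch_dtype=torch.float16
)

# Illustrative LoRA hyperparameters, not the ones used to train SpaceLLaVA.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable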

Model Details

Model Description

This model uses data synthesis techniques and publicly available models to reproduce the work described in SpatialVLM, enhancing the spatial reasoning of multimodal models. With a pipeline of expert models, we can infer spatial relationships between objects in a scene to create a VQA dataset for spatial reasoning.
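
To make the data-synthesis idea concrete, the toy sketch below turns object centroids (as an upstream expert pipeline of detection and depth estimation might produce) into spatial question/answer pairs. It is a simplified illustration, not the actual VQASynth implementation.

import math

def spatial_qa(objects):
    """objects: dict mapping object name -> (x, y, z) centroid in meters."""
    qa_pairs = []
    names = list(objects)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            # Metric distance question from the two 3D centroids.
            dist = math.dist(objects[a], objects[b])
            qa_pairs.append((
                f"How far is the {a} from the {b}?",
                f"The {a} is about {dist:.1f} meters from the {b}.",
            ))
            # Simple relative-direction question from the x coordinate.
            qa_pairs.append((
                f"Is the {a} to the left of the {b}?",
                "Yes." if objects[a][0] < objects[b][0] else "No.",
            ))
    return qa_pairs

print(spatial_qa({"chair": (0.5, 0.0, 2.0), "table": (1.4, 0.0, 2.2)}))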

  • Developed by: remyx.ai
  • Model type: Multimodal model, vision-language model, LLaVA
  • License: Apache-2.0
  • Finetuned from model: LLaVA

Model Sources

Uses

Use this model to query spatial relationships between objects in a scene.

Open In Colab

Try it on Discord: http://discord.gg/b2yGuCNpuC
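
A minimal inference sketch, assuming the weights in this repo load with transformers' LLaVA classes; the repo id "remyx/SpaceLLaVA", the image URL, and the prompt format are assumptions to adapt as needed.

import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "remyx/SpaceLLaVA"  # assumption: LLaVA-style merged weights under this repo id
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder image and question about spatial relationships in the scene.
image = Image.open(requests.get("https://example.com/scene.jpg", stream=True).raw)
prompt = "USER: <image>\nHow far apart are the chair and the table? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))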


Citation

@article{chen2024spatialvlm,
  title = {SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities},
  author = {Chen, Boyuan and Xu, Zhuo and Kirmani, Sean and Ichter, Brian and Driess, Danny and Florence, Pete and Sadigh, Dorsa and Guibas, Leonidas and Xia, Fei},
  journal = {arXiv preprint arXiv:2401.12168},
  year = {2024},
  url = {https://arxiv.org/abs/2401.12168},
}

@misc{liu2023llava,
  title = {Visual Instruction Tuning},
  author = {Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
  publisher = {NeurIPS},
  year = {2023},
}