---
license: mit
base_model:
  - mistralai/Pixtral-12B-2409
pipeline_tag: image-text-to-text
library_name: transformers
tags:
  - lora
datasets:
  - Multimodal-Fatima/FGVC_Aircraft_train
---

pixtral_aerial_VQA_adapter

Model Details

  • Type: LoRA Adapter (see the loading sketch below this list)
  • Total Parameters: 6,225,920
  • Memory Usage: 23.75 MB
  • Precision: torch.float32
  • Layer Types:
    • lora_A: 40
    • lora_B: 40
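
A minimal loading sketch, assuming the adapter is published in PEFT format under the repository id takarajordan/pixtral_aerial_VQA_adapter and using the transformers-compatible training base listed under Training Procedure (adjust ids, dtype, and device placement to your setup):

```python
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
from peft import PeftModel

# Transformers-compatible Pixtral checkpoint used as the training base.
base_id = "Ertugrul/Pixtral-12B-Captioner-Relaxed"
# Assumed repository id for this adapter; replace if the actual repo differs.
adapter_id = "takarajordan/pixtral_aerial_VQA_adapter"

processor = AutoProcessor.from_pretrained(base_id)
model = LlavaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the ~24 MB LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```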

Intended Use

  • Primary intended uses: Processing aerial footage of construction sites for structural and construction surveying.
  • Can also be applied to other detailed VQA use cases involving aerial footage (see the usage sketch below).
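
An illustrative usage sketch (not an official example): the prompt follows the standard Pixtral `[INST]...[IMG]...[/INST]` convention, and the image path, question, and generation settings are placeholders.

```python
import torch
from PIL import Image

# `model` and `processor` as prepared in the loading sketch above.
image = Image.open("aerial_construction_site.jpg")  # placeholder local file

# [IMG] marks where the image tokens are inserted into the prompt.
prompt = "<s>[INST]How many cranes are visible, and where are they positioned?\n[IMG][/INST]"

inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
with torch.no_grad():
    generate_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generate_ids, skip_special_tokens=True)[0])
```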

Training Data

  • Datasets (a loading sketch for the FGVC Aircraft subset follows this list):
    1. FloodNet Track 2 dataset
    2. Subset of FGVC Aircraft dataset
    3. Custom dataset of 10 image-caption pairs created using Pixtral
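
For reference, the FGVC Aircraft subset listed in the metadata can be pulled from the Hugging Face Hub; the split name below is an assumption, and the FloodNet and custom image-caption data are not bundled with this repository.

```python
from datasets import load_dataset

# FGVC Aircraft subset referenced in the model metadata.
fgvc = load_dataset("Multimodal-Fatima/FGVC_Aircraft_train", split="train")
print(fgvc)            # dataset size and available columns
print(fgvc[0].keys())  # inspect the image/label fields of one example
```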

Training Procedure

  • Training method: LoRA (Low-Rank Adaptation); see the illustrative configuration sketch below
  • Base model: Ertugrul/Pixtral-12B-Captioner-Relaxed
  • Training hardware: Nebius-hosted NVIDIA H100 machine
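
For readers unfamiliar with LoRA, a configuration sketch using PEFT is shown below. The rank, alpha, dropout, and target modules are placeholders for illustration only and are not the values used to train this adapter.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import LlavaForConditionalGeneration

base_id = "Ertugrul/Pixtral-12B-Captioner-Relaxed"
model = LlavaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

lora_config = LoraConfig(
    r=16,                                 # placeholder rank
    lora_alpha=32,                        # placeholder scaling factor
    target_modules=["q_proj", "v_proj"],  # placeholder attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the frozen base model with trainable low-rank matrices (lora_A / lora_B).
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```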