Pangea-7B Model Card

Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages

🇪🇹 🇸🇦 🇧🇬 🇧🇩 🇨🇿 🇩🇪 🇬🇷 🇬🇧 🇺🇸 🇪🇸 🇮🇷 🇫🇷 🇮🇪 🇮🇳 🇮🇩 🇳🇬 🇮🇹 🇮🇱 🇯🇵 🇮🇩 🇰🇷 🇳🇱 🇲🇳 🇲🇾 🇳🇴 🇵🇱 🇵🇹 🇧🇷 🇷🇴 🇷🇺 🇱🇰 🇮🇩 🇰🇪 🇹🇿 🇱🇰 🇹🇭 🇹🇷 🇺🇦 🇵🇰 🇻🇳 🇨🇳 🇹🇼

๐Ÿ  Homepage | ๐Ÿค– Pangea-7B | ๐Ÿ“Š PangeaIns | ๐Ÿงช PangeaBench | ๐Ÿ’ป Github | ๐Ÿ“„ Arxiv | ๐Ÿ“• PDF | ๐Ÿ–ฅ๏ธ Demo


Model details

  • Model: Pangea is a fully open-source Multilingual Multimodal Multicultural LLM.
  • Date: Pangea-7B was trained in 2024.
  • Training Dataset: 6M instruction samples from PangeaIns.
  • Architecture: Pangea-7B follows the architecture of LLaVA-NeXT, with a Qwen2-7B-Instruct backbone.
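You can confirm the wrapper and backbone from Python by inspecting the Hugging Face config (a minimal sketch; the expected values follow from the LLaVA-NeXT architecture with a Qwen2 language model):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("neulab/Pangea-7B-hf")
# The top-level config is the LLaVA-NeXT wrapper...
print(config.model_type)              # expected: "llava_next"
# ...and its language-model sub-config is the Qwen2 backbone
print(config.text_config.model_type)  # expected: "qwen2"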

Uses

This hf version is intended for use with the Hugging Face generate function. If you want to use the model with the LLaVA-NeXT codebase, please refer to our original checkpoint.

# Assuming you already have `text_input` (the user question) and `image_path` (path to the input image)
from transformers import LlavaNextForConditionalGeneration, AutoProcessor
import torch
from PIL import Image

image_input = Image.open(image_path)

# Load the model in half precision and move it to the GPU
model = LlavaNextForConditionalGeneration.from_pretrained(
    "neulab/Pangea-7B-hf",
    torch_dtype=torch.float16,
).to("cuda")
processor = AutoProcessor.from_pretrained("neulab/Pangea-7B-hf")
# Align the embedding matrix with the tokenizer's vocabulary size
model.resize_token_embeddings(len(processor.tokenizer))

# Wrap the question in the Qwen2 chat template; <image> marks where the image tokens are inserted
text_input = f"<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<image>\n{text_input}<|im_end|>\n<|im_start|>assistant\n"
model_inputs = processor(images=image_input, text=text_input, return_tensors="pt").to("cuda", torch.float16)
output = model.generate(**model_inputs, max_new_tokens=1024, min_new_tokens=32, temperature=1.0, top_p=0.9, do_sample=True)
# Decode the full sequence; note that the result also contains the prompt text
result = processor.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(result)
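For text-only prompts, a minimal variant is sketched below, assuming you reuse the model and processor objects loaded above; the only changes are dropping the image and the <image> placeholder (the example prompt is illustrative, not from the original card):

# Text-only inference: reuse `model` and `processor` from the snippet above
text_input = "Write a short poem about Pangea."  # hypothetical example prompt
prompt = f"<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n{text_input}<|im_end|>\n<|im_start|>assistant\n"
model_inputs = processor(text=prompt, return_tensors="pt").to("cuda")
output = model.generate(**model_inputs, max_new_tokens=1024, temperature=1.0, top_p=0.9, do_sample=True)
print(processor.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))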

Citing the Model

BibTeX Citation:

@article{yue2024pangeafullyopenmultilingual,
  title={Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages},
  author={Xiang Yue and Yueqi Song and Akari Asai and Seungone Kim and Jean de Dieu Nyandwi and Simran Khanuja and Anjali Kantharuban and Lintang Sutawika and Sathyanarayanan Ramamoorthy and Graham Neubig},
  year={2024},
  journal={arXiv preprint arXiv:2410.16153},
  url={https://arxiv.org/abs/2410.16153}
}