---
language:
- ja
tags:
- heron
- vision
- image-captioning
- VQA
pipeline_tag: image-to-text
license: apache-2.0
inference: false
---

# Heron BLIP Japanese StableLM Base 7B

![heron](./heron_image.png)

## Model Details

Heron BLIP Japanese StableLM Base 7B is a vision-language model that can converse about input images.
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.

## Usage

Follow [the installation guide](https://github.com/turingmotors/heron/tree/dev-0.0.1#1-clone-this-repository).

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor
from heron.models.git_llm.git_llama import GitLlamaForCausalLM

device_id = 0

# prepare a pretrained model
model = GitLlamaForCausalLM.from_pretrained('turing-motors/heron-chat-git-ja-stablelm-base-7b-v0')
model.eval()
model.to(f"cuda:{device_id}")

# prepare a processor
processor = AutoProcessor.from_pretrained('turing-motors/heron-chat-git-ja-stablelm-base-7b-v0', additional_special_tokens=["▁▁"])

# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)

text = "##Instruction: Please answer the following question concretely. ##Question: What is unusual about this image? Explain precisely and concretely what he is doing. ##Answer: "

# do preprocessing
inputs = processor(
    text,
    image,
    return_tensors="pt",
    truncation=True,
)
inputs = {k: v.to(f"cuda:{device_id}") for k, v in inputs.items()}

# set eos tokens so generation stops at either the pad or eos token
eos_token_id_list = [
    processor.tokenizer.pad_token_id,
    processor.tokenizer.eos_token_id,
]

# do inference (greedy decoding; a temperature argument has no effect when do_sample=False)
with torch.no_grad():
    out = model.generate(**inputs, max_length=256, do_sample=False, eos_token_id=eos_token_id_list)

# print result (the decoded sequence includes the prompt; see the post-processing sketch at the end of this card)
print(processor.tokenizer.batch_decode(out))
```

## Model Details

* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [BLIP2](https://arxiv.org/abs/2301.12597)
* **Language Model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b)
* **Language(s)**: Japanese
* **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Training

The Adaptor was first pretrained on Japanese STAIR Captions. In the second phase, the model was fine-tuned on [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) and the Japanese Visual Genome VQA dataset using LoRA (a hypothetical configuration sketch is given at the end of this card).

### Training Dataset

- [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA)
- [Japanese STAIR Captions](http://captions.stair.center/)
- [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa)

## Use and Limitations

### Intended Use

This model is intended for use in chat-like applications and for research purposes.

### Limitations

The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.

## How to cite

```bibtex
@misc{GitJapaneseStableLM,
    url = {https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0},
    title = {Heron BLIP Japanese StableLM Base 7B},
    author = {Kotaro Tanahashi and Yuichi Inoue and Yu Yamaguchi}
}
```

## Citations

```bibtex
@misc{JapaneseInstructBLIPAlpha,
    url = {https://huggingface.co/stabilityai/japanese-instructblip-alpha},
    title = {Japanese InstructBLIP Alpha},
    author = {Shing, Makoto and Akiba, Takuya}
}
```
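## Appendix: Illustrative Sketches

### Decoding only the generated answer

As noted in the usage example, `batch_decode(out)` returns the prompt together with the answer. The following is a minimal post-processing sketch, assuming the `inputs`, `out`, and `processor` variables from that example; it is an illustration, not part of the released code.

```python
# Minimal sketch (assumes `inputs`, `out`, and `processor` from the usage example).
# `generate` returns the prompt tokens followed by the newly generated tokens,
# so slice off the prompt before decoding to keep only the model's answer.
prompt_length = inputs["input_ids"].shape[1]
answer_tokens = out[:, prompt_length:]
answer = processor.tokenizer.batch_decode(answer_tokens, skip_special_tokens=True)[0]
print(answer)
```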
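### A possible LoRA setup for the second training phase

The Training section states that the second phase used LoRA but does not list hyperparameters. The sketch below is a hypothetical illustration using [PEFT](https://github.com/huggingface/peft); the rank, target modules, and dropout are assumptions for illustration, not values from the actual training configuration (which is defined in the heron repository's configs).

```python
# Hypothetical second-phase fine-tuning setup; every hyperparameter below is an
# assumption for illustration, not taken from the released training config.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                                 # assumed LoRA rank
    lora_alpha=16,                       # assumed scaling factor
    target_modules=["query_key_value"],  # assumed attention projection for a GPT-NeoX-style LM
    lora_dropout=0.05,                   # assumed dropout
    bias="none",
    task_type="CAUSAL_LM",
)

# `model` would be the vision-language model loaded as in the usage example.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```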