---
base_model: microsoft/Florence-2-large-ft
library_name: transformers.js
license: mit
pipeline_tag: image-text-to-text
tags:
  - vision
  - text-generation
  - text2text-generation
  - image-to-text
---

https://huggingface.co/microsoft/Florence-2-large-ft with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

NOTE: Florence-2 support is experimental and requires you to install Transformers.js v3 from source.

If you haven't already, you can install the Transformers.js JavaScript library from GitHub using:

```bash
npm install xenova/transformers.js#v3
```

**Example:** Perform image captioning with `onnx-community/Florence-2-large-ft`.

```js
import {
    Florence2ForConditionalGeneration,
    AutoProcessor,
    AutoTokenizer,
    RawImage,
} from '@xenova/transformers';

// Load model, processor, and tokenizer
const model_id = 'onnx-community/Florence-2-large-ft';
const model = await Florence2ForConditionalGeneration.from_pretrained(model_id, {
    dtype: {
        embed_tokens: 'fp16', // or 'fp32'
        vision_encoder: 'fp16', // or 'fp32'
        encoder_model: 'q4',
        decoder_model_merged: 'q4',
    },
});
const processor = await AutoProcessor.from_pretrained(model_id);
const tokenizer = await AutoTokenizer.from_pretrained(model_id);

// Load image and prepare vision inputs
const url = 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg';
const image = await RawImage.fromURL(url);
const vision_inputs = await processor(image);

// Specify task and prepare text inputs
const task = '<MORE_DETAILED_CAPTION>';
const prompts = processor.construct_prompts(task);
const text_inputs = tokenizer(prompts);

// Generate text
const generated_ids = await model.generate({
    ...text_inputs,
    ...vision_inputs,
    max_new_tokens: 256,
});

// Decode generated text
const generated_text = tokenizer.batch_decode(generated_ids, { skip_special_tokens: false })[0];

// Post-process the generated text
const result = processor.post_process_generation(generated_text, task, image.size);
console.log(result);
// { '<MORE_DETAILED_CAPTION>': 'A car is parked on the street. The car is a light green color. The doors on the building are brown. The building is a yellow color. There are two doors on both sides of the car. The wheels on the car are very shiny. The ground is made of bricks. The sky is blue. The sun is shining on the top of the building.' }
```
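
Florence-2 is prompt-based, so swapping the task token changes the output format. As a minimal sketch (reusing the `model`, `processor`, `tokenizer`, and `image` loaded above), object detection with the `<OD>` task follows the same pattern; the output shape shown in the comment is indicative rather than verbatim:

```js
// Object detection: same pipeline, different task prompt
const od_task = '<OD>';
const od_text_inputs = tokenizer(processor.construct_prompts(od_task));

const od_ids = await model.generate({
    ...od_text_inputs,
    ...vision_inputs,
    max_new_tokens: 256,
});

// Keep special tokens: post-processing relies on them to parse locations
const od_text = tokenizer.batch_decode(od_ids, { skip_special_tokens: false })[0];
const od_result = processor.post_process_generation(od_text, od_task, image.size);
console.log(od_result);
// e.g. { '<OD>': { bboxes: [[x1, y1, x2, y2], ...], labels: ['car', ...] } }
```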

We also released an online demo, which you can try for yourself: https://huggingface.co/spaces/Xenova/florence2-webgpu
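
The demo runs fully in-browser. Transformers.js v3 also exposes a `device` option in `from_pretrained`, so a sketch like the following (assuming your browser supports WebGPU) loads the model onto the GPU:

```js
// Hedged sketch: run the model on WebGPU instead of the default backend
// (requires WebGPU support in your browser)
const gpu_model = await Florence2ForConditionalGeneration.from_pretrained(model_id, {
    device: 'webgpu',
    dtype: {
        embed_tokens: 'fp16',
        vision_encoder: 'fp16',
        encoder_model: 'q4',
        decoder_model_merged: 'q4',
    },
});
```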


Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
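
For reference, a typical 🤗 Optimum export looks like the sketch below. Whether Florence-2 itself exports out of the box depends on your Optimum version, so treat the model ID and flags as illustrative:

```bash
# Illustrative: export a model to ONNX with 🤗 Optimum, then place the
# resulting files in an `onnx/` subfolder of your model repo.
pip install "optimum[exporters]"
optimum-cli export onnx --model microsoft/Florence-2-large-ft --trust-remote-code ./onnx/
```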