---
datasets:
- AnyaSchen/image2poetry_ru
language:
- ru
tags:
- Mayakovsky
- image2poetry
---

This repo contains a model that generates poetry in the style of Mayakovsky from an image. The model is a fine-tuned combination of two pre-trained models: [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) as the encoder and [AnyaSchen/rugpt3_mayak](https://huggingface.co/AnyaSchen/rugpt3_mayak) as the decoder.
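An encoder-decoder pair like this can be assembled with the `VisionEncoderDecoderModel.from_encoder_decoder_pretrained` helper from `transformers`. The sketch below illustrates the general pattern only; it is not the exact fine-tuning setup used for this repo:

```python
from transformers import VisionEncoderDecoderModel

# Combine a pre-trained vision encoder and a pre-trained language decoder
# into a single image-to-text model (cross-attention layers are newly
# initialized and must be fine-tuned).
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224",
    "AnyaSchen/rugpt3_mayak",
)
```

After fine-tuning on image-poetry pairs, the combined model can be saved and loaded with the usual `save_pretrained` / `from_pretrained` calls, as shown in the usage example below.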

To use this model:

```python
import requests
import torch
from PIL import Image
from transformers import AutoTokenizer, VisionEncoderDecoderModel, ViTImageProcessor

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def generate_poetry(fine_tuned_model, image, feature_extractor, tokenizer):
    # Preprocess the image into pixel values for the ViT encoder
    pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
    pixel_values = pixel_values.to(device)

    # Generate the poetry with the fine-tuned VisionEncoderDecoder model
    generated_tokens = fine_tuned_model.generate(
        pixel_values,
        max_length=300,
        num_beams=3,
        top_p=0.8,
        temperature=2.0,
        do_sample=True,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

    # Decode the generated tokens into text
    generated_poetry = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
    return generated_poetry

path = 'AnyaSchen/vit-rugpt3-medium-mayak'
fine_tuned_model = VisionEncoderDecoderModel.from_pretrained(path).to(device)
feature_extractor = ViTImageProcessor.from_pretrained(path)
tokenizer = AutoTokenizer.from_pretrained(path)

url = 'https://anandaindia.org/wp-content/uploads/2018/12/happy-man.jpg'
image = Image.open(requests.get(url, stream=True).raw)

generated_poetry = generate_poetry(fine_tuned_model, image, feature_extractor, tokenizer)
print(generated_poetry)
```