import warnings
warnings.filterwarnings("ignore")

## import necessary packages
import os
import io
import base64

import numpy as np
import requests
from PIL import Image
from keras import backend as K

# Mask R-CNN project modules
import coco
import utils
import model as modellib
import visualize
from classes import class_names

from fastapi import FastAPI
 
# Create a new FastAPI app instance
app = FastAPI()

# Root directory of the project
ROOT_DIR = os.getcwd()

# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
os.system("pip install pycocotools==2.0.0")
K.clear_session()

# Download the COCO-trained weights if they are not already present
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)

# Run inference on one image at a time
class InferenceConfig(coco.CocoConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()

# Create the model in inference mode and load the pretrained COCO weights
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
model.load_weights(COCO_MODEL_PATH, by_name=True)

 
# Define a function to handle the GET request at `/generate`
# The generate() function is defined as a FastAPI route that takes a
# string parameter called path (the URL of an image). The function runs
# instance segmentation on the downloaded image and returns a JSON
# response containing the annotated image under the key "output"
@app.get("/generate")
def generate(path: str):
    """
    Using the text summarization pipeline from `transformers`, summerize text
    from the given input text. The model used is `philschmid/bart-large-cnn-samsum`, which
    can be found [here](<https://huggingface.co/philschmid/bart-large-cnn-samsum>).
    """
    # Download the image from the given URL and convert it to an RGB array
    r = requests.get(path, stream=True)
    img = Image.open(io.BytesIO(r.content)).convert('RGB')
    image = np.array(img)

    # Run Mask R-CNN detection on the image
    results = model.detect([image], verbose=1)

    # Get results and save them
    r = results[0]
    output_image = visualize.display_instances_and_save(image,
        r['rois'], r['masks'], r['class_ids'], class_names, r['scores'])


    # Encode the annotated image as JPEG bytes
    image = Image.fromarray(output_image)
    im_file = io.BytesIO()
    image.save(im_file, format="JPEG")
    im_bytes = im_file.getvalue()  # im_bytes: image in binary form

    # Raw bytes are not JSON-serializable, so return the image
    # base64-encoded in the JSON response
    return {"output": base64.b64encode(im_bytes).decode("utf-8")}