---
title: Object Detection Lambda
emoji: π
colorFrom: purple
colorTo: green
sdk: gradio
sdk_version: 5.5.0
app_file: app.py
pinned: false
short_description: Object detection Lambda
---
# Object detection via AWS Lambda
<b>Aim:</b> AI-driven object detection task
- Front-end: user interface built with the Gradio library
- Back-end: AWS Lambda function running the deployed ML models

<b>Menu:</b>
- [Local development](#1-local-development)
- [AWS deployment](#2-deployment-to-aws)
- [Hugging Face deployment](#3-deployment-to-hugging-face)
## 1. Local development
### 1.1. Build and run the Docker container
<details>
Step 1 - Building the Docker image
```bash
> docker build -t object-detection-lambda .
```
Step 2 - Running the Docker container locally
```bash
> docker run --name object-detection-lambda-cont -p 8080:8080 object-detection-lambda
```
</details>
### 1.2. Execution via user interface
The web interface uses the Gradio library.

<b>Note:</b> The environment variable ```AWS_API``` should point to the local container
```bash
> export AWS_API=http://localhost:8080
```
Command line for execution:
```bash
> python3 app.py
```
The Gradio web application should now be accessible at http://localhost:7860
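The front end switches between the local container and the deployed endpoint through the ```AWS_API``` environment variable. A minimal sketch of how that lookup can work (the helper name and the localhost fallback are illustrative, not taken from ```app.py```):

```python
import os

def get_api_url() -> str:
    """Resolve the Lambda endpoint from the AWS_API environment variable,
    falling back to the local container for development (assumed default)."""
    return os.environ.get("AWS_API", "http://localhost:8080")
```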
### 1.3. Execution via command line
Example of a prediction request
```bash
> encoded_image=$(base64 -i ./tests/data/boats.jpg)
> curl -X POST "http://localhost:8080/2015-03-31/functions/function/invocations" \
> -H "Content-Type: application/json" \
> -d '{"body": "'"$encoded_image"'", "isBase64Encoded": true, "model":"yolos-small"}'
```
```python
> python3 inference_api.py \
> --api http://localhost:8080/2015-03-31/functions/function/invocations \
> --file ./tests/data/boats.jpg \
> --model yolos-small
```
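Both invocations above send the same JSON payload to the Lambda endpoint: a base64-encoded image plus the ```isBase64Encoded``` flag and a model name. A minimal sketch of building that payload in Python (the helper name ```build_payload``` is illustrative):

```python
import base64
import json

def build_payload(image_bytes: bytes, model: str = "yolos-small") -> str:
    """Assemble the JSON body used by the curl and inference_api.py examples."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return json.dumps({
        "body": encoded,           # base64-encoded image content
        "isBase64Encoded": True,   # tells the Lambda the body is base64
        "model": model,            # e.g. "yolos-small"
    })

# Example with dummy bytes instead of ./tests/data/boats.jpg
payload = build_payload(b"fake image data")
```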
## 2. Deployment to AWS
### 2.1. Pushing the docker container to AWS ECR
<details>
Steps:
- Create a new ECR repository via the AWS console
Example: ```object-detection-lambda```
- Optional: configure the AWS CLI (needed to run the commands below):
```bash
> aws configure
```
- Authenticate the Docker client to the Amazon ECR registry
```bash
> aws ecr get-login-password --region <aws_region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<aws_region>.amazonaws.com
```
- Tag the local Docker image with the Amazon ECR registry and repository
```bash
> docker tag object-detection-lambda:latest <aws_account_id>.dkr.ecr.<aws_region>.amazonaws.com/object-detection-lambda:latest
```
- Push the Docker image to ECR
```bash
> docker push <aws_account_id>.dkr.ecr.<aws_region>.amazonaws.com/object-detection-lambda:latest
```

[Link to AWS ECR Documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html)
</details>
### 2.2. Creating and testing a Lambda function
<details>
<b>Steps</b>:
- Create the function from the container image
Example name: ```object-detection```
- Note: the API endpoint will use the ```lambda_function.py``` file and the ```lambda_handler``` function
- Test the Lambda via the AWS console

Advanced notes:
- To update the Lambda function with the latest container via the AWS CLI:
```bash
> aws lambda update-function-code --function-name object-detection --image-uri <aws_account_id>.dkr.ecr.<aws_region>.amazonaws.com/object-detection-lambda:latest
```
</details>
### 2.3. Creating a REST API via API Gateway
<details>
<b>Steps</b>:
- Create a new ```REST API``` (e.g. ```object-detection-api```)
- Add a new resource to the API (e.g. ```/detect```)
- Add a ```POST``` method to the resource
- Integrate the Lambda function with the API
- Note: the Lambda proxy integration option is currently left unchecked
- Deploy the API to a specific stage (e.g. the ```dev``` stage)
</details>
Example AWS API Endpoint:
```https://<api_id>.execute-api.<aws_region>.amazonaws.com/dev/detect```
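The stage endpoint above follows a fixed API Gateway pattern. A small helper (illustrative, not part of the repository) that assembles the invoke URL from the API id, region, stage, and resource:

```python
def api_endpoint(api_id: str, region: str,
                 stage: str = "dev", resource: str = "detect") -> str:
    """Assemble the API Gateway invoke URL for a deployed stage and resource."""
    return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}/{resource}"

# e.g. api_endpoint("abc123", "us-east-1")
# -> "https://abc123.execute-api.us-east-1.amazonaws.com/dev/detect"
```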
### 2.4. Execution for the deployed model
Example of a prediction request
```bash
> encoded_image=$(base64 -i ./tests/data/boats.jpg)
> curl -X POST "https://<api_id>.execute-api.<aws_region>.amazonaws.com/dev/detect" \
> -H "Content-Type: application/json" \
> -d '{"body": "'"$encoded_image"'", "isBase64Encoded": true, "model":"yolos-small"}'
```
```python
> python3 inference_api.py \
> --api https://<api_id>.execute-api.<aws_region>.amazonaws.com/dev/detect \
> --file ./tests/data/boats.jpg \
> --model yolos-small
```
## 3. Deployment to Hugging Face
This web application is available on Hugging Face.

Hugging Face Space URL:
https://huggingface.co/spaces/cvachet/object_detection_lambda

Note: This Space uses the ML model deployed on AWS Lambda.