# Overview
This repository provides an ensemble model that combines a YOLOv8 model, exported from the [Ultralytics](https://github.com/ultralytics/ultralytics) repository, with NMS post-processing. The post-processing code in [models/postprocess/1/model.py](models/postprocess/1/model.py) is adapted from the [Ultralytics ONNX example](https://github.com/ultralytics/ultralytics/blob/4b866c97180842b546fe117610869d3c8d69d8ae/examples/YOLOv8-OpenCV-ONNX-Python/main.py).
For more information about Triton's ensemble models, see the [architecture documentation](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/architecture.md) and the [preprocessing examples](https://github.com/triton-inference-server/python_backend/tree/main/examples/preprocessing).
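To give a feel for what the post-processing step does, the sketch below filters raw YOLOv8 detections by confidence and applies OpenCV's NMS, mirroring the Ultralytics example this repository adapts. It is a minimal illustration, not a copy of `models/postprocess/1/model.py`; the exact tensor layout and threshold values there may differ.
```python
import cv2
import numpy as np

def postprocess(raw_output, score_threshold=0.25, nms_threshold=0.45):
    """Minimal NMS sketch. `raw_output` is the (4 + num_classes, num_candidates)
    tensor a YOLOv8 ONNX export produces for a single image."""
    preds = np.transpose(raw_output)  # -> (num_candidates, 4 + num_classes)
    boxes, scores, class_ids = [], [], []
    for row in preds:
        class_scores = row[4:]
        class_id = int(np.argmax(class_scores))
        score = float(class_scores[class_id])
        if score < score_threshold:
            continue
        cx, cy, w, h = row[:4]  # model emits center-format boxes
        boxes.append([cx - w / 2, cy - h / 2, w, h])  # cv2 wants x, y, w, h
        scores.append(score)
        class_ids.append(class_id)
    keep = cv2.dnn.NMSBoxes(boxes, scores, score_threshold, nms_threshold)
    keep = np.array(keep).reshape(-1)  # normalize shape across OpenCV versions
    return [(boxes[int(i)], scores[int(i)], class_ids[int(i)]) for i in keep]
```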
# Directory Structure
```
models/
    yolov8_onnx/
        1/
            model.onnx
        config.pbtxt
    postprocess/
        1/
            model.py
        config.pbtxt
    yolov8_ensemble/
        1/
            <Empty Directory>
        config.pbtxt
README.md
main.py
```
# Quick Start
1. Install [Ultralytics](https://github.com/ultralytics/ultralytics) and the Triton client:
```
pip install ultralytics==8.0.51 tritonclient[all]==2.31.0
```
2. Export a model to ONNX format:
```
yolo export model=yolov8n.pt format=onnx dynamic=True opset=16
```
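Equivalently, if you prefer Python over the CLI, the same export can be done through the `ultralytics` API:
```python
from ultralytics import YOLO

# Loads the pretrained checkpoint and writes yolov8n.onnx alongside it,
# mirroring the `yolo export` command above.
model = YOLO("yolov8n.pt")
model.export(format="onnx", dynamic=True, opset=16)
```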
3. Rename the exported model file to `model.onnx` and place it in the `models/yolov8_onnx/1` directory (see the directory structure above).
4. (Optional): Update the score and NMS thresholds in [models/postprocess/1/model.py](models/postprocess/1/model.py#L59).
5. (Optional): Update the [models/yolov8_ensemble/config.pbtxt](models/yolov8_ensemble/config.pbtxt) file if your input resolution differs from the default; see the illustrative excerpt below.
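The resolution is encoded in the `dims` entries of the config. The fragment below is only an illustrative sketch: the tensor name `images` matches the default Ultralytics ONNX export, but the actual names and dims should be taken from the repository's own `config.pbtxt`.
```
input [
  {
    name: "images"
    data_type: TYPE_FP32
    dims: [ 3, 640, 640 ]   # change 640x640 to match your exported resolution
  }
]
```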
6. Build the Docker container for the Triton Inference Server:
```
DOCKER_NAME="yolov8-triton"
docker build -t $DOCKER_NAME .
```
7. Run the Triton Inference Server:
```
DOCKER_NAME="yolov8-triton"
docker run --gpus all \
-it --rm \
--net=host \
-v ./models:/models \
$DOCKER_NAME
```
8. Run the script with `python main.py`. The overlay image will be written to `output.jpg`.
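If you want to call the ensemble from your own code instead of `main.py`, the sketch below shows one way to do it with the Triton HTTP client. The tensor names (`images` for the input, `detections` for the output) and the 640x640 shape are assumptions based on a default YOLOv8 export; check `models/yolov8_ensemble/config.pbtxt` for the actual names.
```python
import numpy as np
import tritonclient.http as httpclient

# Connect to the server started above (HTTP port 8000 by default).
client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_ready()

# Dummy 640x640 RGB input; real code would load and letterbox an image.
# "images" is the default input name of a YOLOv8 ONNX export -- verify it
# against models/yolov8_ensemble/config.pbtxt.
image = np.random.rand(1, 3, 640, 640).astype(np.float32)
inputs = [httpclient.InferInput("images", list(image.shape), "FP32")]
inputs[0].set_data_from_numpy(image)

# "yolov8_ensemble" runs the ONNX model plus the NMS post-processing step.
result = client.infer("yolov8_ensemble", inputs)

# The output name is assumed here; list the model's outputs if unsure:
#   client.get_model_metadata("yolov8_ensemble")
detections = result.as_numpy("detections")
print(detections.shape)
```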