English | [简体中文](README_cn.md)
# ReID of DeepSORT
## Introduction
[DeepSORT](https://arxiv.org/abs/1812.00442) (Deep Cosine Metric Learning SORT) consists of a detector and a ReID model run in series. Several common ReID models are provided here as references for the DeepSORT configs.
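The detector proposes boxes every frame, the ReID model embeds each cropped box, and DeepSORT associates detections with existing tracks mainly through the cosine distance between those embeddings. A minimal NumPy sketch of that association step follows; it is illustrative only (not the PaddleDetection implementation), and the matching threshold is an assumption:
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_cost(track_embs, det_embs):
    # Rows: existing tracks, columns: new detections; both are L2-normalized first.
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    return 1.0 - t @ d.T  # cosine distance

def match(track_embs, det_embs, max_dist=0.2):  # max_dist is an assumed threshold
    cost = cosine_cost(track_embs, det_embs)
    rows, cols = linear_sum_assignment(cost)    # Hungarian assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

if __name__ == "__main__":
    tracks = np.random.rand(3, 128)  # stand-ins for ReID embeddings
    dets = np.random.rand(4, 128)
    print(match(tracks, dets))
```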
## Model Zoo
### Results on Market1501 pedestrian ReID dataset
| Backbone | Model | Params | FPS | mAP | Top1 | Top5 | download | config |
| :-------------: | :-----------------: | :-------: | :------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| ResNet-101 | PCB Pyramid Embedding | 289M | --- | 86.31 | 94.95 | 98.28 | [download](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams) | [config](./deepsort_pcb_pyramid_r101.yml) |
| PPLCNet-2.5x | PPLCNet Embedding | 36M | --- | 71.59 | 87.38 | 95.49 | [download](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams) | [config](./deepsort_pplcnet.yml) |
### Results on VERI-Wild vehicle ReID dataset
| Backbone | Model | Params | FPS | mAP | Top1 | Top5 | download | config |
| :-------------: | :-----------------: | :-------: | :------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| PPLCNet-2.5x | PPLCNet Embedding | 93M | --- | 82.44 | 93.54 | 98.53 | [download](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet_vehicle.pdparams) | [config](./deepsort_pplcnet_vehicle.yml) |
**Notes:**
- ReID models are provided by [PaddleClas](https://github.com/PaddlePaddle/PaddleClas); the specific training process and code will be released by PaddleClas.
- For pedestrian tracking, please use the **Market1501** pedestrian ReID model in combination with a pedestrian detector.
- For vehicle tracking, please use the **VERI-Wild** vehicle ReID model in combination with a vehicle detector.
PaddleDetection/configs/mot/deepsort/reid/README.md
worker_num: 8
TrainReader:
sample_transforms:
- Decode: {}
- RGBReverse: {}
- AugmentHSV: {}
- LetterBoxResize: {target_size: [608, 1088]}
- MOTRandomAffine: {}
- RandomFlip: {}
- BboxXYXY2XYWH: {}
- NormalizeBox: {}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- RGBReverse: {}
- Permute: {}
batch_transforms:
- Gt2JDETargetThres:
anchor_masks: [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
anchors: [[[128,384], [180,540], [256,640], [512,640]],
[[32,96], [45,135], [64,192], [90,271]],
[[8,24], [11,34], [16,48], [23,68]]]
downsample_ratios: [32, 16, 8]
ide_thresh: 0.5
fg_thresh: 0.5
bg_thresh: 0.4
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
EvalMOTReader:
sample_transforms:
- Decode: {}
- LetterBoxResize: {target_size: [608, 1088]}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_size: 1
TestMOTReader:
inputs_def:
image_shape: [3, 608, 1088]
sample_transforms:
- Decode: {}
- LetterBoxResize: {target_size: [608, 1088]}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_size: 1
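For reference, `LetterBoxResize` above scales the image to fit 608x1088 while keeping its aspect ratio and pads the remainder. A rough NumPy/OpenCV sketch of the idea (padding value and rounding are assumptions and may differ from the actual PaddleDetection operator):
```python
import cv2
import numpy as np

def letterbox(img, target_hw=(608, 1088), pad_value=127.5):
    th, tw = target_hw
    h, w = img.shape[:2]
    scale = min(th / h, tw / w)                    # keep aspect ratio
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(img, (nw, nh))
    canvas = np.full((th, tw, 3), pad_value, dtype=resized.dtype)
    top, left = (th - nh) // 2, (tw - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized  # center the resized image
    return canvas, scale, (left, top)
```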
PaddleDetection/configs/mot/jde/_base_/jde_reader_1088x608.yml
[English](README.md) | Simplified Chinese
# Specialized Vertical-Domain Tracking Models
## Vehicle Tracking
One of the main applications of vehicle tracking is traffic surveillance. In surveillance scenarios, vehicles are mostly captured from public-area surveillance cameras, and the footage is then used for vehicle detection and tracking.
[BDD100K](https://www.bdd100k.com) is a driving-video dataset released by the Berkeley AI Research lab (BAIR), shot mainly from the driver's viewpoint. Besides multi-class annotations, it is labeled with six weather conditions (sunny, cloudy, etc.), six scene types (residential, highway, etc.), three times of day (daytime, night, etc.), and occlusion/truncation flags. The BDD100K MOT dataset contains 1,400 video sequences for training and 200 for validation. Each sequence is about 40 seconds long at 5 fps, so roughly 200 frames per video. Here the BDD100K MOT data is re-extracted: the car, truck, bus, trailer and other vehicle categories are merged into a single Vehicle class.
[KITTI](http://www.cvlibs.net/datasets/kitti) is collected in urban, rural and highway scenes, with up to 15 vehicles and 30 pedestrians per image and various degrees of occlusion and truncation. The [KITTI-Tracking](http://www.cvlibs.net/datasets/kitti/eval_tracking.php) (2D bounding-boxes) dataset has 50 video sequences in total, 21 for training and 29 for testing, and the goal is to estimate trajectories for the Car and Pedestrian categories; here the Car data is extracted as a single Vehicle class.
[VisDrone](http://aiskyeye.com) is captured from drones, mainly from a bird's-eye view. It covers different locations (14 cities thousands of kilometers apart across China), environments (urban and rural), objects (pedestrians, vehicles, bicycles, etc.) and densities (sparse and crowded scenes). [VisDrone2019-MOT](https://github.com/VisDrone/VisDrone-Dataset) contains 56 video sequences for training and 7 for validation. Here the VisDrone2019-MOT data is re-extracted: the car, van, truck and bus categories are merged into a single Vehicle class.
## Model Zoo
### FairMOT results for the Vehicle class on each dataset's val-set
| Dataset | Backbone | Input Size | MOTA | IDF1 | FPS | download | config |
| :-------------| :-------- | :------- | :----: | :----: | :----: | :-----: |:------: |
| BDD100K | DLA-34 | 1088x608 | 43.5 | 50.0 | - | [download](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle.pdparams) | [config](./fairmot_dla34_30e_1088x608_bdd100kmot_vehicle.yml) |
| BDD100K | HRNetv2-W18| 576x320 | 32.6 | 38.7 | - | [download](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320_bdd100kmot_vehicle.pdparams) | [config](./fairmot_hrnetv2_w18_dlafpn_30e_576x320_bdd100kmot_vehicle.yml) |
| KITTI | DLA-34 | 1088x608 | 82.7 | - | - |[download](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_kitti_vehicle.pdparams) | [config](./fairmot_dla34_30e_1088x608_kitti_vehicle.yml) |
| VisDrone | DLA-34 | 1088x608 | 52.1 | 63.3 | - | [download](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_visdrone_vehicle.pdparams) | [config](./fairmot_dla34_30e_1088x608_visdrone_vehicle.yml) |
| VisDrone | HRNetv2-W18| 1088x608 | 46.0 | 56.8 | - | [download](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_vehicle.pdparams) | [config](./fairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_vehicle.yml) |
| VisDrone | HRNetv2-W18| 864x480 | 43.7 | 56.1 | - | [download](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_864x480_visdrone_vehicle.pdparams) | [config](./fairmot_hrnetv2_w18_dlafpn_30e_864x480_visdrone_vehicle.yml) |
| VisDrone | HRNetv2-W18| 576x320 | 39.8 | 52.4 | - | [download](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320_visdrone_vehicle.pdparams) | [config](./fairmot_hrnetv2_w18_dlafpn_30e_576x320_visdrone_vehicle.yml) |
**Notes:**
- The FairMOT DLA-34 models are trained on 4 GPUs, with a batch size of 6 per GPU, for 30 epochs.
## Dataset Preparation and Processing
### 1. Dataset processing scripts
All scripts live under the `tools` directory:
```
# bdd100kmot
tools/bdd100kmot/gen_bdd100kmot_vehicle.sh: generates the bdd100kmot_vehicle dataset by running bdd100k2mot.py and gen_labels_MOT.py
tools/bdd100kmot/bdd100k2mot.py: converts the full BDD100K set to MOT format
tools/bdd100kmot/gen_labels_MOT.py: generates single-class labels_with_ids files
# visdrone
tools/visdrone/visdrone2mot.py: generates visdrone_vehicle
```
### 2. Processing the bdd100kmot_vehicle dataset
```
# Copy the scripts under tools/bdd100kmot into the dataset directory
# Generate MOT-format bdd100kmot_vehicle data, extracting classes=2,3,4,9,10 (car, truck, bus, trailer, other vehicle)
<<-- directory before generation -->>
├── bdd100k
│   ├── images
│   ├── labels
<<-- directory after generation -->>
├── bdd100k
│   ├── images
│   ├── labels
│   ├── bdd100kmot_vehicle
│   │   ├── images
│   │   │   ├── train
│   │   │   ├── val
│   │   ├── labels_with_ids
│   │   │   ├── train
│   │   │   ├── val
# Run
sh gen_bdd100kmot_vehicle.sh
```
### 3. Processing the visdrone_vehicle dataset
```
# Copy tools/visdrone/visdrone2mot.py into the dataset directory
# Generate MOT-format visdrone_vehicle data, extracting classes=4,5,6,9 (car, van, truck, bus)
<<-- directory before generation -->>
├── VisDrone2019-MOT-val
│   ├── annotations
│   ├── sequences
│   ├── visdrone2mot.py
<<-- directory after generation -->>
├── VisDrone2019-MOT-val
│   ├── annotations
│   ├── sequences
│   ├── visdrone2mot.py
│   ├── visdrone_vehicle
│   │   ├── images
│   │   │   ├── train
│   │   │   ├── val
│   │   ├── labels_with_ids
│   │   │   ├── train
│   │   │   ├── val
# Run
python visdrone2mot.py --transMot=True --data_name=visdrone_vehicle --phase=val
python visdrone2mot.py --transMot=True --data_name=visdrone_vehicle --phase=train
```
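Both conversion scripts above produce JDE/FairMOT-style `labels_with_ids` files: one text file per image, one line per box carrying a class id, a track id, and a box normalized by the image size. A hedged sketch of that line format is given below (the field order follows the common JDE convention; verify against `gen_labels_MOT.py` before relying on it):
```python
def mot_label_line(track_id, cx, cy, w, h, img_w, img_h, cls_id=0):
    """Assumed format: class id x_center/img_w y_center/img_h w/img_w h/img_h"""
    return "{:d} {:d} {:.6f} {:.6f} {:.6f} {:.6f}".format(
        cls_id, track_id, cx / img_w, cy / img_h, w / img_w, h / img_h)

# e.g. a 100x50 box centered at (640, 360) in a 1280x720 frame, track id 7
print(mot_label_line(7, 640, 360, 100, 50, 1280, 720))
```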
## Getting Started
### 1. Training
Start training on 2 GPUs with the following command:
```bash
python -m paddle.distributed.launch --log_dir=./fairmot_dla34_30e_1088x608_bdd100kmot_vehicle/ --gpus 0,1 tools/train.py -c configs/mot/vehicle/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle.yml
```
### 2. Evaluation
Start evaluation on a single GPU with the following command:
```bash
# Use the weights released by PaddleDetection
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/vehicle/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle.pdparams
# Use a checkpoint saved during training
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/vehicle/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle.yml -o weights=output/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle/model_final.pdparams
```
### 3. Inference
Use a single GPU to run inference on a video and save the result as a video:
```bash
# Run inference on a video
CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/vehicle/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle.pdparams --video_file={your video name}.mp4 --save_videos
```
**Notes:**
- Please make sure [ffmpeg](https://ffmpeg.org/ffmpeg.html) is installed first. On Linux (Ubuntu) it can be installed directly with: `apt-get update && apt-get install -y ffmpeg`.
### 4. Export the inference model
```bash
CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/vehicle/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle.pdparams
```
### 5. Run Python inference with the exported model
```bash
python deploy/pptracking/python/mot_jde_infer.py --model_dir=output_inference/fairmot_dla34_30e_1088x608_bdd100kmot_vehicle --video_file={your video name}.mp4 --device=GPU --save_mot_txts
```
**Notes:**
- The tracking model runs on videos and does not support single-image inference. By default the visualized tracking result is saved as a video; add `--save_mot_txts` to save the tracking results as txt files, or `--save_images` to save the visualized frames.
- Each line of the tracking result txt file is `frame,id,x1,y1,w,h,score,-1,-1,-1`; a small parsing sketch follows below.
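A small sketch of reading such a result file (plain Python, no PaddleDetection dependency):
```python
import csv
from collections import defaultdict

def load_mot_results(txt_path):
    """Group `frame,id,x1,y1,w,h,score,-1,-1,-1` rows by frame index."""
    per_frame = defaultdict(list)
    with open(txt_path) as f:
        for row in csv.reader(f):
            frame, tid = int(row[0]), int(row[1])
            x1, y1, w, h, score = map(float, row[2:7])
            per_frame[frame].append((tid, x1, y1, w, h, score))
    return per_frame
```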
## Citations
```
@article{zhang2020fair,
title={FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking},
author={Zhang, Yifu and Wang, Chunyu and Wang, Xinggang and Zeng, Wenjun and Liu, Wenyu},
journal={arXiv preprint arXiv:2004.01888},
year={2020}
}
@InProceedings{bdd100k,
author = {Yu, Fisher and Chen, Haofeng and Wang, Xin and Xian, Wenqi and Chen,
Yingying and Liu, Fangchen and Madhavan, Vashisht and Darrell, Trevor},
title = {BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
@INPROCEEDINGS{Geiger2012CVPR,
author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2012}
}
@ARTICLE{9573394,
author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={Detection and Tracking Meet Drones Challenge},
year={2021},
volume={},
number={},
pages={1-1},
doi={10.1109/TPAMI.2021.3119563}
}
```
PaddleDetection/configs/mot/vehicle/README.md
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'_base_/picodet_v2.yml',
'_base_/optimizer_300e.yml',
]
pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/PPLCNet_x0_75_pretrained.pdparams
weights: output/picodet_s_416_coco/best_model
find_unused_parameters: True
keep_best_weight: True
use_ema: True
epoch: 300
snapshot_epoch: 10
PicoDet:
backbone: LCNet
neck: CSPPAN
head: PicoHeadV2
LCNet:
scale: 0.75
feature_maps: [3, 4, 5]
act: relu6
CSPPAN:
out_channels: 96
use_depthwise: True
num_csp_blocks: 1
num_features: 4
act: relu6
PicoHeadV2:
conv_feat:
name: PicoFeat
feat_in: 96
feat_out: 96
num_convs: 4
num_fpn_stride: 4
norm_type: bn
share_cls_reg: True
use_se: True
act: relu6
feat_in_chan: 96
act: relu6
LearningRate:
base_lr: 0.2
schedulers:
- !CosineDecay
max_epochs: 300
min_lr_ratio: 0.08
last_plateau_epochs: 30
- !ExpWarmup
epochs: 2
worker_num: 6
eval_height: &eval_height 416
eval_width: &eval_width 416
eval_size: &eval_size [*eval_height, *eval_width]
TrainReader:
sample_transforms:
- Decode: {}
- Mosaic:
prob: 0.6
input_dim: [640, 640]
degrees: [-10, 10]
scale: [0.1, 2.0]
shear: [-2, 2]
translate: [-0.1, 0.1]
enable_mixup: True
- AugmentHSV: {is_bgr: False, hgain: 5, sgain: 30, vgain: 30}
- RandomFlip: {prob: 0.5}
batch_transforms:
- BatchRandomResize: {target_size: [320, 352, 384, 416, 448, 480, 512], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
- PadGT: {}
batch_size: 40
shuffle: true
drop_last: true
mosaic_epoch: 180
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 8
shuffle: false
TestReader:
inputs_def:
image_shape: [1, 3, *eval_height, *eval_width]
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_size: 1
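The `LearningRate` block above pairs `CosineDecay` with `ExpWarmup`. A simplified sketch of a warmup-then-cosine schedule is shown below; it ignores `last_plateau_epochs` and uses a linear rather than exponential warmup, so treat it only as an approximation of the configured schedule:
```python
import math

def lr_at(epoch, base_lr=0.2, max_epochs=300, warmup_epochs=2, min_lr_ratio=0.08):
    min_lr = base_lr * min_lr_ratio
    if epoch < warmup_epochs:        # warmup ramp (linear here, exponential in ExpWarmup)
        return base_lr * (epoch + 1) / warmup_epochs
    t = (epoch - warmup_epochs) / max(1, max_epochs - warmup_epochs)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))

print([round(lr_at(e), 4) for e in (0, 1, 2, 150, 299)])
```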
PaddleDetection/configs/picodet/picodet_s_416_coco_npu.yml
{
"images": [],
"annotations": [],
"categories": [
{
"supercategory": "component",
"id": 1,
"name": "pedestrian"
}
]
}
PaddleDetection/configs/pphuman/pedestrian_yolov3/pedestrian.json
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'../ppyoloe/_base_/optimizer_300e.yml',
'../ppyoloe/_base_/ppyoloe_plus_crn_tiny_auxhead.yml',
'../ppyoloe/_base_/ppyoloe_plus_reader_320.yml',
]
log_iter: 100
snapshot_epoch: 4
weights: output/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_t_auxhead_300e_coco.pdparams # 640*640 COCO mAP 39.7
depth_mult: 0.33
width_mult: 0.375
num_classes: 1
TrainDataset:
!COCODataSet
image_dir: ""
anno_path: annotations/train_all.json
dataset_dir: dataset/ppvehicle
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
allow_empty: true
EvalDataset:
!COCODataSet
image_dir: ""
anno_path: annotations/val_all.json
dataset_dir: dataset/ppvehicle
TestDataset:
!ImageFolder
anno_path: annotations/val_all.json
dataset_dir: dataset/ppvehicle
TrainReader:
batch_size: 8
epoch: 60
LearningRate:
base_lr: 0.001
schedulers:
- !CosineDecay
max_epochs: 72
- !LinearWarmup
start_factor: 0.
epochs: 1
PPYOLOEHead:
static_assigner_epoch: -1
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 300
score_threshold: 0.01
nms_threshold: 0.7
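The `MultiClassNMS` block above applies standard per-class NMS with `score_threshold: 0.01` and `nms_threshold: 0.7`. A plain NumPy sketch of single-class NMS, for reference only:
```python
import numpy as np

def nms(boxes, scores, score_thr=0.01, iou_thr=0.7, keep_top_k=300):
    """boxes: (N, 4) in xyxy format, scores: (N,). Returns kept indices."""
    order = np.argsort(-scores)
    order = order[scores[order] > score_thr]
    keep = []
    while order.size and len(keep) < keep_top_k:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-9)
        order = order[1:][iou <= iou_thr]   # drop boxes overlapping the kept one
    return keep
```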
PaddleDetection/configs/ppvehicle/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle.yml
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'./_base_/ppyolo_tiny.yml',
'./_base_/optimizer_650e.yml',
'./_base_/ppyolo_tiny_reader.yml',
]
snapshot_epoch: 1
weights: output/ppyolo_tiny_650e_coco/model_final
PaddleDetection/configs/ppyolo/ppyolo_tiny_650e_coco.yml
_BASE_: [
'./_base_/wgisd_detection.yml',
'../../runtime.yml',
'../_base_/optimizer_80e.yml',
'../_base_/ppyoloe_plus_crn.yml',
'../_base_/ppyoloe_plus_reader.yml',
]
log_iter: 100
snapshot_epoch: 5
weights: output/ppyoloe_plus_crn_m_80e_obj365_pretrained_wgisd/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/ppyoloe_crn_m_obj365_pretrained.pdparams
depth_mult: 0.67
width_mult: 0.75
PaddleDetection/configs/ppyoloe/application/ppyoloe_plus_crn_m_80e_obj365_pretrained_wgisd.yml
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'./_base_/optimizer_300e.yml',
'./_base_/ppyoloe_crn.yml',
'./_base_/ppyoloe_reader.yml',
]
log_iter: 100
snapshot_epoch: 10
weights: output/ppyoloe_crn_s_300e_coco/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/CSPResNetb_s_pretrained.pdparams
depth_mult: 0.33
width_mult: 0.50
PaddleDetection/configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml
num_proposals: &num_proposals 100
proposal_embedding_dim: &proposal_embedding_dim 256
bbox_resolution: &bbox_resolution 7
mask_resolution: &mask_resolution 14
architecture: QueryInst
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_cos_pretrained.pdparams
QueryInst:
backbone: ResNet
neck: FPN
rpn_head: EmbeddingRPNHead
roi_head: SparseRoIHead
post_process: SparsePostProcess
ResNet:
depth: 50
norm_type: bn
freeze_at: 0
return_idx: [ 0, 1, 2, 3 ]
num_stages: 4
lr_mult_list: [ 0.1, 0.1, 0.1, 0.1 ]
FPN:
out_channel: *proposal_embedding_dim
extra_stage: 0
EmbeddingRPNHead:
num_proposals: *num_proposals
SparseRoIHead:
num_stages: 6
bbox_roi_extractor:
resolution: *bbox_resolution
sampling_ratio: 2
aligned: True
mask_roi_extractor:
resolution: *mask_resolution
sampling_ratio: 2
aligned: True
bbox_head: DIIHead
mask_head: DynamicMaskHead
loss_func: QueryInstLoss
DIIHead:
feedforward_channels: 2048
dynamic_feature_channels: 64
roi_resolution: *bbox_resolution
num_attn_heads: 8
dropout: 0.0
num_ffn_fcs: 2
num_cls_fcs: 1
num_reg_fcs: 3
DynamicMaskHead:
dynamic_feature_channels: 64
roi_resolution: *mask_resolution
num_convs: 4
conv_kernel_size: 3
conv_channels: 256
upsample_method: 'deconv'
upsample_scale_factor: 2
QueryInstLoss:
focal_loss_alpha: 0.25
focal_loss_gamma: 2.0
class_weight: 2.0
l1_weight: 5.0
giou_weight: 2.0
mask_weight: 8.0
SparsePostProcess:
num_proposals: *num_proposals
binary_thresh: 0.5
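`QueryInstLoss` uses a sigmoid focal loss for classification with `focal_loss_alpha: 0.25` and `focal_loss_gamma: 2.0`. A compact NumPy sketch of that per-element term (before the `class_weight` of 2.0 is applied):
```python
import numpy as np

def sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    p = 1.0 / (1.0 + np.exp(-logits))
    ce = -(targets * np.log(p + 1e-9) + (1 - targets) * np.log(1 - p + 1e-9))
    p_t = targets * p + (1 - targets) * (1 - p)          # prob of the true class
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return alpha_t * (1 - p_t) ** gamma * ce             # down-weights easy examples

print(sigmoid_focal_loss(np.array([2.0, -1.0]), np.array([1.0, 0.0])))
```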
PaddleDetection/configs/queryinst/_base_/queryinst_r50_fpn.yml
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'_base_/optimizer_6x.yml',
'_base_/rtdetr_r50vd.yml',
'_base_/rtdetr_reader.yml',
]
weights: output/rtdetr_hgnetv2_l_6x_coco/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/PPHGNetV2_L_ssld_pretrained.pdparams
find_unused_parameters: True
log_iter: 200
DETR:
backbone: PPHGNetV2
PPHGNetV2:
arch: 'L'
return_idx: [1, 2, 3]
freeze_stem_only: True
freeze_at: 0
freeze_norm: True
lr_mult_list: [0., 0.05, 0.05, 0.05, 0.05]
PaddleDetection/configs/rtdetr/rtdetr_hgnetv2_l_6x_coco.yml
Simplified Chinese | [English](README_en.md)
# Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection (ARSL)
## ARSL-FCOS Model Zoo
| Model | COCO supervision ratio | Semi mAP<sup>val<br>0.5:0.95 | Semi Epochs (Iters) | download | config |
| :------------: | :---------:|:----------------------------: | :------------------: |:--------: |:----------: |
| ARSL-FCOS | 1% | **22.8** | 240 (87120) | [download](https://paddledet.bj.bcebos.com/models/arsl_fcos_r50_fpn_coco_semi001.pdparams) | [config](./arsl_fcos_r50_fpn_coco_semi001.yml) |
| ARSL-FCOS | 5% | **33.1** | 240 (174240) | [download](https://paddledet.bj.bcebos.com/models/arsl_fcos_r50_fpn_coco_semi005.pdparams) | [config](./arsl_fcos_r50_fpn_coco_semi005.yml ) |
| ARSL-FCOS | 10% | **36.9** | 240 (174240) | [download](https://paddledet.bj.bcebos.com/models/arsl_fcos_r50_fpn_coco_semi010.pdparams) | [config](./arsl_fcos_r50_fpn_coco_semi010.yml ) |
| ARSL-FCOS | 10% | **38.5(LSJ)** | 240 (174240) | [download](https://paddledet.bj.bcebos.com/models/arsl_fcos_r50_fpn_coco_semi010_lsj.pdparams) | [config](./arsl_fcos_r50_fpn_coco_semi010_lsj.yml ) |
| ARSL-FCOS | full(100%) | **45.1** | 240 (174240) | [download](https://paddledet.bj.bcebos.com/models/arsl_fcos_r50_fpn_coco_full.pdparams) | [config](./arsl_fcos_r50_fpn_coco_full.yml ) |
## Usage
The semi-supervised config file is required only for training; evaluation, inference and deployment can also be run with the base detector's config file.
### Training
```bash
# Single-GPU training (not recommended; scale the learning rate linearly with the total batch size)
CUDA_VISIBLE_DEVICES=0 python tools/train.py -c configs/semi_det/arsl/arsl_fcos_r50_fpn_coco_semi010.yml --eval
# Multi-GPU training
python -m paddle.distributed.launch --log_dir=arsl_fcos_r50_fpn_coco_semi010/ --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/semi_det/arsl/arsl_fcos_r50_fpn_coco_semi010.yml --eval
```
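The linear scaling rule mentioned in the comment above: scale the base learning rate by the ratio of your effective total batch size to the reference one. A tiny helper sketch (the reference values below are assumptions; take them from the config you actually train with):
```python
def scaled_lr(base_lr, ref_total_bs, gpus, bs_per_gpu):
    # lr scales linearly with the effective total batch size
    return base_lr * (gpus * bs_per_gpu) / ref_total_bs

# e.g. a config tuned for 8 GPUs x 2 images, run on a single GPU with 2 images
print(scaled_lr(0.01, ref_total_bs=16, gpus=1, bs_per_gpu=2))  # -> 0.00125
```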
### Evaluation
```bash
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/semi_det/arsl/arsl_fcos_r50_fpn_coco_semi010.yml -o weights=output/arsl_fcos_r50_fpn_coco_semi010/model_final.pdparams
```
### Inference
```bash
CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/semi_det/arsl/arsl_fcos_r50_fpn_coco_semi010.yml -o weights=output/arsl_fcos_r50_fpn_coco_semi010/model_final.pdparams --infer_img=demo/000000014439.jpg
```
## Citation
```
```
PaddleDetection/configs/semi_det/arsl/README.md
Simplified Chinese | [English](README_en.md)
# RTDETR-SSOD (Semi-Supervised Object Detection built on RT-DETR)
# Notes on reproducing the reported metrics: the reported results are obtained by first training on the supervised data until it saturates, then loading that supervised model to initialize semi-supervised training.
- For example, train a fully supervised model on 5% of the COCO data with baseline/rtdetr_r50vd_6x_coco_sup005.yml to obtain rtdetr_r50vd_6x_coco_sup005.pdparams, then set the following in rt_detr_ssod005_coco_no_warmup.yml:
- pretrain_student_weights: rtdetr_r50vd_6x_coco_sup005.pdparams
- pretrain_teacher_weights: rtdetr_r50vd_6x_coco_sup005.pdparams
- 1. Weights trained on the 5% and 10% labeled COCO splits and on VOC2007 trainval are already provided; see semi_det/baseline/README.md.
- 2. rt_detr_ssod_voc_no_warmup.yml, rt_detr_ssod005_coco_no_warmup.yml and rt_detr_ssod010_coco_no_warmup.yml start semi-supervised training directly from the trained fully supervised weights (recommended).
## RTDETR-SSOD Model Zoo
| Model | Supervision ratio | Sup Baseline | Sup Epochs (Iters) | Sup mAP<sup>val<br>0.5:0.95 | Semi mAP<sup>val<br>0.5:0.95 | Semi Epochs (Iters) | download | config |
| :------------: | :---------: | :---------------------: | :---------------------: |:---------------------------: |:----------------------------: | :------------------: |:--------: |:----------: |
| RTDETR-SSOD | 5% | [sup_config](../baseline/rtdetr_r50vd_6x_coco_sup005.yml) | - | 39.0 | **42.3** | - | [download](https://bj.bcebos.com/v1/paddledet/rt_detr_ssod005_coco_no_warmup.pdparams) | [config](./rt_detr_ssod005_coco_no_warmup.yml) |
| RTDETR-SSOD | 10% | [sup_config](../baseline/rtdetr_r50vd_6x_coco_sup010.yml) | -| 42.3 | **44.8** | - | [download](https://bj.bcebos.com/v1/paddledet/data/semidet/rtdetr_ssod/rt_detr_ssod010_coco/rt_detr_ssod010_coco_no_warmup.pdparams) | [config](./rt_detr_ssod010_coco_with_warmup.yml) |
| RTDETR-SSOD(VOC)| VOC | [sup_config](../baseline/rtdetr_r50vd_6x_coco_voc2007.yml) | - | 62.7 | **65.8(LSJ)** | - | [download](https://bj.bcebos.com/v1/paddledet/data/semidet/rtdetr_ssod/rt_detr_ssod_voc/rt_detr_ssod_voc_no_warmup.pdparams) | [config](./rt_detr_ssod_voc_with_warmup.yml) |
**Notes:**
- The models above are trained on 8 GPUs by default, with a total batch size of 16 for labeled data and 16 for unlabeled data, and a default initial learning rate of 0.01. If you change the total batch size, scale the learning rate linearly.
- **Supervision ratio** is the fraction of the labeled COCO data relative to the full COCO train2017 training set; the unlabeled COCO data is usually of the same proportion, with images disjoint from the labeled split.
- `Semi Epochs (Iters)` is the number of epochs (iterations) of **semi-supervised training**. For a **custom dataset**, convert iterations to epochs yourself, keeping the total number of iterations close to the COCO setting.
- `Sup mAP` is the accuracy of the model trained **only on the supervised data**; see the **base detector configs** and [baseline](../baseline).
- `Semi mAP` is the accuracy of the **semi-supervised** model; the download and config links point to the **semi-supervised models**.
- `LSJ` stands for **large-scale jittering**, i.e. multi-scale training over a wider scale range, which can further improve accuracy at the cost of slower training.
- For an explanation of the semi-supervised detection configs, see the [documentation](../README.md/#半监督检测配置).
- The original `Dense Teacher` uses `R50-va-caffe` pretraining, while PaddleDetection defaults to `R50-vb`. Using `R50-vd` with an [SSLD](../../../docs/feature_models/SSLD_PRETRAINED_MODEL.md) pretrained model can further improve detection accuracy significantly; the backbone config must then be changed accordingly, e.g.:
```python
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_vd_ssld_v2_pretrained.pdparams
ResNet:
depth: 50
variant: d
norm_type: bn
freeze_at: 0
return_idx: [1, 2, 3]
num_stages: 4
lr_mult_list: [0.05, 0.05, 0.1, 0.15]
```
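In teacher-student semi-supervised schemes like this one, the teacher is typically maintained as an exponential moving average (EMA) of the student rather than trained directly. A framework-agnostic sketch of that update (the momentum value is an assumption, not taken from this config):
```python
def ema_update(teacher_params, student_params, momentum=0.9996):
    """In-place EMA: teacher <- m * teacher + (1 - m) * student."""
    for name, t in teacher_params.items():
        s = student_params[name]
        teacher_params[name] = momentum * t + (1.0 - momentum) * s
    return teacher_params

# toy usage with plain floats standing in for tensors
teacher = {"w": 1.0}
student = {"w": 0.0}
print(ema_update(teacher, student))  # the teacher drifts slowly toward the student
```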
## Usage
The semi-supervised config file is required only for training; evaluation, inference and deployment can also be run with the base detector's config file.
### Training
```bash
# Single-GPU training (not recommended; scale the learning rate linearly with the total batch size)
CUDA_VISIBLE_DEVICES=0 python tools/train.py -c configs/semi_det/rtdetr_ssod/rt_detr_ssod010_coco_no_warmup.yml --eval
# Multi-GPU training
python -m paddle.distributed.launch --log_dir=denseteacher_fcos_semi010/ --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/semi_det/rtdetr_ssod/rt_detr_ssod010_coco_no_warmup.yml --eval
```
### Evaluation
```bash
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/semi_det/rtdetr_ssod/rt_detr_ssod010_coco_no_warmup.yml -o weights=output/rt_detr_ssod/model_final/model_final.pdparams
```
### Inference
```bash
CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/semi_det/rtdetr_ssod/rt_detr_ssod010_coco_no_warmup.yml -o weights=output/rt_detr_ssod/model_final/model_final.pdparams --infer_img=demo/000000014439.jpg
```
### Deployment
Deployment can use either the semi-supervised config file or the base detector's config file.
```bash
# Export the model
CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/semi_det/rtdetr_ssod/rt_detr_ssod010_coco_no_warmup.yml -o weights=https://paddledet.bj.bcebos.com/models/rt_detr_ssod010_coco_no_warmup.pdparams
# Run inference with the exported model
CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/rt_detr_ssod010_coco_no_warmup --image_file=demo/000000014439_640x640.jpg --device=GPU
# Benchmark the deployment speed
CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/rt_detr_ssod010_coco_no_warmup --image_file=demo/000000014439_640x640.jpg --device=GPU --run_benchmark=True # --run_mode=trt_fp16
# Export to ONNX
paddle2onnx --model_dir output_inference/rt_detr_ssod010_coco_no_warmup/ --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 12 --save_file rt_detr_ssod010_coco_no_warmup.onnx
```
# RTDETR-SSOD Downstream Tasks
We verified the strong generalization of RTDETR-SSOD: it consistently improves detection on downstream tasks in low-light, industrial, traffic and other scenarios.
The VOC experiments use [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/), a widely used computer vision dataset for object detection, segmentation and scene understanding with 20 categories; after conversion to COCO format it contains 5,011 labeled training images, 11,540 unlabeled training images and 2,510 test images across 20 categories.
The low-light experiments use [ExDark](https://github.com/cs-chan/Exclusively-Dark-Image-Dataset/tree/master/Dataset), a dataset collected specifically for object detection in low-light environments, covering 10 lighting conditions from extremely low light to twilight; after conversion to COCO format it contains 5,891 training images and 1,472 test images across 12 categories.
The industrial experiments use [PKU-Market-PCB](https://robotics.pkusz.edu.cn/resources/dataset/), a dataset for defect detection on printed circuit boards (PCB) covering 6 common defect types.
The retail experiments use [SKU110k](https://github.com/eg4000/SKU110K_CVPR19), a dense object detection dataset of supermarket shelves with 11,762 images and over 1.7 million instances, split into 8,233 training, 588 validation and 2,941 test images.
The autonomous-driving experiments use [SSLAD](https://soda-2d.github.io/index.html).
The traffic experiments use [VisDrone](http://aiskyeye.com/home/).
## Results on downstream datasets
| Dataset | Domain | Split | Labeled images | Supervised mAP | Semi-supervised mAP |
|----------|-----------|---------------------|-----------------|------------------|--------------|
| voc | General | voc07, 12; 1:2 | 5000 | 63.1 | 65.8 (+2.7) |
| visdrone | Drone traffic | 1:9 | 647 | 19.4 | 20.6 (+1.2) |
| pcb | Industrial defects | 1:9 | 55 | 22.9 | 26.8 (+3.9) |
| sku110k | Retail products | 1:9 | 821 | 38.9 | 52.4 (+13.5) |
| sslad | Autonomous driving | 1:32 | 4967 | 42.1 | 43.3 (+1.2) |
| exdark | Low light | 1:9 | 589 | 39.6 | 44.1 (+4.5) |
PaddleDetection/configs/semi_det/rtdetr_ssod/README.md
_BASE_: [
'../../yolov3/yolov3_r34_270e_coco.yml',
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/yolov3_r34_270e_coco.pdparams
slim: Distill
distill_loss: DistillYOLOv3Loss
DistillYOLOv3Loss:
weight: 1000
| PaddleDetection/configs/slim/distill/yolov3_mobilenet_v1_coco_distill.yml/0 | {
"file_path": "PaddleDetection/configs/slim/distill/yolov3_mobilenet_v1_coco_distill.yml",
"repo_id": "PaddleDetection",
"token_count": 115
} | 34 |
# Small-Object Dataset Download Summary
## Contents
- [Dataset Preparation](#dataset-preparation)
- [VisDrone-DET](#visdrone-det)
- [DOTA (horizontal boxes)](#dota-horizontal-boxes)
- [Xview](#xview)
- [Custom Datasets](#custom-datasets)
## Dataset Preparation
### VisDrone-DET
VisDrone-DET is a small-object dataset of drone-captured aerial scenes. The COCO-format VisDrone-DET dataset can be downloaded [here](https://bj.bcebos.com/v1/paddledet/data/smalldet/visdrone.zip), and the sliced COCO-format version [here](https://bj.bcebos.com/v1/paddledet/data/smalldet/visdrone_sliced.zip). It covers **10 classes**: `pedestrian(1), people(2), bicycle(3), car(4), van(5), truck(6), tricycle(7), awning-tricycle(8), bus(9), motor(10)`. The original dataset is available [here](https://github.com/VisDrone/VisDrone-Dataset).
For detailed usage and downloads, see [visdrone](../visdrone).
### DOTA (horizontal boxes)
DOTA is a large public remote-sensing image dataset; here the **DOTA-v1.0** horizontal-box version is used. The sliced, COCO-format DOTA horizontal-box dataset can be downloaded [here](https://bj.bcebos.com/v1/paddledet/data/smalldet/dota_sliced.zip). It covers the following **15 classes**:
`plane(0), baseball-diamond(1), bridge(2), ground-track-field(3), small-vehicle(4), large-vehicle(5), ship(6), tennis-court(7), basketball-court(8), storage-tank(9), soccer-ball-field(10), roundabout(11), harbor(12), swimming-pool(13), helicopter(14)`.
The images and the original dataset can be downloaded [here](https://captain-whu.github.io/DOAI2019/dataset.html).
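The "sliced" variants of these datasets are produced by cutting each large image into overlapping tiles so that small objects cover more pixels per crop. A minimal sketch of generating tile origins (the tile size and overlap below are assumptions, not the values used to build the released archives):
```python
def tile_origins(img_w, img_h, tile=640, overlap=0.25):
    step = int(tile * (1 - overlap))
    xs = list(range(0, max(img_w - tile, 0) + 1, step)) or [0]
    ys = list(range(0, max(img_h - tile, 0) + 1, step)) or [0]
    # make sure the right/bottom borders are covered
    if xs[-1] + tile < img_w: xs.append(img_w - tile)
    if ys[-1] + tile < img_h: ys.append(img_h - tile)
    return [(x, y) for y in ys for x in xs]

print(len(tile_origins(4000, 4000)))  # number of 640x640 crops for a DOTA-sized image
```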
### Xview
Xview is a large aerial remote-sensing detection dataset whose objects are extremely small and numerous. The sliced, COCO-format dataset can be downloaded [here](https://bj.bcebos.com/v1/paddledet/data/smalldet/xview_sliced.zip). It covers **60 classes**,
listed below:
<details>
`Fixed-wing Aircraft(0),
Small Aircraft(1),
Cargo Plane(2),
Helicopter(3),
Passenger Vehicle(4),
Small Car(5),
Bus(6),
Pickup Truck(7),
Utility Truck(8),
Truck(9),
Cargo Truck(10),
Truck w/Box(11),
Truck Tractor(12),
Trailer(13),
Truck w/Flatbed(14),
Truck w/Liquid(15),
Crane Truck(16),
Railway Vehicle(17),
Passenger Car(18),
Cargo Car(19),
Flat Car(20),
Tank car(21),
Locomotive(22),
Maritime Vessel(23),
Motorboat(24),
Sailboat(25),
Tugboat(26),
Barge(27),
Fishing Vessel(28),
Ferry(29),
Yacht(30),
Container Ship(31),
Oil Tanker(32),
Engineering Vehicle(33),
Tower crane(34),
Container Crane(35),
Reach Stacker(36),
Straddle Carrier(37),
Mobile Crane(38),
Dump Truck(39),
Haul Truck(40),
Scraper/Tractor(41),
Front loader/Bulldozer(42),
Excavator(43),
Cement Mixer(44),
Ground Grader(45),
Hut/Tent(46),
Shed(47),
Building(48),
Aircraft Hangar(49),
Damaged Building(50),
Facility(51),
Construction Site(52),
Vehicle Lot(53),
Helipad(54),
Storage Tank(55),
Shipping container lot(56),
Shipping Container(57),
Pylon(58),
Tower(59)
`
</details>
The original dataset can be downloaded [here](https://challenge.xviewdataset.org/).
### Custom Datasets
To prepare your own dataset, see the [detection annotation tools](../../docs/tutorials/data/DetAnnoTools.md) and the [dataset preparation tutorial](../../docs/tutorials/data/PrepareDetDataSet.md).
PaddleDetection/configs/smalldet/DataDownload.md
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'../faster_rcnn/_base_/optimizer_1x.yml',
'../faster_rcnn/_base_/faster_rcnn_r50_fpn.yml',
'../faster_rcnn/_base_/faster_fpn_reader.yml',
]
weights: output/faster_rcnn_r50_fpn_1x_visdrone/model_final
metric: COCO
num_classes: 9
TrainDataset:
!COCODataSet
image_dir: train
anno_path: annotations/train.json
dataset_dir: dataset/VisDrone2019_coco
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: val
anno_path: annotations/val.json
dataset_dir: dataset/VisDrone2019_coco
TestDataset:
!ImageFolder
anno_path: annotations/val.json
PaddleDetection/configs/sniper/faster_rcnn_r50_fpn_1x_visdrone.yml
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'_base_/sparse_rcnn_r50_fpn.yml',
'_base_/optimizer_3x.yml',
'_base_/sparse_rcnn_reader.yml',
]
num_classes: 80
weights: output/sparse_rcnn_r50_fpn_3x_pro100_coco/model_final
PaddleDetection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_3x_pro100_coco.yml
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'../faster_rcnn/_base_/faster_rcnn_r50_fpn.yml',
'../faster_rcnn/_base_/faster_fpn_reader.yml',
]
weights: output/faster_rcnn_swin_tiny_fpn_3x_coco/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/swin_tiny_patch4_window7_224_22kto1k_pretrained.pdparams
FasterRCNN:
backbone: SwinTransformer
neck: FPN
rpn_head: RPNHead
bbox_head: BBoxHead
bbox_post_process: BBoxPostProcess
SwinTransformer:
arch: 'swin_T_224' # ['swin_T_224', 'swin_S_224', 'swin_B_224', 'swin_L_224', 'swin_B_384', 'swin_L_384']
ape: false
drop_path_rate: 0.1
patch_norm: true
out_indices: [0, 1, 2, 3]
worker_num: 2
TrainReader:
sample_transforms:
- Decode: {}
- RandomResizeCrop: {resizes: [400, 500, 600], cropsizes: [[384, 600], ], prob: 0.5}
- RandomResize: {target_size: [[480, 1333], [512, 1333], [544, 1333], [576, 1333], [608, 1333], [640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], keep_ratio: True, interp: 2}
- RandomFlip: {prob: 0.5}
- NormalizeImage: {is_scale: true, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 2
shuffle: true
drop_last: true
collate_batch: false
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True}
- NormalizeImage: {is_scale: true, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 1
TestReader:
inputs_def:
image_shape: [-1, 3, 640, 640] # TODO deploy: a fixed shape is currently required
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: 640, keep_ratio: True}
- Pad: {size: 640}
- NormalizeImage: {is_scale: true, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]}
- Permute: {}
batch_size: 1
epoch: 36
LearningRate:
base_lr: 0.0001
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [24, 33]
- !LinearWarmup
start_factor: 0.1
steps: 1000
OptimizerBuilder:
clip_grad_by_norm: 1.0
optimizer:
type: AdamW
weight_decay: 0.05
param_groups:
- params: ['absolute_pos_embed', 'relative_position_bias_table', 'norm']
weight_decay: 0.0
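The `param_groups` entry above keeps weight decay off the position embeddings, relative position bias tables and norm parameters. A framework-neutral sketch of how such a split is typically built from parameter names:
```python
def split_param_groups(named_params, no_decay_keys=("absolute_pos_embed",
                                                    "relative_position_bias_table",
                                                    "norm")):
    decay, no_decay = [], []
    for name, param in named_params:
        if any(k in name for k in no_decay_keys):
            no_decay.append(param)      # excluded from weight decay
        else:
            decay.append(param)
    return [{"params": decay, "weight_decay": 0.05},
            {"params": no_decay, "weight_decay": 0.0}]

# toy usage with strings standing in for parameter tensors
groups = split_param_groups([("backbone.norm1.weight", "p1"),
                             ("head.fc.weight", "p2")])
print([len(g["params"]) for g in groups])  # -> [1, 1]
```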
PaddleDetection/configs/swin/faster_rcnn_swin_tiny_fpn_3x_coco.yml
architecture: TTFNet
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/DarkNet53_pretrained.pdparams
TTFNet:
backbone: DarkNet
neck: TTFFPN
ttf_head: TTFHead
post_process: BBoxPostProcess
DarkNet:
depth: 53
freeze_at: 0
return_idx: [1, 2, 3, 4]
norm_type: bn
norm_decay: 0.0004
TTFFPN:
planes: [256, 128, 64]
shortcut_num: [3, 2, 1]
TTFHead:
hm_loss:
name: CTFocalLoss
loss_weight: 1.
wh_loss:
name: GIoULoss
loss_weight: 5.
reduction: sum
BBoxPostProcess:
decode:
name: TTFBox
max_per_img: 100
score_thresh: 0.01
down_ratio: 4
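`TTFHead` regresses box size with a GIoU loss (weighted by 5 in this config). A plain-Python sketch of GIoU for a pair of xyxy boxes, where the loss is `1 - GIoU`:
```python
def giou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + 1e-9)
    # smallest enclosing box penalizes non-overlapping, far-apart boxes
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch + 1e-9
    return iou - (c_area - union) / c_area

print(1.0 - giou((0, 0, 10, 10), (5, 5, 15, 15)))  # GIoU loss for two boxes
```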
PaddleDetection/configs/ttfnet/_base_/ttfnet_darknet53.yml
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'./_base_/ppyoloe_reader.yml',
'./_base_/optimizer_base_36e.yml'
]
weights: output/ppyoloe_vit_base_csppan_cae_36e_coco/model_final
snapshot_epoch: 2
log_iter: 100
use_ema: true
ema_decay: 0.9999
ema_skip_names: ['yolo_head.proj_conv.weight', 'backbone.pos_embed']
custom_black_list: ['reduce_mean']
use_fused_allreduce_gradients: &use_checkpoint False
architecture: YOLOv3
norm_type: sync_bn
YOLOv3:
backbone: VisionTransformer
neck: YOLOCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
VisionTransformer:
patch_size: 16
embed_dim: 768
depth: 12
num_heads: 12
mlp_ratio: 4
qkv_bias: True
drop_rate: 0.0
drop_path_rate: 0.2
init_values: 0.1
final_norm: False
use_rel_pos_bias: False
use_sincos_pos_emb: True
epsilon: 0.000001 # 1e-6
out_indices: [11, ]
with_fpn: True
num_fpn_levels: 3
out_with_norm: False
use_checkpoint: *use_checkpoint
pretrained: https://bj.bcebos.com/v1/paddledet/models/pretrained/vit_base_cae_pretrained.pdparams
YOLOCSPPAN:
in_channels: [768, 768, 768]
act: 'silu'
PPYOLOEHead:
fpn_strides: [8, 16, 32]
in_channels: [768, 768, 768]
static_assigner_epoch: -1
grid_cell_scale: 5.0
grid_cell_offset: 0.5
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 300
score_threshold: 0.01
nms_threshold: 0.7
PaddleDetection/configs/vitdet/ppyoloe_vit_base_csppan_cae_36e_coco.yml
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'_base_/optimizer_270e.yml',
'_base_/yolov3_darknet53.yml',
'_base_/yolov3_reader.yml',
]
snapshot_epoch: 5
weights: output/yolov3_darknet53_270e_coco/model_final
PaddleDetection/configs/yolov3/yolov3_darknet53_270e_coco.yml
architecture: YOLOX
norm_type: sync_bn
use_ema: True
ema_decay: 0.9999
ema_decay_type: "exponential"
act: silu
find_unused_parameters: True
depth_mult: 1.0
width_mult: 1.0
YOLOX:
backbone: CSPDarkNet
neck: YOLOCSPPAN
head: YOLOXHead
size_stride: 32
size_range: [15, 25] # multi-scale range [480*480 ~ 800*800]
CSPDarkNet:
arch: "X"
return_idx: [2, 3, 4]
depthwise: False
YOLOCSPPAN:
depthwise: False
YOLOXHead:
l1_epoch: 285
depthwise: False
loss_weight: {cls: 1.0, obj: 1.0, iou: 5.0, l1: 1.0}
assigner:
name: SimOTAAssigner
candidate_topk: 10
use_vfl: False
nms:
name: MultiClassNMS
nms_top_k: 10000
keep_top_k: 1000
score_threshold: 0.001
nms_threshold: 0.65
# For speed while keep high mAP, you can modify 'nms_top_k' to 1000 and 'keep_top_k' to 100, the mAP will drop about 0.1%.
# For high speed demo, you can modify 'score_threshold' to 0.25 and 'nms_threshold' to 0.45, but the mAP will drop a lot.
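The `size_stride`/`size_range` pair above defines the multi-scale training resolutions: a random multiple of 32 between 15 and 25, i.e. 480 to 800 pixels. A tiny sketch of that sampling:
```python
import random

def sample_train_size(size_stride=32, size_range=(15, 25)):
    """Pick a square training resolution in [480, 800] on a stride-32 grid."""
    k = random.randint(*size_range)   # inclusive on both ends
    return k * size_stride

print(sorted({sample_train_size() for _ in range(100)}))
```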
PaddleDetection/configs/yolox/_base_/yolox_cspdarknet.yml
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include <iostream>
#include <map>
#include <string>
#include <vector>
#include "yaml-cpp/yaml.h"
#ifdef _WIN32
#define OS_PATH_SEP "\\"
#else
#define OS_PATH_SEP "/"
#endif
namespace PaddleDetection {
// Inference model configuration parser
class ConfigPaser {
public:
ConfigPaser() {}
~ConfigPaser() {}
bool load_config(const std::string& model_dir,
const std::string& cfg = "infer_cfg.yml") {
// Load as a YAML::Node
YAML::Node config;
config = YAML::LoadFile(model_dir + OS_PATH_SEP + cfg);
// Get runtime mode : paddle, trt_fp16, trt_fp32
if (config["mode"].IsDefined()) {
mode_ = config["mode"].as<std::string>();
} else {
std::cerr << "Please set mode, "
<< "support value : paddle/trt_fp16/trt_fp32." << std::endl;
return false;
}
// Get model arch : YOLO, SSD, RetinaNet, RCNN, Face
if (config["arch"].IsDefined()) {
arch_ = config["arch"].as<std::string>();
} else {
std::cerr << "Please set model arch,"
<< "support value : YOLO, SSD, RetinaNet, RCNN, Face."
<< std::endl;
return false;
}
// Get min_subgraph_size for tensorrt
if (config["min_subgraph_size"].IsDefined()) {
min_subgraph_size_ = config["min_subgraph_size"].as<int>();
} else {
std::cerr << "Please set min_subgraph_size." << std::endl;
return false;
}
// Get draw_threshold for visualization
if (config["draw_threshold"].IsDefined()) {
draw_threshold_ = config["draw_threshold"].as<float>();
} else {
std::cerr << "Please set draw_threshold." << std::endl;
return false;
}
// Get Preprocess for preprocessing
if (config["Preprocess"].IsDefined()) {
preprocess_info_ = config["Preprocess"];
} else {
std::cerr << "Please set Preprocess." << std::endl;
return false;
}
// Get label_list for visualization
if (config["label_list"].IsDefined()) {
label_list_ = config["label_list"].as<std::vector<std::string>>();
} else {
std::cerr << "Please set label_list." << std::endl;
return false;
}
// Get use_dynamic_shape for TensorRT
if (config["use_dynamic_shape"].IsDefined()) {
use_dynamic_shape_ = config["use_dynamic_shape"].as<bool>();
} else {
std::cerr << "Please set use_dynamic_shape." << std::endl;
return false;
}
// Get conf_thresh for tracker
if (config["tracker"].IsDefined()) {
if (config["tracker"]["conf_thres"].IsDefined()) {
conf_thresh_ = config["tracker"]["conf_thres"].as<float>();
} else {
std::cerr << "Please set conf_thres in tracker." << std::endl;
return false;
}
}
// Get NMS for postprocess
if (config["NMS"].IsDefined()) {
nms_info_ = config["NMS"];
}
// Get fpn_stride in PicoDet
if (config["fpn_stride"].IsDefined()) {
fpn_stride_.clear();
for (auto item : config["fpn_stride"]) {
fpn_stride_.emplace_back(item.as<int>());
}
}
if (config["mask"].IsDefined()) {
mask_ = config["mask"].as<bool>();
}
return true;
}
std::string mode_;
float draw_threshold_;
std::string arch_;
int min_subgraph_size_;
YAML::Node preprocess_info_;
YAML::Node nms_info_;
std::vector<std::string> label_list_;
std::vector<int> fpn_stride_;
bool use_dynamic_shape_;
float conf_thresh_;
bool mask_ = false;
};
} // namespace PaddleDetection
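For reference, the same `infer_cfg.yml` fields can also be read from Python with PyYAML. The sketch below mirrors the required/optional checks of `ConfigPaser` above; the defaults chosen for the optional keys are assumptions:
```python
import os
import yaml

def load_infer_cfg(model_dir, cfg="infer_cfg.yml"):
    with open(os.path.join(model_dir, cfg)) as f:
        config = yaml.safe_load(f)
    required = ["mode", "arch", "min_subgraph_size", "draw_threshold",
                "Preprocess", "label_list", "use_dynamic_shape"]
    missing = [k for k in required if k not in config]
    if missing:
        raise ValueError("infer_cfg.yml is missing keys: {}".format(missing))
    # optional blocks, mirroring the C++ parser
    config.setdefault("NMS", None)
    config.setdefault("fpn_stride", [])
    config.setdefault("mask", False)
    return config
```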
PaddleDetection/deploy/cpp/include/config_parser.h
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <glog/logging.h>
#include <math.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <algorithm>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>
#ifdef _WIN32
#include <direct.h>
#include <io.h>
#elif LINUX
#include <stdarg.h>
#include <sys/stat.h>
#endif
#include <gflags/gflags.h>
#include "include/object_detector.h"
DEFINE_string(model_dir, "", "Path of inference model");
DEFINE_string(image_file, "", "Path of input image");
DEFINE_string(image_dir,
"",
"Dir of input image, `image_file` has a higher priority.");
DEFINE_int32(batch_size, 1, "batch_size");
DEFINE_string(
video_file,
"",
"Path of input video, `video_file` or `camera_id` has a highest priority.");
DEFINE_int32(camera_id, -1, "Device id of camera to predict");
DEFINE_bool(
use_gpu,
false,
"Deprecated, please use `--device` to set the device you want to run.");
DEFINE_string(device,
"CPU",
"Choose the device you want to run, it can be: CPU/GPU/XPU, "
"default is CPU.");
DEFINE_double(threshold, 0.5, "Threshold of score.");
DEFINE_string(output_dir, "output", "Directory of output visualization files.");
DEFINE_string(run_mode,
"paddle",
"Mode of running(paddle/trt_fp32/trt_fp16/trt_int8)");
DEFINE_int32(gpu_id, 0, "Device id of GPU to execute");
DEFINE_bool(run_benchmark,
false,
"Whether to predict a image_file repeatedly for benchmark");
DEFINE_bool(use_mkldnn, false, "Whether use mkldnn with CPU");
DEFINE_int32(cpu_threads, 1, "Num of threads with CPU");
DEFINE_int32(trt_min_shape, 1, "Min shape of TRT DynamicShapeI");
DEFINE_int32(trt_max_shape, 1280, "Max shape of TRT DynamicShapeI");
DEFINE_int32(trt_opt_shape, 640, "Opt shape of TRT DynamicShapeI");
DEFINE_bool(trt_calib_mode,
false,
"If the model is produced by TRT offline quantitative calibration, "
"trt_calib_mode need to set True");
void PrintBenchmarkLog(std::vector<double> det_time, int img_num) {
LOG(INFO) << "----------------------- Config info -----------------------";
LOG(INFO) << "runtime_device: " << FLAGS_device;
LOG(INFO) << "ir_optim: "
<< "True";
LOG(INFO) << "enable_memory_optim: "
<< "True";
int has_trt = FLAGS_run_mode.find("trt");
if (has_trt >= 0) {
LOG(INFO) << "enable_tensorrt: "
<< "True";
std::string precision = FLAGS_run_mode.substr(4, 8);
LOG(INFO) << "precision: " << precision;
} else {
LOG(INFO) << "enable_tensorrt: "
<< "False";
LOG(INFO) << "precision: "
<< "fp32";
}
LOG(INFO) << "enable_mkldnn: " << (FLAGS_use_mkldnn ? "True" : "False");
LOG(INFO) << "cpu_math_library_num_threads: " << FLAGS_cpu_threads;
LOG(INFO) << "----------------------- Data info -----------------------";
LOG(INFO) << "batch_size: " << FLAGS_batch_size;
LOG(INFO) << "input_shape: "
<< "dynamic shape";
LOG(INFO) << "----------------------- Model info -----------------------";
FLAGS_model_dir.erase(FLAGS_model_dir.find_last_not_of("/") + 1);
LOG(INFO) << "model_name: "
<< FLAGS_model_dir.substr(FLAGS_model_dir.find_last_of('/') + 1);
LOG(INFO) << "----------------------- Perf info ------------------------";
LOG(INFO) << "Total number of predicted data: " << img_num
<< " and total time spent(ms): "
<< std::accumulate(det_time.begin(), det_time.end(), 0);
LOG(INFO) << "preproce_time(ms): " << det_time[0] / img_num
<< ", inference_time(ms): " << det_time[1] / img_num
<< ", postprocess_time(ms): " << det_time[2] / img_num;
}
static std::string DirName(const std::string& filepath) {
auto pos = filepath.rfind(OS_PATH_SEP);
if (pos == std::string::npos) {
return "";
}
return filepath.substr(0, pos);
}
static bool PathExists(const std::string& path) {
#ifdef _WIN32
struct _stat buffer;
return (_stat(path.c_str(), &buffer) == 0);
#else
struct stat buffer;
return (stat(path.c_str(), &buffer) == 0);
#endif // !_WIN32
}
static void MkDir(const std::string& path) {
if (PathExists(path)) return;
int ret = 0;
#ifdef _WIN32
ret = _mkdir(path.c_str());
#else
ret = mkdir(path.c_str(), 0755);
#endif // !_WIN32
if (ret != 0) {
std::string path_error(path);
path_error += " mkdir failed!";
throw std::runtime_error(path_error);
}
}
static void MkDirs(const std::string& path) {
if (path.empty()) return;
if (PathExists(path)) return;
MkDirs(DirName(path));
MkDir(path);
}
void PredictVideo(const std::string& video_path,
PaddleDetection::ObjectDetector* det,
const std::string& output_dir = "output") {
// Open video
cv::VideoCapture capture;
std::string video_out_name = "output.mp4";
if (FLAGS_camera_id != -1) {
capture.open(FLAGS_camera_id);
} else {
capture.open(video_path.c_str());
video_out_name =
video_path.substr(video_path.find_last_of(OS_PATH_SEP) + 1);
}
if (!capture.isOpened()) {
printf("can not open video : %s\n", video_path.c_str());
return;
}
// Get Video info : resolution, fps, frame count
int video_width = static_cast<int>(capture.get(CV_CAP_PROP_FRAME_WIDTH));
int video_height = static_cast<int>(capture.get(CV_CAP_PROP_FRAME_HEIGHT));
int video_fps = static_cast<int>(capture.get(CV_CAP_PROP_FPS));
int video_frame_count =
static_cast<int>(capture.get(CV_CAP_PROP_FRAME_COUNT));
printf("fps: %d, frame_count: %d\n", video_fps, video_frame_count);
// Create VideoWriter for output
cv::VideoWriter video_out;
std::string video_out_path(output_dir);
if (output_dir.rfind(OS_PATH_SEP) != output_dir.size() - 1) {
video_out_path += OS_PATH_SEP;
}
video_out_path += video_out_name;
video_out.open(video_out_path.c_str(),
0x00000021,
video_fps,
cv::Size(video_width, video_height),
true);
if (!video_out.isOpened()) {
printf("create video writer failed!\n");
return;
}
std::vector<PaddleDetection::ObjectResult> result;
std::vector<int> bbox_num;
std::vector<double> det_times;
auto labels = det->GetLabelList();
auto colormap = PaddleDetection::GenerateColorMap(labels.size());
// Capture all frames and do inference
cv::Mat frame;
int frame_id = 1;
bool is_rbox = false;
while (capture.read(frame)) {
if (frame.empty()) {
break;
}
std::vector<cv::Mat> imgs;
imgs.push_back(frame);
printf("detect frame: %d\n", frame_id);
det->Predict(imgs, FLAGS_threshold, 0, 1, &result, &bbox_num, &det_times);
std::vector<PaddleDetection::ObjectResult> out_result;
for (const auto& item : result) {
if (item.confidence < FLAGS_threshold || item.class_id == -1) {
continue;
}
out_result.push_back(item);
if (item.rect.size() > 6) {
is_rbox = true;
printf("class=%d confidence=%.4f rect=[%d %d %d %d %d %d %d %d]\n",
item.class_id,
item.confidence,
item.rect[0],
item.rect[1],
item.rect[2],
item.rect[3],
item.rect[4],
item.rect[5],
item.rect[6],
item.rect[7]);
} else {
printf("class=%d confidence=%.4f rect=[%d %d %d %d]\n",
item.class_id,
item.confidence,
item.rect[0],
item.rect[1],
item.rect[2],
item.rect[3]);
}
}
cv::Mat out_im = PaddleDetection::VisualizeResult(
frame, out_result, labels, colormap, is_rbox);
video_out.write(out_im);
frame_id += 1;
}
capture.release();
video_out.release();
}
void PredictImage(const std::vector<std::string> all_img_paths,
const int batch_size,
const double threshold,
const bool run_benchmark,
PaddleDetection::ObjectDetector* det,
const std::string& output_dir = "output") {
std::vector<double> det_t = {0, 0, 0};
int steps = ceil(float(all_img_paths.size()) / batch_size);
printf("total images = %d, batch_size = %d, total steps = %d\n",
all_img_paths.size(),
batch_size,
steps);
for (int idx = 0; idx < steps; idx++) {
std::vector<cv::Mat> batch_imgs;
int left_image_cnt = all_img_paths.size() - idx * batch_size;
if (left_image_cnt > batch_size) {
left_image_cnt = batch_size;
}
for (int bs = 0; bs < left_image_cnt; bs++) {
std::string image_file_path = all_img_paths.at(idx * batch_size + bs);
cv::Mat im = cv::imread(image_file_path, 1);
batch_imgs.insert(batch_imgs.end(), im);
}
// Store all detected result
std::vector<PaddleDetection::ObjectResult> result;
std::vector<int> bbox_num;
std::vector<double> det_times;
bool is_rbox = false;
if (run_benchmark) {
det->Predict(
batch_imgs, threshold, 10, 10, &result, &bbox_num, &det_times);
} else {
det->Predict(batch_imgs, threshold, 0, 1, &result, &bbox_num, &det_times);
// get labels and colormap
auto labels = det->GetLabelList();
auto colormap = PaddleDetection::GenerateColorMap(labels.size());
int item_start_idx = 0;
for (int i = 0; i < left_image_cnt; i++) {
cv::Mat im = batch_imgs[i];
std::vector<PaddleDetection::ObjectResult> im_result;
int detect_num = 0;
for (int j = 0; j < bbox_num[i]; j++) {
PaddleDetection::ObjectResult item = result[item_start_idx + j];
if (item.confidence < threshold || item.class_id == -1) {
continue;
}
detect_num += 1;
im_result.push_back(item);
if (item.rect.size() > 6) {
is_rbox = true;
printf("class=%d confidence=%.4f rect=[%d %d %d %d %d %d %d %d]\n",
item.class_id,
item.confidence,
item.rect[0],
item.rect[1],
item.rect[2],
item.rect[3],
item.rect[4],
item.rect[5],
item.rect[6],
item.rect[7]);
} else {
printf("class=%d confidence=%.4f rect=[%d %d %d %d]\n",
item.class_id,
item.confidence,
item.rect[0],
item.rect[1],
item.rect[2],
item.rect[3]);
}
}
std::cout << all_img_paths.at(idx * batch_size + i)
<< " The number of detected box: " << detect_num << std::endl;
item_start_idx = item_start_idx + bbox_num[i];
// Visualization result
cv::Mat vis_img = PaddleDetection::VisualizeResult(
im, im_result, labels, colormap, is_rbox);
std::vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_JPEG_QUALITY);
compression_params.push_back(95);
std::string output_path(output_dir);
if (output_dir.rfind(OS_PATH_SEP) != output_dir.size() - 1) {
output_path += OS_PATH_SEP;
}
std::string image_file_path = all_img_paths.at(idx * batch_size + i);
output_path +=
image_file_path.substr(image_file_path.find_last_of('/') + 1);
cv::imwrite(output_path, vis_img, compression_params);
printf("Visualized output saved as %s\n", output_path.c_str());
}
}
det_t[0] += det_times[0];
det_t[1] += det_times[1];
det_t[2] += det_times[2];
det_times.clear();
}
PrintBenchmarkLog(det_t, all_img_paths.size());
}
int main(int argc, char** argv) {
// Parsing command-line
google::ParseCommandLineFlags(&argc, &argv, true);
if (FLAGS_model_dir.empty() ||
(FLAGS_image_file.empty() && FLAGS_image_dir.empty() &&
FLAGS_video_file.empty())) {
std::cout << "Usage: ./main --model_dir=/PATH/TO/INFERENCE_MODEL/ "
<< "--image_file=/PATH/TO/INPUT/IMAGE/" << std::endl;
return -1;
}
if (!(FLAGS_run_mode == "paddle" || FLAGS_run_mode == "trt_fp32" ||
FLAGS_run_mode == "trt_fp16" || FLAGS_run_mode == "trt_int8")) {
std::cout
<< "run_mode should be 'paddle', 'trt_fp32', 'trt_fp16' or 'trt_int8'.";
return -1;
}
transform(FLAGS_device.begin(),
FLAGS_device.end(),
FLAGS_device.begin(),
::toupper);
if (!(FLAGS_device == "CPU" || FLAGS_device == "GPU" ||
FLAGS_device == "XPU")) {
std::cout << "device should be 'CPU', 'GPU' or 'XPU'.";
return -1;
}
if (FLAGS_use_gpu) {
std::cout << "Deprecated, please use `--device` to set the device you want "
"to run.";
return -1;
}
// Load model and create a object detector
PaddleDetection::ObjectDetector det(FLAGS_model_dir,
FLAGS_device,
FLAGS_use_mkldnn,
FLAGS_cpu_threads,
FLAGS_run_mode,
FLAGS_batch_size,
FLAGS_gpu_id,
FLAGS_trt_min_shape,
FLAGS_trt_max_shape,
FLAGS_trt_opt_shape,
FLAGS_trt_calib_mode);
// Do inference on input video or image
if (!PathExists(FLAGS_output_dir)) {
MkDirs(FLAGS_output_dir);
}
if (!FLAGS_video_file.empty() || FLAGS_camera_id != -1) {
PredictVideo(FLAGS_video_file, &det, FLAGS_output_dir);
} else if (!FLAGS_image_file.empty() || !FLAGS_image_dir.empty()) {
std::vector<std::string> all_img_paths;
std::vector<cv::String> cv_all_img_paths;
if (!FLAGS_image_file.empty()) {
all_img_paths.push_back(FLAGS_image_file);
if (FLAGS_batch_size > 1) {
std::cout << "batch_size should be 1, when set `image_file`."
<< std::endl;
return -1;
}
} else {
cv::glob(FLAGS_image_dir, cv_all_img_paths);
for (const auto& img_path : cv_all_img_paths) {
all_img_paths.push_back(img_path);
}
}
PredictImage(all_img_paths,
FLAGS_batch_size,
FLAGS_threshold,
FLAGS_run_benchmark,
&det,
FLAGS_output_dir);
}
return 0;
}
PaddleDetection/deploy/cpp/src/main.cc
[English](README.md) | Simplified Chinese
# C++ Deployment Example for PaddleDetection Quantized Models on the A311D
The `infer.cc` provided in this directory helps users quickly deploy and accelerate quantized PP-YOLOE models on the A311D.
## 1. Prepare the deployment environment
For the software/hardware requirements and the cross-compilation environment, see the [FastDeploy Amlogic A311D build docs](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装).
## 2. Prepare the deployment model
1. You can directly deploy the quantized models provided by FastDeploy.
2. You can export a Float32 model with PaddleDetection yourself; note that when exporting you must set use_shared_conv=False. For details, see [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe).
3. You can quantize a model yourself with FastDeploy's [one-click auto-compression tool](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tools/common_tools/auto_compression/) and deploy the resulting quantized model. (Note: the quantized detection model still needs the infer_cfg.yml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it over from the FP32 model folder.)
4. The model requires heterogeneous computing; see [heterogeneous computing](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/heterogeneous_computing_on_timvx_npu.md). Since FastDeploy already provides the model, you can first test the provided heterogeneous-computing file and verify that the accuracy meets your requirements.
For more on quantization, see [model quantization](../../../quantize/README.md).
## 3. Deploy the quantized PP-YOLOE detection model on the A311D
Follow these steps to deploy the quantized PP-YOLOE model on the A311D:
1. Cross-compile the FastDeploy library; see [cross-compiling FastDeploy](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/a311d.md).
2. Copy the compiled library into the current directory, for example:
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection/deploy/fastdeploy/amlogic/a311d/cpp
# Note: if the fastdeploy test code below is missing on the current branch, switch to the develop branch
# git checkout develop
cp -r FastDeploy/build/fastdeploy-timvx/ PaddleDetection/deploy/fastdeploy/amlogic/a311d/cpp
```
3. Download the model and sample image needed for deployment into the current directory:
```bash
cd PaddleDetection/deploy/fastdeploy/amlogic/a311d/cpp
mkdir models && mkdir images
wget https://bj.bcebos.com/fastdeploy/models/ppyoloe_noshare_qat.tar.gz
tar -xvf ppyoloe_noshare_qat.tar.gz
cp -r ppyoloe_noshare_qat models
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
cp -r 000000014439.jpg images
```
4. Build the deployment example:
```bash
cd PaddleDetection/deploy/fastdeploy/amlogic/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# After a successful build, an install folder is generated, containing the runnable demo and the libraries required for deployment
```
5. Deploy the PP-YOLOE detection model to the Amlogic A311D via adb
```bash
# Enter the install directory
cd PaddleDetection/deploy/fastdeploy/amlogic/a311d/cpp/build/install/
# The command below means: bash run_with_adb.sh <demo to run> <model path> <image path> <device DEVICE_ID>
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
```
After successful deployment, the result looks as follows:
<img width="640" src="https://user-images.githubusercontent.com/30516196/203708564-43c49485-9b48-4eb2-8fe7-0fa517979fff.png">
Note in particular that models deployed on the A311D must be quantized; for model quantization, please refer to: [Model Quantization](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/quantize.md)
## 4. More Guides
- [PaddleDetection C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1detection.html)
- [Overview of deploying PaddleDetection models with FastDeploy](../../)
- [Python deployment](../python)
## 5. FAQ
- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Using Intel GPUs (discrete/integrated)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [Build the CPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
- [Build the GPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
- [编译Jetson部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md) | PaddleDetection/deploy/fastdeploy/amlogic/a311d/cpp/README.md/0 | {
"file_path": "PaddleDetection/deploy/fastdeploy/amlogic/a311d/cpp/README.md",
"repo_id": "PaddleDetection",
"token_count": 2745
} | 45 |
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "fastdeploy/vision.h"
#ifdef WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif
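// The three helpers below differ only in how the RuntimeOption is configured:
// CpuInfer runs on the CPU, GpuInfer on the GPU, and TrtInfer on the GPU with
// the Paddle-TensorRT backend (registering a fixed 1x3x256x192 input shape).
// Each helper loads the PP-TinyPose model, predicts keypoints for one image and
// writes the visualization to ./tinypose_vis_result.jpg.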
void CpuInfer(const std::string& tinypose_model_dir,
const std::string& image_file) {
auto tinypose_model_file = tinypose_model_dir + sep + "model.pdmodel";
auto tinypose_params_file = tinypose_model_dir + sep + "model.pdiparams";
auto tinypose_config_file = tinypose_model_dir + sep + "infer_cfg.yml";
auto option = fastdeploy::RuntimeOption();
option.UseCpu();
auto tinypose_model = fastdeploy::vision::keypointdetection::PPTinyPose(
tinypose_model_file, tinypose_params_file, tinypose_config_file, option);
if (!tinypose_model.Initialized()) {
std::cerr << "TinyPose Model Failed to initialize." << std::endl;
return;
}
auto im = cv::imread(image_file);
fastdeploy::vision::KeyPointDetectionResult res;
if (!tinypose_model.Predict(&im, &res)) {
std::cerr << "TinyPose Prediction Failed." << std::endl;
return;
} else {
std::cout << "TinyPose Prediction Done!" << std::endl;
}
std::cout << res.Str() << std::endl;
auto tinypose_vis_im =
fastdeploy::vision::VisKeypointDetection(im, res, 0.5);
cv::imwrite("tinypose_vis_result.jpg", tinypose_vis_im);
std::cout << "TinyPose visualized result saved in ./tinypose_vis_result.jpg"
<< std::endl;
}
void GpuInfer(const std::string& tinypose_model_dir,
const std::string& image_file) {
auto option = fastdeploy::RuntimeOption();
option.UseGpu();
auto tinypose_model_file = tinypose_model_dir + sep + "model.pdmodel";
auto tinypose_params_file = tinypose_model_dir + sep + "model.pdiparams";
auto tinypose_config_file = tinypose_model_dir + sep + "infer_cfg.yml";
auto tinypose_model = fastdeploy::vision::keypointdetection::PPTinyPose(
tinypose_model_file, tinypose_params_file, tinypose_config_file, option);
if (!tinypose_model.Initialized()) {
std::cerr << "TinyPose Model Failed to initialize." << std::endl;
return;
}
auto im = cv::imread(image_file);
fastdeploy::vision::KeyPointDetectionResult res;
if (!tinypose_model.Predict(&im, &res)) {
std::cerr << "TinyPose Prediction Failed." << std::endl;
return;
} else {
std::cout << "TinyPose Prediction Done!" << std::endl;
}
std::cout << res.Str() << std::endl;
auto tinypose_vis_im =
fastdeploy::vision::VisKeypointDetection(im, res, 0.5);
cv::imwrite("tinypose_vis_result.jpg", tinypose_vis_im);
std::cout << "TinyPose visualized result saved in ./tinypose_vis_result.jpg"
<< std::endl;
}
void TrtInfer(const std::string& tinypose_model_dir,
const std::string& image_file) {
auto tinypose_model_file = tinypose_model_dir + sep + "model.pdmodel";
auto tinypose_params_file = tinypose_model_dir + sep + "model.pdiparams";
auto tinypose_config_file = tinypose_model_dir + sep + "infer_cfg.yml";
auto tinypose_option = fastdeploy::RuntimeOption();
tinypose_option.UseGpu();
tinypose_option.UsePaddleInferBackend();
// If use original Tensorrt, not Paddle-TensorRT,
// please try `option.UseTrtBackend()`
tinypose_option.paddle_infer_option.enable_trt = true;
tinypose_option.paddle_infer_option.collect_trt_shape = true;
tinypose_option.trt_option.SetShape("image", {1, 3, 256, 192}, {1, 3, 256, 192},
{1, 3, 256, 192});
auto tinypose_model = fastdeploy::vision::keypointdetection::PPTinyPose(
tinypose_model_file, tinypose_params_file, tinypose_config_file,
tinypose_option);
if (!tinypose_model.Initialized()) {
std::cerr << "TinyPose Model Failed to initialize." << std::endl;
return;
}
auto im = cv::imread(image_file);
fastdeploy::vision::KeyPointDetectionResult res;
if (!tinypose_model.Predict(&im, &res)) {
std::cerr << "TinyPose Prediction Failed." << std::endl;
return;
} else {
std::cout << "TinyPose Prediction Done!" << std::endl;
}
std::cout << res.Str() << std::endl;
auto tinypose_vis_im =
fastdeploy::vision::VisKeypointDetection(im, res, 0.5);
cv::imwrite("tinypose_vis_result.jpg", tinypose_vis_im);
std::cout << "TinyPose visualized result saved in ./tinypose_vis_result.jpg"
<< std::endl;
}
int main(int argc, char* argv[]) {
if (argc < 4) {
std::cout << "Usage: infer_demo path/to/pptinypose_model_dir path/to/image "
"run_option, "
"e.g ./infer_demo ./pptinypose_model_dir ./test.jpeg 0"
<< std::endl;
std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
"with gpu; 2: run with gpu and use tensorrt backend;"
<< std::endl;
return -1;
}
if (std::atoi(argv[3]) == 0) {
CpuInfer(argv[1], argv[2]);
} else if (std::atoi(argv[3]) == 1) {
GpuInfer(argv[1], argv[2]);
} else if (std::atoi(argv[3]) == 2) {
TrtInfer(argv[1], argv[2]);
}
return 0;
}
| PaddleDetection/deploy/fastdeploy/cpu-gpu/cpp/pptinypose_infer.cc/0 | {
"file_path": "PaddleDetection/deploy/fastdeploy/cpu-gpu/cpp/pptinypose_infer.cc",
"repo_id": "PaddleDetection",
"token_count": 2193
} | 46 |
import fastdeploy as fd
import cv2
import os
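# Example invocation (the model directories and image below are placeholders;
# point them at your own downloaded PicoDet and PP-TinyPose inference folders):
#   python det_keypoint_unite_infer.py \
#       --det_model_dir ./picodet_model_dir \
#       --tinypose_model_dir ./tinypose_model_dir \
#       --image_file ./test.jpg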
def parse_arguments():
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"--tinypose_model_dir",
required=True,
help="path of paddletinypose model directory")
parser.add_argument(
"--det_model_dir", help="path of paddledetection model directory")
parser.add_argument(
"--image_file", required=True, help="path of test image file.")
return parser.parse_args()
def build_picodet_option(args):
option = fd.RuntimeOption()
option.use_kunlunxin()
return option
def build_tinypose_option(args):
option = fd.RuntimeOption()
option.use_kunlunxin()
return option
args = parse_arguments()
picodet_model_file = os.path.join(args.det_model_dir, "model.pdmodel")
picodet_params_file = os.path.join(args.det_model_dir, "model.pdiparams")
picodet_config_file = os.path.join(args.det_model_dir, "infer_cfg.yml")
# setup runtime
runtime_option = build_picodet_option(args)
det_model = fd.vision.detection.PicoDet(
picodet_model_file,
picodet_params_file,
picodet_config_file,
runtime_option=runtime_option)
tinypose_model_file = os.path.join(args.tinypose_model_dir, "model.pdmodel")
tinypose_params_file = os.path.join(args.tinypose_model_dir, "model.pdiparams")
tinypose_config_file = os.path.join(args.tinypose_model_dir, "infer_cfg.yml")
# setup runtime
runtime_option = build_tinypose_option(args)
tinypose_model = fd.vision.keypointdetection.PPTinyPose(
tinypose_model_file,
tinypose_params_file,
tinypose_config_file,
runtime_option=runtime_option)
# predict
im = cv2.imread(args.image_file)
pipeline = fd.pipeline.PPTinyPose(det_model, tinypose_model)
pipeline.detection_model_score_threshold = 0.5
pipeline_result = pipeline.predict(im)
print("Paddle TinyPose Result:\n", pipeline_result)
# visualize
vis_im = fd.vision.vis_keypoint_detection(
im, pipeline_result, conf_threshold=0.2)
cv2.imwrite("visualized_result.jpg", vis_im)
print("TinyPose visualized result save in ./visualized_result.jpg")
| PaddleDetection/deploy/fastdeploy/kunlunxin/python/det_keypoint_unite/det_keypoint_unite_infer.py/0 | {
"file_path": "PaddleDetection/deploy/fastdeploy/kunlunxin/python/det_keypoint_unite/det_keypoint_unite_infer.py",
"repo_id": "PaddleDetection",
"token_count": 847
} | 47 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import numpy as np
import time
import fastdeploy as fd
# triton_python_backend_utils is available in every Triton Python model. You
# need to use this module to create inference requests and responses. It also
# contains some utility functions for extracting information from model_config
# and converting Triton input/output types to numpy types.
import triton_python_backend_utils as pb_utils
class TritonPythonModel:
"""Your Python model must use the same class name. Every Python model
that is created must have "TritonPythonModel" as the class name.
"""
def initialize(self, args):
"""`initialize` is called only once when the model is being loaded.
Implementing `initialize` function is optional. This function allows
        the model to initialize any state associated with this model.
Parameters
----------
args : dict
Both keys and values are strings. The dictionary keys and values are:
* model_config: A JSON string containing the model configuration
* model_instance_kind: A string containing model instance kind
* model_instance_device_id: A string containing model instance device ID
* model_repository: Model repository path
* model_version: Model version
* model_name: Model name
"""
# You must parse model_config. JSON string is not parsed here
self.model_config = json.loads(args['model_config'])
print("model_config:", self.model_config)
self.input_names = []
for input_config in self.model_config["input"]:
self.input_names.append(input_config["name"])
print("postprocess input names:", self.input_names)
self.output_names = []
self.output_dtype = []
for output_config in self.model_config["output"]:
self.output_names.append(output_config["name"])
dtype = pb_utils.triton_string_to_numpy(output_config["data_type"])
self.output_dtype.append(dtype)
print("postprocess output names:", self.output_names)
self.postprocess_ = fd.vision.detection.PaddleDetPostprocessor()
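        # PaddleDetPostprocessor turns the raw detector outputs received in
        # execute() into DetectionResult objects, which are then serialized to
        # JSON strings before being returned to the Triton client.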
def execute(self, requests):
"""`execute` must be implemented in every Python model. `execute`
function receives a list of pb_utils.InferenceRequest as the only
argument. This function is called when an inference is requested
for this model. Depending on the batching configuration (e.g. Dynamic
Batching) used, `requests` may contain multiple requests. Every
Python model, must create one pb_utils.InferenceResponse for every
pb_utils.InferenceRequest in `requests`. If there is an error, you can
set the error argument when creating a pb_utils.InferenceResponse.
Parameters
----------
requests : list
A list of pb_utils.InferenceRequest
Returns
-------
list
A list of pb_utils.InferenceResponse. The length of this list must
be the same as `requests`
"""
responses = []
for request in requests:
infer_outputs = []
for name in self.input_names:
infer_output = pb_utils.get_input_tensor_by_name(request, name)
if infer_output:
infer_output = infer_output.as_numpy()
infer_outputs.append(infer_output)
results = self.postprocess_.run(infer_outputs)
r_str = fd.vision.utils.fd_result_to_json(results)
r_np = np.array(r_str, dtype=np.object_)
out_tensor = pb_utils.Tensor(self.output_names[0], r_np)
inference_response = pb_utils.InferenceResponse(
output_tensors=[out_tensor, ])
responses.append(inference_response)
return responses
def finalize(self):
"""`finalize` is called only once when the model is being unloaded.
Implementing `finalize` function is optional. This function allows
the model to perform any necessary clean ups before exit.
"""
print('Cleaning up...')
| PaddleDetection/deploy/fastdeploy/serving/models/postprocess/1/model.py/0 | {
"file_path": "PaddleDetection/deploy/fastdeploy/serving/models/postprocess/1/model.py",
"repo_id": "PaddleDetection",
"token_count": 1765
} | 48 |
# PaddleDetection SOPHGO Deployment Example
## 1. Supported Models
SOPHGO currently supports the deployment of the following models:
- [PP-YOLOE series models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
- [PicoDet series models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet)
- [YOLOv8 series models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)
## 2. Prepare and Convert the PP-YOLOE, YOLOv8 or PicoDet Deployment Model
Before deployment on the SOPHGO TPU, the Paddle model must be converted into a bmodel. The steps are as follows:
- To convert a Paddle dynamic-graph model to an ONNX model, please refer to [Exporting PaddleDetection models](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md).
- To convert the ONNX model to a bmodel, please refer to [TPU-MLIR](https://github.com/sophgo/tpu-mlir).
## 3. Model Conversion Example
The conversion process is similar for PP-YOLOE, YOLOv8 and PicoDet. The following uses ppyoloe_crn_s_300e_coco as an example to show how to convert a Paddle model into a SOPHGO-TPU model.
### Export the ONNX Model
```shell
# Export the Paddle model
python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml --output_dir=output_inference -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams
# Convert the Paddle model to ONNX
paddle2onnx --model_dir ppyoloe_crn_s_300e_coco \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_file ppyoloe_crn_s_300e_coco.onnx \
--enable_dev_version True
# Enter the Paddle2ONNX folder and fix the ONNX model input shape
python -m paddle2onnx.optimize --input_model ppyoloe_crn_s_300e_coco.onnx \
--output_model ppyoloe_crn_s_300e_coco.onnx \
--input_shape_dict "{'image':[1,3,640,640]}"
```
### Export the bmodel
Taking the conversion to a BM1684x bmodel as an example, we need to download the [TPU-MLIR](https://github.com/sophgo/tpu-mlir) project; for the installation procedure, see the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
## 4. Installation
``` shell
docker pull sophgo/tpuc_dev:latest
# myname1234 is just an example; any other name can be used
docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
source ./envsetup.sh
./build.sh
```
## 5. Convert the ONNX Model to a bmodel
``` shell
mkdir ppyoloe_crn_s_300e_coco && cd ppyoloe_crn_s_300e_coco
# Download the test image and convert it to npz format
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# Use Python to generate the npz file required for model conversion
import cv2
import numpy as np

im = cv2.imread('000000014439.jpg')
im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
# [640, 640] is the input size of ppyoloe_crn_s_300e_coco
im_scale_y = 640 / float(im.shape[0])
im_scale_x = 640 / float(im.shape[1])
inputs = {}
inputs['image'] = np.array((im, )).astype('float32')
inputs['scale_factor'] = np.array([im_scale_y, im_scale_x]).astype('float32')
np.savez('inputs.npz', image = inputs['image'], scale_factor = inputs['scale_factor'])
# Place the ONNX model file ppyoloe_crn_s_300e_coco.onnx in this directory
mkdir workspace && cd workspace
# Convert the ONNX model to an MLIR model
model_transform.py \
--model_name ppyoloe_crn_s_300e_coco \
--model_def ../ppyoloe_crn_s_300e_coco.onnx \
--input_shapes [[1,3,640,640],[1,2]] \
--keep_aspect_ratio \
--pixel_format rgb \
--output_names p2o.Div.1,p2o.Concat.29 \
--test_input ../inputs.npz \
--test_result ppyoloe_crn_s_300e_coco_top_outputs.npz \
--mlir ppyoloe_crn_s_300e_coco.mlir
```
## 6. Notes
**Since TPU-MLIR does not currently support the post-processing algorithm, the inputs of the post-processing step must be used as the network outputs.**
Specifically, the output_names have to be looked up with [Netron](https://netron.app/): open the ONNX model to be converted in the web page and search for the NonMaxSuppression node.
Check the names of boxes and scores under INPUTS; these two names are the output_names we need.
For example, after visualizing with Netron, you get the following image:

Locate the NonMaxSuppression node marked with the blue box; the two node names marked with the red boxes are p2o.Div.1 and p2o.Concat.29.
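If you prefer to look up the names programmatically instead of opening Netron, a minimal sketch like the one below (assuming the `onnx` Python package is installed) prints the two input tensor names of the NonMaxSuppression node, which are exactly the output_names required above:

```python
import onnx

model = onnx.load("ppyoloe_crn_s_300e_coco.onnx")
for node in model.graph.node:
    if node.op_type == "NonMaxSuppression":
        # The first two inputs of NonMaxSuppression are the boxes and scores
        # tensors; their names are the output_names needed by model_transform.py.
        print("boxes tensor:", node.input[0])
        print("scores tensor:", node.input[1])
```

With the output names confirmed, the MLIR model can then be converted to a bmodel: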
``` bash
# Convert the MLIR model to an F32 bmodel for BM1684x
model_deploy.py \
--mlir ppyoloe_crn_s_300e_coco.mlir \
--quantize F32 \
--chip bm1684x \
--test_input ppyoloe_crn_s_300e_coco_in_f32.npz \
--test_reference ppyoloe_crn_s_300e_coco_top_outputs.npz \
--model ppyoloe_crn_s_300e_coco_1684x_f32.bmodel
```
This finally produces ppyoloe_crn_s_300e_coco_1684x_f32.bmodel, which can run on the BM1684x. If the model needs further acceleration, the ONNX model can be converted to an INT8 bmodel; for the detailed steps, see the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
## 7. Detailed Deployment Examples
- [C++ deployment](./cpp)
- [python部署](./python)
| PaddleDetection/deploy/fastdeploy/sophgo/README.md/0 | {
"file_path": "PaddleDetection/deploy/fastdeploy/sophgo/README.md",
"repo_id": "PaddleDetection",
"token_count": 2828
} | 49 |
crop_thresh: 0.5
visual: True
warmup_frame: 50
MOT:
model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip
tracker_config: deploy/pipeline/config/tracker_config.yml
batch_size: 1
enable: True
ID_BASED_DETACTION:
model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip
batch_size: 8
threshold: 0.6
display_frames: 80
skip_frame_num: 2
enable: True
| PaddleDetection/deploy/pipeline/config/examples/infer_cfg_smoking.yml/0 | {
"file_path": "PaddleDetection/deploy/pipeline/config/examples/infer_cfg_smoking.yml",
"repo_id": "PaddleDetection",
"token_count": 200
} | 50 |
English | [简体中文](pphuman_action.md)
# Action Recognition Module of PP-Human
Action recognition is widely used in intelligent communities, smart cities, and security monitoring. PP-Human provides action recognition modules based on video classification, detection, image classification, and skeleton keypoints.
## Model Zoo
There are multiple available pretrained models including pedestrian detection/tracking, keypoint detection, fighting, calling, smoking and fall detection models. Users can download and use them directly.
| Task | Algorithm | Precision | Inference Speed(ms) | Model Weights |Model Inference and Deployment |
|:----------------------------- |:---------:|:-------------------------:|:-----------------------------------:| :-----------------: |:-----------------------------------------------------------------------------------------:|
| Pedestrian Detection/Tracking | PP-YOLOE | mAP: 56.3 <br> MOTA: 72.0 | Detection: 28ms <br>Tracking:33.1ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
| Calling Recognition | PP-HGNet | Precision Rate: 86.85 | Single Person 2.94ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.pdparams) | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) |
| Smoking Recognition | PP-YOLOE | mAP: 39.7 | Single Person 2.0ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.pdparams) | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) |
| Keypoint Detection | HRNet | AP: 87.1 | Single Person 2.9ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
| Falling Recognition | ST-GCN | Precision Rate: 96.43 | Single Person 2.7ms | - |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
| Fighting Recognition | PP-TSM | Precision Rate: 89.06% | 128ms for a 2sec video | [Link](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.pdparams) | [Link](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) |
Note:
1. The precision of the pedestrian detection/tracking model is obtained by training and testing on [MOT17](https://motchallenge.net/), [CrowdHuman](http://www.crowdhuman.org/), [HIEVE](http://humaninevents.org/) and some business data.
2. The keypoint detection model is trained on [COCO](https://cocodataset.org/), [UAV-Human](https://github.com/SUTDCV/UAV-Human), and some business data, and the precision is obtained on test sets of business data.
3. The falling action recognition model is trained on [NTU-RGB+D](https://rose1.ntu.edu.sg/dataset/actionRecognition/), [UR Fall Detection Dataset](http://fenix.univ.rzeszow.pl/~mkepski/ds/uf.html), and some business data, and the precision is obtained on the testing set of business data.
4. The calling action recognition model is trained and tested on [UAV-Human](https://github.com/SUTDCV/UAV-Human), by using video frames of calling in this dataset.
5. The smoking action recognition model is trained and tested on business data.
6. The fighting action recognition model is trained and tested on 6 public datasets, including Surveillance Camera Fight Dataset, A Dataset for Automatic Violence Detection in Videos, Hockey Fight Detection Dataset, Video Fight Detection Dataset, Real Life Violence Situations Dataset, UBI Abnormal Event Detection Dataset.
7. The inference speed is measured with TensorRT FP16 on NVIDIA T4, and includes the total time of data preprocessing, model inference, and post-processing.
## Skeleton-based action recognition -- falling detection
<div align="center"> <img src="https://user-images.githubusercontent.com/22989727/205582385-08a1b6ae-9b1b-465a-ac25-d6427571eb56.gif" width='600'/><br> <center>Data source and copyright owner:Skyinfor
Technology. Thanks for the provision of actual scenario data, which are only
used for academic research here. </center>
</div>
### Description of Configuration
Parameters related to action recognition in the [config file](../../config/infer_cfg_pphuman.yml) are as follows:
```
SKELETON_ACTION: # Config for skeleton-based action recognition model
model_dir: output_inference/STGCN # Path of the model
    batch_size: 1 # The size of the inference batch. Currently only 1 is supported.
    max_frames: 50 # Number of frames in an action segment. When the time-ordered skeleton keypoints accumulated for a pedestrian ID reach this value, the action type is judged by the action recognition model. Keeping it the same as the training setting gives the best inference results.
    display_frames: 80 # Number of display frames. When the inferred action type is falling down, the duration of the action is displayed next to the ID.
    coord_size: [384, 512] # The size the coordinates are unified to; best kept the same as the training setting.
enable: False # Whether to enable this function
```
## How to Use
1. Download models `Pedestrian Detection/Tracking`, `Keypoint Detection` and `Falling Recognition` from the links in the Model Zoo and unzip them to ```./output_inference```. The models are automatically downloaded by default. If you download them manually, you need to modify the `model_dir` as the model storage path.
2. Currently the action recognition module only supports video input. Set `enable: True` for `SKELETON_ACTION` in infer_cfg_pphuman.yml, and then run the command:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
3. There are two ways to modify the model path:
    - In ```./deploy/pipeline/config/infer_cfg_pphuman.yml```, you can configure different model paths, which works only if you match the keypoint model and the action recognition model with the `KPT` and `SKELETON_ACTION` fields respectively and set the corresponding path of each field to the expected path.
- Add `-o KPT.model_dir=xxx SKELETON_ACTION.model_dir=xxx ` in the command line following the --config to change the model path:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
-o KPT.model_dir=./dark_hrnet_w32_256x192 SKELETON_ACTION.model_dir=./STGCN \
--video_file=test_video.mp4 \
--device=gpu
```
4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
### Introduction to the Solution
1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe).
2. Capture every pedestrian in frames of the input video accordingly by using the coordinate of the detection box.
3. In this strategy, we use the [keypoint detection model](../../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml) to obtain 17 skeleton keypoints. Their sequences and types are identical to those of COCO. For details, please refer to the `COCO dataset` part of [how to prepare keypoint datasets](../../../../docs/tutorials/data/PrepareKeypointDataSet_en.md).
4. Each tracked pedestrian ID accumulates its own skeleton keypoints to form a time-ordered keypoint sequence. When the number of accumulated frames reaches a preset threshold or the track is lost, the action recognition model is applied to judge the action type of the sequence (a simplified sketch of this bookkeeping is given at the end of this section). The current model only supports recognition of falling down, and the relationship between the action type and `class id` is:
```
0: Fall down
1: Others
```
- The falling action recognition model uses [ST-GCN](https://arxiv.org/abs/1801.07455) and employs the [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/model_zoo/recognition/stgcn.md) toolkit to complete model training.
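As a rough illustration of step 4 above, the per-ID bookkeeping of keypoint sequences can be sketched as follows. This is a simplified example rather than the actual PP-Human implementation, and `action_model` stands in for the exported ST-GCN predictor:

```python
from collections import defaultdict, deque

MAX_FRAMES = 50  # matches max_frames in the SKELETON_ACTION config

# One fixed-length buffer of per-frame (17, 2) keypoint arrays for each tracking ID.
kpt_buffers = defaultdict(lambda: deque(maxlen=MAX_FRAMES))

def update_and_maybe_classify(track_id, keypoints, action_model):
    """Append this frame's keypoints; run the classifier once the buffer is full."""
    kpt_buffers[track_id].append(keypoints)
    if len(kpt_buffers[track_id]) == MAX_FRAMES:
        sequence = list(kpt_buffers[track_id])      # time-ordered keypoint sequence
        class_id = action_model.predict(sequence)   # 0: fall down, 1: others
        kpt_buffers[track_id].clear()
        return class_id
    return None
```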
## Image-Classification-Based Action Recognition -- Calling Recognition
<div align="center"> <img src="https://user-images.githubusercontent.com/22989727/205596971-d92fd24e-977a-4742-91cc-ce5b4802473c.gif" width='600'/><br> <center>Data source and copyright owner:Skyinfor
Technology. Thanks for the provision of actual scenario data, which are only
used for academic research here. </center>
</div>
### Description of Configuration
Parameters related to action recognition in the [config file](../../config/infer_cfg_pphuman.yml) are as follows:
```
ID_BASED_CLSACTION: # config for classification-based action recognition model
model_dir: output_inference/PPHGNet_tiny_calling_halfbody # Path of the model
batch_size: 8 # The size of the inference batch
threshold: 0.45 # Threshold for corresponding behavior
display_frames: 80 # The number of display frames. When the corresponding action is detected, the time length of the act will be displayed in the ID.
enable: False # Whether to enable this function
```
### How to Use
1. Download models `Pedestrian Detection/Tracking` and `Calling Recognition` from the links in `Model Zoo` and unzip them to ```./output_inference```. The models are automatically downloaded by default. If you download them manually, you need to modify the `model_dir` as the model storage path.
2. Now the only available input is the video input in the action recognition module. Set the "enable: True" of `ID_BASED_CLSACTION` in infer_cfg_pphuman.yml.
3. Run this command:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
### Introduction to the Solution
1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe).
2. Capture every pedestrian in frames of the input video accordingly by using the coordinate of the detection box.
3. Frame-level pedestrian crops are classified with an image classification model; when a crop is classified as the corresponding behavior, the person is considered to be in that behavior state for a certain period of time (see the sketch below). This task is implemented with [PP-HGNet](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/PP-HGNet.md). In the current version, the behavior of calling is supported, and the relationship between the action type and `class id` is:
```
0: Calling
1: Others
```
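A minimal sketch of this frame-level crop-and-classify strategy is shown below. It is illustrative only, not the PP-Human code; `cls_predictor` is a placeholder for the exported PP-HGNet predictor and is assumed to return a (class_id, score) pair for one crop:

```python
def is_calling(frame, bbox, cls_predictor, threshold=0.45):
    """Crop one tracked pedestrian and check whether the crop is classified as calling."""
    xmin, ymin, xmax, ymax = [int(v) for v in bbox]
    crop = frame[ymin:ymax, xmin:xmax]
    class_id, score = cls_predictor(crop)
    # class 0 is "Calling"; accept it only above the configured threshold
    return class_id == 0 and score >= threshold
```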
## Detection-based Action Recognition -- Smoking Detection
<div align="center"> <img src="https://user-images.githubusercontent.com/22989727/205599300-380c3805-63d6-43cc-9b77-2687b1328d7b.gif" width='600'/><br> <center>Data source and copyright owner:Skyinfor
Technology. Thanks for the provision of actual scenario data, which are only
used for academic research here. </center>
</div>
### Description of Configuration
Parameters related to action recognition in the [config file](../../config/infer_cfg_pphuman.yml) are as follows:
```
ID_BASED_DETACTION: # Config for detection-based action recognition model
model_dir: output_inference/ppyoloe_crn_s_80e_smoking_visdrone # Path of the model
batch_size: 8 # The size of the inference batch
threshold: 0.4 # Threshold for corresponding behavior.
display_frames: 80 # The number of display frames. When the corresponding action is detected, the time length of the act will be displayed in the ID.
enable: False # Whether to enable this function
```
### How to Use
1. Download models `Pedestrian Detection/Tracking` and `Smoking Recognition` from the links in `Model Zoo` and unzip them to ```./output_inference```. The models are automatically downloaded by default. If you download them manually, you need to modify the `model_dir` as the model storage path.
2. Currently the action recognition module only supports video input. Set `enable: True` for `ID_BASED_DETACTION` in infer_cfg_pphuman.yml.
3. Run this command:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md)
### Introduction to the Solution
1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe).
2. Capture every pedestrian in frames of the input video accordingly by using the coordinate of the detection box.
3. We detect the typical object associated with this behavior in frame-level pedestrian images. When the specific target (here, a cigarette) is detected, the person is considered to be in that behavior state for a certain period of time. This task is implemented by [PP-YOLOE](../../../../configs/ppyoloe/). In the current version, the behavior of smoking is supported, and the relationship between the action type and `class id` is:
```
0: Smoking
1: Others
```
## Video-Classification-Based Action Recognition -- Fighting Detection
As surveillance cameras are deployed more and more widely, manually checking whether abnormal behaviors such as fighting occur is time-consuming, labor-intensive, and inefficient, so AI-assisted security is needed. A fight recognition module is integrated into PP-Human to identify whether there is fighting in the video. We provide pre-trained models that users can download and use directly.
| Task | Model | Acc. | Speed(ms) | Weight | Deploy Model |
| ---- | ---- | ---------- | ---- | ---- | ---------- |
| Fighting Detection | PP-TSM | 89.06% | 128ms for a 2-sec video| [Link](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.pdparams) | [Link](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) |
The model is trained on 6 public datasets: Surveillance Camera Fight Dataset, A Dataset for Automatic Violence Detection in Videos, Hockey Fight Detection Dataset, Video Fight Detection Dataset, Real Life Violence Situations Dataset, and UBI Abnormal Event Detection Dataset.
This project focuses on identifying fighting behavior under surveillance cameras. Fighting involves multiple people, while skeleton-based technology is better suited to single-person behavior recognition; in addition, fighting depends strongly on temporal information, so detection- and classification-based schemes are not suitable either. Since the complex background of the monitoring scene, the density of people, lighting, and filming angle may all affect accuracy, this solution uses a video-classification-based method to determine whether there is fighting in the video.
For cases where the camera is far away from the person, accuracy is improved by increasing the resolution of the input image. Because the training data is limited, data augmentation is used to improve the generalization performance of the model.
### Description of Configuration
Parameters related to action recognition in the [config file](../../config/infer_cfg_pphuman.yml) are as follows:
```
VIDEO_ACTION: # Config for video-classification-based action recognition model
model_dir: output_inference/ppTSM # Path of the model
    batch_size: 1 # The size of the inference batch. Currently only 1 is supported.
    frame_len: 8 # Number of sampled frames to accumulate. Inference is executed once this many frames have been collected.
    sample_freq: 7 # Sampling interval: one frame is kept out of every 7 frames.
short_size: 340 # The shortest length for video frame scaling transforms.
target_size: 320 # Target size for input video
enable: False # Whether to enable this function
```
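The sampling behaviour implied by `frame_len` and `sample_freq` can be illustrated with a short sketch (simplified, not the actual PP-Human code; the resulting clips would be fed to the exported PP-TSM classifier):

```python
import cv2

FRAME_LEN = 8    # frame_len: number of sampled frames per clip
SAMPLE_FREQ = 7  # sample_freq: keep one frame out of every 7

def collect_clips(video_path):
    """Yield lists of FRAME_LEN frames, taking every SAMPLE_FREQ-th frame."""
    cap = cv2.VideoCapture(video_path)
    clip, frame_id = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_id % SAMPLE_FREQ == 0:
            clip.append(frame)
            if len(clip) == FRAME_LEN:
                yield clip  # one clip ready for the fight/no-fight classifier
                clip = []
        frame_id += 1
    cap.release()
```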
### How to Use
1. Download model `Fighting Detection` from the links of the above table and unzip it to ```./output_inference```. The models are automatically downloaded by default. If you download them manually, you need to modify the `model_dir` as the model storage path.
2. Modify the file names in the `ppTSM` folder to `model.pdiparams, model.pdiparams.info and model.pdmodel`;
3. Currently the action recognition module only supports video input. Set `enable: True` for `VIDEO_ACTION` in infer_cfg_pphuman.yml.
4. Run this command:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu
```
5. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md).
The result is shown as follows:
<div width="600" align="center">
<img src="https://user-images.githubusercontent.com/22989727/205597198-8b4333b3-6c39-472c-a25c-018dac908867.gif"/>
</div>
Data source and copyright owner: Surveillance Camera Fight Dataset.
### Introduction to the Solution
The current fight recognition model uses [PP-TSM](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/model_zoo/recognition/pp-tsm.md), adapted for this task. For the input video or video stream, frames are extracted at a fixed interval; once the specified number of frames has accumulated, they are fed into the video classification model to determine whether fighting occurs.
## Custom Training
The pretrained models provided above, covering pedestrian detection/tracking, keypoint detection, and smoking, calling and fighting recognition, can be used directly. If you need to train a custom action or optimize model performance, please refer to the documents below.
| Task | Model | Development Document |
| ---- | ---- | -------- |
| pedestrian detection/tracking | PP-YOLOE | [doc](../../../../configs/ppyoloe/README.md#getting-start) |
| keypoint detection | HRNet | [doc](../../../../configs/keypoint/README_en.md#3training-and-testing) |
| action recognition (fall down) | ST-GCN | [doc](../../../../docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec.md) |
| action recognition (smoking) | PP-YOLOE | [doc](../../../../docs/advanced_tutorials/customization/action_recognotion/idbased_det.md) |
| action recognition (calling) | PP-HGNet | [doc](../../../../docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md) |
| action recognition (fighting) | PP-TSM | [doc](../../../../docs/advanced_tutorials/customization/action_recognotion/videobased_rec.md) |
## Reference
```
@inproceedings{stgcn2018aaai,
title = {Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition},
author = {Sijie Yan and Yuanjun Xiong and Dahua Lin},
booktitle = {AAAI},
year = {2018},
}
```
| PaddleDetection/deploy/pipeline/docs/tutorials/pphuman_action_en.md/0 | {
"file_path": "PaddleDetection/deploy/pipeline/docs/tutorials/pphuman_action_en.md",
"repo_id": "PaddleDetection",
"token_count": 6358
} | 51 |
English | [简体中文](ppvehicle_press.md)
# PP-Vehicle Lane Line Pressing Recognition Module
Lane line pressing recognition is widely used in smart city and intelligent transportation applications.
PP-Vehicle integrates a lane line pressing recognition module to identify whether a vehicle violates traffic regulations by driving on lane markings.
| task | algorithm | precision | infer speed | download|
|-----------|------|-----------|----------|---------------|
| Vehicle detection/tracking | PP-YOLOE | mAP 63.9 | 38.67ms | [infer deploy model](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) |
| Lane line segmentation | PP-liteseg | mIou 32.69 | 47 ms | [infer deploy model](https://bj.bcebos.com/v1/paddledet/models/pipeline/pp_lite_stdc2_bdd100k.zip) |
Notes:
1. The prediction speed of the vehicle detection/tracking model is measured on NVIDIA T4 with TensorRT FP16, and includes data preprocessing, model prediction and post-processing.
2. The training and precision test of the vehicle detection/tracking model are based on [VeRi](https://www.v7labs.com/open-datasets/veri-dataset).
3. The prediction speed of the lane line segmentation model is measured on a Tesla P40 with Python inference, and includes data preprocessing, model prediction and post-processing.
4. Lane line model training and precision testing are based on [BDD100K-LaneSeg](https://bdd-data.berkeley.edu/portal.html#download) and [Apollo Scape](http://apolloscape.auto/lane_segmentation.html#to_dataset_href). The label data for the two datasets is available at [Lane_dataset_label](https://bj.bcebos.com/v1/paddledet/data/mot/bdd100k/lane_dataset_label.zip).
## Instructions
### Description of Configuration
The parameters related to vehicle lane line pressing in the [config file](../../config/infer_cfg_ppvehicle.yml) are as follows:
```
VEHICLE_PRESSING:
  enable: True #Whether to enable the function
LANE_SEG:
lane_seg_config: deploy/pipeline/config/lane_seg_config.yml #lane line seg config file
model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/pp_lite_stdc2_bdd100k.zip #model path
```
The parameters related to lane line segmentation in the [lane line seg config file](../../config/lane_seg_config.yml) are as follows:
```
type: PLSLaneseg #Select segmentation Model
PLSLaneseg:
batch_size: 1 #image batch_size
device: gpu #device is gpu or cpu
  filter_flag: True #Whether to filter out near-horizontal lane lines
  horizontal_filtration_degree: 23 #Angle threshold for horizontal filtering: when the difference between the maximum and minimum inclination angles of the segmented lane lines is smaller than this value, no filtering is performed
  horizontal_filtering_threshold: 0.25 #Factor used to separate vertical from horizontal lines: thr = (min_degree + max_degree) * 0.25; each lane line is classified as vertical or horizontal by comparing its inclination angle with thr
```
### How to Use
1. Download the `vehicle detection/tracking` and `lane line segmentation` inference deployment models from the model zoo above and unzip them to `./output_inference`. By default, the models are downloaded automatically; if you download them manually, you need to change the model directory to the path where the models are stored.
2. Set `enable: True` for the `VEHICLE_PRESSING` item in the config file to enable this function.
3. For image input, the start command is as follows (for more command parameter descriptions, please refer to [QUICK_STARTED - Parameter_Description](./PPVehicle_QUICK_STARTED.md)):
```bash
# For single image
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
-o VEHICLE_PRESSING.enable=true
--image_file=test_image.jpg \
--device=gpu
# For folder contains one or multiple images
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
-o VEHICLE_PRESSING.enable=true
--image_dir=images/ \
--device=gpu
```
4. For video input, please run these commands.
```bash
#For single video
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
-o VEHICLE_PRESSING.enable=true
--video_file=test_video.mp4 \
--device=gpu
#For folder contains one or multiple videos
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
--video_dir=test_videos/ \
-o VEHICLE_PRESSING.enable=true
--device=gpu
```
5. There are two ways to modify the model path:
   - 1. Set the path of each model in `./deploy/pipeline/config/infer_cfg_ppvehicle.yml`; for lane line segmentation, modify the path under the `LANE_SEG` field.
   - 2. Directly add `-o` on the command line to override the default model path in the configuration file:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
--video_file=test_video.mp4 \
--device=gpu \
-o VEHICLE_PRESSING.enable=true
LANE_SEG.model_dir=output_inference
```
The result is shown as follows:
<div width="1000" align="center">
<img src="https://raw.githubusercontent.com/LokeZhou/PaddleDetection/develop/deploy/pipeline/docs/images/vehicle_press.gif"/>
</div>
## Features to the Solution
1. The lane line segmentation model uses the super-lightweight segmentation solution from [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg). The training [labels](https://bj.bcebos.com/v1/paddledet/data/mot/bdd100k/lane_dataset_label.zip) are divided into four categories:
  0 Background
  1 Double yellow line
  2 Solid line
  3 Dashed line
Dashed lines are filtered out when judging lane line pressing;
2. Lane lines are obtained by clustering the segmentation results, and near-horizontal lane lines are filtered out by default. If you do not want this, modify `filter_flag` in the [lane line seg config file](../../config/lane_seg_config.yml);
3. Judgment condition for lane line pressing: whether the bottom edge of the vehicle detection box intersects a lane line (see the sketch below);
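A minimal sketch of this intersection test is given below. It is an illustrative example rather than the actual PP-Vehicle implementation: each lane line is treated as a polyline of (x, y) points, and collinear edge cases are ignored:

```python
def segments_intersect(p1, p2, p3, p4):
    """Return True if segment p1-p2 properly intersects segment p3-p4."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def is_pressing(bbox, lane_polyline):
    """bbox is (xmin, ymin, xmax, ymax); lane_polyline is a list of (x, y) points."""
    xmin, ymin, xmax, ymax = bbox
    bottom_left, bottom_right = (xmin, ymax), (xmax, ymax)
    return any(
        segments_intersect(bottom_left, bottom_right,
                           lane_polyline[i], lane_polyline[i + 1])
        for i in range(len(lane_polyline) - 1))
```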
**Performance optimization measures:**
1. Depending on the camera angle, you can decide whether to filter out near-horizontal lane lines according to the actual scene;
| PaddleDetection/deploy/pipeline/docs/tutorials/ppvehicle_press_en.md/0 | {
"file_path": "PaddleDetection/deploy/pipeline/docs/tutorials/ppvehicle_press_en.md",
"repo_id": "PaddleDetection",
"token_count": 2649
} | 52 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import yaml
import glob
from functools import reduce
import cv2
import numpy as np
import math
import paddle
from paddle.inference import Config
from paddle.inference import create_predictor
import sys
# add deploy path of PaddleDetection to sys.path
parent_path = os.path.abspath(os.path.join(__file__, *(['..'])))
sys.path.insert(0, parent_path)
from python.benchmark_utils import PaddleInferBenchmark
from python.preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, WarpAffine
from python.visualize import visualize_attr
from python.utils import argsparser, Timer, get_current_memory_mb
from python.infer import Detector, get_test_images, print_arguments, load_predictor
from PIL import Image, ImageDraw, ImageFont
class AttrDetector(Detector):
"""
Args:
model_dir (str): root path of model.pdiparams, model.pdmodel and infer_cfg.yml
device (str): Choose the device you want to run, it can be: CPU/GPU/XPU/NPU, default is CPU
run_mode (str): mode of running(paddle/trt_fp32/trt_fp16)
batch_size (int): size of pre batch in inference
trt_min_shape (int): min shape for dynamic shape in trt
trt_max_shape (int): max shape for dynamic shape in trt
trt_opt_shape (int): opt shape for dynamic shape in trt
trt_calib_mode (bool): If the model is produced by TRT offline quantitative
calibration, trt_calib_mode need to set True
cpu_threads (int): cpu threads
enable_mkldnn (bool): whether to open MKLDNN
output_dir (str): The path of output
threshold (float): The threshold of score for visualization
"""
def __init__(
self,
model_dir,
device='CPU',
run_mode='paddle',
batch_size=1,
trt_min_shape=1,
trt_max_shape=1280,
trt_opt_shape=640,
trt_calib_mode=False,
cpu_threads=1,
enable_mkldnn=False,
output_dir='output',
threshold=0.5, ):
super(AttrDetector, self).__init__(
model_dir=model_dir,
device=device,
run_mode=run_mode,
batch_size=batch_size,
trt_min_shape=trt_min_shape,
trt_max_shape=trt_max_shape,
trt_opt_shape=trt_opt_shape,
trt_calib_mode=trt_calib_mode,
cpu_threads=cpu_threads,
enable_mkldnn=enable_mkldnn,
output_dir=output_dir,
threshold=threshold, )
@classmethod
def init_with_cfg(cls, args, cfg):
return cls(model_dir=cfg['model_dir'],
batch_size=cfg['batch_size'],
device=args.device,
run_mode=args.run_mode,
trt_min_shape=args.trt_min_shape,
trt_max_shape=args.trt_max_shape,
trt_opt_shape=args.trt_opt_shape,
trt_calib_mode=args.trt_calib_mode,
cpu_threads=args.cpu_threads,
enable_mkldnn=args.enable_mkldnn)
def get_label(self):
return self.pred_config.labels
def postprocess(self, inputs, result):
# postprocess output of predictor
im_results = result['output']
labels = self.pred_config.labels
age_list = ['AgeLess18', 'Age18-60', 'AgeOver60']
direct_list = ['Front', 'Side', 'Back']
bag_list = ['HandBag', 'ShoulderBag', 'Backpack']
upper_list = ['UpperStride', 'UpperLogo', 'UpperPlaid', 'UpperSplice']
lower_list = [
'LowerStripe', 'LowerPattern', 'LongCoat', 'Trousers', 'Shorts',
'Skirt&Dress'
]
glasses_threshold = 0.3
hold_threshold = 0.6
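        # Layout of each per-image score vector as consumed below:
        # 0 hat, 1 glasses, 2-3 short/long sleeve, 4-7 upper-clothing styles,
        # 8-13 lower-clothing styles, 14 boots, 15-17 bag types, 18 holding an
        # object in front, 19-21 age groups, 22 gender (female), 23-25 orientation.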
batch_res = []
for res in im_results:
res = res.tolist()
label_res = []
# gender
gender = 'Female' if res[22] > self.threshold else 'Male'
label_res.append(gender)
# age
age = age_list[np.argmax(res[19:22])]
label_res.append(age)
# direction
direction = direct_list[np.argmax(res[23:])]
label_res.append(direction)
# glasses
glasses = 'Glasses: '
if res[1] > glasses_threshold:
glasses += 'True'
else:
glasses += 'False'
label_res.append(glasses)
# hat
hat = 'Hat: '
if res[0] > self.threshold:
hat += 'True'
else:
hat += 'False'
label_res.append(hat)
# hold obj
hold_obj = 'HoldObjectsInFront: '
if res[18] > hold_threshold:
hold_obj += 'True'
else:
hold_obj += 'False'
label_res.append(hold_obj)
# bag
bag = bag_list[np.argmax(res[15:18])]
bag_score = res[15 + np.argmax(res[15:18])]
bag_label = bag if bag_score > self.threshold else 'No bag'
label_res.append(bag_label)
# upper
upper_label = 'Upper:'
sleeve = 'LongSleeve' if res[3] > res[2] else 'ShortSleeve'
upper_label += ' {}'.format(sleeve)
upper_res = res[4:8]
if np.max(upper_res) > self.threshold:
upper_label += ' {}'.format(upper_list[np.argmax(upper_res)])
label_res.append(upper_label)
# lower
lower_res = res[8:14]
lower_label = 'Lower: '
has_lower = False
for i, l in enumerate(lower_res):
if l > self.threshold:
lower_label += ' {}'.format(lower_list[i])
has_lower = True
if not has_lower:
lower_label += ' {}'.format(lower_list[np.argmax(lower_res)])
label_res.append(lower_label)
# shoe
shoe = 'Boots' if res[14] > self.threshold else 'No boots'
label_res.append(shoe)
batch_res.append(label_res)
result = {'output': batch_res}
return result
def predict(self, repeats=1):
'''
Args:
repeats (int): repeats number for prediction
Returns:
            result (dict): 'output' is a np.ndarray of shape [N, num_attr]
                            holding the per-attribute scores predicted for each
                            image in the batch
'''
# model prediction
for i in range(repeats):
self.predictor.run()
output_names = self.predictor.get_output_names()
output_tensor = self.predictor.get_output_handle(output_names[0])
np_output = output_tensor.copy_to_cpu()
result = dict(output=np_output)
return result
def predict_image(self,
image_list,
run_benchmark=False,
repeats=1,
visual=True):
batch_loop_cnt = math.ceil(float(len(image_list)) / self.batch_size)
results = []
for i in range(batch_loop_cnt):
start_index = i * self.batch_size
end_index = min((i + 1) * self.batch_size, len(image_list))
batch_image_list = image_list[start_index:end_index]
if run_benchmark:
# preprocess
inputs = self.preprocess(batch_image_list) # warmup
self.det_times.preprocess_time_s.start()
inputs = self.preprocess(batch_image_list)
self.det_times.preprocess_time_s.end()
# model prediction
result = self.predict(repeats=repeats) # warmup
self.det_times.inference_time_s.start()
result = self.predict(repeats=repeats)
self.det_times.inference_time_s.end(repeats=repeats)
# postprocess
result_warmup = self.postprocess(inputs, result) # warmup
self.det_times.postprocess_time_s.start()
result = self.postprocess(inputs, result)
self.det_times.postprocess_time_s.end()
self.det_times.img_num += len(batch_image_list)
cm, gm, gu = get_current_memory_mb()
self.cpu_mem += cm
self.gpu_mem += gm
self.gpu_util += gu
else:
# preprocess
self.det_times.preprocess_time_s.start()
inputs = self.preprocess(batch_image_list)
self.det_times.preprocess_time_s.end()
# model prediction
self.det_times.inference_time_s.start()
result = self.predict()
self.det_times.inference_time_s.end()
# postprocess
self.det_times.postprocess_time_s.start()
result = self.postprocess(inputs, result)
self.det_times.postprocess_time_s.end()
self.det_times.img_num += len(batch_image_list)
if visual:
visualize(
batch_image_list, result, output_dir=self.output_dir)
results.append(result)
if visual:
print('Test iter {}'.format(i))
results = self.merge_batch_result(results)
return results
def merge_batch_result(self, batch_result):
if len(batch_result) == 1:
return batch_result[0]
res_key = batch_result[0].keys()
results = {k: [] for k in res_key}
for res in batch_result:
for k, v in res.items():
results[k].extend(v)
return results
def visualize(image_list, batch_res, output_dir='output'):
# visualize the predict result
batch_res = batch_res['output']
for image_file, res in zip(image_list, batch_res):
im = visualize_attr(image_file, [res])
if not os.path.exists(output_dir):
os.makedirs(output_dir)
img_name = os.path.split(image_file)[-1]
out_path = os.path.join(output_dir, img_name)
cv2.imwrite(out_path, im)
print("save result to: " + out_path)
def main():
detector = AttrDetector(
FLAGS.model_dir,
device=FLAGS.device,
run_mode=FLAGS.run_mode,
batch_size=FLAGS.batch_size,
trt_min_shape=FLAGS.trt_min_shape,
trt_max_shape=FLAGS.trt_max_shape,
trt_opt_shape=FLAGS.trt_opt_shape,
trt_calib_mode=FLAGS.trt_calib_mode,
cpu_threads=FLAGS.cpu_threads,
enable_mkldnn=FLAGS.enable_mkldnn,
threshold=FLAGS.threshold,
output_dir=FLAGS.output_dir)
# predict from image
if FLAGS.image_dir is None and FLAGS.image_file is not None:
assert FLAGS.batch_size == 1, "batch_size should be 1, when image_file is not None"
img_list = get_test_images(FLAGS.image_dir, FLAGS.image_file)
detector.predict_image(img_list, FLAGS.run_benchmark, repeats=10)
if not FLAGS.run_benchmark:
detector.det_times.info(average=True)
else:
mems = {
'cpu_rss_mb': detector.cpu_mem / len(img_list),
'gpu_rss_mb': detector.gpu_mem / len(img_list),
'gpu_util': detector.gpu_util * 100 / len(img_list)
}
perf_info = detector.det_times.report(average=True)
model_dir = FLAGS.model_dir
mode = FLAGS.run_mode
model_info = {
'model_name': model_dir.strip('/').split('/')[-1],
'precision': mode.split('_')[-1]
}
data_info = {
'batch_size': FLAGS.batch_size,
'shape': "dynamic_shape",
'data_num': perf_info['img_num']
}
det_log = PaddleInferBenchmark(detector.config, model_info, data_info,
perf_info, mems)
det_log('Attr')
if __name__ == '__main__':
paddle.enable_static()
parser = argsparser()
FLAGS = parser.parse_args()
print_arguments(FLAGS)
FLAGS.device = FLAGS.device.upper()
assert FLAGS.device in ['CPU', 'GPU', 'XPU', 'NPU'
], "device should be CPU, GPU, XPU or NPU"
assert not FLAGS.use_gpu, "use_gpu has been deprecated, please use --device"
main()
| PaddleDetection/deploy/pipeline/pphuman/attr_infer.py/0 | {
"file_path": "PaddleDetection/deploy/pipeline/pphuman/attr_infer.py",
"repo_id": "PaddleDetection",
"token_count": 6571
} | 53 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import sys
import platform
import cv2
import numpy as np
import paddle
from PIL import Image, ImageDraw, ImageFont
import math
from paddle import inference
import time
import ast
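# create_predictor builds a Paddle Inference predictor for the model selected by
# mode (text detection "det" or text recognition "rec") as described in cfg.
# Depending on the runtime flags it enables GPU with an optional TensorRT engine
# (registering per-model dynamic-shape ranges), or CPU with optional MKLDNN, and
# returns the predictor together with its input/output handles and the config.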
def create_predictor(args, cfg, mode):
if mode == "det":
model_dir = cfg['det_model_dir']
else:
model_dir = cfg['rec_model_dir']
if model_dir is None:
print("not find {} model file path {}".format(mode, model_dir))
sys.exit(0)
model_file_path = model_dir + "/inference.pdmodel"
params_file_path = model_dir + "/inference.pdiparams"
if not os.path.exists(model_file_path):
raise ValueError("not find model file path {}".format(model_file_path))
if not os.path.exists(params_file_path):
raise ValueError("not find params file path {}".format(
params_file_path))
config = inference.Config(model_file_path, params_file_path)
batch_size = 1
if args.device == "GPU":
gpu_id = get_infer_gpuid()
if gpu_id is None:
print(
"GPU is not found in current device by nvidia-smi. Please check your device or ignore it if run on jetson."
)
config.enable_use_gpu(500, 0)
precision_map = {
'trt_int8': inference.PrecisionType.Int8,
'trt_fp32': inference.PrecisionType.Float32,
'trt_fp16': inference.PrecisionType.Half
}
min_subgraph_size = 15
if args.run_mode in precision_map.keys():
config.enable_tensorrt_engine(
workspace_size=(1 << 25) * batch_size,
max_batch_size=batch_size,
min_subgraph_size=min_subgraph_size,
precision_mode=precision_map[args.run_mode])
use_dynamic_shape = True
if mode == "det":
min_input_shape = {
"x": [1, 3, 50, 50],
"conv2d_92.tmp_0": [1, 120, 20, 20],
"conv2d_91.tmp_0": [1, 24, 10, 10],
"conv2d_59.tmp_0": [1, 96, 20, 20],
"nearest_interp_v2_1.tmp_0": [1, 256, 10, 10],
"nearest_interp_v2_2.tmp_0": [1, 256, 20, 20],
"conv2d_124.tmp_0": [1, 256, 20, 20],
"nearest_interp_v2_3.tmp_0": [1, 64, 20, 20],
"nearest_interp_v2_4.tmp_0": [1, 64, 20, 20],
"nearest_interp_v2_5.tmp_0": [1, 64, 20, 20],
"elementwise_add_7": [1, 56, 2, 2],
"nearest_interp_v2_0.tmp_0": [1, 256, 2, 2]
}
max_input_shape = {
"x": [1, 3, 1536, 1536],
"conv2d_92.tmp_0": [1, 120, 400, 400],
"conv2d_91.tmp_0": [1, 24, 200, 200],
"conv2d_59.tmp_0": [1, 96, 400, 400],
"nearest_interp_v2_1.tmp_0": [1, 256, 200, 200],
"conv2d_124.tmp_0": [1, 256, 400, 400],
"nearest_interp_v2_2.tmp_0": [1, 256, 400, 400],
"nearest_interp_v2_3.tmp_0": [1, 64, 400, 400],
"nearest_interp_v2_4.tmp_0": [1, 64, 400, 400],
"nearest_interp_v2_5.tmp_0": [1, 64, 400, 400],
"elementwise_add_7": [1, 56, 400, 400],
"nearest_interp_v2_0.tmp_0": [1, 256, 400, 400]
}
opt_input_shape = {
"x": [1, 3, 640, 640],
"conv2d_92.tmp_0": [1, 120, 160, 160],
"conv2d_91.tmp_0": [1, 24, 80, 80],
"conv2d_59.tmp_0": [1, 96, 160, 160],
"nearest_interp_v2_1.tmp_0": [1, 256, 80, 80],
"nearest_interp_v2_2.tmp_0": [1, 256, 160, 160],
"conv2d_124.tmp_0": [1, 256, 160, 160],
"nearest_interp_v2_3.tmp_0": [1, 64, 160, 160],
"nearest_interp_v2_4.tmp_0": [1, 64, 160, 160],
"nearest_interp_v2_5.tmp_0": [1, 64, 160, 160],
"elementwise_add_7": [1, 56, 40, 40],
"nearest_interp_v2_0.tmp_0": [1, 256, 40, 40]
}
min_pact_shape = {
"nearest_interp_v2_26.tmp_0": [1, 256, 20, 20],
"nearest_interp_v2_27.tmp_0": [1, 64, 20, 20],
"nearest_interp_v2_28.tmp_0": [1, 64, 20, 20],
"nearest_interp_v2_29.tmp_0": [1, 64, 20, 20]
}
max_pact_shape = {
"nearest_interp_v2_26.tmp_0": [1, 256, 400, 400],
"nearest_interp_v2_27.tmp_0": [1, 64, 400, 400],
"nearest_interp_v2_28.tmp_0": [1, 64, 400, 400],
"nearest_interp_v2_29.tmp_0": [1, 64, 400, 400]
}
opt_pact_shape = {
"nearest_interp_v2_26.tmp_0": [1, 256, 160, 160],
"nearest_interp_v2_27.tmp_0": [1, 64, 160, 160],
"nearest_interp_v2_28.tmp_0": [1, 64, 160, 160],
"nearest_interp_v2_29.tmp_0": [1, 64, 160, 160]
}
min_input_shape.update(min_pact_shape)
max_input_shape.update(max_pact_shape)
opt_input_shape.update(opt_pact_shape)
elif mode == "rec":
imgH = int(cfg['rec_image_shape'][-2])
min_input_shape = {"x": [1, 3, imgH, 10]}
max_input_shape = {"x": [batch_size, 3, imgH, 2304]}
opt_input_shape = {"x": [batch_size, 3, imgH, 320]}
config.exp_disable_tensorrt_ops(["transpose2"])
elif mode == "cls":
min_input_shape = {"x": [1, 3, 48, 10]}
max_input_shape = {"x": [batch_size, 3, 48, 1024]}
opt_input_shape = {"x": [batch_size, 3, 48, 320]}
else:
use_dynamic_shape = False
if use_dynamic_shape:
config.set_trt_dynamic_shape_info(
min_input_shape, max_input_shape, opt_input_shape)
else:
config.disable_gpu()
if hasattr(args, "cpu_threads"):
config.set_cpu_math_library_num_threads(args.cpu_threads)
else:
# default cpu threads as 10
config.set_cpu_math_library_num_threads(10)
if args.enable_mkldnn:
# cache 10 different shapes for mkldnn to avoid memory leak
config.set_mkldnn_cache_capacity(10)
config.enable_mkldnn()
if args.run_mode == "fp16":
config.enable_mkldnn_bfloat16()
# enable memory optim
config.enable_memory_optim()
config.disable_glog_info()
config.delete_pass("conv_transpose_eltwiseadd_bn_fuse_pass")
config.delete_pass("matmul_transpose_reshape_fuse_pass")
if mode == 'table':
config.delete_pass("fc_fuse_pass") # not supported for table
config.switch_use_feed_fetch_ops(False)
config.switch_ir_optim(True)
# create predictor
predictor = inference.create_predictor(config)
input_names = predictor.get_input_names()
for name in input_names:
input_tensor = predictor.get_input_handle(name)
output_tensors = get_output_tensors(cfg, mode, predictor)
return predictor, input_tensor, output_tensors, config
def get_output_tensors(cfg, mode, predictor):
output_names = predictor.get_output_names()
output_tensors = []
output_name = 'softmax_0.tmp_0'
if output_name in output_names:
return [predictor.get_output_handle(output_name)]
else:
for output_name in output_names:
output_tensor = predictor.get_output_handle(output_name)
output_tensors.append(output_tensor)
return output_tensors
def get_infer_gpuid():
sysstr = platform.system()
if sysstr == "Windows":
return 0
if not paddle.device.is_compiled_with_rocm():
cmd = "env | grep CUDA_VISIBLE_DEVICES"
else:
cmd = "env | grep HIP_VISIBLE_DEVICES"
env_cuda = os.popen(cmd).readlines()
if len(env_cuda) == 0:
return 0
else:
gpu_id = env_cuda[0].strip().split("=")[1]
return int(gpu_id[0])
def draw_e2e_res(dt_boxes, strs, img_path):
src_im = cv2.imread(img_path)
for box, str in zip(dt_boxes, strs):
box = box.astype(np.int32).reshape((-1, 1, 2))
cv2.polylines(src_im, [box], True, color=(255, 255, 0), thickness=2)
cv2.putText(
src_im,
str,
org=(int(box[0, 0, 0]), int(box[0, 0, 1])),
fontFace=cv2.FONT_HERSHEY_COMPLEX,
fontScale=0.7,
color=(0, 255, 0),
thickness=1)
return src_im
def draw_text_det_res(dt_boxes, img_path):
src_im = cv2.imread(img_path)
for box in dt_boxes:
box = np.array(box).astype(np.int32).reshape(-1, 2)
cv2.polylines(src_im, [box], True, color=(255, 255, 0), thickness=2)
return src_im
def resize_img(img, input_size=600):
"""
resize img and limit the longest side of the image to input_size
"""
img = np.array(img)
im_shape = img.shape
im_size_max = np.max(im_shape[0:2])
im_scale = float(input_size) / float(im_size_max)
img = cv2.resize(img, None, None, fx=im_scale, fy=im_scale)
return img
def draw_ocr(image,
boxes,
txts=None,
scores=None,
drop_score=0.5,
font_path="./doc/fonts/simfang.ttf"):
"""
Visualize the results of OCR detection and recognition
args:
image(Image|array): RGB image
boxes(list): boxes with shape(N, 4, 2)
txts(list): the texts
        scores(list): the scores corresponding to txts
        drop_score(float): only results with scores greater than drop_score will be visualized
font_path: the path of font which is used to draw text
return(array):
the visualized img
"""
if scores is None:
scores = [1] * len(boxes)
box_num = len(boxes)
for i in range(box_num):
if scores is not None and (scores[i] < drop_score or
math.isnan(scores[i])):
continue
box = np.reshape(np.array(boxes[i]), [-1, 1, 2]).astype(np.int64)
image = cv2.polylines(np.array(image), [box], True, (255, 0, 0), 2)
if txts is not None:
img = np.array(resize_img(image, input_size=600))
txt_img = text_visual(
txts,
scores,
img_h=img.shape[0],
img_w=600,
threshold=drop_score,
font_path=font_path)
img = np.concatenate([np.array(img), np.array(txt_img)], axis=1)
return img
return image
def draw_ocr_box_txt(image,
boxes,
txts,
scores=None,
drop_score=0.5,
font_path="./doc/simfang.ttf"):
h, w = image.height, image.width
img_left = image.copy()
img_right = Image.new('RGB', (w, h), (255, 255, 255))
import random
random.seed(0)
draw_left = ImageDraw.Draw(img_left)
draw_right = ImageDraw.Draw(img_right)
for idx, (box, txt) in enumerate(zip(boxes, txts)):
if scores is not None and scores[idx] < drop_score:
continue
color = (random.randint(0, 255), random.randint(0, 255),
random.randint(0, 255))
draw_left.polygon(box, fill=color)
draw_right.polygon(
[
box[0][0], box[0][1], box[1][0], box[1][1], box[2][0],
box[2][1], box[3][0], box[3][1]
],
outline=color)
box_height = math.sqrt((box[0][0] - box[3][0])**2 + (box[0][1] - box[3][
1])**2)
box_width = math.sqrt((box[0][0] - box[1][0])**2 + (box[0][1] - box[1][
1])**2)
if box_height > 2 * box_width:
font_size = max(int(box_width * 0.9), 10)
font = ImageFont.truetype(font_path, font_size, encoding="utf-8")
cur_y = box[0][1]
for c in txt:
char_size = font.getsize(c)
draw_right.text(
(box[0][0] + 3, cur_y), c, fill=(0, 0, 0), font=font)
cur_y += char_size[1]
else:
font_size = max(int(box_height * 0.8), 10)
font = ImageFont.truetype(font_path, font_size, encoding="utf-8")
draw_right.text(
[box[0][0], box[0][1]], txt, fill=(0, 0, 0), font=font)
img_left = Image.blend(image, img_left, 0.5)
img_show = Image.new('RGB', (w * 2, h), (255, 255, 255))
img_show.paste(img_left, (0, 0, w, h))
img_show.paste(img_right, (w, 0, w * 2, h))
return np.array(img_show)
def str_count(s):
"""
    Count the display width of a string in units of Chinese characters:
    a single English letter or digit counts as half the width of a
    Chinese character.
    args:
        s(string): the input string
    return(int):
        the width of the string measured in Chinese characters
"""
import string
count_zh = count_pu = 0
s_len = len(s)
en_dg_count = 0
for c in s:
if c in string.ascii_letters or c.isdigit() or c.isspace():
en_dg_count += 1
elif c.isalpha():
count_zh += 1
else:
count_pu += 1
return s_len - math.ceil(en_dg_count / 2)
def text_visual(texts,
scores,
img_h=400,
img_w=600,
threshold=0.,
font_path="./doc/simfang.ttf"):
"""
create new blank img and draw txt on it
args:
        texts(list): the texts to be drawn
scores(list|None): corresponding score of each txt
img_h(int): the height of blank img
img_w(int): the width of blank img
font_path: the path of font which is used to draw text
return(array):
"""
if scores is not None:
assert len(texts) == len(
scores), "The number of txts and corresponding scores must match"
def create_blank_img():
        blank_img = np.ones(shape=[img_h, img_w], dtype=np.uint8) * 255  # uint8 so filling with 255 does not overflow
blank_img[:, img_w - 1:] = 0
blank_img = Image.fromarray(blank_img).convert("RGB")
draw_txt = ImageDraw.Draw(blank_img)
return blank_img, draw_txt
blank_img, draw_txt = create_blank_img()
font_size = 20
txt_color = (0, 0, 0)
font = ImageFont.truetype(font_path, font_size, encoding="utf-8")
gap = font_size + 5
txt_img_list = []
count, index = 1, 0
for idx, txt in enumerate(texts):
index += 1
if scores[idx] < threshold or math.isnan(scores[idx]):
index -= 1
continue
first_line = True
while str_count(txt) >= img_w // font_size - 4:
tmp = txt
txt = tmp[:img_w // font_size - 4]
if first_line:
new_txt = str(index) + ': ' + txt
first_line = False
else:
new_txt = ' ' + txt
draw_txt.text((0, gap * count), new_txt, txt_color, font=font)
txt = tmp[img_w // font_size - 4:]
if count >= img_h // gap - 1:
txt_img_list.append(np.array(blank_img))
blank_img, draw_txt = create_blank_img()
count = 0
count += 1
if first_line:
new_txt = str(index) + ': ' + txt + ' ' + '%.3f' % (scores[idx])
else:
new_txt = " " + txt + " " + '%.3f' % (scores[idx])
draw_txt.text((0, gap * count), new_txt, txt_color, font=font)
# whether add new blank img or not
if count >= img_h // gap - 1 and idx + 1 < len(texts):
txt_img_list.append(np.array(blank_img))
blank_img, draw_txt = create_blank_img()
count = 0
count += 1
txt_img_list.append(np.array(blank_img))
if len(txt_img_list) == 1:
blank_img = np.array(txt_img_list[0])
else:
blank_img = np.concatenate(txt_img_list, axis=1)
return np.array(blank_img)
def base64_to_cv2(b64str):
import base64
data = base64.b64decode(b64str.encode('utf8'))
    data = np.frombuffer(data, np.uint8)  # frombuffer replaces the deprecated np.fromstring
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
def draw_boxes(image, boxes, scores=None, drop_score=0.5):
if scores is None:
scores = [1] * len(boxes)
for (box, score) in zip(boxes, scores):
if score < drop_score:
continue
box = np.reshape(np.array(box), [-1, 1, 2]).astype(np.int64)
image = cv2.polylines(np.array(image), [box], True, (255, 0, 0), 2)
return image
def get_rotate_crop_image(img, points):
'''
img_height, img_width = img.shape[0:2]
left = int(np.min(points[:, 0]))
right = int(np.max(points[:, 0]))
top = int(np.min(points[:, 1]))
bottom = int(np.max(points[:, 1]))
img_crop = img[top:bottom, left:right, :].copy()
points[:, 0] = points[:, 0] - left
points[:, 1] = points[:, 1] - top
'''
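    # Perspective-warp the quadrilateral defined by `points` into an
    # axis-aligned rectangle; if the result is much taller than wide,
    # rotate it by 90 degrees (handles near-vertical crops).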
assert len(points) == 4, "shape of points must be 4*2"
img_crop_width = int(
max(
np.linalg.norm(points[0] - points[1]),
np.linalg.norm(points[2] - points[3])))
img_crop_height = int(
max(
np.linalg.norm(points[0] - points[3]),
np.linalg.norm(points[1] - points[2])))
pts_std = np.float32([[0, 0], [img_crop_width, 0],
[img_crop_width, img_crop_height],
[0, img_crop_height]])
M = cv2.getPerspectiveTransform(points, pts_std)
dst_img = cv2.warpPerspective(
img,
M, (img_crop_width, img_crop_height),
borderMode=cv2.BORDER_REPLICATE,
flags=cv2.INTER_CUBIC)
dst_img_height, dst_img_width = dst_img.shape[0:2]
if dst_img_height * 1.0 / dst_img_width >= 1.5:
dst_img = np.rot90(dst_img)
return dst_img
def check_gpu(use_gpu):
if use_gpu and not paddle.is_compiled_with_cuda():
use_gpu = False
return use_gpu
if __name__ == '__main__':
pass
| PaddleDetection/deploy/pipeline/ppvehicle/vehicle_plateutils.py/0 | {
"file_path": "PaddleDetection/deploy/pipeline/ppvehicle/vehicle_plateutils.py",
"repo_id": "PaddleDetection",
"token_count": 9839
} | 54 |
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <sstream>
// for setprecision
#include <chrono>
#include <iomanip>
#include <iostream>
#include <string>
#include "include/pipeline.h"
#include "include/postprocess.h"
#include "include/predictor.h"
namespace PaddleDetection {
void Pipeline::SetInput(const std::string& input_video) {
input_.push_back(input_video);
}
void Pipeline::ClearInput() {
input_.clear();
stream_.clear();
}
void Pipeline::SelectModel(const std::string& scene,
const bool tiny_obj,
const bool is_mtmct,
const std::string track_model_dir,
const std::string det_model_dir,
const std::string reid_model_dir) {
// model_dir has higher priority
if (!track_model_dir.empty()) {
track_model_dir_ = track_model_dir;
return;
}
if (!det_model_dir.empty() && !reid_model_dir.empty()) {
det_model_dir_ = det_model_dir;
reid_model_dir_ = reid_model_dir;
return;
}
// Single camera model, based on FairMot
if (scene == "pedestrian") {
if (tiny_obj) {
track_model_dir_ = "../pedestrian_track_tiny";
} else {
track_model_dir_ = "../pedestrian_track";
}
  } else if (scene == "vehicle") {
if (tiny_obj) {
track_model_dir_ = "../vehicle_track_tiny";
} else {
track_model_dir_ = "../vehicle_track";
}
} else if (scene == "multiclass") {
if (tiny_obj) {
track_model_dir_ = "../multiclass_track_tiny";
} else {
track_model_dir_ = "../multiclass_track";
}
}
// Multi-camera model, based on PicoDet & LCNet
if (is_mtmct && scene == "pedestrian") {
det_model_dir_ = "../pedestrian_det";
reid_model_dir_ = "../pedestrian_reid";
} else if (is_mtmct && scene == "vehicle") {
det_model_dir_ = "../vehicle_det";
reid_model_dir_ = "../vehicle_reid";
} else if (is_mtmct && scene == "multiclass") {
throw "Multi-camera tracking is not supported in multiclass scene now.";
}
}
void Pipeline::InitPredictor() {
if (track_model_dir_.empty() && det_model_dir_.empty()) {
throw "Predictor must receive track_model or det_model!";
}
if (!track_model_dir_.empty()) {
jde_sct_ = std::make_shared<PaddleDetection::JDEPredictor>(device_,
track_model_dir_,
threshold_,
run_mode_,
gpu_id_,
use_mkldnn_,
cpu_threads_,
trt_calib_mode_);
}
if (!det_model_dir_.empty()) {
sde_sct_ = std::make_shared<PaddleDetection::SDEPredictor>(device_,
det_model_dir_,
reid_model_dir_,
threshold_,
run_mode_,
gpu_id_,
use_mkldnn_,
cpu_threads_,
trt_calib_mode_);
}
}
void Pipeline::Run() {
if (track_model_dir_.empty() && det_model_dir_.empty()) {
LOG(ERROR) << "Pipeline must use SelectModel before Run";
return;
}
if (input_.size() == 0) {
LOG(ERROR) << "Pipeline must use SetInput before Run";
return;
}
if (!track_model_dir_.empty()) {
// single camera
if (input_.size() > 1) {
      throw "Single camera tracking expects a single video, but received %d",
input_.size();
}
PredictMOT(input_[0]);
} else {
// multi cameras
if (input_.size() != 2) {
      throw "Multi camera tracking expects two videos, but received %d",
input_.size();
}
PredictMTMCT(input_);
}
}
void Pipeline::PredictMOT(const std::string& video_path) {
// Open video
cv::VideoCapture capture;
capture.open(video_path.c_str());
if (!capture.isOpened()) {
printf("can not open video : %s\n", video_path.c_str());
return;
}
// Get Video info : resolution, fps
int video_width = static_cast<int>(capture.get(CV_CAP_PROP_FRAME_WIDTH));
int video_height = static_cast<int>(capture.get(CV_CAP_PROP_FRAME_HEIGHT));
int video_fps = static_cast<int>(capture.get(CV_CAP_PROP_FPS));
LOG(INFO) << "----------------------- Input info -----------------------";
LOG(INFO) << "video_width: " << video_width;
LOG(INFO) << "video_height: " << video_height;
LOG(INFO) << "input fps: " << video_fps;
// Create VideoWriter for output
cv::VideoWriter video_out;
std::string video_out_path = output_dir_ + OS_PATH_SEP + "mot_output.mp4";
int fcc = cv::VideoWriter::fourcc('m', 'p', '4', 'v');
video_out.open(video_out_path.c_str(),
fcc, // 0x00000021,
video_fps,
cv::Size(video_width, video_height),
true);
if (!video_out.isOpened()) {
printf("create video writer failed!\n");
return;
}
PaddleDetection::MOTResult result;
std::vector<double> det_times(3);
std::set<int> id_set;
std::set<int> interval_id_set;
std::vector<int> in_id_list;
std::vector<int> out_id_list;
std::map<int, std::vector<float>> prev_center;
Rect entrance = {0,
static_cast<float>(video_height) / 2,
static_cast<float>(video_width),
static_cast<float>(video_height) / 2};
double times;
double total_time;
// Capture all frames and do inference
cv::Mat frame;
int frame_id = 0;
std::vector<std::string> records;
std::vector<std::string> flow_records;
records.push_back("result format: frame_id, track_id, x1, y1, w, h\n");
LOG(INFO) << "------------------- Predict info ------------------------";
while (capture.read(frame)) {
if (frame.empty()) {
break;
}
std::vector<cv::Mat> imgs;
imgs.push_back(frame);
jde_sct_->Predict(imgs, threshold_, &result, &det_times);
frame_id += 1;
total_time = std::accumulate(det_times.begin(), det_times.end(), 0.);
times = total_time / frame_id;
LOG(INFO) << "frame_id: " << frame_id
<< " predict time(s): " << times / 1000;
cv::Mat out_img = PaddleDetection::VisualizeTrackResult(
frame, result, 1000. / times, frame_id);
// TODO(qianhui): the entrance line can be set by users
PaddleDetection::FlowStatistic(result,
frame_id,
secs_interval_,
do_entrance_counting_,
video_fps,
entrance,
&id_set,
&interval_id_set,
&in_id_list,
&out_id_list,
&prev_center,
&flow_records);
if (save_result_) {
PaddleDetection::SaveMOTResult(result, frame_id, &records);
}
// Draw the entrance line
if (do_entrance_counting_) {
float line_thickness = std::max(1, static_cast<int>(video_width / 500.));
cv::Point pt1 = cv::Point(entrance.left, entrance.top);
cv::Point pt2 = cv::Point(entrance.right, entrance.bottom);
cv::line(out_img, pt1, pt2, cv::Scalar(0, 255, 255), line_thickness);
}
video_out.write(out_img);
}
capture.release();
video_out.release();
PrintBenchmarkLog(det_times, frame_id);
LOG(INFO) << "-------------------- Final Output info -------------------";
LOG(INFO) << "Total frame: " << frame_id;
LOG(INFO) << "Visualized output saved as " << video_out_path.c_str();
if (save_result_) {
FILE* fp;
std::string result_output_path =
output_dir_ + OS_PATH_SEP + "mot_output.txt";
if ((fp = fopen(result_output_path.c_str(), "w+")) == NULL) {
printf("Open %s error.\n", result_output_path.c_str());
return;
}
    for (size_t l = 0; l < records.size(); ++l) {
      fprintf(fp, "%s", records[l].c_str());
}
fclose(fp);
LOG(INFO) << "txt result output saved as " << result_output_path.c_str();
result_output_path = output_dir_ + OS_PATH_SEP + "flow_statistic.txt";
if ((fp = fopen(result_output_path.c_str(), "w+")) == NULL) {
      printf("Open %s error.\n", result_output_path.c_str());
return;
}
    for (size_t l = 0; l < flow_records.size(); ++l) {
      fprintf(fp, "%s", flow_records[l].c_str());
}
fclose(fp);
LOG(INFO) << "txt flow statistic saved as " << result_output_path.c_str();
}
}
void Pipeline::PredictMTMCT(const std::vector<std::string> video_path) {
  throw "Not Implemented!";
}
void Pipeline::RunMOTStream(const cv::Mat img,
const int frame_id,
const int video_fps,
const Rect entrance,
cv::Mat out_img,
std::vector<std::string>* records,
std::set<int>* id_set,
std::set<int>* interval_id_set,
std::vector<int>* in_id_list,
std::vector<int>* out_id_list,
std::map<int, std::vector<float>>* prev_center,
std::vector<std::string>* flow_records) {
PaddleDetection::MOTResult result;
std::vector<double> det_times(3);
double times;
double total_time;
LOG(INFO) << "------------------- Predict info ------------------------";
std::vector<cv::Mat> imgs;
imgs.push_back(img);
jde_sct_->Predict(imgs, threshold_, &result, &det_times);
total_time = std::accumulate(det_times.begin(), det_times.end(), 0.);
times = total_time / frame_id;
LOG(INFO) << "frame_id: " << frame_id << " predict time(s): " << times / 1000;
out_img = PaddleDetection::VisualizeTrackResult(
img, result, 1000. / times, frame_id);
// Count total number
// Count in & out number
PaddleDetection::FlowStatistic(result,
frame_id,
secs_interval_,
do_entrance_counting_,
video_fps,
entrance,
id_set,
interval_id_set,
in_id_list,
out_id_list,
prev_center,
flow_records);
PrintBenchmarkLog(det_times, frame_id);
if (save_result_) {
PaddleDetection::SaveMOTResult(result, frame_id, records);
}
}
void Pipeline::RunMTMCTStream(const std::vector<cv::Mat> imgs,
std::vector<std::string>* records) {
  throw "Not Implemented!";
}
void Pipeline::PrintBenchmarkLog(const std::vector<double> det_time,
const int img_num) {
LOG(INFO) << "----------------------- Config info -----------------------";
LOG(INFO) << "runtime_device: " << device_;
LOG(INFO) << "ir_optim: "
<< "True";
LOG(INFO) << "enable_memory_optim: "
<< "True";
int has_trt = run_mode_.find("trt");
if (has_trt >= 0) {
LOG(INFO) << "enable_tensorrt: "
<< "True";
std::string precision = run_mode_.substr(4, 8);
LOG(INFO) << "precision: " << precision;
} else {
LOG(INFO) << "enable_tensorrt: "
<< "False";
LOG(INFO) << "precision: "
<< "fp32";
}
LOG(INFO) << "enable_mkldnn: " << (use_mkldnn_ ? "True" : "False");
LOG(INFO) << "cpu_math_library_num_threads: " << cpu_threads_;
LOG(INFO) << "----------------------- Perf info ------------------------";
LOG(INFO) << "Total number of predicted data: " << img_num
<< " and total time spent(s): "
<< std::accumulate(det_time.begin(), det_time.end(), 0.) / 1000;
int num = std::max(1, img_num);
  LOG(INFO) << "preprocess_time(ms): " << det_time[0] / num
<< ", inference_time(ms): " << det_time[1] / num
<< ", postprocess_time(ms): " << det_time[2] / num;
}
} // namespace PaddleDetection
| PaddleDetection/deploy/pptracking/cpp/src/pipeline.cc/0 | {
"file_path": "PaddleDetection/deploy/pptracking/cpp/src/pipeline.cc",
"repo_id": "PaddleDetection",
"token_count": 6566
} | 55 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This code is based on https://github.com/LCFractal/AIC21-MTMC/tree/main/reid/reid-matching/tools
Note: The following codes are strongly related to zone of the AIC21 test-set S06,
so they can only be used in S06, and can not be used for other MTMCT datasets.
"""
import os
import cv2
import numpy as np
try:
from sklearn.cluster import AgglomerativeClustering
except:
print(
        'Warning: Unable to use MTMCT in PP-Tracking, please install scikit-learn, for example: `pip install scikit-learn`'
)
pass
BBOX_B = 10 / 15
class Zone(object):
def __init__(self, zone_path='datasets/zone'):
        # zone colors: 1 -> white, 2 -> red, 3 -> green, 4 -> blue (0: none)
        # white / red zones: not high speed
        # blue / green zones: high speed
        assert zone_path != '', "Error: zone_path must not be empty!"
zones = {}
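        # each zone image is keyed by its camera id, which is parsed from
        # the last three digits of the image file name stem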
for img_name in os.listdir(zone_path):
camnum = int(img_name.split('.')[0][-3:])
zone_img = cv2.imread(os.path.join(zone_path, img_name))
zones[camnum] = zone_img
self.zones = zones
self.current_cam = 0
def set_cam(self, cam):
self.current_cam = cam
def get_zone(self, bbox):
cx = int((bbox[0] + bbox[2]) / 2)
cy = int((bbox[1] + bbox[3]) / 2)
pix = self.zones[self.current_cam][max(cy - 1, 0), max(cx - 1, 0), :]
zone_num = 0
if pix[0] > 50 and pix[1] > 50 and pix[2] > 50: # w
zone_num = 1
if pix[0] < 50 and pix[1] < 50 and pix[2] > 50: # r
zone_num = 2
if pix[0] < 50 and pix[1] > 50 and pix[2] < 50: # g
zone_num = 3
if pix[0] > 50 and pix[1] < 50 and pix[2] < 50: # b
zone_num = 4
return zone_num
def is_ignore(self, zone_list, frame_list, cid):
        # 0: not in any crossroad; 1: white, 2: red, 3: green, 4: blue
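        # return code: 0 -> keep the tracklet, 1 -> move it to the secondary
        # list, 2 -> drop it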
zs, ze = zone_list[0], zone_list[-1]
fs, fe = frame_list[0], frame_list[-1]
if zs == ze:
            # the tracklet always stays in one zone; decide whether to exclude it
if ze in [1, 2]:
return 2
if zs != 0 and 0 in zone_list:
return 0
if fe - fs > 1500:
return 2
if fs < 2:
if cid in [45]:
if ze in [3, 4]:
return 1
else:
return 2
if fe > 1999:
if cid in [41]:
if ze not in [3]:
return 2
else:
return 0
if fs < 2 or fe > 1999:
if ze in [3, 4]:
return 0
if ze in [3, 4]:
return 1
return 2
else:
            # the tracklet crosses different zones of this camera
if cid in [41, 42, 43, 44, 45, 46]:
                # coming from the road extension, exclude it
if zs == 1 and ze == 2:
return 2
if zs == 2 and ze == 1:
return 2
if cid in [41]:
                # On camera 41, no vehicle comes into camera 42
if (zs in [1, 2]) and ze == 4:
return 2
if zs == 4 and (ze in [1, 2]):
return 2
if cid in [46]:
                # On camera 46, no vehicle comes into camera 45
if (zs in [1, 2]) and ze == 3:
return 2
if zs == 3 and (ze in [1, 2]):
return 2
return 0
def filter_mot(self, mot_list, cid):
new_mot_list = dict()
sub_mot_list = dict()
for tracklet in mot_list:
tracklet_dict = mot_list[tracklet]
frame_list = list(tracklet_dict.keys())
frame_list.sort()
zone_list = []
for f in frame_list:
zone_list.append(tracklet_dict[f]['zone'])
if self.is_ignore(zone_list, frame_list, cid) == 0:
new_mot_list[tracklet] = tracklet_dict
if self.is_ignore(zone_list, frame_list, cid) == 1:
sub_mot_list[tracklet] = tracklet_dict
return new_mot_list
def filter_bbox(self, mot_list, cid):
new_mot_list = dict()
yh = self.zones[cid].shape[0]
for tracklet in mot_list:
tracklet_dict = mot_list[tracklet]
frame_list = list(tracklet_dict.keys())
frame_list.sort()
bbox_list = []
for f in frame_list:
bbox_list.append(tracklet_dict[f]['bbox'])
bbox_x = [b[0] for b in bbox_list]
bbox_y = [b[1] for b in bbox_list]
bbox_w = [b[2] - b[0] for b in bbox_list]
bbox_h = [b[3] - b[1] for b in bbox_list]
new_frame_list = list()
if 0 in bbox_x or 0 in bbox_y:
b0 = [
i for i, f in enumerate(frame_list)
if bbox_x[i] < 5 or bbox_y[i] + bbox_h[i] > yh - 5
]
if len(b0) == len(frame_list):
if cid in [41, 42, 44, 45, 46]:
continue
max_w = max(bbox_w)
max_h = max(bbox_h)
for i, f in enumerate(frame_list):
if bbox_w[i] > max_w * BBOX_B and bbox_h[
i] > max_h * BBOX_B:
new_frame_list.append(f)
else:
l_i, r_i = 0, len(frame_list) - 1
if len(b0) == 0:
continue
if b0[0] == 0:
for i in range(len(b0) - 1):
if b0[i] + 1 == b0[i + 1]:
l_i = b0[i + 1]
else:
break
if b0[-1] == len(frame_list) - 1:
for i in range(len(b0) - 1):
i = len(b0) - 1 - i
if b0[i] - 1 == b0[i - 1]:
r_i = b0[i - 1]
else:
break
max_lw, max_lh = bbox_w[l_i], bbox_h[l_i]
max_rw, max_rh = bbox_w[r_i], bbox_h[r_i]
for i, f in enumerate(frame_list):
if i < l_i:
if bbox_w[i] > max_lw * BBOX_B and bbox_h[
i] > max_lh * BBOX_B:
new_frame_list.append(f)
elif i > r_i:
if bbox_w[i] > max_rw * BBOX_B and bbox_h[
i] > max_rh * BBOX_B:
new_frame_list.append(f)
else:
new_frame_list.append(f)
new_tracklet_dict = dict()
for f in new_frame_list:
new_tracklet_dict[f] = tracklet_dict[f]
new_mot_list[tracklet] = new_tracklet_dict
else:
new_mot_list[tracklet] = tracklet_dict
return new_mot_list
def break_mot(self, mot_list, cid):
new_mot_list = dict()
new_num_tracklets = max(mot_list) + 1
for tracklet in mot_list:
tracklet_dict = mot_list[tracklet]
frame_list = list(tracklet_dict.keys())
frame_list.sort()
zone_list = []
back_tracklet = False
new_zone_f = 0
pre_frame = frame_list[0]
time_break = False
for f in frame_list:
if f - pre_frame > 100:
if cid in [44, 45]:
time_break = True
break
if not cid in [41, 44, 45, 46]:
break
pre_frame = f
new_zone = tracklet_dict[f]['zone']
if len(zone_list) > 0 and zone_list[-1] == new_zone:
continue
if new_zone_f > 1:
if len(zone_list) > 1 and new_zone in zone_list:
back_tracklet = True
zone_list.append(new_zone)
new_zone_f = 0
else:
new_zone_f += 1
if back_tracklet:
new_tracklet_dict = dict()
pre_bbox = -1
pre_arrow = 0
have_break = False
for f in frame_list:
now_bbox = tracklet_dict[f]['bbox']
if type(pre_bbox) == int:
if pre_bbox == -1:
pre_bbox = now_bbox
now_arrow = now_bbox[0] - pre_bbox[0]
if pre_arrow * now_arrow < 0 and len(
new_tracklet_dict) > 15 and not have_break:
new_mot_list[tracklet] = new_tracklet_dict
new_tracklet_dict = dict()
have_break = True
if have_break:
tracklet_dict[f]['id'] = new_num_tracklets
new_tracklet_dict[f] = tracklet_dict[f]
pre_bbox, pre_arrow = now_bbox, now_arrow
if have_break:
new_mot_list[new_num_tracklets] = new_tracklet_dict
new_num_tracklets += 1
else:
new_mot_list[tracklet] = new_tracklet_dict
elif time_break:
new_tracklet_dict = dict()
have_break = False
pre_frame = frame_list[0]
for f in frame_list:
if f - pre_frame > 100:
new_mot_list[tracklet] = new_tracklet_dict
new_tracklet_dict = dict()
have_break = True
new_tracklet_dict[f] = tracklet_dict[f]
pre_frame = f
if have_break:
new_mot_list[new_num_tracklets] = new_tracklet_dict
new_num_tracklets += 1
else:
new_mot_list[tracklet] = new_tracklet_dict
else:
new_mot_list[tracklet] = tracklet_dict
return new_mot_list
def intra_matching(self, mot_list, sub_mot_list):
sub_zone_dict = dict()
new_mot_list = dict()
new_mot_list, new_sub_mot_list = self.do_intra_matching2(mot_list,
sub_mot_list)
return new_mot_list
def do_intra_matching2(self, mot_list, sub_list):
new_zone_dict = dict()
def get_trac_info(tracklet1):
t1_f = list(tracklet1)
t1_f.sort()
t1_fs = t1_f[0]
t1_fe = t1_f[-1]
t1_zs = tracklet1[t1_fs]['zone']
t1_ze = tracklet1[t1_fe]['zone']
t1_boxs = tracklet1[t1_fs]['bbox']
t1_boxe = tracklet1[t1_fe]['bbox']
t1_boxs = [(t1_boxs[2] + t1_boxs[0]) / 2,
(t1_boxs[3] + t1_boxs[1]) / 2]
t1_boxe = [(t1_boxe[2] + t1_boxe[0]) / 2,
(t1_boxe[3] + t1_boxe[1]) / 2]
return t1_fs, t1_fe, t1_zs, t1_ze, t1_boxs, t1_boxe
for t1id in sub_list:
tracklet1 = sub_list[t1id]
if tracklet1 == -1:
continue
t1_fs, t1_fe, t1_zs, t1_ze, t1_boxs, t1_boxe = get_trac_info(
tracklet1)
sim_dict = dict()
for t2id in mot_list:
tracklet2 = mot_list[t2id]
t2_fs, t2_fe, t2_zs, t2_ze, t2_boxs, t2_boxe = get_trac_info(
tracklet2)
if t1_ze == t2_zs:
if abs(t2_fs - t1_fe) < 5 and abs(t2_boxe[0] - t1_boxs[
0]) < 50 and abs(t2_boxe[1] - t1_boxs[1]) < 50:
t1_feat = tracklet1[t1_fe]['feat']
t2_feat = tracklet2[t2_fs]['feat']
sim_dict[t2id] = np.matmul(t1_feat, t2_feat)
if t1_zs == t2_ze:
if abs(t2_fe - t1_fs) < 5 and abs(t2_boxs[0] - t1_boxe[
0]) < 50 and abs(t2_boxs[1] - t1_boxe[1]) < 50:
t1_feat = tracklet1[t1_fs]['feat']
t2_feat = tracklet2[t2_fe]['feat']
sim_dict[t2id] = np.matmul(t1_feat, t2_feat)
if len(sim_dict) > 0:
max_sim = 0
max_id = 0
for t2id in sim_dict:
if sim_dict[t2id] > max_sim:
                        max_sim = sim_dict[t2id]
max_id = t2id
if max_sim > 0.5:
t2 = mot_list[max_id]
for t1f in tracklet1:
if t1f not in t2:
tracklet1[t1f]['id'] = max_id
t2[t1f] = tracklet1[t1f]
mot_list[max_id] = t2
sub_list[t1id] = -1
return mot_list, sub_list
def do_intra_matching(self, sub_zone_dict, sub_zone):
new_zone_dict = dict()
id_list = list(sub_zone_dict)
id2index = dict()
for index, id in enumerate(id_list):
id2index[id] = index
def get_trac_info(tracklet1):
t1_f = list(tracklet1)
t1_f.sort()
t1_fs = t1_f[0]
t1_fe = t1_f[-1]
t1_zs = tracklet1[t1_fs]['zone']
t1_ze = tracklet1[t1_fe]['zone']
t1_boxs = tracklet1[t1_fs]['bbox']
t1_boxe = tracklet1[t1_fe]['bbox']
t1_boxs = [(t1_boxs[2] + t1_boxs[0]) / 2,
(t1_boxs[3] + t1_boxs[1]) / 2]
t1_boxe = [(t1_boxe[2] + t1_boxe[0]) / 2,
(t1_boxe[3] + t1_boxe[1]) / 2]
return t1_fs, t1_fe, t1_zs, t1_ze, t1_boxs, t1_boxe
sim_matrix = np.zeros([len(id_list), len(id_list)])
for t1id in sub_zone_dict:
tracklet1 = sub_zone_dict[t1id]
t1_fs, t1_fe, t1_zs, t1_ze, t1_boxs, t1_boxe = get_trac_info(
tracklet1)
t1_feat = tracklet1[t1_fe]['feat']
for t2id in sub_zone_dict:
if t1id == t2id:
continue
tracklet2 = sub_zone_dict[t2id]
t2_fs, t2_fe, t2_zs, t2_ze, t2_boxs, t2_boxe = get_trac_info(
tracklet2)
if t1_zs != t1_ze and t2_ze != t2_zs or t1_fe > t2_fs:
continue
if abs(t1_boxe[0] - t2_boxs[0]) > 50 or abs(t1_boxe[1] -
t2_boxs[1]) > 50:
continue
if t2_fs - t1_fe > 5:
continue
t2_feat = tracklet2[t2_fs]['feat']
sim_matrix[id2index[t1id], id2index[t2id]] = np.matmul(t1_feat,
t2_feat)
sim_matrix[id2index[t2id], id2index[t1id]] = np.matmul(t1_feat,
t2_feat)
sim_matrix = 1 - sim_matrix
cluster_labels = AgglomerativeClustering(
n_clusters=None,
distance_threshold=0.7,
affinity='precomputed',
linkage='complete').fit_predict(sim_matrix)
new_zone_dict = dict()
label2id = dict()
for index, label in enumerate(cluster_labels):
tracklet = sub_zone_dict[id_list[index]]
if label not in label2id:
new_id = tracklet[list(tracklet)[0]]
new_tracklet = dict()
else:
new_id = label2id[label]
new_tracklet = new_zone_dict[label2id[label]]
for tf in tracklet:
tracklet[tf]['id'] = new_id
new_tracklet[tf] = tracklet[tf]
new_zone_dict[label] = new_tracklet
return new_zone_dict
| PaddleDetection/deploy/pptracking/python/mot/mtmct/zone.py/0 | {
"file_path": "PaddleDetection/deploy/pptracking/python/mot/mtmct/zone.py",
"repo_id": "PaddleDetection",
"token_count": 10471
} | 56 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from scipy.optimize import linear_sum_assignment
from collections import abc, defaultdict
import cv2
import numpy as np
import math
import paddle
import paddle.nn as nn
from keypoint_preprocess import get_affine_mat_kernel, get_affine_transform
class HrHRNetPostProcess(object):
"""
HrHRNet postprocess contain:
1) get topk keypoints in the output heatmap
2) sample the tagmap's value corresponding to each of the topk coordinate
3) match different joints to combine to some people with Hungary algorithm
4) adjust the coordinate by +-0.25 to decrease error std
5) salvage missing joints by check positivity of heatmap - tagdiff_norm
Args:
max_num_people (int): max number of people support in postprocess
heat_thresh (float): value of topk below this threshhold will be ignored
tag_thresh (float): coord's value sampled in tagmap below this threshold belong to same people for init
inputs(list[heatmap]): the output list of model, [heatmap, heatmap_maxpool, tagmap], heatmap_maxpool used to get topk
original_height, original_width (float): the original image size
"""
def __init__(self, max_num_people=30, heat_thresh=0.2, tag_thresh=1.):
self.max_num_people = max_num_people
self.heat_thresh = heat_thresh
self.tag_thresh = tag_thresh
def lerp(self, j, y, x, heatmap):
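        # Shift each keypoint by +/-0.25 pixel toward the larger of its two
        # neighboring heatmap values to reduce the quantization error of the
        # argmax; the extra 0.5 maps integer indices to pixel centers.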
H, W = heatmap.shape[-2:]
left = np.clip(x - 1, 0, W - 1)
right = np.clip(x + 1, 0, W - 1)
up = np.clip(y - 1, 0, H - 1)
down = np.clip(y + 1, 0, H - 1)
offset_y = np.where(heatmap[j, down, x] > heatmap[j, up, x], 0.25,
-0.25)
offset_x = np.where(heatmap[j, y, right] > heatmap[j, y, left], 0.25,
-0.25)
return offset_y + 0.5, offset_x + 0.5
def __call__(self, heatmap, tagmap, heat_k, inds_k, original_height,
original_width):
N, J, H, W = heatmap.shape
assert N == 1, "only support batch size 1"
heatmap = heatmap[0]
tagmap = tagmap[0]
heats = heat_k[0]
inds_np = inds_k[0]
y = inds_np // W
x = inds_np % W
tags = tagmap[np.arange(J)[None, :].repeat(self.max_num_people),
y.flatten(), x.flatten()].reshape(J, -1, tagmap.shape[-1])
coords = np.stack((y, x), axis=2)
# threshold
mask = heats > self.heat_thresh
# cluster
cluster = defaultdict(lambda: {
'coords': np.zeros((J, 2), dtype=np.float32),
'scores': np.zeros(J, dtype=np.float32),
'tags': []
})
for jid, m in enumerate(mask):
num_valid = m.sum()
if num_valid == 0:
continue
valid_inds = np.where(m)[0]
valid_tags = tags[jid, m, :]
if len(cluster) == 0: # initialize
for i in valid_inds:
tag = tags[jid, i]
key = tag[0]
cluster[key]['tags'].append(tag)
cluster[key]['scores'][jid] = heats[jid, i]
cluster[key]['coords'][jid] = coords[jid, i]
continue
candidates = list(cluster.keys())[:self.max_num_people]
centroids = [
np.mean(
cluster[k]['tags'], axis=0) for k in candidates
]
num_clusters = len(centroids)
# shape is (num_valid, num_clusters, tag_dim)
dist = valid_tags[:, None, :] - np.array(centroids)[None, ...]
l2_dist = np.linalg.norm(dist, ord=2, axis=2)
# modulate dist with heat value, see `use_detection_val`
cost = np.round(l2_dist) * 100 - heats[jid, m, None]
# pad the cost matrix, otherwise new pose are ignored
if num_valid > num_clusters:
cost = np.pad(cost, ((0, 0), (0, num_valid - num_clusters)),
'constant',
constant_values=((0, 0), (0, 1e-10)))
rows, cols = linear_sum_assignment(cost)
for y, x in zip(rows, cols):
tag = tags[jid, y]
if y < num_valid and x < num_clusters and \
l2_dist[y, x] < self.tag_thresh:
key = candidates[x] # merge to cluster
else:
key = tag[0] # initialize new cluster
cluster[key]['tags'].append(tag)
cluster[key]['scores'][jid] = heats[jid, y]
cluster[key]['coords'][jid] = coords[jid, y]
# shape is [k, J, 2] and [k, J]
pose_tags = np.array([cluster[k]['tags'] for k in cluster])
pose_coords = np.array([cluster[k]['coords'] for k in cluster])
pose_scores = np.array([cluster[k]['scores'] for k in cluster])
valid = pose_scores > 0
pose_kpts = np.zeros((pose_scores.shape[0], J, 3), dtype=np.float32)
if valid.sum() == 0:
return pose_kpts, pose_kpts
# refine coords
valid_coords = pose_coords[valid].astype(np.int32)
y = valid_coords[..., 0].flatten()
x = valid_coords[..., 1].flatten()
_, j = np.nonzero(valid)
offsets = self.lerp(j, y, x, heatmap)
pose_coords[valid, 0] += offsets[0]
pose_coords[valid, 1] += offsets[1]
# mean score before salvage
mean_score = pose_scores.mean(axis=1)
pose_kpts[valid, 2] = pose_scores[valid]
# salvage missing joints
if True:
for pid, coords in enumerate(pose_coords):
tag_mean = np.array(pose_tags[pid]).mean(axis=0)
norm = np.sum((tagmap - tag_mean)**2, axis=3)**0.5
score = heatmap - np.round(norm) # (J, H, W)
flat_score = score.reshape(J, -1)
max_inds = np.argmax(flat_score, axis=1)
max_scores = np.max(flat_score, axis=1)
salvage_joints = (pose_scores[pid] == 0) & (max_scores > 0)
if salvage_joints.sum() == 0:
continue
y = max_inds[salvage_joints] // W
x = max_inds[salvage_joints] % W
offsets = self.lerp(salvage_joints.nonzero()[0], y, x, heatmap)
y = y.astype(np.float32) + offsets[0]
x = x.astype(np.float32) + offsets[1]
pose_coords[pid][salvage_joints, 0] = y
pose_coords[pid][salvage_joints, 1] = x
pose_kpts[pid][salvage_joints, 2] = max_scores[salvage_joints]
pose_kpts[..., :2] = transpred(pose_coords[..., :2][..., ::-1],
original_height, original_width,
min(H, W))
return pose_kpts, mean_score
def transpred(kpts, h, w, s):
trans, _ = get_affine_mat_kernel(h, w, s, inv=True)
return warp_affine_joints(kpts[..., :2].copy(), trans)
def warp_affine_joints(joints, mat):
"""Apply affine transformation defined by the transform matrix on the
joints.
Args:
joints (np.ndarray[..., 2]): Origin coordinate of joints.
mat (np.ndarray[3, 2]): The affine matrix.
Returns:
matrix (np.ndarray[..., 2]): Result coordinate of joints.
"""
joints = np.array(joints)
shape = joints.shape
joints = joints.reshape(-1, 2)
return np.dot(np.concatenate(
(joints, joints[:, 0:1] * 0 + 1), axis=1),
mat.T).reshape(shape)
class HRNetPostProcess(object):
def __init__(self, use_dark=True):
self.use_dark = use_dark
def flip_back(self, output_flipped, matched_parts):
assert output_flipped.ndim == 4,\
'output_flipped should be [batch_size, num_joints, height, width]'
output_flipped = output_flipped[:, :, :, ::-1]
for pair in matched_parts:
tmp = output_flipped[:, pair[0], :, :].copy()
output_flipped[:, pair[0], :, :] = output_flipped[:, pair[1], :, :]
output_flipped[:, pair[1], :, :] = tmp
return output_flipped
def get_max_preds(self, heatmaps):
"""get predictions from score maps
Args:
heatmaps: numpy.ndarray([batch_size, num_joints, height, width])
Returns:
preds: numpy.ndarray([batch_size, num_joints, 2]), keypoints coords
maxvals: numpy.ndarray([batch_size, num_joints, 2]), the maximum confidence of the keypoints
"""
assert isinstance(heatmaps,
np.ndarray), 'heatmaps should be numpy.ndarray'
assert heatmaps.ndim == 4, 'batch_images should be 4-ndim'
batch_size = heatmaps.shape[0]
num_joints = heatmaps.shape[1]
width = heatmaps.shape[3]
heatmaps_reshaped = heatmaps.reshape((batch_size, num_joints, -1))
idx = np.argmax(heatmaps_reshaped, 2)
maxvals = np.amax(heatmaps_reshaped, 2)
maxvals = maxvals.reshape((batch_size, num_joints, 1))
idx = idx.reshape((batch_size, num_joints, 1))
preds = np.tile(idx, (1, 1, 2)).astype(np.float32)
preds[:, :, 0] = (preds[:, :, 0]) % width
preds[:, :, 1] = np.floor((preds[:, :, 1]) / width)
pred_mask = np.tile(np.greater(maxvals, 0.0), (1, 1, 2))
pred_mask = pred_mask.astype(np.float32)
preds *= pred_mask
return preds, maxvals
def gaussian_blur(self, heatmap, kernel):
border = (kernel - 1) // 2
batch_size = heatmap.shape[0]
num_joints = heatmap.shape[1]
height = heatmap.shape[2]
width = heatmap.shape[3]
for i in range(batch_size):
for j in range(num_joints):
origin_max = np.max(heatmap[i, j])
dr = np.zeros((height + 2 * border, width + 2 * border))
dr[border:-border, border:-border] = heatmap[i, j].copy()
dr = cv2.GaussianBlur(dr, (kernel, kernel), 0)
heatmap[i, j] = dr[border:-border, border:-border].copy()
heatmap[i, j] *= origin_max / np.max(heatmap[i, j])
return heatmap
def dark_parse(self, hm, coord):
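        # DARK-style sub-pixel refinement: approximate the (log) heatmap around
        # the peak with a second-order Taylor expansion and shift the coordinate
        # by -H^-1 * gradient, where H is the 2x2 Hessian estimated with finite
        # differences.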
heatmap_height = hm.shape[0]
heatmap_width = hm.shape[1]
px = int(coord[0])
py = int(coord[1])
if 1 < px < heatmap_width - 2 and 1 < py < heatmap_height - 2:
dx = 0.5 * (hm[py][px + 1] - hm[py][px - 1])
dy = 0.5 * (hm[py + 1][px] - hm[py - 1][px])
dxx = 0.25 * (hm[py][px + 2] - 2 * hm[py][px] + hm[py][px - 2])
dxy = 0.25 * (hm[py+1][px+1] - hm[py-1][px+1] - hm[py+1][px-1] \
+ hm[py-1][px-1])
dyy = 0.25 * (
hm[py + 2 * 1][px] - 2 * hm[py][px] + hm[py - 2 * 1][px])
derivative = np.matrix([[dx], [dy]])
hessian = np.matrix([[dxx, dxy], [dxy, dyy]])
if dxx * dyy - dxy**2 != 0:
hessianinv = hessian.I
offset = -hessianinv * derivative
offset = np.squeeze(np.array(offset.T), axis=0)
coord += offset
return coord
def dark_postprocess(self, hm, coords, kernelsize):
"""
refer to https://github.com/ilovepose/DarkPose/lib/core/inference.py
"""
hm = self.gaussian_blur(hm, kernelsize)
hm = np.maximum(hm, 1e-10)
hm = np.log(hm)
for n in range(coords.shape[0]):
for p in range(coords.shape[1]):
coords[n, p] = self.dark_parse(hm[n][p], coords[n][p])
return coords
def get_final_preds(self, heatmaps, center, scale, kernelsize=3):
"""the highest heatvalue location with a quarter offset in the
direction from the highest response to the second highest response.
Args:
heatmaps (numpy.ndarray): The predicted heatmaps
center (numpy.ndarray): The boxes center
scale (numpy.ndarray): The scale factor
Returns:
preds: numpy.ndarray([batch_size, num_joints, 2]), keypoints coords
maxvals: numpy.ndarray([batch_size, num_joints, 1]), the maximum confidence of the keypoints
"""
coords, maxvals = self.get_max_preds(heatmaps)
heatmap_height = heatmaps.shape[2]
heatmap_width = heatmaps.shape[3]
if self.use_dark:
coords = self.dark_postprocess(heatmaps, coords, kernelsize)
else:
for n in range(coords.shape[0]):
for p in range(coords.shape[1]):
hm = heatmaps[n][p]
px = int(math.floor(coords[n][p][0] + 0.5))
py = int(math.floor(coords[n][p][1] + 0.5))
if 1 < px < heatmap_width - 1 and 1 < py < heatmap_height - 1:
diff = np.array([
hm[py][px + 1] - hm[py][px - 1],
hm[py + 1][px] - hm[py - 1][px]
])
coords[n][p] += np.sign(diff) * .25
preds = coords.copy()
# Transform back
for i in range(coords.shape[0]):
preds[i] = transform_preds(coords[i], center[i], scale[i],
[heatmap_width, heatmap_height])
return preds, maxvals
def __call__(self, output, center, scale):
preds, maxvals = self.get_final_preds(output, center, scale)
return np.concatenate(
(preds, maxvals), axis=-1), np.mean(
maxvals, axis=1)
def transform_preds(coords, center, scale, output_size):
target_coords = np.zeros(coords.shape)
trans = get_affine_transform(center, scale * 200, 0, output_size, inv=1)
for p in range(coords.shape[0]):
target_coords[p, 0:2] = affine_transform(coords[p, 0:2], trans)
return target_coords
def affine_transform(pt, t):
new_pt = np.array([pt[0], pt[1], 1.]).T
new_pt = np.dot(t, new_pt)
return new_pt[:2]
def translate_to_ori_images(keypoint_result, batch_records):
kpts = keypoint_result['keypoint']
scores = keypoint_result['score']
kpts[..., 0] += batch_records[:, 0:1]
kpts[..., 1] += batch_records[:, 1:2]
return kpts, scores
| PaddleDetection/deploy/python/keypoint_postprocess.py/0 | {
"file_path": "PaddleDetection/deploy/python/keypoint_postprocess.py",
"repo_id": "PaddleDetection",
"token_count": 7551
} | 57 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import requests
import json
import base64
import os
import argparse
parser = argparse.ArgumentParser(description="args for paddleserving")
parser.add_argument("--image_dir", type=str)
parser.add_argument("--image_file", type=str)
parser.add_argument("--http_port", type=int, default=18093)
parser.add_argument("--service_name", type=str, default="ppdet")
args = parser.parse_args()
def get_test_images(infer_dir, infer_img):
"""
Get image path list in TEST mode
"""
assert infer_img is not None or infer_dir is not None, \
"--image_file or --image_dir should be set"
assert infer_img is None or os.path.isfile(infer_img), \
"{} is not a file".format(infer_img)
assert infer_dir is None or os.path.isdir(infer_dir), \
"{} is not a directory".format(infer_dir)
# infer_img has a higher priority
if infer_img and os.path.isfile(infer_img):
return [infer_img]
images = set()
infer_dir = os.path.abspath(infer_dir)
assert os.path.isdir(infer_dir), \
"infer_dir {} is not a directory".format(infer_dir)
exts = ['jpg', 'jpeg', 'png', 'bmp']
exts += [ext.upper() for ext in exts]
for ext in exts:
images.update(glob.glob('{}/*.{}'.format(infer_dir, ext)))
images = list(images)
assert len(images) > 0, "no image found in {}".format(infer_dir)
print("Found {} inference images in total.".format(len(images)))
return images
if __name__ == "__main__":
url = f"http://127.0.0.1:{args.http_port}/{args.service_name}/prediction"
logid = 10000
img_list = get_test_images(args.image_dir, args.image_file)
for img_file in img_list:
with open(img_file, 'rb') as file:
image_data = file.read()
# base64 encode
image = base64.b64encode(image_data).decode('utf8')
data = {"key": ["image_0"], "value": [image], "logid": logid}
# send requests
r = requests.post(url=url, data=json.dumps(data))
print(r.json())
| PaddleDetection/deploy/serving/python/pipeline_http_client.py/0 | {
"file_path": "PaddleDetection/deploy/serving/python/pipeline_http_client.py",
"repo_id": "PaddleDetection",
"token_count": 989
} | 58 |
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// reference from https://github.com/RangiLyu/nanodet/tree/main/demo_openvino
#include "picodet_openvino.h"
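// fast_exp approximates exp(x) with Schraudolph's trick: it writes
// 2^(x * log2(e)) directly into the exponent field of an IEEE-754 float
// (1.4426950409 is log2(e), 126.93490512 absorbs the exponent bias), which is
// accurate enough for the sigmoid/softmax used below.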
inline float fast_exp(float x) {
union {
uint32_t i;
float f;
} v{};
v.i = (1 << 23) * (1.4426950409 * x + 126.93490512f);
return v.f;
}
inline float sigmoid(float x) { return 1.0f / (1.0f + fast_exp(-x)); }
template <typename _Tp>
int activation_function_softmax(const _Tp* src, _Tp* dst, int length) {
const _Tp alpha = *std::max_element(src, src + length);
_Tp denominator{0};
for (int i = 0; i < length; ++i) {
dst[i] = fast_exp(src[i] - alpha);
denominator += dst[i];
}
for (int i = 0; i < length; ++i) {
dst[i] /= denominator;
}
return 0;
}
PicoDet::PicoDet(const char* model_path) {
InferenceEngine::Core ie;
InferenceEngine::CNNNetwork model = ie.ReadNetwork(model_path);
// prepare input settings
InferenceEngine::InputsDataMap inputs_map(model.getInputsInfo());
input_name_ = inputs_map.begin()->first;
InferenceEngine::InputInfo::Ptr input_info = inputs_map.begin()->second;
// prepare output settings
InferenceEngine::OutputsDataMap outputs_map(model.getOutputsInfo());
for (auto& output_info : outputs_map) {
output_info.second->setPrecision(InferenceEngine::Precision::FP32);
}
// get network
network_ = ie.LoadNetwork(model, "CPU");
infer_request_ = network_.CreateInferRequest();
}
PicoDet::~PicoDet() {}
void PicoDet::preprocess(cv::Mat& image, InferenceEngine::Blob::Ptr& blob) {
int img_w = image.cols;
int img_h = image.rows;
int channels = 3;
InferenceEngine::MemoryBlob::Ptr mblob =
InferenceEngine::as<InferenceEngine::MemoryBlob>(blob);
if (!mblob) {
THROW_IE_EXCEPTION
<< "We expect blob to be inherited from MemoryBlob in matU8ToBlob, "
<< "but by fact we were not able to cast inputBlob to MemoryBlob";
}
auto mblobHolder = mblob->wmap();
float* blob_data = mblobHolder.as<float*>();
for (size_t c = 0; c < channels; c++) {
for (size_t h = 0; h < img_h; h++) {
for (size_t w = 0; w < img_w; w++) {
blob_data[c * img_w * img_h + h * img_w + w] =
(float)image.at<cv::Vec3b>(h, w)[c];
}
}
}
}
std::vector<BoxInfo> PicoDet::detect(cv::Mat image,
float score_threshold,
float nms_threshold) {
InferenceEngine::Blob::Ptr input_blob = infer_request_.GetBlob(input_name_);
preprocess(image, input_blob);
// do inference
infer_request_.Infer();
// get output
std::vector<std::vector<BoxInfo>> results;
results.resize(this->num_class_);
for (const auto& head_info : this->heads_info_) {
const InferenceEngine::Blob::Ptr dis_pred_blob =
infer_request_.GetBlob(head_info.dis_layer);
const InferenceEngine::Blob::Ptr cls_pred_blob =
infer_request_.GetBlob(head_info.cls_layer);
auto mdis_pred =
InferenceEngine::as<InferenceEngine::MemoryBlob>(dis_pred_blob);
auto mdis_pred_holder = mdis_pred->rmap();
const float* dis_pred = mdis_pred_holder.as<const float*>();
auto mcls_pred =
InferenceEngine::as<InferenceEngine::MemoryBlob>(cls_pred_blob);
auto mcls_pred_holder = mcls_pred->rmap();
const float* cls_pred = mcls_pred_holder.as<const float*>();
this->decode_infer(
cls_pred, dis_pred, head_info.stride, score_threshold, results);
}
std::vector<BoxInfo> dets;
for (int i = 0; i < (int)results.size(); i++) {
this->nms(results[i], nms_threshold);
for (auto& box : results[i]) {
dets.push_back(box);
}
}
return dets;
}
void PicoDet::decode_infer(const float*& cls_pred,
const float*& dis_pred,
int stride,
float threshold,
std::vector<std::vector<BoxInfo>>& results) {
int feature_h = input_size_ / stride;
int feature_w = input_size_ / stride;
for (int idx = 0; idx < feature_h * feature_w; idx++) {
int row = idx / feature_w;
int col = idx % feature_w;
float score = 0;
int cur_label = 0;
for (int label = 0; label < num_class_; label++) {
if (cls_pred[idx * num_class_ + label] > score) {
score = cls_pred[idx * num_class_ + label];
cur_label = label;
}
}
if (score > threshold) {
const float* bbox_pred = dis_pred + idx * (reg_max_ + 1) * 4;
results[cur_label].push_back(
this->disPred2Bbox(bbox_pred, cur_label, score, col, row, stride));
}
}
}
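// Decode one feature-map location: each of the four box sides is predicted as
// a discrete distribution over (reg_max_ + 1) bins (GFL-style regression); take
// the softmax expectation, scale it by the stride and clip to the input size.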
BoxInfo PicoDet::disPred2Bbox(
const float*& dfl_det, int label, float score, int x, int y, int stride) {
float ct_x = (x + 0.5) * stride;
float ct_y = (y + 0.5) * stride;
std::vector<float> dis_pred;
dis_pred.resize(4);
for (int i = 0; i < 4; i++) {
float dis = 0;
float* dis_after_sm = new float[reg_max_ + 1];
activation_function_softmax(
dfl_det + i * (reg_max_ + 1), dis_after_sm, reg_max_ + 1);
for (int j = 0; j < reg_max_ + 1; j++) {
dis += j * dis_after_sm[j];
}
dis *= stride;
dis_pred[i] = dis;
delete[] dis_after_sm;
}
float xmin = (std::max)(ct_x - dis_pred[0], .0f);
float ymin = (std::max)(ct_y - dis_pred[1], .0f);
float xmax = (std::min)(ct_x + dis_pred[2], (float)this->input_size_);
float ymax = (std::min)(ct_y + dis_pred[3], (float)this->input_size_);
return BoxInfo{xmin, ymin, xmax, ymax, score, label};
}
void PicoDet::nms(std::vector<BoxInfo>& input_boxes, float NMS_THRESH) {
std::sort(input_boxes.begin(), input_boxes.end(), [](BoxInfo a, BoxInfo b) {
return a.score > b.score;
});
std::vector<float> vArea(input_boxes.size());
for (int i = 0; i < int(input_boxes.size()); ++i) {
vArea[i] = (input_boxes.at(i).x2 - input_boxes.at(i).x1 + 1) *
(input_boxes.at(i).y2 - input_boxes.at(i).y1 + 1);
}
for (int i = 0; i < int(input_boxes.size()); ++i) {
for (int j = i + 1; j < int(input_boxes.size());) {
float xx1 = (std::max)(input_boxes[i].x1, input_boxes[j].x1);
float yy1 = (std::max)(input_boxes[i].y1, input_boxes[j].y1);
float xx2 = (std::min)(input_boxes[i].x2, input_boxes[j].x2);
float yy2 = (std::min)(input_boxes[i].y2, input_boxes[j].y2);
float w = (std::max)(float(0), xx2 - xx1 + 1);
float h = (std::max)(float(0), yy2 - yy1 + 1);
float inter = w * h;
float ovr = inter / (vArea[i] + vArea[j] - inter);
if (ovr >= NMS_THRESH) {
input_boxes.erase(input_boxes.begin() + j);
vArea.erase(vArea.begin() + j);
} else {
j++;
}
}
}
}
| PaddleDetection/deploy/third_engine/demo_openvino_kpts/picodet_openvino.cpp/0 | {
"file_path": "PaddleDetection/deploy/third_engine/demo_openvino_kpts/picodet_openvino.cpp",
"repo_id": "PaddleDetection",
"token_count": 3131
} | 59 |
简体中文 | [English](./idbased_clas_en.md)
# Development of Classification-based Action Recognition with Human ID
## Environment Preparation
The ID-based classification solution uses [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) for model training. Please follow the [installation guide](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/installation/install_paddleclas.md) to set up the environment for the subsequent model training and usage workflow.
## Data Preparation
The classification-based action recognition solution directly classifies image frames extracted from videos, so the model training workflow is the same as that of a common image classification model.
### Dataset Download
The phone-calling action recognition model is trained on the public dataset [UAV-Human](https://github.com/SUTDCV/UAV-Human). Please fill in the dataset application materials through this link to obtain the download link.
The `UAVHuman/ActionRecognition/RGBVideos` directory contains the RGB videos of this dataset, and the file name of each video is its annotation.
### Training and Testing Image Processing
In the video file name, the field related to action recognition is the `A` field (i.e. action), from which we can locate the action categories we expect to recognize.
- Positive sample videos: taking phone-calling as an example, we only need to find the files containing `A024`.
- Negative sample videos: all videos other than the target action.
Since converting videos into images produces a lot of redundancy, for positive sample videos we sample one frame every 8 frames and use a pedestrian detection model to turn them into half-body images (taking the upper half of the detection box, i.e. `img = img[:H/2, :, :]`). Images sampled from positive sample videos are treated as positive samples, and images sampled from negative sample videos as negative samples. A minimal sampling sketch is shown below.
**Note**: the positive sample videos do not consist entirely of the phone-calling action; there are some redundant actions at the beginning and end of each video that need to be removed.
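The following is a minimal sketch of the sampling and half-body cropping step described above. It assumes you pass in a `detect_fn` callable that wraps your pedestrian detector and returns `[x1, y1, x2, y2]` boxes; the function name, sampling interval default and output layout are illustrative assumptions, not part of the original pipeline.
```python
import os
import cv2

def sample_half_body_crops(video_path, out_dir, detect_fn, interval=8):
    """Sample one frame every `interval` frames and save half-body crops."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    frame_id = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_id % interval == 0:
            # detect_fn stands in for your pedestrian detector's inference call
            for i, (x1, y1, x2, y2) in enumerate(detect_fn(frame)):
                crop = frame[int(y1):int(y2), int(x1):int(x2), :]
                half_body = crop[:crop.shape[0] // 2, :, :]  # keep the upper half
                name = "{}_{:06d}_{}.jpg".format(
                    os.path.splitext(os.path.basename(video_path))[0],
                    frame_id, i)
                cv2.imwrite(os.path.join(out_dir, name), half_body)
        frame_id += 1
    cap.release()
```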
### Annotation File Preparation
The classification-based action recognition solution relies on [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) for model training. To train a model with this solution, prepare the images to be recognized and the corresponding annotation files. Prepare the data according to the [PaddleClas dataset format description](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/data_preparation/classification_dataset.md#1-%E6%95%B0%E6%8D%AE%E9%9B%86%E6%A0%BC%E5%BC%8F%E8%AF%B4%E6%98%8E). A sample annotation file is shown below, where `0` and `1` are the classes the images belong to:
```
# Each line separates the image path and the label with a space
train/000001.jpg 0
train/000002.jpg 0
train/000003.jpg 1
...
```
In addition, the label file `phone_label_list.txt` maps class indices to concrete class names:
```
0 make_a_phone_call # class 0
1 normal # class 1
```
After finishing the above, place the data under the `dataset` directory with the following structure:
```
data/
├── images # all images
├── phone_label_list.txt # label file
├── phone_train_list.txt # training list with images and their classes
└── phone_val_list.txt # validation list with images and their classes
```
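For reference, below is one possible way to generate the training and validation lists. It assumes the positive and negative half-body crops are kept in two staging folders such as `images/call` and `images/normal`; the folder names, split ratio and output file names here are illustrative assumptions, not requirements of PaddleClas.
```python
import glob
import os
import random

def write_phone_lists(pos_dir="images/call", neg_dir="images/normal",
                      train_ratio=0.9, seed=0):
    # class 0: make_a_phone_call, class 1: normal (matches phone_label_list.txt)
    samples = [(p, 0) for p in glob.glob(os.path.join(pos_dir, "*.jpg"))]
    samples += [(p, 1) for p in glob.glob(os.path.join(neg_dir, "*.jpg"))]
    random.Random(seed).shuffle(samples)
    split = int(len(samples) * train_ratio)
    for name, subset in [("phone_train_list.txt", samples[:split]),
                         ("phone_val_list.txt", samples[split:])]:
        with open(name, "w") as f:
            f.writelines("{} {}\n".format(path, label) for path, label in subset)
```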
## Model Optimization
### Detection and Tracking Model Optimization
The performance of the classification-based action recognition model depends on the upstream detection and tracking results. If pedestrian positions cannot be accurately detected in the actual scene, or person IDs cannot be correctly assigned across frames, the action recognition part will be limited. If you encounter these problems in practice, please refer to [Customized Object Detection](../detection.md) and [Customized Multi-Object Tracking](../pphuman_mot.md) to optimize the detection/tracking model.
### Half-body Prediction
For the phone-calling action, the action can be distinguished from the upper body alone, so during training and prediction the full-body pedestrian image is replaced with a half-body image.
## Adding New Actions
### Data Preparation
Referring to the content described above, finish the data preparation and place it under `{root of PaddleClas}/dataset`:
```
data/
├── images # all images
├── label_list.txt # label file
├── train_list.txt # training list with images and their classes
└── val_list.txt # validation list with images and their classes
```
The training and validation lists are as follows:
```
# Each line separates the image path and the label with a space
train/000001.jpg 0
train/000002.jpg 0
train/000003.jpg 1
train/000004.jpg 2 # simply fill in the class index for a newly added class
...
```
The names of the extended classes also need to be added to `label_list.txt`:
```
0 make_a_phone_call # class 0
1 Your New Action # class 1
...
n normal # class n
```
### Configuration File Settings
A [training configuration file](https://github.com/PaddlePaddle/PaddleClas/blob/develop/ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml) has already been integrated into PaddleClas. The key settings to pay attention to are as follows:
```yaml
# model architecture
Arch:
name: PPHGNet_tiny
  class_num: 2 # the number of classes after adding new ones
...
# Set image_root and cls_label_path correctly so that image_root + the image paths in cls_label_path resolve to valid image files
DataLoader:
Train:
dataset:
name: ImageNetDataset
image_root: ./dataset/
cls_label_path: ./dataset/phone_train_list_halfbody.txt
...
Infer:
infer_imgs: docs/images/inference_deployment/whl_demo.jpg
batch_size: 1
transforms:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- ToCHWImage:
PostProcess:
name: Topk
topk: 2 # 显示topk的数量,不要超过类别总数
class_id_map_file: dataset/phone_label_list.txt # 修改后的label_list.txt路径
```
### 模型训练及评估
#### 模型训练
通过如下命令启动训练:
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \
-o Arch.pretrained=True
```
其中 `Arch.pretrained` 为 `True`表示使用预训练权重帮助训练。
#### 模型评估
训练好模型之后,可以通过以下命令实现对模型指标的评估。
```bash
python3 tools/eval.py \
-c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \
-o Global.pretrained_model=output/PPHGNet_tiny/best_model
```
其中 `-o Global.pretrained_model="output/PPHGNet_tiny/best_model"` 指定了当前最佳权重所在的路径,如果指定其他权重,只需替换对应的路径即可。
### 模型导出
模型导出的详细介绍请参考[这里](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/en/inference_deployment/export_model_en.md#2-export-classification-model)
可以参考以下步骤实现:
```bash
python tools/export_model.py \
    -c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \
    -o Global.pretrained_model=./output/PPHGNet_tiny/best_model \
    -o Global.save_inference_dir=./output_inference/PPHGNet_tiny_calling_halfbody
```
然后将导出的模型重命名,并加入配置文件,以适配PP-Human的使用。
```bash
cd ./output_inference/PPHGNet_tiny_calling_halfbody
mv inference.pdiparams model.pdiparams
mv inference.pdiparams.info model.pdiparams.info
mv inference.pdmodel model.pdmodel
# 下载预测配置文件
wget https://bj.bcebos.com/v1/paddledet/models/pipeline/infer_configs/PPHGNet_tiny_calling_halfbody/infer_cfg.yml
```
至此,即可使用PP-Human进行实际预测了。
### 自定义行为输出
基于人体id的分类的行为识别方案中,将任务转化为对应人物的图像进行图片级别的分类。对应分类的类型最终即视为当前阶段的行为。因此在完成自定义模型的训练及部署的基础上,还需要将分类模型结果转化为最终的行为识别结果作为输出,并修改可视化的显示结果。
#### 转换为行为识别结果
请对应修改[后处理函数](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pphuman/action_infer.py#L509)。
核心代码为:
```python
# 确定分类模型的最高分数输出结果
cls_id_res = 1
cls_score_res = -1.0
for cls_id in range(len(cls_result[idx])):
score = cls_result[idx][cls_id]
if score > cls_score_res:
cls_id_res = cls_id
cls_score_res = score
# Current now, class 0 is positive, class 1 is negative.
if cls_id_res == 1 or (cls_id_res == 0 and
cls_score_res < self.threshold):
# 如果分类结果不是目标行为或是置信度未达到阈值,则根据历史结果确定当前帧的行为
history_cls, life_remain, history_score = self.result_history.get(
tracker_id, [1, self.frame_life, -1.0])
cls_id_res = history_cls
cls_score_res = 1 - cls_score_res
life_remain -= 1
if life_remain <= 0 and tracker_id in self.result_history:
del (self.result_history[tracker_id])
elif tracker_id in self.result_history:
self.result_history[tracker_id][1] = life_remain
else:
self.result_history[
tracker_id] = [cls_id_res, life_remain, cls_score_res]
else:
# 分类结果属于目标行为,则使用将该结果,并记录到历史结果中
self.result_history[
tracker_id] = [cls_id_res, self.frame_life, cls_score_res]
...
```
#### 修改可视化输出
目前基于ID的行为识别,是根据行为识别的结果及预定义的类别名称进行展示的。详细逻辑请见[此处](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043)。如果自定义的行为需要修改为其他的展示名称,请对应修改此处,以正确输出对应结果。
| PaddleDetection/docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md/0 | {
"file_path": "PaddleDetection/docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md",
"repo_id": "PaddleDetection",
"token_count": 5888
} | 60 |
[简体中文](./pphuman_mtmct.md) | English
# Customized Multi-Target Multi-Camera Tracking Module of PP-Human
## Data Preparation
### Data Format
Multi-target multi-camera tracking (MTMCT) is achieved with a pedestrian ReID model. The model is trained as a multi-class classification task, and the features just before the classification softmax head are used as the retrieval feature vector.
Therefore the data format is the same as that of a multi-class classification task. Each pedestrian is assigned a unique id: different pedestrians get different ids, while the same pedestrian keeps the same id across different images.
For example, images 0001.jpg, 0003.jpg are the same person, 0002.jpg, 0004.jpg are different pedestrians. Then the labeled ids are.
```
0001.jpg 00001
0002.jpg 00002
0003.jpg 00001
0004.jpg 00003
...
```
### Data Annotation
After understanding the annotation format above, we can start annotating data. The essence of the annotation is that each single-person image gets an annotation item that records the id assigned to that pedestrian.
For example:
For an original picture
1) Use bounding boxes to annotate the position of each person in the picture.
2) Each bounding box (corresponding to one person) carries an integer id attribute. For example, the person in 0001.jpg in the above example corresponds to id: 1.
After the annotation is completed, use the detection boxes to crop each person into a single image, so that each cropped image is associated with its id attribute. You can also crop the single-person images first and annotate them afterwards; the result is the same.
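A minimal sketch of this cropping step is shown below (for illustration only; it assumes each annotation item provides a pixel-coordinate box `[x1, y1, x2, y2]` and an integer `person_id`, which is not a fixed API of this project):
```python
import os
import cv2

def crop_persons(image_path, annotations, out_dir):
    """Crop every annotated person into a single image and keep its id."""
    img = cv2.imread(image_path)
    base = os.path.splitext(os.path.basename(image_path))[0]
    pairs = []
    for i, ann in enumerate(annotations):  # ann: {'bbox': [x1, y1, x2, y2], 'person_id': int}
        x1, y1, x2, y2 = map(int, ann['bbox'])
        crop = img[y1:y2, x1:x2]
        out_name = '{}_{:02d}.jpg'.format(base, i)
        cv2.imwrite(os.path.join(out_dir, out_name), crop)
        pairs.append((out_name, ann['person_id']))  # reuse later for the training list
    return pairs
```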
## Model Training
Once the data is annotated, it can be used for model training to complete the optimization of the customized model.
There are two main steps to implement: 1) organize the data and annotated data into a training format. 2) modify the configuration file to start training.
### Training data format
The training data consists of the images used for training and a training list bounding_box_train.txt, the location of which is specified in the training configuration, with the following example placement.
```
REID/
|-- data Training image folder
|-- 00001.jpg
|-- 00002.jpg
|-- 0000x.jpg
`-- bounding_box_train.txt List of training data
```
The bounding_box_train.txt file contains the names of all training images (file paths relative to the root path), each followed by one id annotation value.
Each line represents a person's image and id annotation result. The format is as follows:
```
0001.jpg 00001
0002.jpg 00002
0003.jpg 00001
0004.jpg 00003
```
Note: The image name and the annotation value are separated by a Tab [\t] character. The format must be correct, otherwise parsing will fail.
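The list can then be written with an explicit Tab separator, for example (a sketch; `pairs` is assumed to be a list of `(image_name, person_id)` tuples such as those collected while cropping):
```python
def write_train_list(pairs, list_path):
    """Write 'image<TAB>id' lines for bounding_box_train.txt."""
    with open(list_path, 'w') as f:
        for image_name, person_id in pairs:
            f.write('{}\t{:05d}\n'.format(image_name, person_id))
```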
### Modify the configuration to start training
First, execute the following command to download the training code (for environment issues, please refer to [Install_PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/installation/install_paddleclas_en.md)):
```
git clone https://github.com/PaddlePaddle/PaddleClas
```
You need to change the following configuration items in the configuration file [softmax_triplet_with_center.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/develop/ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml):
```
Head:
  name: "FC"
  embedding_size: *feat_dim
  class_num: &class_num 751  # Total number of pedestrian ids
DataLoader:
  Train:
    dataset:
      name: "Market1501"
      image_root: "./dataset/"  # Training image root path
      cls_label_path: "bounding_box_train"  # Training file list
  Eval:
    Query:
      dataset:
        name: "Market1501"
        image_root: "./dataset/"  # Evaluation image root path
        cls_label_path: "query"  # List of evaluation files
```
Note:
1. The image_root path joined with the relative image path in bounding_box_train.txt must give the full path where each image is stored.
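A quick way to verify this (a sketch, assuming the Tab-separated list format described above) is to join the two parts and check that the files exist:
```python
import os

def verify_list(image_root, list_path):
    with open(list_path) as f:
        names = [line.split('\t')[0] for line in f if line.strip()]
    missing = [n for n in names if not os.path.exists(os.path.join(image_root, n))]
    print('{} images missing out of {}'.format(len(missing), len(names)))
```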
Then run the following command to start the training.
```
#Multi-card training
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml
#Single card training
python3 tools/train.py \
-c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml
```
After the training is completed, you may run the following commands for performance evaluation:
```
#Multi-card evaluation
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/eval.py \
-c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml \
-o Global.pretrained_model=./output/strong_baseline/best_model
#Single card evaluation
python3 tools/eval.py \
-c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml \
-o Global.pretrained_model=./output/strong_baseline/best_model
```
### Model Export
Use the following command to export the trained model as an inference deployment model.
```
python3 tools/export_model.py \
-c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml \
-o Global.pretrained_model=./output/strong_baseline/best_model \
-o Global.save_inference_dir=deploy/models/strong_baseline_inference
```
After exporting the model, download the [infer_cfg.yml](https://bj.bcebos.com/v1/paddledet/models/pipeline/REID/infer_cfg.yml) file into the newly exported model folder `strong_baseline_inference`.
Change the model path `model_dir` in the configuration file `infer_cfg_pphuman.yml` in PP-Human and set `enable`.
```
REID:
model_dir: [YOUR_DEPLOY_MODEL_DIR]/strong_baseline_inference/
enable: True
```
Now, the model is ready.
| PaddleDetection/docs/advanced_tutorials/customization/pphuman_mtmct_en.md/0 | {
"file_path": "PaddleDetection/docs/advanced_tutorials/customization/pphuman_mtmct_en.md",
"repo_id": "PaddleDetection",
"token_count": 1844
} | 61 |
[English](GETTING_STARTED.md) | 简体中文
# 30分钟快速上手PaddleDetection
PaddleDetection作为成熟的目标检测开发套件,提供了从数据准备、模型训练、模型评估、模型导出到模型部署的全流程。在这个章节里面,我们以路标检测数据集为例,提供快速上手PaddleDetection的流程。
## 1 安装
关于安装配置运行环境,请参考[安装指南](INSTALL_cn.md)
在本演示案例中,假定用户将PaddleDetection的代码克隆并放置在`/home/paddle`目录中。用户执行的命令操作均在`/home/paddle/PaddleDetection`目录下完成
## 2 准备数据
目前PaddleDetection支持COCO、VOC、WiderFace、MOT四种数据格式。
- 首先按照[准备数据文档](./data/PrepareDetDataSet.md) 准备数据。
- 然后设置`configs/datasets`中相应的coco或voc等数据配置文件中的数据路径。
- 在本项目中,我们使用路标识别数据集
```bash
python dataset/roadsign_voc/download_roadsign_voc.py
```
- 下载后的数据格式为
```
├── download_roadsign_voc.py
├── annotations
│ ├── road0.xml
│ ├── road1.xml
│ | ...
├── images
│ ├── road0.png
│ ├── road1.png
│ | ...
├── label_list.txt
├── train.txt
├── valid.txt
```
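可以用一小段脚本快速确认数据规模与类别数(示意代码,假设 `train.txt`、`valid.txt` 每行对应一个样本,`label_list.txt` 每行一个类别名):
```python
import os

root = 'dataset/roadsign_voc'
for name in ['train.txt', 'valid.txt', 'label_list.txt']:
    with open(os.path.join(root, name)) as f:
        lines = [l for l in f if l.strip()]
    print(name, len(lines))
```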
## 3 配置文件改动和说明
我们使用`configs/yolov3/yolov3_mobilenet_v1_roadsign`配置进行训练。
在静态图版本下,一个模型往往可以通过两个配置文件(一个主配置文件、一个reader的读取配置)实现,在PaddleDetection 2.0后续版本,采用了模块解耦设计,用户可以组合配置模块实现检测器,并可自由修改覆盖各模块配置,如下图所示
<center>
<img src="../images/roadsign_yml.png" width="500" >
</center>
<br><center>配置文件摘要</center></br>
从上图看到`yolov3_mobilenet_v1_roadsign.yml`配置需要依赖其他的配置文件。在该例子中需要依赖:
```bash
roadsign_voc.yml
runtime.yml
optimizer_40e.yml
yolov3_mobilenet_v1.yml
yolov3_reader.yml
--------------------------------------
yolov3_mobilenet_v1_roadsign.yml 文件入口
roadsign_voc.yml 主要说明了训练数据和验证数据的路径
runtime.yml 主要说明了公共的运行参数,比如是否使用GPU、每多少个epoch存储checkpoint等
optimizer_40e.yml 主要说明了学习率和优化器的配置
yolov3_mobilenet_v1.yml 主要说明了模型和主干网络的情况
yolov3_reader.yml 主要说明了数据读取器配置,如batch size、并发加载子进程数等,同时包含读取后预处理操作,如resize、数据增强等
```
<center><img src="../images/yaml_show.png" width="1000" ></center>
<br><center>配置文件结构说明</center></br>
### 修改配置文件说明
* 关于数据的路径修改说明
在修改配置文件中,用户如何实现自定义数据集是非常关键的一步,如何定义数据集请参考[如何自定义数据集](https://aistudio.baidu.com/aistudio/projectdetail/1917140)
* 默认学习率是适配多GPU训练(8x GPU),若使用单GPU训练,须对应调整学习率(例如,除以8)
* 更多使用问题,请参考[FAQ](FAQ)
## 4 训练
PaddleDetection提供了单卡/多卡训练模式,满足用户多种训练需求
* GPU单卡训练
```bash
export CUDA_VISIBLE_DEVICES=0 #windows和Mac下不需要执行该命令
python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml
```
* GPU多卡训练
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 #windows和Mac下不需要执行该命令
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml
```
* [GPU多机多卡训练](./DistributedTraining_cn.md)
```bash
fleetrun \
    --ips="10.127.6.17,10.127.5.142,10.127.45.13,10.127.44.151" \
    --gpus 0,1,2,3,4,5,6,7 \
    tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml
```
* Fine-tune其他任务
使用预训练模型fine-tune其他任务时,可以直接加载预训练模型,形状不匹配的参数将自动忽略,例如:
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# 如果模型中参数形状与加载权重形状不同,将不会加载这类参数
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -o pretrain_weights=output/model_final
```
* 模型恢复训练
在日常训练过程中,有的用户由于一些原因导致训练中断,用户可以使用-r的命令恢复训练
```bash
export CUDA_VISIBLE_DEVICES=0 #windows和Mac下不需要执行该命令
python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -r output/faster_rcnn_r50_1x_coco/10000
```
## 5 评估
* 默认将训练生成的模型保存在当前`output`文件夹下
```bash
export CUDA_VISIBLE_DEVICES=0 #windows和Mac下不需要执行该命令
python tools/eval.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -o weights=https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_roadsign.pdparams
```
* 边训练,边评估
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 #windows和Mac下不需要执行该命令
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml --eval
```
在训练中交替执行评估, 评估在每个epoch训练结束后开始。每次评估后还会评出最佳mAP模型保存到`best_model`文件夹下。
如果验证集很大,测试将会比较耗时,建议调整`configs/runtime.yml` 文件中的 `snapshot_epoch`配置以减少评估次数,或训练完成后再进行评估。
- 通过json文件评估
```bash
export CUDA_VISIBLE_DEVICES=0 #windows和Mac下不需要执行该命令
python tools/eval.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml \
            --json_eval \
            --output_eval evaluation/
```
* 上述命令中没有加载模型的选项,则使用配置文件中weights的默认配置,`weights`表示训练过程中保存的最后一轮模型文件
* json文件必须命名为bbox.json或者mask.json,放在`evaluation`目录下。
## 6 预测
```bash
python tools/infer.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml --infer_img=demo/road554.png -o weights=https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_roadsign.pdparams
```
* 设置参数预测
```bash
export CUDA_VISIBLE_DEVICES=0 #windows和Mac下不需要执行该命令
python tools/infer.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml \
--infer_img=demo/road554.png \
--output_dir=infer_output/ \
--draw_threshold=0.5 \
-o weights=output/yolov3_mobilenet_v1_roadsign/model_final \
--use_vdl=True
```
`--draw_threshold` 是个可选参数. 根据 [NMS](https://ieeexplore.ieee.org/document/1699659) 的计算,不同阈值会产生不同的结果
`keep_top_k`表示设置输出目标的最大数量,默认值为100,用户可以根据自己的实际情况进行设定。
结果如下图:

## 7 训练可视化
当打开`use_vdl`开关后,为了方便用户实时查看训练过程中状态,PaddleDetection集成了VisualDL可视化工具,当打开`use_vdl`开关后,记录的数据包括:
1. loss变化趋势
2. mAP变化趋势
```bash
export CUDA_VISIBLE_DEVICES=0 #windows和Mac下不需要执行该命令
python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml \
                      --use_vdl=true \
                      --vdl_log_dir=vdl_dir/scalar
```
使用如下命令启动VisualDL查看日志
```shell
# 下述命令会在127.0.0.1上启动一个服务,支持通过前端web页面查看,可以通过--host这个参数指定实际ip地址
visualdl --logdir vdl_dir/scalar/
```
在浏览器输入提示的网址,效果如下:
<center><img src="https://ai-studio-static-online.cdn.bcebos.com/ab767a202f084d1589f7d34702a75a7ef5d0f0a7e8c445bd80d54775b5761a8d" width="900" ></center>
<br><center>图:VDL效果演示</center></br>
**参数列表**
以下列表可以通过`--help`查看
| FLAG | 支持脚本 | 用途 | 默认值 | 备注 |
| :----------------------: | :------------: | :---------------: | :--------------: | :-----------------: |
| -c | ALL | 指定配置文件 | None | **必选**,例如-c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml |
| -o | ALL | 设置或更改配置文件里的参数内容 | None | 相较于`-c`设置的配置文件有更高优先级,例如:`-o use_gpu=False` |
| --eval | train | 是否边训练边测试 | False | 如需指定,直接`--eval`即可 |
| -r/--resume_checkpoint | train | 恢复训练加载的权重路径 | None | 例如:`-r output/faster_rcnn_r50_1x_coco/10000` |
| --slim_config | ALL | 模型压缩策略配置文件 | None | 例如`--slim_config configs/slim/prune/yolov3_prune_l1_norm.yml` |
| --use_vdl | train/infer | 是否使用[VisualDL](https://github.com/paddlepaddle/visualdl)记录数据,进而在VisualDL面板中显示 | False | VisualDL需Python>=3.5 |
| --vdl\_log_dir | train/infer | 指定 VisualDL 记录数据的存储路径 | train:`vdl_log_dir/scalar` infer: `vdl_log_dir/image` | VisualDL需Python>=3.5 |
| --output_eval | eval | 评估阶段保存json路径 | None | 例如 `--output_eval=eval_output`, 默认为当前路径 |
| --json_eval | eval | 是否通过已存在的bbox.json或者mask.json进行评估 | False | 如需指定,直接`--json_eval`即可, json文件路径在`--output_eval`中设置 |
| --classwise | eval | 是否评估单类AP和绘制单类PR曲线 | False | 如需指定,直接`--classwise`即可 |
| --output_dir | infer/export_model | 预测后结果或导出模型保存路径 | `./output` | 例如`--output_dir=output` |
| --draw_threshold | infer | 可视化时分数阈值 | 0.5 | 例如`--draw_threshold=0.7` |
| --infer_dir | infer | 用于预测的图片文件夹路径 | None | `--infer_img`和`--infer_dir`必须至少设置一个 |
| --infer_img | infer | 用于预测的图片路径 | None | `--infer_img`和`--infer_dir`必须至少设置一个,`infer_img`具有更高优先级 |
| --save_results | infer | 是否在文件夹下将图片的预测结果保存到文件中 | False | 可选 |
## 8 模型导出
在模型训练过程中保存的模型文件是包含前向预测和反向传播的过程,在实际的工业部署则不需要反向传播,因此需要将模型进行导成部署需要的模型格式。
在PaddleDetection中提供了 `tools/export_model.py`脚本来导出模型
```bash
python tools/export_model.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml --output_dir=./inference_model \
-o weights=output/yolov3_mobilenet_v1_roadsign/best_model
```
预测模型会导出到`inference_model/yolov3_mobilenet_v1_roadsign`目录下,分别为`infer_cfg.yml`、`model.pdiparams`、`model.pdiparams.info`、`model.pdmodel`。如果不指定输出文件夹,模型则会默认导出到`output_inference`目录下。
* 更多关于模型导出的文档,请参考[模型导出文档](../../deploy/EXPORT_MODEL.md)
## 9 模型压缩
为了进一步对模型进行优化,PaddleDetection提供了基于PaddleSlim进行模型压缩的完整教程和benchmark。目前支持的方案:
* 裁剪
* 量化
* 蒸馏
* 联合策略
* 更多关于模型压缩的文档,请参考[模型压缩文档](../../configs/slim/README.md)。
## 10 预测部署
PaddleDetection提供了PaddleInference、PaddleServing、PaddleLite多种部署形式,支持服务端、移动端、嵌入式等多种平台,提供了完善的Python和C++部署方案。
* 在这里,我们以Python为例,说明如何使用PaddleInference进行模型部署
```bash
python deploy/python/infer.py --model_dir=./output_inference/yolov3_mobilenet_v1_roadsign --image_file=demo/road554.png --device=GPU
```
* 同时`infer.py`提供了丰富的接口,用户进行接入视频文件、摄像头进行预测,更多内容请参考[Python端预测部署](../../deploy/python)
### PaddleDetection支持的部署形式说明
|形式|语言|教程|设备/平台|
|-|-|-|-|
|PaddleInference|Python|已完善|Linux(arm X86)、Windows|
|PaddleInference|C++|已完善|Linux(arm X86)、Windows|
|PaddleServing|Python|已完善|Linux(arm X86)、Windows|
|PaddleLite|C++|已完善|Android、iOS、FPGA、RK...|
* 更多关于预测部署的文档,请参考[预测部署文档](../../deploy/README.md)。
| PaddleDetection/docs/tutorials/GETTING_STARTED_cn.md/0 | {
"file_path": "PaddleDetection/docs/tutorials/GETTING_STARTED_cn.md",
"repo_id": "PaddleDetection",
"token_count": 8054
} | 62 |
[简体中文](DetAnnoTools.md) | English
# Object Detection Annotation Tools
## Contents
[LabelMe](#LabelMe)
* [Instruction](#Instruction-of-LabelMe)
* [Installation](#Installation)
* [Annotation of Images](#Annotation-of-images-in-LabelMe)
* [Annotation Format](#Annotation-Format-of-LabelMe)
* [Export Format](#Export-Format-of-LabelMe)
* [Summary of Format Conversion](#Summary-of-Format-Conversion)
* [Annotation file(json)—>VOC Dataset](#annotation-filejsonvoc-dataset)
* [Annotation file(json)—>COCO Dataset](#annotation-filejsoncoco-dataset)
[LabelImg](#LabelImg)
* [Instruction](#Instruction-of-LabelImg)
* [Installation](#Installation-of-LabelImg)
* [Installation Notes](#Installation-Notes)
* [Annotation of images](#Annotation-of-images-in-LabelImg)
* [Annotation Format](#Annotation-Format-of-LabelImg)
* [Export Format](#Export-Format-of-LabelImg)
* [Notes of Format Conversion](#Notes-of-Format-Conversion)
## [LabelMe](https://github.com/wkentaro/labelme)
### Instruction of LabelMe
#### Installation
Please refer to [The github of LabelMe](https://github.com/wkentaro/labelme) for installation details.
<details>
<summary><b> Ubuntu</b></summary>
```
sudo apt-get install labelme
# or
sudo pip3 install labelme
# or install standalone executable from:
# https://github.com/wkentaro/labelme/releases
```
</details>
<details>
<summary><b> macOS</b></summary>
```
brew install pyqt # maybe pyqt5
pip install labelme
# or
brew install wkentaro/labelme/labelme # command line interface
# brew install --cask wkentaro/labelme/labelme # app
# or install standalone executable/app from:
# https://github.com/wkentaro/labelme/releases
```
</details>
We recommend installing via Anaconda.
```
conda create --name=labelme python=3
conda activate labelme
pip install pyqt5
pip install labelme
```
#### Annotation of Images in LabelMe
After starting labelme, select an image or a folder of images.
Select `Create Polygons` in the toolbar. Draw an annotation area as shown in the following GIF. You can right-click on the image to select a different shape. When finished, press the Enter/Return key, then fill in the corresponding label in the popup box, for example, people.
Click the save button in the toolbar, and it will generate an annotation file in JSON format.

### Annotation Format of LabelMe
#### Export Format of LabelMe
```
#generate an annotation file
png/jpeg/jpg-->labelme-->json
```
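For reference, a labelme JSON file can be parsed with a few lines of Python. The sketch below assumes the usual labelme fields (`shapes`, `label`, `points`); double-check against your own exported files:
```python
import json

def labelme_to_bboxes(json_path):
    """Convert each annotated shape into (label, [xmin, ymin, xmax, ymax])."""
    with open(json_path) as f:
        ann = json.load(f)
    boxes = []
    for shape in ann.get('shapes', []):
        xs = [p[0] for p in shape['points']]
        ys = [p[1] for p in shape['points']]
        boxes.append((shape['label'], [min(xs), min(ys), max(xs), max(ys)]))
    return boxes
```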
#### Summary of Format Conversion
```
#convert annotation file to VOC dataset format
json-->labelme2voc.py-->VOC dataset
#convert annotation file to COCO dataset format
json-->labelme2coco.py-->COCO dataset
```
#### Annotation file(json)—>VOC Dataset
Use this script [labelme2voc.py](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection/labelme2voc.py) in command line.
```
python labelme2voc.py data_annotated(annotation folder) data_dataset_voc(output folder) --labels labels.txt
```
Then, it will generate following contents:
```
# It generates:
# - data_dataset_voc/JPEGImages
# - data_dataset_voc/Annotations
# - data_dataset_voc/AnnotationsVisualization
```
#### Annotation file(json)—>COCO Dataset
Convert the data annotated by LabelMe to COCO dataset by the script [x2coco.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/tools/x2coco.py) provided by PaddleDetection.
```bash
python tools/x2coco.py \
--dataset_type labelme \
--json_input_dir ./labelme_annos/ \
--image_input_dir ./labelme_imgs/ \
--output_dir ./cocome/ \
--train_proportion 0.8 \
--val_proportion 0.2 \
--test_proportion 0.0
```
After the user dataset is converted to COCO data, the directory structure is as follows (try to avoid using Chinese characters in path names to prevent encoding errors):
```
dataset/xxx/
├── annotations
│ ├── train.json # Annotation file of coco data
│ ├── valid.json # Annotation file of coco data
├── images
│ ├── xxx1.jpg
│ ├── xxx2.jpg
│ ├── xxx3.jpg
│ | ...
...
```
## [LabelImg](https://github.com/tzutalin/labelImg)
### Instruction of LabelImg
#### Installation of LabelImg
Please refer to [The github of LabelImg](https://github.com/tzutalin/labelImg) for installation details.
<details>
<summary><b> Ubuntu</b></summary>
```
sudo apt-get install pyqt5-dev-tools
sudo pip3 install -r requirements/requirements-linux-python3.txt
make qt5py3
python3 labelImg.py
python3 labelImg.py [IMAGE_PATH] [PRE-DEFINED CLASS FILE]
```
</details>
<details>
<summary><b>macOS</b></summary>
```
brew install qt # Install qt-5.x.x by Homebrew
brew install libxml2
# or using pip
pip3 install pyqt5 lxml # Install qt and lxml by pip
make qt5py3
python3 labelImg.py
python3 labelImg.py [IMAGE_PATH] [PRE-DEFINED CLASS FILE]
```
</details>
We recommend installing via Anaconda.
Download and go to the folder of [labelImg](https://github.com/tzutalin/labelImg#labelimg)
```
conda install pyqt=5
conda install -c anaconda lxml
pyrcc5 -o libs/resources.py resources.qrc
python labelImg.py
python labelImg.py [IMAGE_PATH] [PRE-DEFINED CLASS FILE]
```
#### Installation Notes
Use the Python script to start LabelImg: `python labelImg.py <IMAGE_PATH>`
#### Annotation of images in LabelImg
After the startup of LabelImg, select an image or a folder with images.
Select `Create RectBox` in the toolbar. Draw an annotation area as shown in the following GIF. When finished, select the corresponding label in the popup box. Then save the annotation file in one of three formats: VOC/YOLO/CreateML.

### Annotation Format of LabelImg
#### Export Format of LabelImg
```
#generate annotation files
png/jpeg/jpg-->labelImg-->xml/txt/json
```
#### Notes of Format Conversion
**PaddleDetection supports the format of VOC or COCO.** The annotation file generated by LabelImg needs to be converted by VOC or COCO. You can refer to [PrepareDataSet](./PrepareDataSet.md#%E5%87%86%E5%A4%87%E8%AE%AD%E7%BB%83%E6%95%B0%E6%8D%AE).
| PaddleDetection/docs/tutorials/data/DetAnnoTools_en.md/0 | {
"file_path": "PaddleDetection/docs/tutorials/data/DetAnnoTools_en.md",
"repo_id": "PaddleDetection",
"token_count": 2300
} | 63 |
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import copy
import numpy as np
try:
from collections.abc import Sequence
except Exception:
from collections import Sequence
from paddle.io import Dataset
from ppdet.core.workspace import register, serializable
from ppdet.utils.download import get_dataset_path
from ppdet.data import source
from ppdet.utils.logger import setup_logger
logger = setup_logger(__name__)
@serializable
class DetDataset(Dataset):
"""
Load detection dataset.
Args:
dataset_dir (str): root directory for dataset.
image_dir (str): directory for images.
anno_path (str): annotation file path.
data_fields (list): key name of data dictionary, at least have 'image'.
sample_num (int): number of samples to load, -1 means all.
use_default_label (bool): whether to load default label list.
repeat (int): repeat times for dataset, use in benchmark.
"""
def __init__(self,
dataset_dir=None,
image_dir=None,
anno_path=None,
data_fields=['image'],
sample_num=-1,
use_default_label=None,
repeat=1,
**kwargs):
super(DetDataset, self).__init__()
self.dataset_dir = dataset_dir if dataset_dir is not None else ''
self.anno_path = anno_path
self.image_dir = image_dir if image_dir is not None else ''
self.data_fields = data_fields
self.sample_num = sample_num
self.use_default_label = use_default_label
self.repeat = repeat
self._epoch = 0
self._curr_iter = 0
def __len__(self, ):
return len(self.roidbs) * self.repeat
def __call__(self, *args, **kwargs):
return self
def __getitem__(self, idx):
n = len(self.roidbs)
if self.repeat > 1:
idx %= n
# data batch
roidb = copy.deepcopy(self.roidbs[idx])
if self.mixup_epoch == 0 or self._epoch < self.mixup_epoch:
idx = np.random.randint(n)
roidb = [roidb, copy.deepcopy(self.roidbs[idx])]
elif self.cutmix_epoch == 0 or self._epoch < self.cutmix_epoch:
idx = np.random.randint(n)
roidb = [roidb, copy.deepcopy(self.roidbs[idx])]
elif self.mosaic_epoch == 0 or self._epoch < self.mosaic_epoch:
roidb = [roidb, ] + [
copy.deepcopy(self.roidbs[np.random.randint(n)])
for _ in range(4)
]
elif self.pre_img_epoch == 0 or self._epoch < self.pre_img_epoch:
# Add previous image as input, only used in CenterTrack
idx_pre_img = idx - 1
if idx_pre_img < 0:
idx_pre_img = idx + 1
roidb = [roidb, ] + [copy.deepcopy(self.roidbs[idx_pre_img])]
if isinstance(roidb, Sequence):
for r in roidb:
r['curr_iter'] = self._curr_iter
else:
roidb['curr_iter'] = self._curr_iter
self._curr_iter += 1
return self.transform(roidb)
def check_or_download_dataset(self):
self.dataset_dir = get_dataset_path(self.dataset_dir, self.anno_path,
self.image_dir)
def set_kwargs(self, **kwargs):
self.mixup_epoch = kwargs.get('mixup_epoch', -1)
self.cutmix_epoch = kwargs.get('cutmix_epoch', -1)
self.mosaic_epoch = kwargs.get('mosaic_epoch', -1)
self.pre_img_epoch = kwargs.get('pre_img_epoch', -1)
def set_transform(self, transform):
self.transform = transform
def set_epoch(self, epoch_id):
self._epoch = epoch_id
def parse_dataset(self, ):
raise NotImplementedError(
"Need to implement parse_dataset method of Dataset")
def get_anno(self):
if self.anno_path is None:
return
return os.path.join(self.dataset_dir, self.anno_path)
def _is_valid_file(f, extensions=('.jpg', '.jpeg', '.png', '.bmp')):
return f.lower().endswith(extensions)
def _make_dataset(dir):
dir = os.path.expanduser(dir)
if not os.path.isdir(dir):
        raise ValueError('{} should be a dir'.format(dir))
images = []
for root, _, fnames in sorted(os.walk(dir, followlinks=True)):
for fname in sorted(fnames):
path = os.path.join(root, fname)
if _is_valid_file(path):
images.append(path)
return images
@register
@serializable
class ImageFolder(DetDataset):
def __init__(self,
dataset_dir=None,
image_dir=None,
anno_path=None,
sample_num=-1,
use_default_label=None,
**kwargs):
super(ImageFolder, self).__init__(
dataset_dir,
image_dir,
anno_path,
sample_num=sample_num,
use_default_label=use_default_label)
self._imid2path = {}
self.roidbs = None
self.sample_num = sample_num
def check_or_download_dataset(self):
return
def get_anno(self):
if self.anno_path is None:
return
if self.dataset_dir:
return os.path.join(self.dataset_dir, self.anno_path)
else:
return self.anno_path
def parse_dataset(self, ):
if not self.roidbs:
self.roidbs = self._load_images()
def _parse(self):
image_dir = self.image_dir
if not isinstance(image_dir, Sequence):
image_dir = [image_dir]
images = []
for im_dir in image_dir:
if os.path.isdir(im_dir):
im_dir = os.path.join(self.dataset_dir, im_dir)
images.extend(_make_dataset(im_dir))
elif os.path.isfile(im_dir) and _is_valid_file(im_dir):
images.append(im_dir)
return images
def _load_images(self):
images = self._parse()
ct = 0
records = []
for image in images:
assert image != '' and os.path.isfile(image), \
"Image {} not found".format(image)
if self.sample_num > 0 and ct >= self.sample_num:
break
rec = {'im_id': np.array([ct]), 'im_file': image}
self._imid2path[ct] = image
ct += 1
records.append(rec)
assert len(records) > 0, "No image file found"
return records
def get_imid2path(self):
return self._imid2path
def set_images(self, images):
self.image_dir = images
self.roidbs = self._load_images()
def set_slice_images(self,
images,
slice_size=[640, 640],
overlap_ratio=[0.25, 0.25]):
self.image_dir = images
ori_records = self._load_images()
try:
import sahi
from sahi.slicing import slice_image
except Exception as e:
logger.error(
                'sahi not found, please install sahi. '
'for example: `pip install sahi`, see https://github.com/obss/sahi.'
)
raise e
sub_img_ids = 0
ct = 0
ct_sub = 0
records = []
for i, ori_rec in enumerate(ori_records):
im_path = ori_rec['im_file']
slice_image_result = sahi.slicing.slice_image(
image=im_path,
slice_height=slice_size[0],
slice_width=slice_size[1],
overlap_height_ratio=overlap_ratio[0],
overlap_width_ratio=overlap_ratio[1])
sub_img_num = len(slice_image_result)
for _ind in range(sub_img_num):
im = slice_image_result.images[_ind]
rec = {
'image': im,
'im_id': np.array([sub_img_ids + _ind]),
'h': im.shape[0],
'w': im.shape[1],
'ori_im_id': np.array([ori_rec['im_id'][0]]),
'st_pix': np.array(
slice_image_result.starting_pixels[_ind],
dtype=np.float32),
'is_last': 1 if _ind == sub_img_num - 1 else 0,
} if 'image' in self.data_fields else {}
records.append(rec)
ct_sub += sub_img_num
ct += 1
logger.info('{} samples and slice to {} sub_samples.'.format(ct,
ct_sub))
self.roidbs = records
def get_label_list(self):
        # Only VOC dataset needs label list in ImageFolder
return self.anno_path
@register
class CommonDataset(object):
def __init__(self, **dataset_args):
super(CommonDataset, self).__init__()
dataset_args = copy.deepcopy(dataset_args)
type = dataset_args.pop("name")
self.dataset = getattr(source, type)(**dataset_args)
def __call__(self):
return self.dataset
@register
class TrainDataset(CommonDataset):
pass
@register
class EvalMOTDataset(CommonDataset):
pass
@register
class TestMOTDataset(CommonDataset):
pass
@register
class EvalDataset(CommonDataset):
pass
@register
class TestDataset(CommonDataset):
pass
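# Usage sketch (comments only; the directory name below is a placeholder):
#
#   dataset = ImageFolder(image_dir='demo_images')
#   dataset.parse_dataset()
#   print(len(dataset.roidbs), dataset.get_imid2path())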
| PaddleDetection/ppdet/data/source/dataset.py/0 | {
"file_path": "PaddleDetection/ppdet/data/source/dataset.py",
"repo_id": "PaddleDetection",
"token_count": 4977
} | 64 |
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# this file contains helper methods for BBOX processing
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import random
import math
import cv2
def meet_emit_constraint(src_bbox, sample_bbox):
center_x = (src_bbox[2] + src_bbox[0]) / 2
center_y = (src_bbox[3] + src_bbox[1]) / 2
if center_x >= sample_bbox[0] and \
center_x <= sample_bbox[2] and \
center_y >= sample_bbox[1] and \
center_y <= sample_bbox[3]:
return True
return False
def clip_bbox(src_bbox):
src_bbox[0] = max(min(src_bbox[0], 1.0), 0.0)
src_bbox[1] = max(min(src_bbox[1], 1.0), 0.0)
src_bbox[2] = max(min(src_bbox[2], 1.0), 0.0)
src_bbox[3] = max(min(src_bbox[3], 1.0), 0.0)
return src_bbox
def bbox_area(src_bbox):
if src_bbox[2] < src_bbox[0] or src_bbox[3] < src_bbox[1]:
return 0.
else:
width = src_bbox[2] - src_bbox[0]
height = src_bbox[3] - src_bbox[1]
return width * height
def is_overlap(object_bbox, sample_bbox):
if object_bbox[0] >= sample_bbox[2] or \
object_bbox[2] <= sample_bbox[0] or \
object_bbox[1] >= sample_bbox[3] or \
object_bbox[3] <= sample_bbox[1]:
return False
else:
return True
def filter_and_process(sample_bbox, bboxes, labels, scores=None,
keypoints=None):
new_bboxes = []
new_labels = []
new_scores = []
new_keypoints = []
new_kp_ignore = []
for i in range(len(bboxes)):
new_bbox = [0, 0, 0, 0]
obj_bbox = [bboxes[i][0], bboxes[i][1], bboxes[i][2], bboxes[i][3]]
if not meet_emit_constraint(obj_bbox, sample_bbox):
continue
if not is_overlap(obj_bbox, sample_bbox):
continue
sample_width = sample_bbox[2] - sample_bbox[0]
sample_height = sample_bbox[3] - sample_bbox[1]
new_bbox[0] = (obj_bbox[0] - sample_bbox[0]) / sample_width
new_bbox[1] = (obj_bbox[1] - sample_bbox[1]) / sample_height
new_bbox[2] = (obj_bbox[2] - sample_bbox[0]) / sample_width
new_bbox[3] = (obj_bbox[3] - sample_bbox[1]) / sample_height
new_bbox = clip_bbox(new_bbox)
if bbox_area(new_bbox) > 0:
new_bboxes.append(new_bbox)
new_labels.append([labels[i][0]])
if scores is not None:
new_scores.append([scores[i][0]])
if keypoints is not None:
sample_keypoint = keypoints[0][i]
for j in range(len(sample_keypoint)):
kp_len = sample_height if j % 2 else sample_width
sample_coord = sample_bbox[1] if j % 2 else sample_bbox[0]
sample_keypoint[j] = (
sample_keypoint[j] - sample_coord) / kp_len
sample_keypoint[j] = max(min(sample_keypoint[j], 1.0), 0.0)
new_keypoints.append(sample_keypoint)
new_kp_ignore.append(keypoints[1][i])
bboxes = np.array(new_bboxes)
labels = np.array(new_labels)
scores = np.array(new_scores)
if keypoints is not None:
keypoints = np.array(new_keypoints)
new_kp_ignore = np.array(new_kp_ignore)
return bboxes, labels, scores, (keypoints, new_kp_ignore)
return bboxes, labels, scores
def bbox_area_sampling(bboxes, labels, scores, target_size, min_size):
new_bboxes = []
new_labels = []
new_scores = []
for i, bbox in enumerate(bboxes):
w = float((bbox[2] - bbox[0]) * target_size)
h = float((bbox[3] - bbox[1]) * target_size)
if w * h < float(min_size * min_size):
continue
else:
new_bboxes.append(bbox)
new_labels.append(labels[i])
if scores is not None and scores.size != 0:
new_scores.append(scores[i])
bboxes = np.array(new_bboxes)
labels = np.array(new_labels)
scores = np.array(new_scores)
return bboxes, labels, scores
def generate_sample_bbox(sampler):
scale = np.random.uniform(sampler[2], sampler[3])
aspect_ratio = np.random.uniform(sampler[4], sampler[5])
aspect_ratio = max(aspect_ratio, (scale**2.0))
aspect_ratio = min(aspect_ratio, 1 / (scale**2.0))
bbox_width = scale * (aspect_ratio**0.5)
bbox_height = scale / (aspect_ratio**0.5)
xmin_bound = 1 - bbox_width
ymin_bound = 1 - bbox_height
xmin = np.random.uniform(0, xmin_bound)
ymin = np.random.uniform(0, ymin_bound)
xmax = xmin + bbox_width
ymax = ymin + bbox_height
sampled_bbox = [xmin, ymin, xmax, ymax]
return sampled_bbox
def generate_sample_bbox_square(sampler, image_width, image_height):
scale = np.random.uniform(sampler[2], sampler[3])
aspect_ratio = np.random.uniform(sampler[4], sampler[5])
aspect_ratio = max(aspect_ratio, (scale**2.0))
aspect_ratio = min(aspect_ratio, 1 / (scale**2.0))
bbox_width = scale * (aspect_ratio**0.5)
bbox_height = scale / (aspect_ratio**0.5)
if image_height < image_width:
bbox_width = bbox_height * image_height / image_width
else:
bbox_height = bbox_width * image_width / image_height
xmin_bound = 1 - bbox_width
ymin_bound = 1 - bbox_height
xmin = np.random.uniform(0, xmin_bound)
ymin = np.random.uniform(0, ymin_bound)
xmax = xmin + bbox_width
ymax = ymin + bbox_height
sampled_bbox = [xmin, ymin, xmax, ymax]
return sampled_bbox
def data_anchor_sampling(bbox_labels, image_width, image_height, scale_array,
resize_width):
num_gt = len(bbox_labels)
# np.random.randint range: [low, high)
rand_idx = np.random.randint(0, num_gt) if num_gt != 0 else 0
if num_gt != 0:
norm_xmin = bbox_labels[rand_idx][0]
norm_ymin = bbox_labels[rand_idx][1]
norm_xmax = bbox_labels[rand_idx][2]
norm_ymax = bbox_labels[rand_idx][3]
xmin = norm_xmin * image_width
ymin = norm_ymin * image_height
wid = image_width * (norm_xmax - norm_xmin)
hei = image_height * (norm_ymax - norm_ymin)
range_size = 0
area = wid * hei
for scale_ind in range(0, len(scale_array) - 1):
if area > scale_array[scale_ind] ** 2 and area < \
scale_array[scale_ind + 1] ** 2:
range_size = scale_ind + 1
break
if area > scale_array[len(scale_array) - 2]**2:
range_size = len(scale_array) - 2
scale_choose = 0.0
if range_size == 0:
rand_idx_size = 0
else:
# np.random.randint range: [low, high)
rng_rand_size = np.random.randint(0, range_size + 1)
rand_idx_size = rng_rand_size % (range_size + 1)
if rand_idx_size == range_size:
min_resize_val = scale_array[rand_idx_size] / 2.0
max_resize_val = min(2.0 * scale_array[rand_idx_size],
2 * math.sqrt(wid * hei))
scale_choose = random.uniform(min_resize_val, max_resize_val)
else:
min_resize_val = scale_array[rand_idx_size] / 2.0
max_resize_val = 2.0 * scale_array[rand_idx_size]
scale_choose = random.uniform(min_resize_val, max_resize_val)
sample_bbox_size = wid * resize_width / scale_choose
w_off_orig = 0.0
h_off_orig = 0.0
if sample_bbox_size < max(image_height, image_width):
if wid <= sample_bbox_size:
w_off_orig = np.random.uniform(xmin + wid - sample_bbox_size,
xmin)
else:
w_off_orig = np.random.uniform(xmin,
xmin + wid - sample_bbox_size)
if hei <= sample_bbox_size:
h_off_orig = np.random.uniform(ymin + hei - sample_bbox_size,
ymin)
else:
h_off_orig = np.random.uniform(ymin,
ymin + hei - sample_bbox_size)
else:
w_off_orig = np.random.uniform(image_width - sample_bbox_size, 0.0)
h_off_orig = np.random.uniform(image_height - sample_bbox_size, 0.0)
w_off_orig = math.floor(w_off_orig)
h_off_orig = math.floor(h_off_orig)
# Figure out top left coordinates.
w_off = float(w_off_orig / image_width)
h_off = float(h_off_orig / image_height)
sampled_bbox = [
w_off, h_off, w_off + float(sample_bbox_size / image_width),
h_off + float(sample_bbox_size / image_height)
]
return sampled_bbox
else:
return 0
def jaccard_overlap(sample_bbox, object_bbox):
if sample_bbox[0] >= object_bbox[2] or \
sample_bbox[2] <= object_bbox[0] or \
sample_bbox[1] >= object_bbox[3] or \
sample_bbox[3] <= object_bbox[1]:
return 0
intersect_xmin = max(sample_bbox[0], object_bbox[0])
intersect_ymin = max(sample_bbox[1], object_bbox[1])
intersect_xmax = min(sample_bbox[2], object_bbox[2])
intersect_ymax = min(sample_bbox[3], object_bbox[3])
intersect_size = (intersect_xmax - intersect_xmin) * (
intersect_ymax - intersect_ymin)
sample_bbox_size = bbox_area(sample_bbox)
object_bbox_size = bbox_area(object_bbox)
overlap = intersect_size / (
sample_bbox_size + object_bbox_size - intersect_size)
return overlap
def intersect_bbox(bbox1, bbox2):
if bbox2[0] > bbox1[2] or bbox2[2] < bbox1[0] or \
bbox2[1] > bbox1[3] or bbox2[3] < bbox1[1]:
intersection_box = [0.0, 0.0, 0.0, 0.0]
else:
intersection_box = [
max(bbox1[0], bbox2[0]), max(bbox1[1], bbox2[1]),
min(bbox1[2], bbox2[2]), min(bbox1[3], bbox2[3])
]
return intersection_box
def bbox_coverage(bbox1, bbox2):
inter_box = intersect_bbox(bbox1, bbox2)
intersect_size = bbox_area(inter_box)
if intersect_size > 0:
bbox1_size = bbox_area(bbox1)
return intersect_size / bbox1_size
else:
return 0.
def satisfy_sample_constraint(sampler,
sample_bbox,
gt_bboxes,
satisfy_all=False):
if sampler[6] == 0 and sampler[7] == 0:
return True
satisfied = []
for i in range(len(gt_bboxes)):
object_bbox = [
gt_bboxes[i][0], gt_bboxes[i][1], gt_bboxes[i][2], gt_bboxes[i][3]
]
overlap = jaccard_overlap(sample_bbox, object_bbox)
if sampler[6] != 0 and \
overlap < sampler[6]:
satisfied.append(False)
continue
if sampler[7] != 0 and \
overlap > sampler[7]:
satisfied.append(False)
continue
satisfied.append(True)
if not satisfy_all:
return True
if satisfy_all:
return np.all(satisfied)
else:
return False
def satisfy_sample_constraint_coverage(sampler, sample_bbox, gt_bboxes):
if sampler[6] == 0 and sampler[7] == 0:
has_jaccard_overlap = False
else:
has_jaccard_overlap = True
if sampler[8] == 0 and sampler[9] == 0:
has_object_coverage = False
else:
has_object_coverage = True
if not has_jaccard_overlap and not has_object_coverage:
return True
found = False
for i in range(len(gt_bboxes)):
object_bbox = [
gt_bboxes[i][0], gt_bboxes[i][1], gt_bboxes[i][2], gt_bboxes[i][3]
]
if has_jaccard_overlap:
overlap = jaccard_overlap(sample_bbox, object_bbox)
if sampler[6] != 0 and \
overlap < sampler[6]:
continue
if sampler[7] != 0 and \
overlap > sampler[7]:
continue
found = True
if has_object_coverage:
object_coverage = bbox_coverage(object_bbox, sample_bbox)
if sampler[8] != 0 and \
object_coverage < sampler[8]:
continue
if sampler[9] != 0 and \
object_coverage > sampler[9]:
continue
found = True
if found:
return True
return found
def crop_image_sampling(img, sample_bbox, image_width, image_height,
target_size):
# no clipping here
xmin = int(sample_bbox[0] * image_width)
xmax = int(sample_bbox[2] * image_width)
ymin = int(sample_bbox[1] * image_height)
ymax = int(sample_bbox[3] * image_height)
w_off = xmin
h_off = ymin
width = xmax - xmin
height = ymax - ymin
cross_xmin = max(0.0, float(w_off))
cross_ymin = max(0.0, float(h_off))
cross_xmax = min(float(w_off + width - 1.0), float(image_width))
cross_ymax = min(float(h_off + height - 1.0), float(image_height))
cross_width = cross_xmax - cross_xmin
cross_height = cross_ymax - cross_ymin
roi_xmin = 0 if w_off >= 0 else abs(w_off)
roi_ymin = 0 if h_off >= 0 else abs(h_off)
roi_width = cross_width
roi_height = cross_height
roi_y1 = int(roi_ymin)
roi_y2 = int(roi_ymin + roi_height)
roi_x1 = int(roi_xmin)
roi_x2 = int(roi_xmin + roi_width)
cross_y1 = int(cross_ymin)
cross_y2 = int(cross_ymin + cross_height)
cross_x1 = int(cross_xmin)
cross_x2 = int(cross_xmin + cross_width)
sample_img = np.zeros((height, width, 3))
sample_img[roi_y1: roi_y2, roi_x1: roi_x2] = \
img[cross_y1: cross_y2, cross_x1: cross_x2]
sample_img = cv2.resize(
sample_img, (target_size, target_size), interpolation=cv2.INTER_AREA)
return sample_img
def is_poly(segm):
assert isinstance(segm, (list, dict)), \
"Invalid segm type: {}".format(type(segm))
return isinstance(segm, list)
def gaussian_radius(bbox_size, min_overlap):
height, width = bbox_size
a1 = 1
b1 = (height + width)
c1 = width * height * (1 - min_overlap) / (1 + min_overlap)
sq1 = np.sqrt(b1**2 - 4 * a1 * c1)
radius1 = (b1 + sq1) / (2 * a1)
a2 = 4
b2 = 2 * (height + width)
c2 = (1 - min_overlap) * width * height
sq2 = np.sqrt(b2**2 - 4 * a2 * c2)
radius2 = (b2 + sq2) / 2
a3 = 4 * min_overlap
b3 = -2 * min_overlap * (height + width)
c3 = (min_overlap - 1) * width * height
sq3 = np.sqrt(b3**2 - 4 * a3 * c3)
radius3 = (b3 + sq3) / 2
return min(radius1, radius2, radius3)
def draw_gaussian(heatmap, center, radius, k=1, delte=6):
diameter = 2 * radius + 1
sigma = diameter / delte
gaussian = gaussian2D((diameter, diameter), sigma_x=sigma, sigma_y=sigma)
x, y = center
height, width = heatmap.shape[0:2]
left, right = min(x, radius), min(width - x, radius + 1)
top, bottom = min(y, radius), min(height - y, radius + 1)
masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right]
masked_gaussian = gaussian[radius - top:radius + bottom, radius - left:
radius + right]
np.maximum(masked_heatmap, masked_gaussian * k, out=masked_heatmap)
def gaussian2D(shape, sigma_x=1, sigma_y=1):
m, n = [(ss - 1.) / 2. for ss in shape]
y, x = np.ogrid[-m:m + 1, -n:n + 1]
h = np.exp(-(x * x / (2 * sigma_x * sigma_x) + y * y / (2 * sigma_y *
sigma_y)))
h[h < np.finfo(h.dtype).eps * h.max()] = 0
return h
def draw_umich_gaussian(heatmap, center, radius, k=1):
"""
draw_umich_gaussian, refer to https://github.com/xingyizhou/CenterNet/blob/master/src/lib/utils/image.py#L126
"""
diameter = 2 * radius + 1
gaussian = gaussian2D(
(diameter, diameter), sigma_x=diameter / 6, sigma_y=diameter / 6)
x, y = int(center[0]), int(center[1])
height, width = heatmap.shape[0:2]
left, right = min(x, radius), min(width - x, radius + 1)
top, bottom = min(y, radius), min(height - y, radius + 1)
masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right]
masked_gaussian = gaussian[radius - top:radius + bottom, radius - left:
radius + right]
if min(masked_gaussian.shape) > 0 and min(masked_heatmap.shape) > 0:
np.maximum(masked_heatmap, masked_gaussian * k, out=masked_heatmap)
return heatmap
def get_border(border, size):
i = 1
while size - border // i <= border // i:
i *= 2
return border // i
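# Usage sketch for the gaussian helpers above (comments only; numbers are
# illustrative):
#
#   heatmap = np.zeros((128, 128), dtype=np.float32)
#   radius = max(0, int(gaussian_radius((20, 30), min_overlap=0.7)))
#   draw_umich_gaussian(heatmap, center=(64, 64), radius=radius)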
| PaddleDetection/ppdet/data/transform/op_helper.py/0 | {
"file_path": "PaddleDetection/ppdet/data/transform/op_helper.py",
"repo_id": "PaddleDetection",
"token_count": 8576
} | 65 |
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "../rbox_iou/rbox_iou_utils.h"
#include "paddle/extension.h"
static const int64_t threadsPerBlock = sizeof(int64_t) * 8;
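// Grid layout: each block compares one 64-box row chunk against one 64-box
// column chunk of the score-sorted boxes (threadsPerBlock == 64). Bit k of
// masks[box * blocks_per_line + col] marks that box (col * 64 + k) overlaps
// `box` above the IoU threshold; the host loop in NMSRotatedCUDAForward then
// greedily keeps boxes whose bit has not been set by an earlier kept box.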
template <typename T>
__global__ void
nms_rotated_cuda_kernel(const T *boxes_data, const float threshold,
const int64_t num_boxes, int64_t *masks) {
auto raw_start = blockIdx.y;
auto col_start = blockIdx.x;
if (raw_start > col_start)
return;
const int raw_last_storage =
min(num_boxes - raw_start * threadsPerBlock, threadsPerBlock);
const int col_last_storage =
min(num_boxes - col_start * threadsPerBlock, threadsPerBlock);
if (threadIdx.x < raw_last_storage) {
int64_t mask = 0;
auto current_box_idx = raw_start * threadsPerBlock + threadIdx.x;
const T *current_box = boxes_data + current_box_idx * 5;
for (int i = 0; i < col_last_storage; ++i) {
const T *target_box = boxes_data + (col_start * threadsPerBlock + i) * 5;
if (rbox_iou_single<T>(current_box, target_box) > threshold) {
mask |= 1ULL << i;
}
}
const int blocks_per_line = CeilDiv(num_boxes, threadsPerBlock);
masks[current_box_idx * blocks_per_line + col_start] = mask;
}
}
#define CHECK_INPUT_GPU(x) \
PD_CHECK(x.is_gpu(), #x " must be a GPU Tensor.")
std::vector<paddle::Tensor> NMSRotatedCUDAForward(const paddle::Tensor &boxes,
const paddle::Tensor &scores,
float threshold) {
CHECK_INPUT_GPU(boxes);
CHECK_INPUT_GPU(scores);
auto num_boxes = boxes.shape()[0];
auto order_t =
std::get<1>(paddle::argsort(scores, /* axis=*/0, /* descending=*/true));
auto boxes_sorted = paddle::gather(boxes, order_t, /* axis=*/0);
const auto blocks_per_line = CeilDiv(num_boxes, threadsPerBlock);
dim3 block(threadsPerBlock);
dim3 grid(blocks_per_line, blocks_per_line);
auto mask_dev = paddle::empty({num_boxes * blocks_per_line},
paddle::DataType::INT64, paddle::GPUPlace());
PD_DISPATCH_FLOATING_TYPES(
boxes.type(), "nms_rotated_cuda_kernel", ([&] {
nms_rotated_cuda_kernel<data_t><<<grid, block, 0, boxes.stream()>>>(
boxes_sorted.data<data_t>(), threshold, num_boxes,
mask_dev.data<int64_t>());
}));
auto mask_host = mask_dev.copy_to(paddle::CPUPlace(), true);
auto keep_host =
paddle::empty({num_boxes}, paddle::DataType::INT64, paddle::CPUPlace());
int64_t *keep_host_ptr = keep_host.data<int64_t>();
int64_t *mask_host_ptr = mask_host.data<int64_t>();
std::vector<int64_t> remv(blocks_per_line);
int64_t last_box_num = 0;
for (int64_t i = 0; i < num_boxes; ++i) {
auto remv_element_id = i / threadsPerBlock;
auto remv_bit_id = i % threadsPerBlock;
if (!(remv[remv_element_id] & 1ULL << remv_bit_id)) {
keep_host_ptr[last_box_num++] = i;
int64_t *current_mask = mask_host_ptr + i * blocks_per_line;
for (auto j = remv_element_id; j < blocks_per_line; ++j) {
remv[j] |= current_mask[j];
}
}
}
keep_host = keep_host.slice(0, last_box_num);
auto keep_dev = keep_host.copy_to(paddle::GPUPlace(), true);
return {paddle::gather(order_t, keep_dev, /* axis=*/0)};
} | PaddleDetection/ppdet/ext_op/csrc/nms_rotated/nms_rotated.cu/0 | {
"file_path": "PaddleDetection/ppdet/ext_op/csrc/nms_rotated/nms_rotated.cu",
"repo_id": "PaddleDetection",
"token_count": 1659
} | 66 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This code is borrow from https://github.com/xingyizhou/CenterTrack/blob/master/src/tools/eval_kitti_track/munkres.py
"""
import sys
import copy
__all__ = ['Munkres', 'make_cost_matrix']
class Munkres:
"""
Calculate the Munkres solution to the classical assignment problem.
See the module documentation for usage.
"""
def __init__(self):
"""Create a new instance"""
self.C = None
self.row_covered = []
self.col_covered = []
self.n = 0
self.Z0_r = 0
self.Z0_c = 0
self.marked = None
self.path = None
def make_cost_matrix(profit_matrix, inversion_function):
"""
**DEPRECATED**
Please use the module function ``make_cost_matrix()``.
"""
import munkres
return munkres.make_cost_matrix(profit_matrix, inversion_function)
make_cost_matrix = staticmethod(make_cost_matrix)
def pad_matrix(self, matrix, pad_value=0):
"""
Pad a possibly non-square matrix to make it square.
:Parameters:
matrix : list of lists
matrix to pad
pad_value : int
value to use to pad the matrix
:rtype: list of lists
:return: a new, possibly padded, matrix
"""
max_columns = 0
total_rows = len(matrix)
for row in matrix:
max_columns = max(max_columns, len(row))
total_rows = max(max_columns, total_rows)
new_matrix = []
for row in matrix:
row_len = len(row)
new_row = row[:]
if total_rows > row_len:
# Row too short. Pad it.
new_row += [0] * (total_rows - row_len)
new_matrix += [new_row]
while len(new_matrix) < total_rows:
new_matrix += [[0] * total_rows]
return new_matrix
def compute(self, cost_matrix):
"""
Compute the indexes for the lowest-cost pairings between rows and
columns in the database. Returns a list of (row, column) tuples
that can be used to traverse the matrix.
:Parameters:
cost_matrix : list of lists
The cost matrix. If this cost matrix is not square, it
will be padded with zeros, via a call to ``pad_matrix()``.
(This method does *not* modify the caller's matrix. It
operates on a copy of the matrix.)
**WARNING**: This code handles square and rectangular
matrices. It does *not* handle irregular matrices.
:rtype: list
:return: A list of ``(row, column)`` tuples that describe the lowest
cost path through the matrix
"""
self.C = self.pad_matrix(cost_matrix)
self.n = len(self.C)
self.original_length = len(cost_matrix)
self.original_width = len(cost_matrix[0])
self.row_covered = [False for i in range(self.n)]
self.col_covered = [False for i in range(self.n)]
self.Z0_r = 0
self.Z0_c = 0
self.path = self.__make_matrix(self.n * 2, 0)
self.marked = self.__make_matrix(self.n, 0)
done = False
step = 1
steps = {
1: self.__step1,
2: self.__step2,
3: self.__step3,
4: self.__step4,
5: self.__step5,
6: self.__step6
}
while not done:
try:
func = steps[step]
step = func()
except KeyError:
done = True
# Look for the starred columns
results = []
for i in range(self.original_length):
for j in range(self.original_width):
if self.marked[i][j] == 1:
results += [(i, j)]
return results
def __copy_matrix(self, matrix):
"""Return an exact copy of the supplied matrix"""
return copy.deepcopy(matrix)
def __make_matrix(self, n, val):
"""Create an *n*x*n* matrix, populating it with the specific value."""
matrix = []
for i in range(n):
matrix += [[val for j in range(n)]]
return matrix
def __step1(self):
"""
For each row of the matrix, find the smallest element and
subtract it from every element in its row. Go to Step 2.
"""
C = self.C
n = self.n
for i in range(n):
minval = min(self.C[i])
# Find the minimum value for this row and subtract that minimum
# from every element in the row.
for j in range(n):
self.C[i][j] -= minval
return 2
def __step2(self):
"""
Find a zero (Z) in the resulting matrix. If there is no starred
zero in its row or column, star Z. Repeat for each element in the
matrix. Go to Step 3.
"""
n = self.n
for i in range(n):
for j in range(n):
if (self.C[i][j] == 0) and \
(not self.col_covered[j]) and \
(not self.row_covered[i]):
self.marked[i][j] = 1
self.col_covered[j] = True
self.row_covered[i] = True
self.__clear_covers()
return 3
def __step3(self):
"""
Cover each column containing a starred zero. If K columns are
covered, the starred zeros describe a complete set of unique
assignments. In this case, Go to DONE, otherwise, Go to Step 4.
"""
n = self.n
count = 0
for i in range(n):
for j in range(n):
if self.marked[i][j] == 1:
self.col_covered[j] = True
count += 1
if count >= n:
step = 7 # done
else:
step = 4
return step
def __step4(self):
"""
Find a noncovered zero and prime it. If there is no starred zero
in the row containing this primed zero, Go to Step 5. Otherwise,
cover this row and uncover the column containing the starred
zero. Continue in this manner until there are no uncovered zeros
left. Save the smallest uncovered value and Go to Step 6.
"""
step = 0
done = False
row = -1
col = -1
star_col = -1
while not done:
(row, col) = self.__find_a_zero()
if row < 0:
done = True
step = 6
else:
self.marked[row][col] = 2
star_col = self.__find_star_in_row(row)
if star_col >= 0:
col = star_col
self.row_covered[row] = True
self.col_covered[col] = False
else:
done = True
self.Z0_r = row
self.Z0_c = col
step = 5
return step
def __step5(self):
"""
Construct a series of alternating primed and starred zeros as
follows. Let Z0 represent the uncovered primed zero found in Step 4.
Let Z1 denote the starred zero in the column of Z0 (if any).
Let Z2 denote the primed zero in the row of Z1 (there will always
be one). Continue until the series terminates at a primed zero
that has no starred zero in its column. Unstar each starred zero
of the series, star each primed zero of the series, erase all
primes and uncover every line in the matrix. Return to Step 3
"""
count = 0
path = self.path
path[count][0] = self.Z0_r
path[count][1] = self.Z0_c
done = False
while not done:
row = self.__find_star_in_col(path[count][1])
if row >= 0:
count += 1
path[count][0] = row
path[count][1] = path[count - 1][1]
else:
done = True
if not done:
col = self.__find_prime_in_row(path[count][0])
count += 1
path[count][0] = path[count - 1][0]
path[count][1] = col
self.__convert_path(path, count)
self.__clear_covers()
self.__erase_primes()
return 3
def __step6(self):
"""
Add the value found in Step 4 to every element of each covered
row, and subtract it from every element of each uncovered column.
Return to Step 4 without altering any stars, primes, or covered
lines.
"""
minval = self.__find_smallest()
for i in range(self.n):
for j in range(self.n):
if self.row_covered[i]:
self.C[i][j] += minval
if not self.col_covered[j]:
self.C[i][j] -= minval
return 4
def __find_smallest(self):
"""Find the smallest uncovered value in the matrix."""
minval = 2e9 # sys.maxint
for i in range(self.n):
for j in range(self.n):
if (not self.row_covered[i]) and (not self.col_covered[j]):
if minval > self.C[i][j]:
minval = self.C[i][j]
return minval
def __find_a_zero(self):
"""Find the first uncovered element with value 0"""
row = -1
col = -1
i = 0
n = self.n
done = False
while not done:
j = 0
while True:
if (self.C[i][j] == 0) and \
(not self.row_covered[i]) and \
(not self.col_covered[j]):
row = i
col = j
done = True
j += 1
if j >= n:
break
i += 1
if i >= n:
done = True
return (row, col)
def __find_star_in_row(self, row):
"""
Find the first starred element in the specified row. Returns
the column index, or -1 if no starred element was found.
"""
col = -1
for j in range(self.n):
if self.marked[row][j] == 1:
col = j
break
return col
def __find_star_in_col(self, col):
"""
        Find the first starred element in the specified column. Returns
the row index, or -1 if no starred element was found.
"""
row = -1
for i in range(self.n):
if self.marked[i][col] == 1:
row = i
break
return row
def __find_prime_in_row(self, row):
"""
Find the first prime element in the specified row. Returns
        the column index, or -1 if no prime element was found.
"""
col = -1
for j in range(self.n):
if self.marked[row][j] == 2:
col = j
break
return col
def __convert_path(self, path, count):
for i in range(count + 1):
if self.marked[path[i][0]][path[i][1]] == 1:
self.marked[path[i][0]][path[i][1]] = 0
else:
self.marked[path[i][0]][path[i][1]] = 1
def __clear_covers(self):
"""Clear all covered matrix cells"""
for i in range(self.n):
self.row_covered[i] = False
self.col_covered[i] = False
def __erase_primes(self):
"""Erase all prime markings"""
for i in range(self.n):
for j in range(self.n):
if self.marked[i][j] == 2:
self.marked[i][j] = 0
def make_cost_matrix(profit_matrix, inversion_function):
"""
Create a cost matrix from a profit matrix by calling
'inversion_function' to invert each value. The inversion
function must take one numeric argument (of any type) and return
another numeric argument which is presumed to be the cost inverse
of the original profit.
This is a static method. Call it like this:
.. python::
cost_matrix = Munkres.make_cost_matrix(matrix, inversion_func)
For example:
.. python::
        cost_matrix = Munkres.make_cost_matrix(matrix, lambda x : sys.maxsize - x)
:Parameters:
profit_matrix : list of lists
The matrix to convert from a profit to a cost matrix
inversion_function : function
The function to use to invert each entry in the profit matrix
:rtype: list of lists
:return: The converted matrix
"""
cost_matrix = []
for row in profit_matrix:
cost_matrix.append([inversion_function(value) for value in row])
return cost_matrix
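# Minimal usage sketch (an assumption for illustration: the Munkres class is
# expected to expose the usual ``compute`` entry point of the upstream
# ``munkres`` package, which returns a list of (row, column) index pairs with
# minimal total cost):
#
#     cost_matrix = [[5, 9, 1], [10, 3, 2], [8, 7, 4]]
#     m = Munkres()
#     indexes = m.compute(cost_matrix)
#     total_cost = sum(cost_matrix[r][c] for r, c in indexes)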
| PaddleDetection/ppdet/metrics/munkres.py/0 | {
"file_path": "PaddleDetection/ppdet/metrics/munkres.py",
"repo_id": "PaddleDetection",
"token_count": 6513
} | 67 |
from .meta_arch import BaseArch
from ppdet.core.workspace import register, create
from paddle import in_dynamic_mode
__all__ = ['CLRNet']
@register
class CLRNet(BaseArch):
__category__ = 'architecture'
def __init__(self,
backbone="CLRResNet",
neck="CLRFPN",
clr_head="CLRHead",
post_process=None):
super(CLRNet, self).__init__()
self.backbone = backbone
self.neck = neck
self.heads = clr_head
self.post_process = post_process
@classmethod
def from_config(cls, cfg, *args, **kwargs):
# backbone
backbone = create(cfg['backbone'])
# fpn
kwargs = {'input_shape': backbone.out_shape}
neck = create(cfg['neck'], **kwargs)
# head
kwargs = {'input_shape': neck.out_shape}
clr_head = create(cfg['clr_head'], **kwargs)
return {
'backbone': backbone,
'neck': neck,
'clr_head': clr_head,
}
def _forward(self):
# Backbone
body_feats = self.backbone(self.inputs['image'])
# neck
neck_feats = self.neck(body_feats)
        # CLR Head
if self.training:
output = self.heads(neck_feats, self.inputs)
else:
output = self.heads(neck_feats)
output = {'lanes': output}
# TODO: hard code fix as_lanes=False problem in clrnet_head.py "get_lanes" function for static mode
if in_dynamic_mode():
output = self.heads.get_lanes(output['lanes'])
output = {
"lanes": output,
"img_path": self.inputs['full_img_path'],
"img_name": self.inputs['img_name']
}
return output
def get_loss(self):
return self._forward()
def get_pred(self):
return self._forward()
| PaddleDetection/ppdet/modeling/architectures/clrnet.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/architectures/clrnet.py",
"repo_id": "PaddleDetection",
"token_count": 981
} | 68 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle
from ppdet.core.workspace import register, create
from .meta_arch import BaseArch
__all__ = ['PicoDet']
@register
class PicoDet(BaseArch):
"""
    PicoDet network; its head follows Generalized Focal Loss, see https://arxiv.org/abs/2006.04388
Args:
backbone (object): backbone instance
neck (object): 'FPN' instance
head (object): 'PicoHead' instance
"""
__category__ = 'architecture'
def __init__(self, backbone, neck, head='PicoHead', nms_cpu=False):
super(PicoDet, self).__init__()
self.backbone = backbone
self.neck = neck
self.head = head
self.export_post_process = True
self.export_nms = True
self.nms_cpu = nms_cpu
@classmethod
def from_config(cls, cfg, *args, **kwargs):
backbone = create(cfg['backbone'])
kwargs = {'input_shape': backbone.out_shape}
neck = create(cfg['neck'], **kwargs)
kwargs = {'input_shape': neck.out_shape}
head = create(cfg['head'], **kwargs)
return {
'backbone': backbone,
'neck': neck,
"head": head,
}
def _forward(self):
body_feats = self.backbone(self.inputs)
fpn_feats = self.neck(body_feats)
head_outs = self.head(fpn_feats, self.export_post_process)
if self.training or not self.export_post_process:
return head_outs, None
else:
scale_factor = self.inputs['scale_factor']
bboxes, bbox_num = self.head.post_process(
head_outs,
scale_factor,
export_nms=self.export_nms,
nms_cpu=self.nms_cpu)
return bboxes, bbox_num
def get_loss(self, ):
loss = {}
head_outs, _ = self._forward()
loss_gfl = self.head.get_loss(head_outs, self.inputs)
loss.update(loss_gfl)
total_loss = paddle.add_n(list(loss.values()))
loss.update({'loss': total_loss})
return loss
def get_pred(self):
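        # Three prediction modes: raw head outputs when post-processing is exported
        # separately (e.g. done on device), decoded boxes after NMS, or decoded
        # boxes plus scores with NMS left to the deployment pipeline.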
if not self.export_post_process:
return {'picodet': self._forward()[0]}
elif self.export_nms:
bbox_pred, bbox_num = self._forward()
output = {'bbox': bbox_pred, 'bbox_num': bbox_num}
return output
else:
bboxes, mlvl_scores = self._forward()
output = {'bbox': bboxes, 'scores': mlvl_scores}
return output
| PaddleDetection/ppdet/modeling/architectures/picodet.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/architectures/picodet.py",
"repo_id": "PaddleDetection",
"token_count": 1394
} | 69 |
import paddle
import paddle.nn.functional as F
from ppdet.modeling.losses.clrnet_line_iou_loss import line_iou
def distance_cost(predictions, targets, img_w):
"""
repeat predictions and targets to generate all combinations
use the abs distance as the new distance cost
"""
num_priors = predictions.shape[0]
num_targets = targets.shape[0]
predictions = paddle.repeat_interleave(
predictions, num_targets, axis=0)[..., 6:]
targets = paddle.concat(x=num_priors * [targets])[..., 6:]
invalid_masks = (targets < 0) | (targets >= img_w)
lengths = (~invalid_masks).sum(axis=1)
distances = paddle.abs(x=targets - predictions)
distances[invalid_masks] = 0.0
distances = distances.sum(axis=1) / (lengths.cast("float32") + 1e-09)
distances = distances.reshape([num_priors, num_targets])
return distances
def focal_cost(cls_pred, gt_labels, alpha=0.25, gamma=2, eps=1e-12):
"""
Args:
cls_pred (Tensor): Predicted classification logits, shape
[num_query, num_class].
gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,).
Returns:
        paddle.Tensor: cls_cost value
"""
cls_pred = F.sigmoid(cls_pred)
neg_cost = -(1 - cls_pred + eps).log() * (1 - alpha) * cls_pred.pow(gamma)
pos_cost = -(cls_pred + eps).log() * alpha * (1 - cls_pred).pow(gamma)
cls_cost = pos_cost.index_select(
gt_labels, axis=1) - neg_cost.index_select(
gt_labels, axis=1)
return cls_cost
def dynamic_k_assign(cost, pair_wise_ious):
"""
    Assign ground truths with priors dynamically.
Args:
cost: the assign cost.
        pair_wise_ious: iou of ground truths and priors.
Returns:
prior_idx: the index of assigned prior.
gt_idx: the corresponding ground truth index.
"""
matching_matrix = paddle.zeros_like(cost)
ious_matrix = pair_wise_ious
ious_matrix[ious_matrix < 0] = 0.0
n_candidate_k = 4
topk_ious, _ = paddle.topk(ious_matrix, n_candidate_k, axis=0)
dynamic_ks = paddle.clip(x=topk_ious.sum(0).cast("int32"), min=1)
num_gt = cost.shape[1]
for gt_idx in range(num_gt):
_, pos_idx = paddle.topk(
x=cost[:, gt_idx], k=dynamic_ks[gt_idx].item(), largest=False)
matching_matrix[pos_idx, gt_idx] = 1.0
del topk_ious, dynamic_ks, pos_idx
matched_gt = matching_matrix.sum(axis=1)
if (matched_gt > 1).sum() > 0:
matched_gt_indices = paddle.nonzero(matched_gt > 1)[:, 0]
cost_argmin = paddle.argmin(
cost.index_select(matched_gt_indices), axis=1)
matching_matrix[matched_gt_indices][0] *= 0.0
matching_matrix[matched_gt_indices, cost_argmin] = 1.0
prior_idx = matching_matrix.sum(axis=1).nonzero()
gt_idx = matching_matrix[prior_idx].argmax(axis=-1)
return prior_idx.flatten(), gt_idx.flatten()
def cdist_paddle(x1, x2, p=2):
assert x1.shape[1] == x2.shape[1]
B, M = x1.shape
# if p == np.inf:
# dist = np.max(np.abs(x1[:, np.newaxis, :] - x2[np.newaxis, :, :]), axis=-1)
if p == 1:
dist = paddle.sum(
paddle.abs(x1.unsqueeze(axis=1) - x2.unsqueeze(axis=0)), axis=-1)
else:
dist = paddle.pow(paddle.sum(paddle.pow(
paddle.abs(x1.unsqueeze(axis=1) - x2.unsqueeze(axis=0)), p),
axis=-1),
1 / p)
return dist
def assign(predictions,
targets,
img_w,
img_h,
distance_cost_weight=3.0,
cls_cost_weight=1.0):
"""
    computes dynamic matching based on the cost, including the classification cost and the lane similarity cost
Args:
predictions (Tensor): predictions predicted by each stage, shape: (num_priors, 78)
targets (Tensor): lane targets, shape: (num_targets, 78)
return:
matched_row_inds (Tensor): matched predictions, shape: (num_targets)
matched_col_inds (Tensor): matched targets, shape: (num_targets)
"""
predictions = predictions.detach().clone()
predictions[:, 3] *= img_w - 1
predictions[:, 6:] *= img_w - 1
targets = targets.detach().clone()
distances_score = distance_cost(predictions, targets, img_w)
distances_score = 1 - distances_score / paddle.max(x=distances_score) + 0.01
cls_score = focal_cost(predictions[:, :2], targets[:, 1].cast('int64'))
num_priors = predictions.shape[0]
num_targets = targets.shape[0]
target_start_xys = targets[:, 2:4]
target_start_xys[..., 0] *= (img_h - 1)
prediction_start_xys = predictions[:, 2:4]
prediction_start_xys[..., 0] *= (img_h - 1)
start_xys_score = cdist_paddle(
prediction_start_xys, target_start_xys,
p=2).reshape([num_priors, num_targets])
start_xys_score = 1 - start_xys_score / paddle.max(x=start_xys_score) + 0.01
target_thetas = targets[:, 4].unsqueeze(axis=-1)
theta_score = cdist_paddle(
predictions[:, 4].unsqueeze(axis=-1), target_thetas,
p=1).reshape([num_priors, num_targets]) * 180
theta_score = 1 - theta_score / paddle.max(x=theta_score) + 0.01
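    # Combine the lane-similarity terms (x-distance, start point, angle) with the
    # focal classification cost; a lower combined cost means a better prior-target match.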
cost = -(distances_score * start_xys_score * theta_score
)**2 * distance_cost_weight + cls_score * cls_cost_weight
iou = line_iou(predictions[..., 6:], targets[..., 6:], img_w, aligned=False)
matched_row_inds, matched_col_inds = dynamic_k_assign(cost, iou)
return matched_row_inds, matched_col_inds
| PaddleDetection/ppdet/modeling/assigners/clrnet_assigner.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/assigners/clrnet_assigner.py",
"repo_id": "PaddleDetection",
"token_count": 2496
} | 70 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from paddle import ParamAttr
from paddle.regularizer import L2Decay
from paddle.nn.initializer import Constant
from ppdet.modeling.ops import get_act_fn
from ppdet.core.workspace import register, serializable
from ..shape_spec import ShapeSpec
__all__ = ['CSPResNet', 'BasicBlock', 'EffectiveSELayer', 'ConvBNLayer']
class ConvBNLayer(nn.Layer):
def __init__(self,
ch_in,
ch_out,
filter_size=3,
stride=1,
groups=1,
padding=0,
act=None):
super(ConvBNLayer, self).__init__()
self.conv = nn.Conv2D(
in_channels=ch_in,
out_channels=ch_out,
kernel_size=filter_size,
stride=stride,
padding=padding,
groups=groups,
bias_attr=False)
self.bn = nn.BatchNorm2D(
ch_out,
weight_attr=ParamAttr(regularizer=L2Decay(0.0)),
bias_attr=ParamAttr(regularizer=L2Decay(0.0)))
self.act = get_act_fn(act) if act is None or isinstance(act, (
str, dict)) else act
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
x = self.act(x)
return x
class RepVggBlock(nn.Layer):
def __init__(self, ch_in, ch_out, act='relu', alpha=False):
super(RepVggBlock, self).__init__()
self.ch_in = ch_in
self.ch_out = ch_out
self.conv1 = ConvBNLayer(
ch_in, ch_out, 3, stride=1, padding=1, act=None)
self.conv2 = ConvBNLayer(
ch_in, ch_out, 1, stride=1, padding=0, act=None)
self.act = get_act_fn(act) if act is None or isinstance(act, (
str, dict)) else act
if alpha:
self.alpha = self.create_parameter(
shape=[1],
attr=ParamAttr(initializer=Constant(value=1.)),
dtype="float32")
else:
self.alpha = None
def forward(self, x):
if hasattr(self, 'conv'):
y = self.conv(x)
else:
if self.alpha:
y = self.conv1(x) + self.alpha * self.conv2(x)
else:
y = self.conv1(x) + self.conv2(x)
y = self.act(y)
return y
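    # Fuse the parallel 3x3 and 1x1 conv+BN branches into a single 3x3 convolution
    # (RepVGG re-parameterization) so that inference runs one plain conv.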
def convert_to_deploy(self):
if not hasattr(self, 'conv'):
self.conv = nn.Conv2D(
in_channels=self.ch_in,
out_channels=self.ch_out,
kernel_size=3,
stride=1,
padding=1,
groups=1)
kernel, bias = self.get_equivalent_kernel_bias()
self.conv.weight.set_value(kernel)
self.conv.bias.set_value(bias)
self.__delattr__('conv1')
self.__delattr__('conv2')
def get_equivalent_kernel_bias(self):
kernel3x3, bias3x3 = self._fuse_bn_tensor(self.conv1)
kernel1x1, bias1x1 = self._fuse_bn_tensor(self.conv2)
if self.alpha:
return kernel3x3 + self.alpha * self._pad_1x1_to_3x3_tensor(
kernel1x1), bias3x3 + self.alpha * bias1x1
else:
return kernel3x3 + self._pad_1x1_to_3x3_tensor(
kernel1x1), bias3x3 + bias1x1
def _pad_1x1_to_3x3_tensor(self, kernel1x1):
if kernel1x1 is None:
return 0
else:
return nn.functional.pad(kernel1x1, [1, 1, 1, 1])
def _fuse_bn_tensor(self, branch):
if branch is None:
return 0, 0
kernel = branch.conv.weight
running_mean = branch.bn._mean
running_var = branch.bn._variance
gamma = branch.bn.weight
beta = branch.bn.bias
eps = branch.bn._epsilon
std = (running_var + eps).sqrt()
t = (gamma / std).reshape((-1, 1, 1, 1))
return kernel * t, beta - running_mean * gamma / std
class BasicBlock(nn.Layer):
def __init__(self,
ch_in,
ch_out,
act='relu',
shortcut=True,
use_alpha=False):
super(BasicBlock, self).__init__()
assert ch_in == ch_out
self.conv1 = ConvBNLayer(ch_in, ch_out, 3, stride=1, padding=1, act=act)
self.conv2 = RepVggBlock(ch_out, ch_out, act=act, alpha=use_alpha)
self.shortcut = shortcut
def forward(self, x):
y = self.conv1(x)
y = self.conv2(y)
if self.shortcut:
return paddle.add(x, y)
else:
return y
class EffectiveSELayer(nn.Layer):
""" Effective Squeeze-Excitation
From `CenterMask : Real-Time Anchor-Free Instance Segmentation` - https://arxiv.org/abs/1911.06667
"""
def __init__(self, channels, act='hardsigmoid'):
super(EffectiveSELayer, self).__init__()
self.fc = nn.Conv2D(channels, channels, kernel_size=1, padding=0)
self.act = get_act_fn(act) if act is None or isinstance(act, (
str, dict)) else act
def forward(self, x):
x_se = x.mean((2, 3), keepdim=True)
x_se = self.fc(x_se)
return x * self.act(x_se)
class CSPResStage(nn.Layer):
def __init__(self,
block_fn,
ch_in,
ch_out,
n,
stride,
act='relu',
attn='eca',
use_alpha=False):
super(CSPResStage, self).__init__()
ch_mid = (ch_in + ch_out) // 2
if stride == 2:
self.conv_down = ConvBNLayer(
ch_in, ch_mid, 3, stride=2, padding=1, act=act)
else:
self.conv_down = None
self.conv1 = ConvBNLayer(ch_mid, ch_mid // 2, 1, act=act)
self.conv2 = ConvBNLayer(ch_mid, ch_mid // 2, 1, act=act)
self.blocks = nn.Sequential(*[
block_fn(
ch_mid // 2,
ch_mid // 2,
act=act,
shortcut=True,
use_alpha=use_alpha) for i in range(n)
])
if attn:
self.attn = EffectiveSELayer(ch_mid, act='hardsigmoid')
else:
self.attn = None
self.conv3 = ConvBNLayer(ch_mid, ch_out, 1, act=act)
def forward(self, x):
if self.conv_down is not None:
x = self.conv_down(x)
y1 = self.conv1(x)
y2 = self.blocks(self.conv2(x))
y = paddle.concat([y1, y2], axis=1)
if self.attn is not None:
y = self.attn(y)
y = self.conv3(y)
return y
@register
@serializable
class CSPResNet(nn.Layer):
__shared__ = ['width_mult', 'depth_mult', 'trt']
def __init__(self,
layers=[3, 6, 6, 3],
channels=[64, 128, 256, 512, 1024],
act='swish',
return_idx=[1, 2, 3],
depth_wise=False,
use_large_stem=False,
width_mult=1.0,
depth_mult=1.0,
trt=False,
use_checkpoint=False,
use_alpha=False,
**args):
super(CSPResNet, self).__init__()
self.use_checkpoint = use_checkpoint
channels = [max(round(c * width_mult), 1) for c in channels]
layers = [max(round(l * depth_mult), 1) for l in layers]
act = get_act_fn(
act, trt=trt) if act is None or isinstance(act,
(str, dict)) else act
if use_large_stem:
self.stem = nn.Sequential(
('conv1', ConvBNLayer(
3, channels[0] // 2, 3, stride=2, padding=1, act=act)),
('conv2', ConvBNLayer(
channels[0] // 2,
channels[0] // 2,
3,
stride=1,
padding=1,
act=act)), ('conv3', ConvBNLayer(
channels[0] // 2,
channels[0],
3,
stride=1,
padding=1,
act=act)))
else:
self.stem = nn.Sequential(
('conv1', ConvBNLayer(
3, channels[0] // 2, 3, stride=2, padding=1, act=act)),
('conv2', ConvBNLayer(
channels[0] // 2,
channels[0],
3,
stride=1,
padding=1,
act=act)))
n = len(channels) - 1
self.stages = nn.Sequential(*[(str(i), CSPResStage(
BasicBlock,
channels[i],
channels[i + 1],
layers[i],
2,
act=act,
use_alpha=use_alpha)) for i in range(n)])
self._out_channels = channels[1:]
self._out_strides = [4 * 2**i for i in range(n)]
self.return_idx = return_idx
if use_checkpoint:
paddle.seed(0)
def forward(self, inputs):
x = inputs['image']
x = self.stem(x)
outs = []
for idx, stage in enumerate(self.stages):
if self.use_checkpoint and self.training:
x = paddle.distributed.fleet.utils.recompute(
stage, x, **{"preserve_rng_state": True})
else:
x = stage(x)
if idx in self.return_idx:
outs.append(x)
return outs
@property
def out_shape(self):
return [
ShapeSpec(
channels=self._out_channels[i], stride=self._out_strides[i])
for i in self.return_idx
]
| PaddleDetection/ppdet/modeling/backbones/cspresnet.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/backbones/cspresnet.py",
"repo_id": "PaddleDetection",
"token_count": 5665
} | 71 |
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from numbers import Integral
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from ppdet.core.workspace import register, serializable
from paddle.regularizer import L2Decay
from paddle.nn.initializer import Uniform
from paddle import ParamAttr
from paddle.nn.initializer import Constant
from paddle.vision.ops import DeformConv2D
from .name_adapter import NameAdapter
from ..shape_spec import ShapeSpec
__all__ = ['ResNet', 'Res5Head', 'Blocks', 'BasicBlock', 'BottleNeck']
ResNet_cfg = {
18: [2, 2, 2, 2],
34: [3, 4, 6, 3],
50: [3, 4, 6, 3],
101: [3, 4, 23, 3],
152: [3, 8, 36, 3],
}
class ConvNormLayer(nn.Layer):
def __init__(self,
ch_in,
ch_out,
filter_size,
stride,
groups=1,
act=None,
norm_type='bn',
norm_decay=0.,
freeze_norm=True,
lr=1.0,
dcn_v2=False):
super(ConvNormLayer, self).__init__()
assert norm_type in ['bn', 'sync_bn']
self.norm_type = norm_type
self.act = act
self.dcn_v2 = dcn_v2
if not self.dcn_v2:
self.conv = nn.Conv2D(
in_channels=ch_in,
out_channels=ch_out,
kernel_size=filter_size,
stride=stride,
padding=(filter_size - 1) // 2,
groups=groups,
weight_attr=ParamAttr(learning_rate=lr),
bias_attr=False)
else:
self.offset_channel = 2 * filter_size**2
self.mask_channel = filter_size**2
self.conv_offset = nn.Conv2D(
in_channels=ch_in,
out_channels=3 * filter_size**2,
kernel_size=filter_size,
stride=stride,
padding=(filter_size - 1) // 2,
weight_attr=ParamAttr(initializer=Constant(0.)),
bias_attr=ParamAttr(initializer=Constant(0.)))
self.conv = DeformConv2D(
in_channels=ch_in,
out_channels=ch_out,
kernel_size=filter_size,
stride=stride,
padding=(filter_size - 1) // 2,
dilation=1,
groups=groups,
weight_attr=ParamAttr(learning_rate=lr),
bias_attr=False)
norm_lr = 0. if freeze_norm else lr
param_attr = ParamAttr(
learning_rate=norm_lr,
regularizer=L2Decay(norm_decay),
trainable=False if freeze_norm else True)
bias_attr = ParamAttr(
learning_rate=norm_lr,
regularizer=L2Decay(norm_decay),
trainable=False if freeze_norm else True)
global_stats = True if freeze_norm else None
if norm_type in ['sync_bn', 'bn']:
self.norm = nn.BatchNorm2D(
ch_out,
weight_attr=param_attr,
bias_attr=bias_attr,
use_global_stats=global_stats)
norm_params = self.norm.parameters()
if freeze_norm:
for param in norm_params:
param.stop_gradient = True
def forward(self, inputs):
if not self.dcn_v2:
out = self.conv(inputs)
else:
offset_mask = self.conv_offset(inputs)
offset, mask = paddle.split(
offset_mask,
num_or_sections=[self.offset_channel, self.mask_channel],
axis=1)
mask = F.sigmoid(mask)
out = self.conv(inputs, offset, mask=mask)
if self.norm_type in ['bn', 'sync_bn']:
out = self.norm(out)
if self.act:
out = getattr(F, self.act)(out)
return out
class SELayer(nn.Layer):
def __init__(self, ch, reduction_ratio=16):
super(SELayer, self).__init__()
self.pool = nn.AdaptiveAvgPool2D(1)
stdv = 1.0 / math.sqrt(ch)
c_ = ch // reduction_ratio
self.squeeze = nn.Linear(
ch,
c_,
weight_attr=paddle.ParamAttr(initializer=Uniform(-stdv, stdv)),
bias_attr=True)
stdv = 1.0 / math.sqrt(c_)
self.extract = nn.Linear(
c_,
ch,
weight_attr=paddle.ParamAttr(initializer=Uniform(-stdv, stdv)),
bias_attr=True)
def forward(self, inputs):
out = self.pool(inputs)
out = paddle.squeeze(out, axis=[2, 3])
out = self.squeeze(out)
out = F.relu(out)
out = self.extract(out)
out = F.sigmoid(out)
out = paddle.unsqueeze(out, axis=[2, 3])
scale = out * inputs
return scale
class BasicBlock(nn.Layer):
expansion = 1
def __init__(self,
ch_in,
ch_out,
stride,
shortcut,
variant='b',
groups=1,
base_width=64,
lr=1.0,
norm_type='bn',
norm_decay=0.,
freeze_norm=True,
dcn_v2=False,
std_senet=False):
super(BasicBlock, self).__init__()
assert groups == 1 and base_width == 64, 'BasicBlock only supports groups=1 and base_width=64'
self.shortcut = shortcut
if not shortcut:
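            # ResNet-D trick: downsample the shortcut with avg-pool followed by a
            # 1x1 conv instead of a strided 1x1 conv, which loses less information.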
if variant == 'd' and stride == 2:
self.short = nn.Sequential()
self.short.add_sublayer(
'pool',
nn.AvgPool2D(
kernel_size=2, stride=2, padding=0, ceil_mode=True))
self.short.add_sublayer(
'conv',
ConvNormLayer(
ch_in=ch_in,
ch_out=ch_out,
filter_size=1,
stride=1,
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
lr=lr))
else:
self.short = ConvNormLayer(
ch_in=ch_in,
ch_out=ch_out,
filter_size=1,
stride=stride,
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
lr=lr)
self.branch2a = ConvNormLayer(
ch_in=ch_in,
ch_out=ch_out,
filter_size=3,
stride=stride,
act='relu',
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
lr=lr)
self.branch2b = ConvNormLayer(
ch_in=ch_out,
ch_out=ch_out,
filter_size=3,
stride=1,
act=None,
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
lr=lr,
dcn_v2=dcn_v2)
self.std_senet = std_senet
if self.std_senet:
self.se = SELayer(ch_out)
def forward(self, inputs):
out = self.branch2a(inputs)
out = self.branch2b(out)
if self.std_senet:
out = self.se(out)
if self.shortcut:
short = inputs
else:
short = self.short(inputs)
out = paddle.add(x=out, y=short)
out = F.relu(out)
return out
class BottleNeck(nn.Layer):
expansion = 4
def __init__(self,
ch_in,
ch_out,
stride,
shortcut,
variant='b',
groups=1,
base_width=4,
lr=1.0,
norm_type='bn',
norm_decay=0.,
freeze_norm=True,
dcn_v2=False,
std_senet=False):
super(BottleNeck, self).__init__()
if variant == 'a':
stride1, stride2 = stride, 1
else:
stride1, stride2 = 1, stride
# ResNeXt
width = int(ch_out * (base_width / 64.)) * groups
self.branch2a = ConvNormLayer(
ch_in=ch_in,
ch_out=width,
filter_size=1,
stride=stride1,
groups=1,
act='relu',
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
lr=lr)
self.branch2b = ConvNormLayer(
ch_in=width,
ch_out=width,
filter_size=3,
stride=stride2,
groups=groups,
act='relu',
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
lr=lr,
dcn_v2=dcn_v2)
self.branch2c = ConvNormLayer(
ch_in=width,
ch_out=ch_out * self.expansion,
filter_size=1,
stride=1,
groups=1,
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
lr=lr)
self.shortcut = shortcut
if not shortcut:
if variant == 'd' and stride == 2:
self.short = nn.Sequential()
self.short.add_sublayer(
'pool',
nn.AvgPool2D(
kernel_size=2, stride=2, padding=0, ceil_mode=True))
self.short.add_sublayer(
'conv',
ConvNormLayer(
ch_in=ch_in,
ch_out=ch_out * self.expansion,
filter_size=1,
stride=1,
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
lr=lr))
else:
self.short = ConvNormLayer(
ch_in=ch_in,
ch_out=ch_out * self.expansion,
filter_size=1,
stride=stride,
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
lr=lr)
self.std_senet = std_senet
if self.std_senet:
self.se = SELayer(ch_out * self.expansion)
def forward(self, inputs):
out = self.branch2a(inputs)
out = self.branch2b(out)
out = self.branch2c(out)
if self.std_senet:
out = self.se(out)
if self.shortcut:
short = inputs
else:
short = self.short(inputs)
out = paddle.add(x=out, y=short)
out = F.relu(out)
return out
class Blocks(nn.Layer):
def __init__(self,
block,
ch_in,
ch_out,
count,
name_adapter,
stage_num,
variant='b',
groups=1,
base_width=64,
lr=1.0,
norm_type='bn',
norm_decay=0.,
freeze_norm=True,
dcn_v2=False,
std_senet=False):
super(Blocks, self).__init__()
self.blocks = []
for i in range(count):
conv_name = name_adapter.fix_layer_warp_name(stage_num, count, i)
layer = self.add_sublayer(
conv_name,
block(
ch_in=ch_in,
ch_out=ch_out,
stride=2 if i == 0 and stage_num != 2 else 1,
shortcut=False if i == 0 else True,
variant=variant,
groups=groups,
base_width=base_width,
lr=lr,
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
dcn_v2=dcn_v2,
std_senet=std_senet))
self.blocks.append(layer)
if i == 0:
ch_in = ch_out * block.expansion
def forward(self, inputs):
block_out = inputs
for block in self.blocks:
block_out = block(block_out)
return block_out
@register
@serializable
class ResNet(nn.Layer):
__shared__ = ['norm_type']
def __init__(self,
depth=50,
ch_in=64,
variant='b',
lr_mult_list=[1.0, 1.0, 1.0, 1.0],
groups=1,
base_width=64,
norm_type='bn',
norm_decay=0,
freeze_norm=True,
freeze_at=0,
return_idx=[0, 1, 2, 3],
dcn_v2_stages=[-1],
num_stages=4,
std_senet=False,
freeze_stem_only=False):
"""
Residual Network, see https://arxiv.org/abs/1512.03385
Args:
depth (int): ResNet depth, should be 18, 34, 50, 101, 152.
ch_in (int): output channel of first stage, default 64
variant (str): ResNet variant, supports 'a', 'b', 'c', 'd' currently
            lr_mult_list (list): learning rate ratios of the ResNet stages (2, 3, 4, 5);
                                 a lower ratio is needed for pretrained models
                                 obtained via distillation (default [1.0, 1.0, 1.0, 1.0]).
groups (int): group convolution cardinality
base_width (int): base width of each group convolution
            norm_type (str): normalization type, 'bn' or 'sync_bn'
norm_decay (float): weight decay for normalization layer weights
freeze_norm (bool): freeze normalization layers
freeze_at (int): freeze the backbone at which stage
return_idx (list): index of the stages whose feature maps are returned
dcn_v2_stages (list): index of stages who select deformable conv v2
num_stages (int): total num of stages
            std_senet (bool): whether to use SENet blocks, default False
"""
super(ResNet, self).__init__()
self._model_type = 'ResNet' if groups == 1 else 'ResNeXt'
assert num_stages >= 1 and num_stages <= 4
self.depth = depth
self.variant = variant
self.groups = groups
self.base_width = base_width
self.norm_type = norm_type
self.norm_decay = norm_decay
self.freeze_norm = freeze_norm
self.freeze_at = freeze_at
if isinstance(return_idx, Integral):
return_idx = [return_idx]
assert max(return_idx) < num_stages, \
'the maximum return index must smaller than num_stages, ' \
'but received maximum return index is {} and num_stages ' \
'is {}'.format(max(return_idx), num_stages)
self.return_idx = return_idx
self.num_stages = num_stages
assert len(lr_mult_list) == 4, \
"lr_mult_list length must be 4 but got {}".format(len(lr_mult_list))
        if isinstance(dcn_v2_stages, Integral):
            dcn_v2_stages = [dcn_v2_stages]
        assert max(dcn_v2_stages) < num_stages
self.dcn_v2_stages = dcn_v2_stages
block_nums = ResNet_cfg[depth]
na = NameAdapter(self)
conv1_name = na.fix_c1_stage_name()
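        # Variants 'c' and 'd' replace the single 7x7 stem conv with three 3x3 convs
        # (the "deep stem" from the ResNet bag-of-tricks paper).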
if variant in ['c', 'd']:
conv_def = [
[3, ch_in // 2, 3, 2, "conv1_1"],
[ch_in // 2, ch_in // 2, 3, 1, "conv1_2"],
[ch_in // 2, ch_in, 3, 1, "conv1_3"],
]
else:
conv_def = [[3, ch_in, 7, 2, conv1_name]]
self.conv1 = nn.Sequential()
for (c_in, c_out, k, s, _name) in conv_def:
self.conv1.add_sublayer(
_name,
ConvNormLayer(
ch_in=c_in,
ch_out=c_out,
filter_size=k,
stride=s,
groups=1,
act='relu',
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
lr=1.0))
self.ch_in = ch_in
ch_out_list = [64, 128, 256, 512]
block = BottleNeck if depth >= 50 else BasicBlock
self._out_channels = [block.expansion * v for v in ch_out_list]
self._out_strides = [4, 8, 16, 32]
self.res_layers = []
for i in range(num_stages):
lr_mult = lr_mult_list[i]
stage_num = i + 2
res_name = "res{}".format(stage_num)
res_layer = self.add_sublayer(
res_name,
Blocks(
block,
self.ch_in,
ch_out_list[i],
count=block_nums[i],
name_adapter=na,
stage_num=stage_num,
variant=variant,
groups=groups,
base_width=base_width,
lr=lr_mult,
norm_type=norm_type,
norm_decay=norm_decay,
freeze_norm=freeze_norm,
dcn_v2=(i in self.dcn_v2_stages),
std_senet=std_senet))
self.res_layers.append(res_layer)
self.ch_in = self._out_channels[i]
if freeze_at >= 0:
self._freeze_parameters(self.conv1)
if not freeze_stem_only:
for i in range(min(freeze_at + 1, num_stages)):
self._freeze_parameters(self.res_layers[i])
def _freeze_parameters(self, m):
for p in m.parameters():
p.stop_gradient = True
@property
def out_shape(self):
return [
ShapeSpec(
channels=self._out_channels[i], stride=self._out_strides[i])
for i in self.return_idx
]
def forward(self, inputs):
x = inputs['image']
conv1 = self.conv1(x)
x = F.max_pool2d(conv1, kernel_size=3, stride=2, padding=1)
outs = []
for idx, stage in enumerate(self.res_layers):
x = stage(x)
if idx in self.return_idx:
outs.append(x)
return outs
@register
class Res5Head(nn.Layer):
def __init__(self, depth=50):
super(Res5Head, self).__init__()
feat_in, feat_out = [1024, 512]
if depth < 50:
feat_in = 256
na = NameAdapter(self)
block = BottleNeck if depth >= 50 else BasicBlock
self.res5 = Blocks(
block, feat_in, feat_out, count=3, name_adapter=na, stage_num=5)
self.feat_out = feat_out if depth < 50 else feat_out * 4
@property
def out_shape(self):
return [ShapeSpec(
channels=self.feat_out,
stride=16, )]
def forward(self, roi_feat, stage=0):
y = self.res5(roi_feat)
return y
| PaddleDetection/ppdet/modeling/backbones/resnet.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/backbones/resnet.py",
"repo_id": "PaddleDetection",
"token_count": 11533
} | 72 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.nn.initializer import Constant, Uniform
from ppdet.core.workspace import register
from ppdet.modeling.losses import CTFocalLoss, GIoULoss
class ConvLayer(nn.Layer):
def __init__(self,
ch_in,
ch_out,
kernel_size,
stride=1,
padding=0,
dilation=1,
groups=1,
bias=False):
super(ConvLayer, self).__init__()
bias_attr = False
fan_in = ch_in * kernel_size**2
bound = 1 / math.sqrt(fan_in)
param_attr = paddle.ParamAttr(initializer=Uniform(-bound, bound))
if bias:
bias_attr = paddle.ParamAttr(initializer=Constant(0.))
self.conv = nn.Conv2D(
in_channels=ch_in,
out_channels=ch_out,
kernel_size=kernel_size,
stride=stride,
padding=padding,
dilation=dilation,
groups=groups,
weight_attr=param_attr,
bias_attr=bias_attr)
def forward(self, inputs):
out = self.conv(inputs)
return out
@register
class CenterNetHead(nn.Layer):
"""
Args:
in_channels (int): the channel number of input to CenterNetHead.
num_classes (int): the number of classes, 80 (COCO dataset) by default.
head_planes (int): the channel number in all head, 256 by default.
prior_bias (float): prior bias in heatmap head, -2.19 by default, -4.6 in CenterTrack
regress_ltrb (bool): whether to regress left/top/right/bottom or
width/height for a box, True by default.
size_loss (str): the type of size regression loss, 'L1' by default, can be 'giou'.
loss_weight (dict): the weight of each loss.
add_iou (bool): whether to add iou branch, False by default.
"""
__shared__ = ['num_classes']
def __init__(self,
in_channels,
num_classes=80,
head_planes=256,
prior_bias=-2.19,
regress_ltrb=True,
size_loss='L1',
loss_weight={
'heatmap': 1.0,
'size': 0.1,
'offset': 1.0,
'iou': 0.0,
},
add_iou=False):
super(CenterNetHead, self).__init__()
self.regress_ltrb = regress_ltrb
self.loss_weight = loss_weight
self.add_iou = add_iou
# heatmap head
self.heatmap = nn.Sequential(
ConvLayer(
in_channels, head_planes, kernel_size=3, padding=1, bias=True),
nn.ReLU(),
ConvLayer(
head_planes,
num_classes,
kernel_size=1,
stride=1,
padding=0,
bias=True))
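        # Initialize the heatmap head bias with a negative prior so that initial
        # foreground probabilities are low, which stabilizes focal-loss training.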
with paddle.no_grad():
self.heatmap[2].conv.bias[:] = prior_bias
# size(ltrb or wh) head
self.size = nn.Sequential(
ConvLayer(
in_channels, head_planes, kernel_size=3, padding=1, bias=True),
nn.ReLU(),
ConvLayer(
head_planes,
4 if regress_ltrb else 2,
kernel_size=1,
stride=1,
padding=0,
bias=True))
self.size_loss = size_loss
# offset head
self.offset = nn.Sequential(
ConvLayer(
in_channels, head_planes, kernel_size=3, padding=1, bias=True),
nn.ReLU(),
ConvLayer(
head_planes, 2, kernel_size=1, stride=1, padding=0, bias=True))
        # iou head (optional)
if self.add_iou and 'iou' in self.loss_weight:
self.iou = nn.Sequential(
ConvLayer(
in_channels,
head_planes,
kernel_size=3,
padding=1,
bias=True),
nn.ReLU(),
ConvLayer(
head_planes,
4 if regress_ltrb else 2,
kernel_size=1,
stride=1,
padding=0,
bias=True))
@classmethod
def from_config(cls, cfg, input_shape):
if isinstance(input_shape, (list, tuple)):
input_shape = input_shape[0]
return {'in_channels': input_shape.channels}
def forward(self, feat, inputs):
heatmap = F.sigmoid(self.heatmap(feat))
size = self.size(feat)
offset = self.offset(feat)
head_outs = {'heatmap': heatmap, 'size': size, 'offset': offset}
if self.add_iou and 'iou' in self.loss_weight:
iou = self.iou(feat)
head_outs.update({'iou': iou})
if self.training:
losses = self.get_loss(inputs, self.loss_weight, head_outs)
return losses
else:
return head_outs
def get_loss(self, inputs, weights, head_outs):
# 1.heatmap(hm) head loss: CTFocalLoss
heatmap = head_outs['heatmap']
heatmap_target = inputs['heatmap']
heatmap = paddle.clip(heatmap, 1e-4, 1 - 1e-4)
ctfocal_loss = CTFocalLoss()
heatmap_loss = ctfocal_loss(heatmap, heatmap_target)
# 2.size(wh) head loss: L1 loss or GIoU loss
size = head_outs['size']
index = inputs['index']
mask = inputs['index_mask']
size = paddle.transpose(size, perm=[0, 2, 3, 1])
size_n, _, _, size_c = size.shape
size = paddle.reshape(size, shape=[size_n, -1, size_c])
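        # Prepend the batch id to each flattened spatial index so that gather_nd can
        # pick the size predictions at the positive (object-center) locations.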
index = paddle.unsqueeze(index, 2)
batch_inds = list()
for i in range(size_n):
batch_ind = paddle.full(
shape=[1, index.shape[1], 1], fill_value=i, dtype='int64')
batch_inds.append(batch_ind)
batch_inds = paddle.concat(batch_inds, axis=0)
index = paddle.concat(x=[batch_inds, index], axis=2)
pos_size = paddle.gather_nd(size, index=index)
mask = paddle.unsqueeze(mask, axis=2)
size_mask = paddle.expand_as(mask, pos_size)
size_mask = paddle.cast(size_mask, dtype=pos_size.dtype)
pos_num = size_mask.sum()
size_mask.stop_gradient = True
if self.size_loss == 'L1':
if self.regress_ltrb:
size_target = inputs['size']
# shape: [bs, max_per_img, 4]
else:
if inputs['size'].shape[-1] == 2:
# inputs['size'] is wh, and regress as wh
# shape: [bs, max_per_img, 2]
size_target = inputs['size']
else:
# inputs['size'] is ltrb, but regress as wh
# shape: [bs, max_per_img, 4]
size_target = inputs['size'][:, :, 0:2] + inputs[
'size'][:, :, 2:]
size_target.stop_gradient = True
size_loss = F.l1_loss(
pos_size * size_mask, size_target * size_mask, reduction='sum')
size_loss = size_loss / (pos_num + 1e-4)
elif self.size_loss == 'giou':
size_target = inputs['bbox_xys']
size_target.stop_gradient = True
centers_x = (size_target[:, :, 0:1] + size_target[:, :, 2:3]) / 2.0
centers_y = (size_target[:, :, 1:2] + size_target[:, :, 3:4]) / 2.0
x1 = centers_x - pos_size[:, :, 0:1]
y1 = centers_y - pos_size[:, :, 1:2]
x2 = centers_x + pos_size[:, :, 2:3]
y2 = centers_y + pos_size[:, :, 3:4]
pred_boxes = paddle.concat([x1, y1, x2, y2], axis=-1)
giou_loss = GIoULoss(reduction='sum')
size_loss = giou_loss(
pred_boxes * size_mask,
size_target * size_mask,
iou_weight=size_mask,
loc_reweight=None)
size_loss = size_loss / (pos_num + 1e-4)
# 3.offset(reg) head loss: L1 loss
offset = head_outs['offset']
offset_target = inputs['offset']
offset = paddle.transpose(offset, perm=[0, 2, 3, 1])
offset_n, _, _, offset_c = offset.shape
offset = paddle.reshape(offset, shape=[offset_n, -1, offset_c])
pos_offset = paddle.gather_nd(offset, index=index)
offset_mask = paddle.expand_as(mask, pos_offset)
offset_mask = paddle.cast(offset_mask, dtype=pos_offset.dtype)
pos_num = offset_mask.sum()
offset_mask.stop_gradient = True
offset_target.stop_gradient = True
offset_loss = F.l1_loss(
pos_offset * offset_mask,
offset_target * offset_mask,
reduction='sum')
offset_loss = offset_loss / (pos_num + 1e-4)
        # 4.iou head loss: GIoU loss (optional)
if self.add_iou and 'iou' in self.loss_weight:
iou = head_outs['iou']
iou = paddle.transpose(iou, perm=[0, 2, 3, 1])
iou_n, _, _, iou_c = iou.shape
iou = paddle.reshape(iou, shape=[iou_n, -1, iou_c])
pos_iou = paddle.gather_nd(iou, index=index)
iou_mask = paddle.expand_as(mask, pos_iou)
iou_mask = paddle.cast(iou_mask, dtype=pos_iou.dtype)
pos_num = iou_mask.sum()
iou_mask.stop_gradient = True
gt_bbox_xys = inputs['bbox_xys']
gt_bbox_xys.stop_gradient = True
centers_x = (gt_bbox_xys[:, :, 0:1] + gt_bbox_xys[:, :, 2:3]) / 2.0
centers_y = (gt_bbox_xys[:, :, 1:2] + gt_bbox_xys[:, :, 3:4]) / 2.0
x1 = centers_x - pos_size[:, :, 0:1]
y1 = centers_y - pos_size[:, :, 1:2]
x2 = centers_x + pos_size[:, :, 2:3]
y2 = centers_y + pos_size[:, :, 3:4]
pred_boxes = paddle.concat([x1, y1, x2, y2], axis=-1)
giou_loss = GIoULoss(reduction='sum')
iou_loss = giou_loss(
pred_boxes * iou_mask,
gt_bbox_xys * iou_mask,
iou_weight=iou_mask,
loc_reweight=None)
iou_loss = iou_loss / (pos_num + 1e-4)
losses = {
'heatmap_loss': heatmap_loss,
'size_loss': size_loss,
'offset_loss': offset_loss,
}
det_loss = weights['heatmap'] * heatmap_loss + weights[
'size'] * size_loss + weights['offset'] * offset_loss
if self.add_iou and 'iou' in self.loss_weight:
losses.update({'iou_loss': iou_loss})
det_loss += weights['iou'] * iou_loss
losses.update({'det_loss': det_loss})
return losses
| PaddleDetection/ppdet/modeling/heads/centernet_head.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/heads/centernet_head.py",
"repo_id": "PaddleDetection",
"token_count": 6012
} | 73 |
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle
from ppdet.core.workspace import register
from ppdet.modeling import ops
import paddle.nn as nn
def _to_list(v):
if not isinstance(v, (list, tuple)):
return [v]
return v
@register
class RoIAlign(nn.Layer):
"""
RoI Align module
For more details, please refer to the document of roi_align in
in https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/vision/ops.py
Args:
resolution (int): The output size, default 14
spatial_scale (float): Multiplicative spatial scale factor to translate
ROI coords from their input scale to the scale used when pooling.
default 0.0625
sampling_ratio (int): The number of sampling points in the interpolation
grid, default 0
        canconical_level (int): The canonical FPN level that canonical_size
            refers to when assigning RoIs to levels. default 4
        canonical_size (int): The canonical box scale at canconical_level
            used when assigning RoIs to levels. default 224
start_level (int): The start level of FPN layer to extract RoI feature,
default 0
end_level (int): The end level of FPN layer to extract RoI feature,
default 3
aligned (bool): Whether to add offset to rois' coord in roi_align.
default false
"""
def __init__(self,
resolution=14,
spatial_scale=0.0625,
sampling_ratio=0,
canconical_level=4,
canonical_size=224,
start_level=0,
end_level=3,
aligned=False):
super(RoIAlign, self).__init__()
self.resolution = resolution
self.spatial_scale = _to_list(spatial_scale)
self.sampling_ratio = sampling_ratio
self.canconical_level = canconical_level
self.canonical_size = canonical_size
self.start_level = start_level
self.end_level = end_level
self.aligned = aligned
@classmethod
def from_config(cls, cfg, input_shape):
return {'spatial_scale': [1. / i.stride for i in input_shape]}
def forward(self, feats, roi, rois_num):
roi = paddle.concat(roi) if len(roi) > 1 else roi[0]
if len(feats) == 1:
rois_feat = paddle.vision.ops.roi_align(
x=feats[self.start_level],
boxes=roi,
boxes_num=rois_num,
output_size=self.resolution,
spatial_scale=self.spatial_scale[0],
aligned=self.aligned)
else:
offset = 2
k_min = self.start_level + offset
k_max = self.end_level + offset
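            # distribute_fpn_proposals maps each RoI to an FPN level according to its
            # scale relative to canonical_size at the canonical level (FPN heuristic)
            # and returns a restore_index for recovering the original RoI order.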
if hasattr(paddle.vision.ops, "distribute_fpn_proposals"):
distribute_fpn_proposals = getattr(paddle.vision.ops,
"distribute_fpn_proposals")
else:
distribute_fpn_proposals = ops.distribute_fpn_proposals
rois_dist, restore_index, rois_num_dist = distribute_fpn_proposals(
roi,
k_min,
k_max,
self.canconical_level,
self.canonical_size,
rois_num=rois_num)
rois_feat_list = []
for lvl in range(self.start_level, self.end_level + 1):
roi_feat = paddle.vision.ops.roi_align(
x=feats[lvl],
boxes=rois_dist[lvl],
boxes_num=rois_num_dist[lvl],
output_size=self.resolution,
spatial_scale=self.spatial_scale[lvl],
sampling_ratio=self.sampling_ratio,
aligned=self.aligned)
rois_feat_list.append(roi_feat)
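            # Concatenate the per-level features and gather with restore_index to put
            # them back into the original RoI order.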
rois_feat_shuffle = paddle.concat(rois_feat_list)
rois_feat = paddle.gather(rois_feat_shuffle, restore_index)
return rois_feat
| PaddleDetection/ppdet/modeling/heads/roi_extractor.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/heads/roi_extractor.py",
"repo_id": "PaddleDetection",
"token_count": 2149
} | 74 |
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from . import yolo_loss
from . import iou_aware_loss
from . import iou_loss
from . import ssd_loss
from . import fcos_loss
from . import solov2_loss
from . import ctfocal_loss
from . import keypoint_loss
from . import jde_loss
from . import fairmot_loss
from . import gfocal_loss
from . import detr_loss
from . import sparsercnn_loss
from . import focal_loss
from . import smooth_l1_loss
from . import probiou_loss
from . import cot_loss
from . import supcontrast
from . import queryinst_loss
from . import clrnet_loss
from . import clrnet_line_iou_loss
from .yolo_loss import *
from .iou_aware_loss import *
from .iou_loss import *
from .ssd_loss import *
from .fcos_loss import *
from .solov2_loss import *
from .ctfocal_loss import *
from .keypoint_loss import *
from .jde_loss import *
from .fairmot_loss import *
from .gfocal_loss import *
from .detr_loss import *
from .sparsercnn_loss import *
from .focal_loss import *
from .smooth_l1_loss import *
from .pose3d_loss import *
from .probiou_loss import *
from .cot_loss import *
from .supcontrast import *
from .queryinst_loss import *
from .clrnet_loss import *
from .clrnet_line_iou_loss import * | PaddleDetection/ppdet/modeling/losses/__init__.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/losses/__init__.py",
"repo_id": "PaddleDetection",
"token_count": 558
} | 75 |
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle
import paddle.nn.functional as F
from ppdet.core.workspace import register
from ppdet.modeling.losses.iou_loss import GIoULoss
from .sparsercnn_loss import HungarianMatcher
__all__ = ['QueryInstLoss']
@register
class QueryInstLoss(object):
__shared__ = ['num_classes']
def __init__(self,
num_classes=80,
focal_loss_alpha=0.25,
focal_loss_gamma=2.0,
class_weight=2.0,
l1_weight=5.0,
giou_weight=2.0,
mask_weight=8.0):
super(QueryInstLoss, self).__init__()
self.num_classes = num_classes
self.focal_loss_alpha = focal_loss_alpha
self.focal_loss_gamma = focal_loss_gamma
self.loss_weights = {
"loss_cls": class_weight,
"loss_bbox": l1_weight,
"loss_giou": giou_weight,
"loss_mask": mask_weight
}
self.giou_loss = GIoULoss(eps=1e-6, reduction='sum')
self.matcher = HungarianMatcher(focal_loss_alpha, focal_loss_gamma,
class_weight, l1_weight, giou_weight)
def loss_classes(self, class_logits, targets, indices, avg_factor):
tgt_labels = paddle.full(
class_logits.shape[:2], self.num_classes, dtype='int32')
if sum(len(v['labels']) for v in targets) > 0:
tgt_classes = paddle.concat([
paddle.gather(
tgt['labels'], tgt_idx, axis=0)
for tgt, (_, tgt_idx) in zip(targets, indices)
])
batch_idx, src_idx = self._get_src_permutation_idx(indices)
for i, (batch_i, src_i) in enumerate(zip(batch_idx, src_idx)):
tgt_labels[int(batch_i), int(src_i)] = tgt_classes[i]
tgt_labels = tgt_labels.flatten(0, 1).unsqueeze(-1)
tgt_labels_onehot = paddle.cast(
tgt_labels == paddle.arange(0, self.num_classes), dtype='float32')
tgt_labels_onehot.stop_gradient = True
src_logits = class_logits.flatten(0, 1)
loss_cls = F.sigmoid_focal_loss(
src_logits,
tgt_labels_onehot,
alpha=self.focal_loss_alpha,
gamma=self.focal_loss_gamma,
reduction='sum') / avg_factor
losses = {'loss_cls': loss_cls * self.loss_weights['loss_cls']}
return losses
def loss_bboxes(self, bbox_pred, targets, indices, avg_factor):
bboxes = paddle.concat([
paddle.gather(
src, src_idx, axis=0)
for src, (src_idx, _) in zip(bbox_pred, indices)
])
tgt_bboxes = paddle.concat([
paddle.gather(
tgt['boxes'], tgt_idx, axis=0)
for tgt, (_, tgt_idx) in zip(targets, indices)
])
tgt_bboxes.stop_gradient = True
im_shapes = paddle.concat([tgt['img_whwh_tgt'] for tgt in targets])
bboxes_norm = bboxes / im_shapes
tgt_bboxes_norm = tgt_bboxes / im_shapes
loss_giou = self.giou_loss(bboxes, tgt_bboxes) / avg_factor
loss_bbox = F.l1_loss(
bboxes_norm, tgt_bboxes_norm, reduction='sum') / avg_factor
losses = {
'loss_bbox': loss_bbox * self.loss_weights['loss_bbox'],
'loss_giou': loss_giou * self.loss_weights['loss_giou']
}
return losses
def loss_masks(self, pos_bbox_pred, mask_logits, targets, indices,
avg_factor):
tgt_segm = [
paddle.gather(
tgt['gt_segm'], tgt_idx, axis=0)
for tgt, (_, tgt_idx) in zip(targets, indices)
]
tgt_masks = []
for i in range(len(indices)):
gt_segm = tgt_segm[i].unsqueeze(1)
if len(gt_segm) == 0:
continue
boxes = pos_bbox_pred[i]
boxes[:, 0::2] = paddle.clip(
boxes[:, 0::2], min=0, max=gt_segm.shape[3])
boxes[:, 1::2] = paddle.clip(
boxes[:, 1::2], min=0, max=gt_segm.shape[2])
boxes_num = paddle.to_tensor([1] * len(boxes), dtype='int32')
gt_mask = paddle.vision.ops.roi_align(
gt_segm,
boxes,
boxes_num,
output_size=mask_logits.shape[-2:],
aligned=True)
tgt_masks.append(gt_mask)
tgt_masks = paddle.concat(tgt_masks).squeeze(1)
tgt_masks = paddle.cast(tgt_masks >= 0.5, dtype='float32')
tgt_masks.stop_gradient = True
tgt_labels = paddle.concat([
paddle.gather(
tgt['labels'], tgt_idx, axis=0)
for tgt, (_, tgt_idx) in zip(targets, indices)
])
mask_label = F.one_hot(tgt_labels, self.num_classes).unsqueeze([2, 3])
mask_label = paddle.expand_as(mask_label, mask_logits)
mask_label.stop_gradient = True
src_masks = paddle.gather_nd(mask_logits, paddle.nonzero(mask_label))
shape = mask_logits.shape
src_masks = paddle.reshape(src_masks, [shape[0], shape[2], shape[3]])
src_masks = F.sigmoid(src_masks)
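        # Dice loss between the predicted per-class masks and the RoI-aligned
        # ground-truth masks; (1 - dice) is summed and normalized by avg_factor.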
X = src_masks.flatten(1)
Y = tgt_masks.flatten(1)
inter = paddle.sum(X * Y, 1)
union = paddle.sum(X * X, 1) + paddle.sum(Y * Y, 1)
dice = (2 * inter) / (union + 2e-5)
loss_mask = (1 - dice).sum() / avg_factor
losses = {'loss_mask': loss_mask * self.loss_weights['loss_mask']}
return losses
@staticmethod
def _get_src_permutation_idx(indices):
batch_idx = paddle.concat(
[paddle.full_like(src, i) for i, (src, _) in enumerate(indices)])
src_idx = paddle.concat([src for (src, _) in indices])
return batch_idx, src_idx
| PaddleDetection/ppdet/modeling/losses/queryinst_loss.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/losses/queryinst_loss.py",
"repo_id": "PaddleDetection",
"token_count": 3288
} | 76 |
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.nn.initializer import Normal
from ppdet.core.workspace import register
from .anchor_generator import AnchorGenerator
from .target_layer import RPNTargetAssign
from .proposal_generator import ProposalGenerator
from ..cls_utils import _get_class_default_kwargs
class RPNFeat(nn.Layer):
"""
Feature extraction in RPN head
Args:
in_channel (int): Input channel
out_channel (int): Output channel
"""
def __init__(self, in_channel=1024, out_channel=1024):
super(RPNFeat, self).__init__()
# rpn feat is shared with each level
self.rpn_conv = nn.Conv2D(
in_channels=in_channel,
out_channels=out_channel,
kernel_size=3,
padding=1,
weight_attr=paddle.ParamAttr(initializer=Normal(
mean=0., std=0.01)))
self.rpn_conv.skip_quant = True
def forward(self, feats):
rpn_feats = []
for feat in feats:
rpn_feats.append(F.relu(self.rpn_conv(feat)))
return rpn_feats
@register
class RPNHead(nn.Layer):
"""
Region Proposal Network
Args:
anchor_generator (dict): configure of anchor generation
rpn_target_assign (dict): configure of rpn targets assignment
train_proposal (dict): configure of proposals generation
at the stage of training
test_proposal (dict): configure of proposals generation
at the stage of prediction
in_channel (int): channel of input feature maps which can be
derived by from_config
"""
__shared__ = ['export_onnx']
__inject__ = ['loss_rpn_bbox']
def __init__(self,
anchor_generator=_get_class_default_kwargs(AnchorGenerator),
rpn_target_assign=_get_class_default_kwargs(RPNTargetAssign),
train_proposal=_get_class_default_kwargs(ProposalGenerator,
12000, 2000),
test_proposal=_get_class_default_kwargs(ProposalGenerator),
in_channel=1024,
export_onnx=False,
loss_rpn_bbox=None):
super(RPNHead, self).__init__()
self.anchor_generator = anchor_generator
self.rpn_target_assign = rpn_target_assign
self.train_proposal = train_proposal
self.test_proposal = test_proposal
self.export_onnx = export_onnx
if isinstance(anchor_generator, dict):
self.anchor_generator = AnchorGenerator(**anchor_generator)
if isinstance(rpn_target_assign, dict):
self.rpn_target_assign = RPNTargetAssign(**rpn_target_assign)
if isinstance(train_proposal, dict):
self.train_proposal = ProposalGenerator(**train_proposal)
if isinstance(test_proposal, dict):
self.test_proposal = ProposalGenerator(**test_proposal)
self.loss_rpn_bbox = loss_rpn_bbox
num_anchors = self.anchor_generator.num_anchors
self.rpn_feat = RPNFeat(in_channel, in_channel)
# rpn head is shared with each level
# rpn roi classification scores
self.rpn_rois_score = nn.Conv2D(
in_channels=in_channel,
out_channels=num_anchors,
kernel_size=1,
padding=0,
weight_attr=paddle.ParamAttr(initializer=Normal(
mean=0., std=0.01)))
self.rpn_rois_score.skip_quant = True
# rpn roi bbox regression deltas
self.rpn_rois_delta = nn.Conv2D(
in_channels=in_channel,
out_channels=4 * num_anchors,
kernel_size=1,
padding=0,
weight_attr=paddle.ParamAttr(initializer=Normal(
mean=0., std=0.01)))
self.rpn_rois_delta.skip_quant = True
@classmethod
def from_config(cls, cfg, input_shape):
# FPN share same rpn head
if isinstance(input_shape, (list, tuple)):
input_shape = input_shape[0]
return {'in_channel': input_shape.channels}
def forward(self, feats, inputs):
rpn_feats = self.rpn_feat(feats)
scores = []
deltas = []
for rpn_feat in rpn_feats:
rrs = self.rpn_rois_score(rpn_feat)
rrd = self.rpn_rois_delta(rpn_feat)
scores.append(rrs)
deltas.append(rrd)
anchors = self.anchor_generator(rpn_feats)
rois, rois_num = self._gen_proposal(scores, deltas, anchors, inputs)
if self.training:
loss = self.get_loss(scores, deltas, anchors, inputs)
return rois, rois_num, loss
else:
return rois, rois_num, None
def _gen_proposal(self, scores, bbox_deltas, anchors, inputs):
"""
scores (list[Tensor]): Multi-level scores prediction
bbox_deltas (list[Tensor]): Multi-level deltas prediction
anchors (list[Tensor]): Multi-level anchors
inputs (dict): ground truth info
"""
prop_gen = self.train_proposal if self.training else self.test_proposal
im_shape = inputs['im_shape']
# Collect multi-level proposals for each batch
# Get 'topk' of them as final output
if self.export_onnx:
# bs = 1 when exporting onnx
onnx_rpn_rois_list = []
onnx_rpn_prob_list = []
onnx_rpn_rois_num_list = []
for rpn_score, rpn_delta, anchor in zip(scores, bbox_deltas,
anchors):
onnx_rpn_rois, onnx_rpn_rois_prob, onnx_rpn_rois_num, onnx_post_nms_top_n = prop_gen(
scores=rpn_score[0:1],
bbox_deltas=rpn_delta[0:1],
anchors=anchor,
im_shape=im_shape[0:1])
onnx_rpn_rois_list.append(onnx_rpn_rois)
onnx_rpn_prob_list.append(onnx_rpn_rois_prob)
onnx_rpn_rois_num_list.append(onnx_rpn_rois_num)
onnx_rpn_rois = paddle.concat(onnx_rpn_rois_list)
onnx_rpn_prob = paddle.concat(onnx_rpn_prob_list).flatten()
onnx_top_n = paddle.to_tensor(onnx_post_nms_top_n).cast('int32')
onnx_num_rois = paddle.shape(onnx_rpn_prob)[0].cast('int32')
k = paddle.minimum(onnx_top_n, onnx_num_rois)
onnx_topk_prob, onnx_topk_inds = paddle.topk(onnx_rpn_prob, k)
onnx_topk_rois = paddle.gather(onnx_rpn_rois, onnx_topk_inds)
# TODO(wangguanzhong): Now bs_rois_collect in export_onnx is moved outside conditional branch
# due to problems in dy2static of paddle. Will fix it when updating paddle framework.
# bs_rois_collect = [onnx_topk_rois]
# bs_rois_num_collect = paddle.shape(onnx_topk_rois)[0]
else:
bs_rois_collect = []
bs_rois_num_collect = []
batch_size = paddle.slice(paddle.shape(im_shape), [0], [0], [1])
# Generate proposals for each level and each batch.
# Discard batch-computing to avoid sorting bbox cross different batches.
for i in range(batch_size):
rpn_rois_list = []
rpn_prob_list = []
rpn_rois_num_list = []
for rpn_score, rpn_delta, anchor in zip(scores, bbox_deltas,
anchors):
rpn_rois, rpn_rois_prob, rpn_rois_num, post_nms_top_n = prop_gen(
scores=rpn_score[i:i + 1],
bbox_deltas=rpn_delta[i:i + 1],
anchors=anchor,
im_shape=im_shape[i:i + 1])
rpn_rois_list.append(rpn_rois)
rpn_prob_list.append(rpn_rois_prob)
rpn_rois_num_list.append(rpn_rois_num)
if len(scores) > 1:
rpn_rois = paddle.concat(rpn_rois_list)
rpn_prob = paddle.concat(rpn_prob_list).flatten()
num_rois = paddle.shape(rpn_prob)[0].cast('int32')
if num_rois > post_nms_top_n:
topk_prob, topk_inds = paddle.topk(rpn_prob,
post_nms_top_n)
topk_rois = paddle.gather(rpn_rois, topk_inds)
else:
topk_rois = rpn_rois
topk_prob = rpn_prob
else:
topk_rois = rpn_rois_list[0]
topk_prob = rpn_prob_list[0].flatten()
bs_rois_collect.append(topk_rois)
bs_rois_num_collect.append(paddle.shape(topk_rois)[0:1])
bs_rois_num_collect = paddle.concat(bs_rois_num_collect)
if self.export_onnx:
output_rois = [onnx_topk_rois]
output_rois_num = paddle.shape(onnx_topk_rois)[0]
else:
output_rois = bs_rois_collect
output_rois_num = bs_rois_num_collect
return output_rois, output_rois_num
def get_loss(self, pred_scores, pred_deltas, anchors, inputs):
"""
pred_scores (list[Tensor]): Multi-level scores prediction
pred_deltas (list[Tensor]): Multi-level deltas prediction
anchors (list[Tensor]): Multi-level anchors
inputs (dict): ground truth info, including im, gt_bbox, gt_score
"""
anchors = [paddle.reshape(a, shape=(-1, 4)) for a in anchors]
anchors = paddle.concat(anchors)
scores = [
paddle.reshape(
paddle.transpose(
v, perm=[0, 2, 3, 1]),
shape=(v.shape[0], -1, 1)) for v in pred_scores
]
scores = paddle.concat(scores, axis=1)
deltas = [
paddle.reshape(
paddle.transpose(
v, perm=[0, 2, 3, 1]),
shape=(v.shape[0], -1, 4)) for v in pred_deltas
]
deltas = paddle.concat(deltas, axis=1)
score_tgt, bbox_tgt, loc_tgt, norm = self.rpn_target_assign(inputs,
anchors)
scores = paddle.reshape(x=scores, shape=(-1, ))
deltas = paddle.reshape(x=deltas, shape=(-1, 4))
score_tgt = paddle.concat(score_tgt)
score_tgt.stop_gradient = True
pos_mask = score_tgt == 1
pos_ind = paddle.nonzero(pos_mask)
valid_mask = score_tgt >= 0
valid_ind = paddle.nonzero(valid_mask)
# cls loss
if valid_ind.shape[0] == 0:
loss_rpn_cls = paddle.zeros([1], dtype='float32')
else:
score_pred = paddle.gather(scores, valid_ind)
score_label = paddle.gather(score_tgt, valid_ind).cast('float32')
score_label.stop_gradient = True
loss_rpn_cls = F.binary_cross_entropy_with_logits(
logit=score_pred, label=score_label, reduction="sum")
# reg loss
if pos_ind.shape[0] == 0:
loss_rpn_reg = paddle.zeros([1], dtype='float32')
else:
loc_pred = paddle.gather(deltas, pos_ind)
loc_tgt = paddle.concat(loc_tgt)
loc_tgt = paddle.gather(loc_tgt, pos_ind)
loc_tgt.stop_gradient = True
if self.loss_rpn_bbox is None:
loss_rpn_reg = paddle.abs(loc_pred - loc_tgt).sum()
else:
loss_rpn_reg = self.loss_rpn_bbox(loc_pred, loc_tgt).sum()
return {
'loss_rpn_cls': loss_rpn_cls / norm,
'loss_rpn_reg': loss_rpn_reg / norm
}
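# A minimal usage sketch in comments (hypothetical shapes; assumes the default
# AnchorGenerator / RPNTargetAssign / ProposalGenerator settings resolve outside
# the PaddleDetection config system):
#
#   rpn_head = RPNHead(in_channel=256)
#   rpn_head.eval()
#   feats = [paddle.rand([2, 256, s, s]) for s in (152, 76, 38, 19, 10)]  # FPN levels
#   inputs = {'im_shape': paddle.to_tensor([[608., 608.], [608., 608.]])}
#   rois, rois_num, _ = rpn_head(feats, inputs)  # per-image RoIs and their counts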
| PaddleDetection/ppdet/modeling/proposal_generator/rpn_head.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/proposal_generator/rpn_head.py",
"repo_id": "PaddleDetection",
"token_count": 6509
} | 77 |
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Modified from Deformable-DETR (https://github.com/fundamentalvision/Deformable-DETR)
# Copyright (c) 2020 SenseTime. All Rights Reserved.
# Modified from detrex (https://github.com/IDEA-Research/detrex)
# Copyright 2022 The IDEA Authors. All rights reserved.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from paddle import ParamAttr
from paddle.regularizer import L2Decay
from ppdet.core.workspace import register
from ..layers import MultiHeadAttention
from .position_encoding import PositionEmbedding
from ..heads.detr_head import MLP
from .deformable_transformer import MSDeformableAttention
from ..initializer import (linear_init_, constant_, xavier_uniform_, normal_,
bias_init_with_prob)
from .utils import (_get_clones, get_valid_ratio,
get_contrastive_denoising_training_group,
get_sine_pos_embed, inverse_sigmoid)
__all__ = ['GroupDINOTransformer']
class DINOTransformerEncoderLayer(nn.Layer):
def __init__(self,
d_model=256,
n_head=8,
dim_feedforward=1024,
dropout=0.,
activation="relu",
n_levels=4,
n_points=4,
weight_attr=None,
bias_attr=None):
super(DINOTransformerEncoderLayer, self).__init__()
# self attention
self.self_attn = MSDeformableAttention(d_model, n_head, n_levels,
n_points, 1.0)
self.dropout1 = nn.Dropout(dropout)
self.norm1 = nn.LayerNorm(
d_model,
weight_attr=ParamAttr(regularizer=L2Decay(0.0)),
bias_attr=ParamAttr(regularizer=L2Decay(0.0)))
# ffn
self.linear1 = nn.Linear(d_model, dim_feedforward, weight_attr,
bias_attr)
self.activation = getattr(F, activation)
self.dropout2 = nn.Dropout(dropout)
self.linear2 = nn.Linear(dim_feedforward, d_model, weight_attr,
bias_attr)
self.dropout3 = nn.Dropout(dropout)
self.norm2 = nn.LayerNorm(
d_model,
weight_attr=ParamAttr(regularizer=L2Decay(0.0)),
bias_attr=ParamAttr(regularizer=L2Decay(0.0)))
self._reset_parameters()
def _reset_parameters(self):
linear_init_(self.linear1)
linear_init_(self.linear2)
xavier_uniform_(self.linear1.weight)
xavier_uniform_(self.linear2.weight)
def with_pos_embed(self, tensor, pos):
return tensor if pos is None else tensor + pos
def forward_ffn(self, src):
src2 = self.linear2(self.dropout2(self.activation(self.linear1(src))))
src = src + self.dropout3(src2)
src = self.norm2(src)
return src
def forward(self,
src,
reference_points,
spatial_shapes,
level_start_index,
src_mask=None,
query_pos_embed=None):
# self attention
src2 = self.self_attn(
self.with_pos_embed(src, query_pos_embed), reference_points, src,
spatial_shapes, level_start_index, src_mask)
src = src + self.dropout1(src2)
src = self.norm1(src)
# ffn
src = self.forward_ffn(src)
return src
class DINOTransformerEncoder(nn.Layer):
def __init__(self, encoder_layer, num_layers):
super(DINOTransformerEncoder, self).__init__()
self.layers = _get_clones(encoder_layer, num_layers)
self.num_layers = num_layers
@staticmethod
def get_reference_points(spatial_shapes, valid_ratios, offset=0.5):
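        # Build per-level reference points for deformable attention: the (x, y)
        # centre of every feature-map cell, normalised by the valid (non-padded)
        # width/height of its level and then scaled by the per-level valid ratios,
        # so that all levels share one normalised coordinate frame.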
valid_ratios = valid_ratios.unsqueeze(1)
reference_points = []
for i, (H, W) in enumerate(spatial_shapes):
ref_y, ref_x = paddle.meshgrid(
paddle.arange(end=H) + offset, paddle.arange(end=W) + offset)
ref_y = ref_y.flatten().unsqueeze(0) / (valid_ratios[:, :, i, 1] *
H)
ref_x = ref_x.flatten().unsqueeze(0) / (valid_ratios[:, :, i, 0] *
W)
reference_points.append(paddle.stack((ref_x, ref_y), axis=-1))
reference_points = paddle.concat(reference_points, 1).unsqueeze(2)
reference_points = reference_points * valid_ratios
return reference_points
def forward(self,
feat,
spatial_shapes,
level_start_index,
feat_mask=None,
query_pos_embed=None,
valid_ratios=None):
if valid_ratios is None:
valid_ratios = paddle.ones(
[feat.shape[0], spatial_shapes.shape[0], 2])
reference_points = self.get_reference_points(spatial_shapes,
valid_ratios)
for layer in self.layers:
feat = layer(feat, reference_points, spatial_shapes,
level_start_index, feat_mask, query_pos_embed)
return feat
class DINOTransformerDecoderLayer(nn.Layer):
def __init__(self,
d_model=256,
n_head=8,
dim_feedforward=1024,
dropout=0.,
activation="relu",
n_levels=4,
n_points=4,
dual_queries=False,
dual_groups=0,
weight_attr=None,
bias_attr=None):
super(DINOTransformerDecoderLayer, self).__init__()
# self attention
self.self_attn = MultiHeadAttention(d_model, n_head, dropout=dropout)
self.dropout1 = nn.Dropout(dropout)
self.norm1 = nn.LayerNorm(
d_model,
weight_attr=ParamAttr(regularizer=L2Decay(0.0)),
bias_attr=ParamAttr(regularizer=L2Decay(0.0)))
# cross attention
self.cross_attn = MSDeformableAttention(d_model, n_head, n_levels,
n_points, 1.0)
self.dropout2 = nn.Dropout(dropout)
self.norm2 = nn.LayerNorm(
d_model,
weight_attr=ParamAttr(regularizer=L2Decay(0.0)),
bias_attr=ParamAttr(regularizer=L2Decay(0.0)))
# ffn
self.linear1 = nn.Linear(d_model, dim_feedforward, weight_attr,
bias_attr)
self.activation = getattr(F, activation)
self.dropout3 = nn.Dropout(dropout)
self.linear2 = nn.Linear(dim_feedforward, d_model, weight_attr,
bias_attr)
self.dropout4 = nn.Dropout(dropout)
self.norm3 = nn.LayerNorm(
d_model,
weight_attr=ParamAttr(regularizer=L2Decay(0.0)),
bias_attr=ParamAttr(regularizer=L2Decay(0.0)))
# for dual groups
self.dual_queries = dual_queries
self.dual_groups = dual_groups
self.n_head = n_head
self._reset_parameters()
def _reset_parameters(self):
linear_init_(self.linear1)
linear_init_(self.linear2)
xavier_uniform_(self.linear1.weight)
xavier_uniform_(self.linear2.weight)
def with_pos_embed(self, tensor, pos):
return tensor if pos is None else tensor + pos
def forward_ffn(self, tgt):
return self.linear2(self.dropout3(self.activation(self.linear1(tgt))))
def forward(self,
tgt,
reference_points,
memory,
memory_spatial_shapes,
memory_level_start_index,
attn_mask=None,
memory_mask=None,
query_pos_embed=None):
# self attention
q = k = self.with_pos_embed(tgt, query_pos_embed)
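        # Dual-query (Group-DINO style) self-attention: the (dual_groups + 1) query
        # groups are split along the query axis and stacked onto the batch axis so
        # that self-attention is computed within each group independently; the
        # groups are concatenated back after the attention call.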
if self.dual_queries:
dual_groups = self.dual_groups
bs, num_queries, n_model = paddle.shape(q)
q = paddle.concat(q.split(dual_groups + 1, axis=1), axis=0)
k = paddle.concat(k.split(dual_groups + 1, axis=1), axis=0)
tgt = paddle.concat(tgt.split(dual_groups + 1, axis=1), axis=0)
g_num_queries = num_queries // (dual_groups + 1)
if attn_mask is None or attn_mask[0] is None:
attn_mask = None
else:
# [(dual_groups + 1), g_num_queries, g_num_queries]
attn_mask = paddle.concat(
[sa_mask.unsqueeze(0) for sa_mask in attn_mask], axis=0)
# [1, (dual_groups + 1), 1, g_num_queries, g_num_queries]
# --> [bs, (dual_groups + 1), nhead, g_num_queries, g_num_queries]
# --> [bs * (dual_groups + 1), nhead, g_num_queries, g_num_queries]
attn_mask = attn_mask.unsqueeze(0).unsqueeze(2).tile(
[bs, 1, self.n_head, 1, 1])
attn_mask = attn_mask.reshape([
bs * (dual_groups + 1), self.n_head, g_num_queries,
g_num_queries
])
if attn_mask is not None:
attn_mask = attn_mask.astype('bool')
tgt2 = self.self_attn(q, k, value=tgt, attn_mask=attn_mask)
tgt = tgt + self.dropout1(tgt2)
tgt = self.norm2(tgt)
# trace back
if self.dual_queries:
tgt = paddle.concat(tgt.split(dual_groups + 1, axis=0), axis=1)
# cross attention
tgt2 = self.cross_attn(
self.with_pos_embed(tgt, query_pos_embed), reference_points, memory,
memory_spatial_shapes, memory_level_start_index, memory_mask)
tgt = tgt + self.dropout2(tgt2)
tgt = self.norm1(tgt)
# ffn
tgt2 = self.forward_ffn(tgt)
tgt = tgt + self.dropout4(tgt2)
tgt = self.norm3(tgt)
return tgt
class DINOTransformerDecoder(nn.Layer):
def __init__(self,
hidden_dim,
decoder_layer,
num_layers,
return_intermediate=True):
super(DINOTransformerDecoder, self).__init__()
self.layers = _get_clones(decoder_layer, num_layers)
self.hidden_dim = hidden_dim
self.num_layers = num_layers
self.return_intermediate = return_intermediate
self.norm = nn.LayerNorm(
hidden_dim,
weight_attr=ParamAttr(regularizer=L2Decay(0.0)),
bias_attr=ParamAttr(regularizer=L2Decay(0.0)))
def forward(self,
tgt,
reference_points,
memory,
memory_spatial_shapes,
memory_level_start_index,
bbox_head,
query_pos_head,
valid_ratios=None,
attn_mask=None,
memory_mask=None):
if valid_ratios is None:
valid_ratios = paddle.ones(
[memory.shape[0], memory_spatial_shapes.shape[0], 2])
output = tgt
intermediate = []
inter_ref_bboxes = []
for i, layer in enumerate(self.layers):
reference_points_input = reference_points.unsqueeze(
2) * valid_ratios.tile([1, 1, 2]).unsqueeze(1)
query_pos_embed = get_sine_pos_embed(
reference_points_input[..., 0, :], self.hidden_dim // 2)
query_pos_embed = query_pos_head(query_pos_embed)
output = layer(output, reference_points_input, memory,
memory_spatial_shapes, memory_level_start_index,
attn_mask, memory_mask, query_pos_embed)
inter_ref_bbox = F.sigmoid(bbox_head[i](output) + inverse_sigmoid(
reference_points))
if self.return_intermediate:
intermediate.append(self.norm(output))
inter_ref_bboxes.append(inter_ref_bbox)
reference_points = inter_ref_bbox.detach()
if self.return_intermediate:
return paddle.stack(intermediate), paddle.stack(inter_ref_bboxes)
return output, reference_points
@register
class GroupDINOTransformer(nn.Layer):
__shared__ = ['num_classes', 'hidden_dim']
def __init__(self,
num_classes=80,
hidden_dim=256,
num_queries=900,
position_embed_type='sine',
return_intermediate_dec=True,
backbone_feat_channels=[512, 1024, 2048],
num_levels=4,
num_encoder_points=4,
num_decoder_points=4,
nhead=8,
num_encoder_layers=6,
num_decoder_layers=6,
dim_feedforward=1024,
dropout=0.,
activation="relu",
pe_temperature=10000,
pe_offset=-0.5,
num_denoising=100,
label_noise_ratio=0.5,
box_noise_scale=1.0,
learnt_init_query=True,
use_input_proj=True,
dual_queries=False,
dual_groups=0,
eps=1e-2):
super(GroupDINOTransformer, self).__init__()
assert position_embed_type in ['sine', 'learned'], \
f'ValueError: position_embed_type not supported {position_embed_type}!'
assert len(backbone_feat_channels) <= num_levels
self.hidden_dim = hidden_dim
self.nhead = nhead
self.num_levels = num_levels
self.num_classes = num_classes
self.num_queries = num_queries
self.eps = eps
self.num_decoder_layers = num_decoder_layers
self.use_input_proj = use_input_proj
if use_input_proj:
# backbone feature projection
self._build_input_proj_layer(backbone_feat_channels)
# Transformer module
encoder_layer = DINOTransformerEncoderLayer(
hidden_dim, nhead, dim_feedforward, dropout, activation, num_levels,
num_encoder_points)
self.encoder = DINOTransformerEncoder(encoder_layer, num_encoder_layers)
decoder_layer = DINOTransformerDecoderLayer(
hidden_dim,
nhead,
dim_feedforward,
dropout,
activation,
num_levels,
num_decoder_points,
dual_queries=dual_queries,
dual_groups=dual_groups)
self.decoder = DINOTransformerDecoder(hidden_dim, decoder_layer,
num_decoder_layers,
return_intermediate_dec)
# denoising part
self.denoising_class_embed = nn.Embedding(
num_classes,
hidden_dim,
weight_attr=ParamAttr(initializer=nn.initializer.Normal()))
self.num_denoising = num_denoising
self.label_noise_ratio = label_noise_ratio
self.box_noise_scale = box_noise_scale
# for dual group
self.dual_queries = dual_queries
self.dual_groups = dual_groups
if self.dual_queries:
self.denoising_class_embed_groups = nn.LayerList([
nn.Embedding(
num_classes,
hidden_dim,
weight_attr=ParamAttr(initializer=nn.initializer.Normal()))
for _ in range(self.dual_groups)
])
# position embedding
self.position_embedding = PositionEmbedding(
hidden_dim // 2,
temperature=pe_temperature,
normalize=True if position_embed_type == 'sine' else False,
embed_type=position_embed_type,
offset=pe_offset)
self.level_embed = nn.Embedding(num_levels, hidden_dim)
# decoder embedding
self.learnt_init_query = learnt_init_query
if learnt_init_query:
self.tgt_embed = nn.Embedding(num_queries, hidden_dim)
normal_(self.tgt_embed.weight)
if self.dual_queries:
self.tgt_embed_dual = nn.LayerList([
nn.Embedding(num_queries, hidden_dim)
for _ in range(self.dual_groups)
])
for dual_tgt_module in self.tgt_embed_dual:
normal_(dual_tgt_module.weight)
self.query_pos_head = MLP(2 * hidden_dim,
hidden_dim,
hidden_dim,
num_layers=2)
# encoder head
self.enc_output = nn.Sequential(
nn.Linear(hidden_dim, hidden_dim),
nn.LayerNorm(
hidden_dim,
weight_attr=ParamAttr(regularizer=L2Decay(0.0)),
bias_attr=ParamAttr(regularizer=L2Decay(0.0))))
if self.dual_queries:
self.enc_output = _get_clones(self.enc_output, self.dual_groups + 1)
else:
self.enc_output = _get_clones(self.enc_output, 1)
self.enc_score_head = nn.Linear(hidden_dim, num_classes)
self.enc_bbox_head = MLP(hidden_dim, hidden_dim, 4, num_layers=3)
if self.dual_queries:
self.enc_bbox_head_dq = nn.LayerList([
MLP(hidden_dim, hidden_dim, 4, num_layers=3)
for i in range(self.dual_groups)
])
self.enc_score_head_dq = nn.LayerList([
nn.Linear(hidden_dim, num_classes)
for i in range(self.dual_groups)
])
# decoder head
self.dec_score_head = nn.LayerList([
nn.Linear(hidden_dim, num_classes)
for _ in range(num_decoder_layers)
])
self.dec_bbox_head = nn.LayerList([
MLP(hidden_dim, hidden_dim, 4, num_layers=3)
for _ in range(num_decoder_layers)
])
self._reset_parameters()
def _reset_parameters(self):
# class and bbox head init
bias_cls = bias_init_with_prob(0.01)
linear_init_(self.enc_score_head)
constant_(self.enc_score_head.bias, bias_cls)
constant_(self.enc_bbox_head.layers[-1].weight)
constant_(self.enc_bbox_head.layers[-1].bias)
for cls_, reg_ in zip(self.dec_score_head, self.dec_bbox_head):
linear_init_(cls_)
constant_(cls_.bias, bias_cls)
constant_(reg_.layers[-1].weight)
constant_(reg_.layers[-1].bias)
for enc_output in self.enc_output:
linear_init_(enc_output[0])
xavier_uniform_(enc_output[0].weight)
normal_(self.level_embed.weight)
if self.learnt_init_query:
xavier_uniform_(self.tgt_embed.weight)
xavier_uniform_(self.query_pos_head.layers[0].weight)
xavier_uniform_(self.query_pos_head.layers[1].weight)
normal_(self.denoising_class_embed.weight)
if self.use_input_proj:
for l in self.input_proj:
xavier_uniform_(l[0].weight)
constant_(l[0].bias)
@classmethod
def from_config(cls, cfg, input_shape):
return {'backbone_feat_channels': [i.channels for i in input_shape], }
def _build_input_proj_layer(self, backbone_feat_channels):
self.input_proj = nn.LayerList()
for in_channels in backbone_feat_channels:
self.input_proj.append(
nn.Sequential(
('conv', nn.Conv2D(
in_channels, self.hidden_dim, kernel_size=1)),
('norm', nn.GroupNorm(
32,
self.hidden_dim,
weight_attr=ParamAttr(regularizer=L2Decay(0.0)),
bias_attr=ParamAttr(regularizer=L2Decay(0.0))))))
in_channels = backbone_feat_channels[-1]
for _ in range(self.num_levels - len(backbone_feat_channels)):
self.input_proj.append(
nn.Sequential(
('conv', nn.Conv2D(
in_channels,
self.hidden_dim,
kernel_size=3,
stride=2,
padding=1)), ('norm', nn.GroupNorm(
32,
self.hidden_dim,
weight_attr=ParamAttr(regularizer=L2Decay(0.0)),
bias_attr=ParamAttr(regularizer=L2Decay(0.0))))))
in_channels = self.hidden_dim
def _get_encoder_input(self, feats, pad_mask=None):
if self.use_input_proj:
# get projection features
proj_feats = [
self.input_proj[i](feat) for i, feat in enumerate(feats)
]
if self.num_levels > len(proj_feats):
len_srcs = len(proj_feats)
for i in range(len_srcs, self.num_levels):
if i == len_srcs:
proj_feats.append(self.input_proj[i](feats[-1]))
else:
proj_feats.append(self.input_proj[i](proj_feats[-1]))
else:
proj_feats = feats
# get encoder inputs
feat_flatten = []
mask_flatten = []
lvl_pos_embed_flatten = []
spatial_shapes = []
valid_ratios = []
for i, feat in enumerate(proj_feats):
bs, _, h, w = paddle.shape(feat)
spatial_shapes.append(paddle.concat([h, w]))
# [b,c,h,w] -> [b,h*w,c]
feat_flatten.append(feat.flatten(2).transpose([0, 2, 1]))
if pad_mask is not None:
mask = F.interpolate(pad_mask.unsqueeze(0), size=(h, w))[0]
else:
mask = paddle.ones([bs, h, w])
valid_ratios.append(get_valid_ratio(mask))
# [b, h*w, c]
pos_embed = self.position_embedding(mask).flatten(1, 2)
lvl_pos_embed = pos_embed + self.level_embed.weight[i].reshape(
[1, 1, -1])
lvl_pos_embed_flatten.append(lvl_pos_embed)
if pad_mask is not None:
# [b, h*w]
mask_flatten.append(mask.flatten(1))
# [b, l, c]
feat_flatten = paddle.concat(feat_flatten, 1)
# [b, l]
mask_flatten = None if pad_mask is None else paddle.concat(mask_flatten,
1)
# [b, l, c]
lvl_pos_embed_flatten = paddle.concat(lvl_pos_embed_flatten, 1)
# [num_levels, 2]
spatial_shapes = paddle.to_tensor(
paddle.stack(spatial_shapes).astype('int64'))
# [l] start index of each level
level_start_index = paddle.concat([
paddle.zeros(
[1], dtype='int64'), spatial_shapes.prod(1).cumsum(0)[:-1]
])
# [b, num_levels, 2]
valid_ratios = paddle.stack(valid_ratios, 1)
return (feat_flatten, spatial_shapes, level_start_index, mask_flatten,
lvl_pos_embed_flatten, valid_ratios)
def forward(self, feats, pad_mask=None, gt_meta=None):
# input projection and embedding
(feat_flatten, spatial_shapes, level_start_index, mask_flatten,
lvl_pos_embed_flatten,
valid_ratios) = self._get_encoder_input(feats, pad_mask)
# encoder
memory = self.encoder(feat_flatten, spatial_shapes, level_start_index,
mask_flatten, lvl_pos_embed_flatten, valid_ratios)
# prepare denoising training
if self.training:
denoising_class, denoising_bbox, attn_mask, dn_meta = \
get_contrastive_denoising_training_group(gt_meta,
self.num_classes,
self.num_queries,
self.denoising_class_embed.weight,
self.num_denoising,
self.label_noise_ratio,
self.box_noise_scale)
if self.dual_queries:
denoising_class_groups = []
denoising_bbox_groups = []
attn_mask_groups = []
dn_meta_groups = []
for g_id in range(self.dual_groups):
denoising_class_gid, denoising_bbox_gid, attn_mask_gid, dn_meta_gid = \
get_contrastive_denoising_training_group(gt_meta,
self.num_classes,
self.num_queries,
self.denoising_class_embed_groups[g_id].weight,
self.num_denoising,
self.label_noise_ratio,
self.box_noise_scale)
denoising_class_groups.append(denoising_class_gid)
denoising_bbox_groups.append(denoising_bbox_gid)
attn_mask_groups.append(attn_mask_gid)
dn_meta_groups.append(dn_meta_gid)
# combine
denoising_class = [denoising_class] + denoising_class_groups
denoising_bbox = [denoising_bbox] + denoising_bbox_groups
attn_mask = [attn_mask] + attn_mask_groups
dn_meta = [dn_meta] + dn_meta_groups
else:
denoising_class, denoising_bbox, attn_mask, dn_meta = None, None, None, None
target, init_ref_points, enc_topk_bboxes, enc_topk_logits = \
self._get_decoder_input(
memory, spatial_shapes, mask_flatten, denoising_class,
denoising_bbox)
# decoder
inter_feats, inter_ref_bboxes = self.decoder(
target, init_ref_points, memory, spatial_shapes, level_start_index,
self.dec_bbox_head, self.query_pos_head, valid_ratios, attn_mask,
mask_flatten)
# solve hang during distributed training
inter_feats[0] += self.denoising_class_embed.weight[0, 0] * 0.
if self.dual_queries:
for g_id in range(self.dual_groups):
inter_feats[0] += self.denoising_class_embed_groups[
g_id].weight[0, 0] * 0.0
out_bboxes = []
out_logits = []
for i in range(self.num_decoder_layers):
out_logits.append(self.dec_score_head[i](inter_feats[i]))
if i == 0:
out_bboxes.append(
F.sigmoid(self.dec_bbox_head[i](inter_feats[i]) +
inverse_sigmoid(init_ref_points)))
else:
out_bboxes.append(
F.sigmoid(self.dec_bbox_head[i](inter_feats[i]) +
inverse_sigmoid(inter_ref_bboxes[i - 1])))
out_bboxes = paddle.stack(out_bboxes)
out_logits = paddle.stack(out_logits)
return (out_bboxes, out_logits, enc_topk_bboxes, enc_topk_logits,
dn_meta)
def _get_encoder_output_anchors(self,
memory,
spatial_shapes,
memory_mask=None,
grid_size=0.05):
output_anchors = []
idx = 0
for lvl, (h, w) in enumerate(spatial_shapes):
if memory_mask is not None:
mask_ = memory_mask[:, idx:idx + h * w].reshape([-1, h, w])
valid_H = paddle.sum(mask_[:, :, 0], 1)
valid_W = paddle.sum(mask_[:, 0, :], 1)
else:
valid_H, valid_W = h, w
grid_y, grid_x = paddle.meshgrid(
paddle.arange(
end=h, dtype=memory.dtype),
paddle.arange(
end=w, dtype=memory.dtype))
grid_xy = paddle.stack([grid_x, grid_y], -1)
valid_WH = paddle.stack([valid_W, valid_H], -1).reshape(
[-1, 1, 1, 2]).astype(grid_xy.dtype)
grid_xy = (grid_xy.unsqueeze(0) + 0.5) / valid_WH
wh = paddle.ones_like(grid_xy) * grid_size * (2.0**lvl)
output_anchors.append(
paddle.concat([grid_xy, wh], -1).reshape([-1, h * w, 4]))
idx += h * w
output_anchors = paddle.concat(output_anchors, 1)
valid_mask = ((output_anchors > self.eps) *
(output_anchors < 1 - self.eps)).all(-1, keepdim=True)
output_anchors = paddle.log(output_anchors / (1 - output_anchors))
if memory_mask is not None:
valid_mask = (valid_mask * (memory_mask.unsqueeze(-1) > 0)) > 0
output_anchors = paddle.where(valid_mask, output_anchors,
paddle.to_tensor(float("inf")))
memory = paddle.where(valid_mask, memory, paddle.to_tensor(0.))
if self.dual_queries:
output_memory = [
self.enc_output[g_id](memory)
for g_id in range(self.dual_groups + 1)
]
else:
output_memory = self.enc_output[0](memory)
return output_memory, output_anchors
def _get_decoder_input(self,
memory,
spatial_shapes,
memory_mask=None,
denoising_class=None,
denoising_bbox=None):
bs, _, _ = memory.shape
# prepare input for decoder
output_memory, output_anchors = self._get_encoder_output_anchors(
memory, spatial_shapes, memory_mask)
if self.dual_queries:
enc_outputs_class = self.enc_score_head(output_memory[0])
enc_outputs_coord_unact = self.enc_bbox_head(output_memory[
0]) + output_anchors
else:
enc_outputs_class = self.enc_score_head(output_memory)
enc_outputs_coord_unact = self.enc_bbox_head(
output_memory) + output_anchors
_, topk_ind = paddle.topk(
enc_outputs_class.max(-1), self.num_queries, axis=1)
# extract region proposal boxes
batch_ind = paddle.arange(end=bs, dtype=topk_ind.dtype)
batch_ind = batch_ind.unsqueeze(-1).tile([1, self.num_queries])
topk_ind = paddle.stack([batch_ind, topk_ind], axis=-1)
topk_coords_unact = paddle.gather_nd(enc_outputs_coord_unact,
topk_ind) # unsigmoided.
enc_topk_bboxes = F.sigmoid(topk_coords_unact)
reference_points = enc_topk_bboxes.detach()
enc_topk_logits = paddle.gather_nd(enc_outputs_class, topk_ind)
if self.dual_queries:
enc_topk_logits_groups = []
enc_topk_bboxes_groups = []
reference_points_groups = []
topk_ind_groups = []
for g_id in range(self.dual_groups):
enc_outputs_class_gid = self.enc_score_head_dq[g_id](
output_memory[g_id + 1])
enc_outputs_coord_unact_gid = self.enc_bbox_head_dq[g_id](
output_memory[g_id + 1]) + output_anchors
_, topk_ind_gid = paddle.topk(
enc_outputs_class_gid.max(-1), self.num_queries, axis=1)
# extract region proposal boxes
batch_ind = paddle.arange(end=bs, dtype=topk_ind_gid.dtype)
batch_ind = batch_ind.unsqueeze(-1).tile([1, self.num_queries])
topk_ind_gid = paddle.stack([batch_ind, topk_ind_gid], axis=-1)
topk_coords_unact_gid = paddle.gather_nd(
enc_outputs_coord_unact_gid, topk_ind_gid) # unsigmoided.
enc_topk_bboxes_gid = F.sigmoid(topk_coords_unact_gid)
reference_points_gid = enc_topk_bboxes_gid.detach()
enc_topk_logits_gid = paddle.gather_nd(enc_outputs_class_gid,
topk_ind_gid)
# append and combine
topk_ind_groups.append(topk_ind_gid)
enc_topk_logits_groups.append(enc_topk_logits_gid)
enc_topk_bboxes_groups.append(enc_topk_bboxes_gid)
reference_points_groups.append(reference_points_gid)
enc_topk_bboxes = paddle.concat(
[enc_topk_bboxes] + enc_topk_bboxes_groups, 1)
enc_topk_logits = paddle.concat(
[enc_topk_logits] + enc_topk_logits_groups, 1)
reference_points = paddle.concat(
[reference_points] + reference_points_groups, 1)
topk_ind = paddle.concat([topk_ind] + topk_ind_groups, 1)
# extract region features
if self.learnt_init_query:
target = self.tgt_embed.weight.unsqueeze(0).tile([bs, 1, 1])
if self.dual_queries:
target = paddle.concat([target] + [
self.tgt_embed_dual[g_id].weight.unsqueeze(0).tile(
[bs, 1, 1]) for g_id in range(self.dual_groups)
], 1)
else:
if self.dual_queries:
target = paddle.gather_nd(output_memory[0], topk_ind)
target_groups = []
for g_id in range(self.dual_groups):
target_gid = paddle.gather_nd(output_memory[g_id + 1],
topk_ind_groups[g_id])
target_groups.append(target_gid)
target = paddle.concat([target] + target_groups, 1).detach()
else:
target = paddle.gather_nd(output_memory, topk_ind).detach()
if denoising_bbox is not None:
if isinstance(denoising_bbox, list) and isinstance(
denoising_class, list) and self.dual_queries:
if denoising_bbox[0] is not None:
reference_points_list = paddle.split(
reference_points, self.dual_groups + 1, axis=1)
reference_points = paddle.concat(
[
paddle.concat(
[ref, ref_], axis=1)
for ref, ref_ in zip(denoising_bbox,
reference_points_list)
],
axis=1)
target_list = paddle.split(
target, self.dual_groups + 1, axis=1)
target = paddle.concat(
[
paddle.concat(
[tgt, tgt_], axis=1)
for tgt, tgt_ in zip(denoising_class, target_list)
],
axis=1)
else:
reference_points, target = reference_points, target
else:
reference_points = paddle.concat(
[denoising_bbox, reference_points], 1)
target = paddle.concat([denoising_class, target], 1)
return target, reference_points, enc_topk_bboxes, enc_topk_logits
| PaddleDetection/ppdet/modeling/transformers/group_detr_transformer.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/transformers/group_detr_transformer.py",
"repo_id": "PaddleDetection",
"token_count": 19907
} | 78 |
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from ppdet.core.workspace import load_config, merge_config, create
from ppdet.utils.checkpoint import load_weight, load_pretrain_weight
from ppdet.utils.logger import setup_logger
from ppdet.core.workspace import register, serializable
from paddle.utils import try_import
logger = setup_logger(__name__)
@register
@serializable
class OFA(object):
def __init__(self, ofa_config):
super(OFA, self).__init__()
self.ofa_config = ofa_config
def __call__(self, model, param_state_dict):
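        # Wrap the detection model into a PaddleSlim once-for-all (OFA) supernet:
        # convert its layers to super variants, optionally skip neck/head layers,
        # tokenize the resulting search space, and restore the provided parameter
        # state dict into the supernet before returning it.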
paddleslim = try_import('paddleslim')
from paddleslim.nas.ofa import OFA, RunConfig, utils
from paddleslim.nas.ofa.convert_super import Convert, supernet
task = self.ofa_config['task']
expand_ratio = self.ofa_config['expand_ratio']
skip_neck = self.ofa_config['skip_neck']
skip_head = self.ofa_config['skip_head']
run_config = self.ofa_config['RunConfig']
if 'skip_layers' in run_config:
skip_layers = run_config['skip_layers']
else:
skip_layers = []
# supernet config
sp_config = supernet(expand_ratio=expand_ratio)
# convert to supernet
model = Convert(sp_config).convert(model)
skip_names = []
if skip_neck:
skip_names.append('neck.')
if skip_head:
skip_names.append('head.')
for name, sublayer in model.named_sublayers():
for n in skip_names:
if n in name:
skip_layers.append(name)
run_config['skip_layers'] = skip_layers
run_config = RunConfig(**run_config)
# build ofa model
ofa_model = OFA(model, run_config=run_config)
ofa_model.set_epoch(0)
ofa_model.set_task(task)
input_spec = [{
"image": paddle.ones(
shape=[1, 3, 640, 640], dtype='float32'),
"im_shape": paddle.full(
[1, 2], 640, dtype='float32'),
"scale_factor": paddle.ones(
shape=[1, 2], dtype='float32')
}]
ofa_model._clear_search_space(input_spec=input_spec)
ofa_model._build_ss = True
check_ss = ofa_model._sample_config('expand_ratio', phase=None)
# tokenize the search space
ofa_model.tokenize()
# check token map, search cands and search space
logger.info('Token map is {}'.format(ofa_model.token_map))
logger.info('Search candidates is {}'.format(ofa_model.search_cands))
logger.info('The length of search_space is {}, search_space is {}'.
format(len(ofa_model._ofa_layers), ofa_model._ofa_layers))
# set model state dict into ofa model
utils.set_state_dict(ofa_model.model, param_state_dict)
return ofa_model
| PaddleDetection/ppdet/slim/ofa.py/0 | {
"file_path": "PaddleDetection/ppdet/slim/ofa.py",
"repo_id": "PaddleDetection",
"token_count": 1341
} | 79 |
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import numpy as np
__all__ = ['SmoothedValue', 'TrainingStats']
class SmoothedValue(object):
"""Track a series of values and provide access to smoothed values over a
window or the global series average.
"""
def __init__(self, window_size=20, fmt=None):
if fmt is None:
fmt = "{median:.4f} ({avg:.4f})"
self.deque = collections.deque(maxlen=window_size)
self.fmt = fmt
self.total = 0.
self.count = 0
def update(self, value, n=1):
self.deque.append(value)
self.count += n
self.total += value * n
@property
def median(self):
return np.median(self.deque)
@property
def avg(self):
return np.mean(self.deque)
@property
def max(self):
return np.max(self.deque)
@property
def value(self):
return self.deque[-1]
@property
def global_avg(self):
return self.total / self.count
def __str__(self):
return self.fmt.format(
median=self.median, avg=self.avg, max=self.max, value=self.value)
class TrainingStats(object):
def __init__(self, window_size, delimiter=' '):
self.meters = None
self.window_size = window_size
self.delimiter = delimiter
def update(self, stats):
if self.meters is None:
self.meters = {
k: SmoothedValue(self.window_size)
for k in stats.keys()
}
for k, v in self.meters.items():
v.update(float(stats[k]))
def get(self, extras=None):
stats = collections.OrderedDict()
if extras:
for k, v in extras.items():
stats[k] = v
for k, v in self.meters.items():
stats[k] = format(v.median, '.6f')
return stats
def log(self, extras=None):
d = self.get(extras)
strs = []
for k, v in d.items():
strs.append("{}: {}".format(k, str(v)))
return self.delimiter.join(strs)
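# Minimal usage sketch (the loss values below are made up):
#
#   stats = TrainingStats(window_size=20)
#   for step in range(100):
#       stats.update({'loss': 1.0 / (step + 1), 'loss_cls': 0.5 / (step + 1)})
#   print(stats.log(extras={'epoch': 1}))  # "epoch: 1 loss: ... loss_cls: ..."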
| PaddleDetection/ppdet/utils/stats.py/0 | {
"file_path": "PaddleDetection/ppdet/utils/stats.py",
"repo_id": "PaddleDetection",
"token_count": 1130
} | 80 |
#!/bin/bash
source test_tipc/utils_func.sh
FILENAME=$1
MODE="serving_infer"
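# Usage (typical TIPC invocation): bash test_tipc/test_serving_infer_python.sh <params_txt> <mode> <gpu_id>
# $1 is the parameter file parsed below and $3 selects CUDA_VISIBLE_DEVICES;
# the second positional argument is not read by this script (MODE is fixed above).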
# parser model_name
dataline=$(cat ${FILENAME})
IFS=$'\n'
lines=(${dataline})
model_name=$(func_parser_value "${lines[1]}")
echo "ppdet serving_python_infer: ${model_name}"
python=$(func_parser_value "${lines[2]}")
filename_key=$(func_parser_key "${lines[3]}")
filename_value=$(func_parser_value "${lines[3]}")
# parser export params
save_export_key=$(func_parser_key "${lines[5]}")
save_export_value=$(func_parser_value "${lines[5]}")
export_weight_key=$(func_parser_key "${lines[6]}")
export_weight_value=$(func_parser_value "${lines[6]}")
norm_export=$(func_parser_value "${lines[7]}")
pact_export=$(func_parser_value "${lines[8]}")
fpgm_export=$(func_parser_value "${lines[9]}")
distill_export=$(func_parser_value "${lines[10]}")
export_key1=$(func_parser_key "${lines[11]}")
export_value1=$(func_parser_value "${lines[11]}")
export_key2=$(func_parser_key "${lines[12]}")
export_value2=$(func_parser_value "${lines[12]}")
kl_quant_export=$(func_parser_value "${lines[13]}")
# parser serving params
infer_mode_list=$(func_parser_value "${lines[15]}")
infer_is_quant_list=$(func_parser_value "${lines[16]}")
web_service_py=$(func_parser_value "${lines[17]}")
model_dir_key=$(func_parser_key "${lines[18]}")
opt_key=$(func_parser_key "${lines[19]}")
opt_use_gpu_list=$(func_parser_value "${lines[19]}")
web_service_key1=$(func_parser_key "${lines[20]}")
web_service_value1=$(func_parser_value "${lines[20]}")
http_client_py=$(func_parser_value "${lines[21]}")
infer_image_key=$(func_parser_key "${lines[22]}")
infer_image_value=$(func_parser_value "${lines[22]}")
http_client_key1=$(func_parser_key "${lines[23]}")
http_client_value1=$(func_parser_value "${lines[23]}")
LOG_PATH="./test_tipc/output/${model_name}/${MODE}"
mkdir -p ${LOG_PATH}
status_log="${LOG_PATH}/results_serving_python.log"
function func_serving_inference(){
IFS='|'
_python=$1
_log_path=$2
_service_script=$3
_client_script=$4
_set_model_dir=$5
_set_image_file=$6
set_web_service_params1=$(func_set_params "${web_service_key1}" "${web_service_value1}")
set_http_client_params1=$(func_set_params "${http_client_key1}" "${http_client_value1}")
# inference
for opt in ${opt_use_gpu_list[*]}; do
device_type=$(func_parser_key "${opt}")
server_log_path="${_log_path}/python_server_${device_type}.log"
client_log_path="${_log_path}/python_client_${device_type}.log"
opt_value=$(func_parser_value "${opt}")
_set_opt=$(func_set_params "${opt_key}" "${opt_value}")
# run web service
web_service_cmd="${_python} ${_service_script} ${_set_model_dir} ${_set_opt} ${set_web_service_params1} > ${server_log_path} 2>&1 &"
eval $web_service_cmd
last_status=${PIPESTATUS[0]}
cat ${server_log_path}
status_check $last_status "${web_service_cmd}" "${status_log}" "${model_name}" "${server_log_path}"
sleep 5s
# run http client
http_client_cmd="${_python} ${_client_script} ${_set_image_file} ${set_http_client_params1} > ${client_log_path} 2>&1"
eval $http_client_cmd
last_status=${PIPESTATUS[0]}
cat ${client_log_path}
status_check $last_status "${http_client_cmd}" "${status_log}" "${model_name}" "${client_log_path}"
ps ux | grep -E 'web_service' | awk '{print $2}' | xargs kill -s 9
sleep 2s
done
}
# set cuda device
GPUID=$3
if [ ${#GPUID} -le 0 ];then
env="export CUDA_VISIBLE_DEVICES=0"
else
env="export CUDA_VISIBLE_DEVICES=${GPUID}"
fi
eval $env
# run serving infer
Count=0
IFS="|"
infer_quant_flag=(${infer_is_quant_list})
for infer_mode in ${infer_mode_list[*]}; do
if [ ${infer_mode} != "null" ]; then
# run export
case ${infer_mode} in
norm) run_export=${norm_export} ;;
quant) run_export=${pact_export} ;;
fpgm) run_export=${fpgm_export} ;;
distill) run_export=${distill_export} ;;
kl_quant) run_export=${kl_quant_export} ;;
*) echo "Undefined infer_mode!"; exit 1;
esac
set_export_weight=$(func_set_params "${export_weight_key}" "${export_weight_value}")
set_save_export_dir=$(func_set_params "${save_export_key}" "${save_export_value}")
set_filename=$(func_set_params "${filename_key}" "${model_name}")
export_log_path="${LOG_PATH}/export.log"
export_cmd="${python} ${run_export} ${set_export_weight} ${set_filename} ${set_save_export_dir} "
echo $export_cmd
eval "${export_cmd} > ${export_log_path} 2>&1"
status_export=$?
cat ${export_log_path}
status_check $status_export "${export_cmd}" "${status_log}" "${model_name}" "${export_log_path}"
fi
#run inference
set_export_model_dir=$(func_set_params "${model_dir_key}" "${save_export_value}/${model_name}")
set_infer_image_file=$(func_set_params "${infer_image_key}" "${infer_image_value}")
is_quant=${infer_quant_flag[Count]}
func_serving_inference "${python}" "${LOG_PATH}" "${web_service_py}" "${http_client_py}" "${set_export_model_dir}" ${set_infer_image_file}
Count=$(($Count + 1))
done
eval "unset CUDA_VISIBLE_DEVICES"
| PaddleDetection/test_tipc/test_serving_infer_python.sh/0 | {
"file_path": "PaddleDetection/test_tipc/test_serving_infer_python.sh",
"repo_id": "PaddleDetection",
"token_count": 2278
} | 81 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from tqdm import tqdm
def slice_data(image_dir, dataset_json_path, output_dir, slice_size,
overlap_ratio):
try:
from sahi.scripts.slice_coco import slice
except Exception as e:
raise RuntimeError(
'Unable to use sahi to slice images, please install sahi, for example: `pip install sahi`, see https://github.com/obss/sahi'
)
tqdm.write(
f" slicing for slice_size={slice_size}, overlap_ratio={overlap_ratio}")
slice(
image_dir=image_dir,
dataset_json_path=dataset_json_path,
output_dir=output_dir,
slice_size=slice_size,
overlap_ratio=overlap_ratio, )
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
'--image_dir', type=str, default=None, help="The image folder path.")
parser.add_argument(
'--json_path', type=str, default=None, help="Dataset json path.")
parser.add_argument(
'--output_dir', type=str, default=None, help="Output dir.")
parser.add_argument(
'--slice_size', type=int, default=500, help="slice_size")
parser.add_argument(
'--overlap_ratio', type=float, default=0.25, help="overlap_ratio")
args = parser.parse_args()
slice_data(args.image_dir, args.json_path, args.output_dir, args.slice_size,
args.overlap_ratio)
if __name__ == "__main__":
main()
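# Example command line (all paths are illustrative):
#   python tools/slice_image.py --image_dir dataset/coco/train2017 \
#       --json_path dataset/coco/annotations/instances_train2017.json \
#       --output_dir dataset/coco_sliced --slice_size 640 --overlap_ratio 0.25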
| PaddleDetection/tools/slice_image.py/0 | {
"file_path": "PaddleDetection/tools/slice_image.py",
"repo_id": "PaddleDetection",
"token_count": 764
} | 82 |
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectModuleManager">
<modules>
<module fileurl="file://$PROJECT_DIR$/.idea/euryale.iml" filepath="$PROJECT_DIR$/.idea/euryale.iml" />
</modules>
</component>
</project> | euryale/.idea/modules.xml/0 | {
"file_path": "euryale/.idea/modules.xml",
"repo_id": "euryale",
"token_count": 106
} | 83 |
from typing import Optional
class FauxPilotException(Exception):
def __init__(self, message: str, error_type: Optional[str] = None, param: Optional[str] = None,
code: Optional[int] = None):
super().__init__(message)
self.message = message
self.error_type = error_type
self.param = param
self.code = code
def json(self):
return {
'error': {
'message': self.message,
'type': self.error_type,
'param': self.param,
'code': self.code
}
}
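# Example (hypothetical caller): an API handler can raise this exception and use
# exc.json() as the response body, e.g.
#   raise FauxPilotException("model not available", error_type="invalid_request_error",
#                            param="model", code=400)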
| fauxpilot/copilot_proxy/utils/errors.py/0 | {
"file_path": "fauxpilot/copilot_proxy/utils/errors.py",
"repo_id": "fauxpilot",
"token_count": 299
} | 84 |
[flake8]
max-line-length = 120
exclude = venv
| fauxpilot/setup.cfg/0 | {
"file_path": "fauxpilot/setup.cfg",
"repo_id": "fauxpilot",
"token_count": 19
} | 85 |
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectModuleManager">
<modules>
<module fileurl="file://$PROJECT_DIR$/.idea/get-data.iml" filepath="$PROJECT_DIR$/.idea/get-data.iml" />
</modules>
</component>
</project> | get-data/.idea/modules.xml/0 | {
"file_path": "get-data/.idea/modules.xml",
"repo_id": "get-data",
"token_count": 106
} | 86 |
[
{
"name": "\ud83d\udcd6 - READER: \ud83d\udc7e PersonalCopilot",
"time_stats": {
"total": 0.10838679474545643,
"n": 9644,
"mean": 1.1238780044116178e-05,
"variance": 5.1431733001504095e-11,
"std_dev": 7.171592082759873e-06,
"min": 7.11000757291913e-06,
"max": 0.0001664989977143705,
"total_human": "0 seconds",
"mean_human": "0.01 milliseconds",
"std_dev_human": "0.01 milliseconds",
"min_human": "0.01 milliseconds",
"max_human": "0.17 milliseconds",
"global_mean": 0.006774174671591027,
"global_mean_human": "0 seconds",
"global_min": 0.0061978003941476345,
"global_min_human": "0 seconds",
"global_max": 0.0073161813779734075,
"global_max_human": "0 seconds",
"global_std_dev": 0.00039528389438462145,
"global_std_dev_human": "0 seconds"
},
"stats": {
"input_files": 9644,
"doc_len": {
"total": 1037776757,
"n": 9644,
"mean": 107608.53971381173,
"variance": 3329684304523.6875,
"std_dev": 1824742.2570115724,
"min": 1,
"max": 166066508
},
"documents": 0
}
},
{
"name": "\ud83d\udd3b - FILTER: \ud83e\uddd1\ud83c\udffd\u200d\ud83d\udcbb Code Filter",
"time_stats": {
"total": 62.37044457584852,
"n": 9644,
"mean": 0.006467279611763638,
"variance": 0.008838503929920619,
"std_dev": 0.09401331783274441,
"min": 1.8900027498602867e-06,
"max": 8.20601651398465,
"total_human": "1 minute and 2 seconds",
"mean_human": "6.47 milliseconds",
"std_dev_human": "94.01 milliseconds",
"min_human": "0.00 milliseconds",
"max_human": "8 seconds and 206.02 milliseconds",
"global_mean": 3.8981527859905327,
"global_mean_human": "3 seconds",
"global_min": 0.36137380747823045,
"global_min_human": "0 seconds",
"global_max": 8.908396344748326,
"global_max_human": "8 seconds",
"global_std_dev": 3.052454139076243,
"global_std_dev_human": "3 seconds"
},
"stats": {
"total": 9644,
"dropped": 4445,
"forwarded": 5199,
"doc_len": {
"total": 666096707,
"n": 5199,
"mean": 128120.15906905176,
"variance": 786538251733.3838,
"std_dev": 886869.9181578907,
"min": 3,
"max": 11190333
}
}
},
{
"name": "\ud83d\udcbd - WRITER: \ud83d\udc3f Jsonl",
"time_stats": {
"total": 94.36194424587302,
"n": 5199,
"mean": 0.018150018127692444,
"variance": 0.018105554603978825,
"std_dev": 0.13455688241029823,
"min": 3.652001032605767e-05,
"max": 1.1002157580223866,
"total_human": "1 minute and 34 seconds",
"mean_human": "18.15 milliseconds",
"std_dev_human": "134.56 milliseconds",
"min_human": "0.04 milliseconds",
"max_human": "1 second and 100.22 milliseconds",
"global_mean": 5.897621515367064,
"global_mean_human": "5 seconds",
"global_min": 0.24520569550804794,
"global_min_human": "0 seconds",
"global_max": 11.386275148193818,
"global_max_human": "11 seconds",
"global_std_dev": 5.554415149906991,
"global_std_dev_human": "5 seconds"
},
"stats": {
"XXXXX.jsonl.gz": 5199,
"total": 5199,
"doc_len": {
"total": 666096707,
"n": 5199,
"mean": 128120.15906905176,
"variance": 786538251733.3838,
"std_dev": 886869.9181578907,
"min": 3,
"max": 11190333
}
}
}
] | get-data/logs/2024-07-05_01-48-57_ziwdg/stats.json/0 | {
"file_path": "get-data/logs/2024-07-05_01-48-57_ziwdg/stats.json",
"repo_id": "get-data",
"token_count": 2559
} | 87 |
"""
Courtesy: Sayak Paul and Chansung Park.
"""
import os
import pandas as pd
from nbformat import reads, NO_CONVERT
from tqdm import tqdm
from datasets import Dataset
from typing import Dict
from huggingface_hub import HfApi, create_repo
import tempfile
import subprocess
MIRROR_DIRECTORY = "hf_public_repos"
DATASET_ID = "hf-stack-v1"
SERIALIZE_IN_CHUNKS = False
FEATHER_FORMAT = "ftr"
PARQUET_FORMAT = "parquet"
# Block the following formats.
IMAGE = ["png", "jpg", "jpeg", "gif"]
VIDEO = ["mp4", "jfif"]
DOC = [
"key",
"PDF",
"pdf",
"docx",
"xlsx",
"pptx",
]
AUDIO = ["flac", "ogg", "mid", "webm", "wav", "mp3"]
ARCHIVE = ["jar", "aar", "gz", "zip", "bz2"]
MODEL = ["onnx", "pickle", "model", "neuron"]
OTHERS = [
"npy",
"index",
"inv",
"index",
"DS_Store",
"rdb",
"pack",
"idx",
"glb",
"gltf",
"len",
"otf",
"unitypackage",
"ttf",
"xz",
"pcm",
"opus",
]
ANTI_FOMATS = tuple(IMAGE + VIDEO + DOC + AUDIO + ARCHIVE + OTHERS)
def upload_to_hub(file_format: str, repo_id: str):
"""Moves all the files matching `file_format` to a folder and
uploads the folder to the Hugging Face Hub."""
api = HfApi()
repo_id = create_repo(repo_id=repo_id, exist_ok=True, repo_type="dataset").repo_id
with tempfile.TemporaryDirectory() as tmpdirname:
os.makedirs(tmpdirname, exist_ok=True)
command = f"mv *.{file_format} {tmpdirname}"
_ = subprocess.run(command.split())
api.upload_folder(repo_id=repo_id, folder_path=tmpdirname, repo_type="dataset")
def filter_code_cell(cell) -> bool:
"""Filters a code cell w.r.t shell commands, etc."""
only_shell = cell["source"].startswith("!")
only_magic = "%%capture" in cell["source"]
if only_shell or only_magic:
return False
else:
return True
def process_file(directory_name: str, file_path: str) -> Dict[str, str]:
"""Processes a single file."""
try:
with open(file_path, "r", encoding="utf-8") as file:
content = file.read()
if file_path.endswith("ipynb"):
# Code courtesy: Chansung Park and Sayak Paul.
code_cell_str = ""
notebook = reads(content, NO_CONVERT)
code_cells = [c for c in notebook["cells"] if c["cell_type"] == "code" if filter_code_cell(c)]
for cell in code_cells:
code_cell_str += cell["source"]
content = code_cell_str
except Exception:
content = ""
return {
"repo_id": directory_name,
"file_path": file_path,
"content": content,
}
def read_repository_files(directory) -> pd.DataFrame:
"""Reads the files from the locally cloned repositories."""
file_paths = []
df = pd.DataFrame(columns=["repo_id", "file_path", "content"])
chunk_flag = 0
# Recursively find all files within the directory
for root, _, files in os.walk(directory):
for file in files:
file_path = os.path.join(root, file)
if not file_path.endswith(ANTI_FOMATS) and all(
k not in file_path for k in [".git", "__pycache__", "xcodeproj"]
):
file_paths.append((os.path.dirname(root), file_path))
# Process files sequentially.
print(f"Total file paths: {len(file_paths)}.")
print("Reading file contents...")
for i, (directory_name, file_path) in enumerate(tqdm(file_paths)):
file_content = process_file(directory_name, file_path)
if file_content["content"] != "":
temp_df = pd.DataFrame.from_dict([file_content])
df = pd.concat([df, temp_df])
if SERIALIZE_IN_CHUNKS and len(df) != 0 and (len(df) % SERIALIZE_IN_CHUNKS == 0):
            df_path = f"df_chunk_{chunk_flag}_{len(df)}.{PARQUET_FORMAT}"
print(f"Serializing dataframe to {df_path}...")
df.reset_index().to_parquet(df_path)
del df
df = pd.DataFrame(columns=["repo_id", "file_path", "content"])
chunk_flag += 1
return df
if __name__ == "__main__":
df = read_repository_files(MIRROR_DIRECTORY)
print("DataFrame created, creating dataset...")
upload_to_hub(file_format=PARQUET_FORMAT, repo_id=DATASET_ID)
    print(f"{PARQUET_FORMAT} files uploaded to the Hub.")
if not SERIALIZE_IN_CHUNKS:
dataset = Dataset.from_pandas(df)
dataset.push_to_hub(DATASET_ID, private=True)
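# Running this script (assumes the repositories have already been mirrored into
# `hf_public_repos/` and that a Hugging Face token is configured, e.g. via
# `huggingface-cli login`):
#   python prepare_dataset_legacy.py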
| get-data/prepare_dataset_legacy.py/0 | {
"file_path": "get-data/prepare_dataset_legacy.py",
"repo_id": "get-data",
"token_count": 2078
} | 88 |
import mxnet as mx
import numpy as np
import math
import cv2
from config import config
class LossValueMetric(mx.metric.EvalMetric):
def __init__(self):
self.axis = 1
super(LossValueMetric, self).__init__('lossvalue',
axis=self.axis,
output_names=None,
label_names=None)
self.losses = []
def update(self, labels, preds):
loss = preds[0].asnumpy()[0]
self.sum_metric += loss
self.num_inst += 1.0
class NMEMetric(mx.metric.EvalMetric):
def __init__(self):
self.axis = 1
super(NMEMetric, self).__init__('NME',
axis=self.axis,
output_names=None,
label_names=None)
#self.losses = []
self.count = 0
def cal_nme(self, label, pred_label):
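        # Normalised Mean Error: mean L2 distance between predicted and ground-truth
        # heatmap peaks over all landmarks, normalised by the inter-ocular distance
        # (midpoints of the eye corners) for '2d' landmarks, or by the diagonal of
        # the ground-truth landmark bounding box otherwise.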
nme = []
for b in range(pred_label.shape[0]):
record = [None] * 6
item = []
if label.ndim == 4:
_heatmap = label[b][36]
if np.count_nonzero(_heatmap) == 0:
continue
else: #ndim==3
#print(label[b])
if np.count_nonzero(label[b]) == 0:
continue
for p in range(pred_label.shape[1]):
if label.ndim == 4:
heatmap_gt = label[b][p]
ind_gt = np.unravel_index(np.argmax(heatmap_gt, axis=None),
heatmap_gt.shape)
ind_gt = np.array(ind_gt)
else:
ind_gt = label[b][p]
#ind_gt = ind_gt.astype(np.int)
#print(ind_gt)
heatmap_pred = pred_label[b][p]
heatmap_pred = cv2.resize(
heatmap_pred,
(config.input_img_size, config.input_img_size))
ind_pred = np.unravel_index(np.argmax(heatmap_pred, axis=None),
heatmap_pred.shape)
ind_pred = np.array(ind_pred)
#print(ind_gt.shape)
#print(ind_pred)
if p == 36:
#print('b', b, p, ind_gt, np.count_nonzero(heatmap_gt))
record[0] = ind_gt
elif p == 39:
record[1] = ind_gt
elif p == 42:
record[2] = ind_gt
elif p == 45:
record[3] = ind_gt
if record[4] is None or record[5] is None:
record[4] = ind_gt
record[5] = ind_gt
else:
record[4] = np.minimum(record[4], ind_gt)
record[5] = np.maximum(record[5], ind_gt)
#print(ind_gt.shape, ind_pred.shape)
value = np.sqrt(np.sum(np.square(ind_gt - ind_pred)))
item.append(value)
_nme = np.mean(item)
if config.landmark_type == '2d':
left_eye = (record[0] + record[1]) / 2
right_eye = (record[2] + record[3]) / 2
_dist = np.sqrt(np.sum(np.square(left_eye - right_eye)))
#print('eye dist', _dist, left_eye, right_eye)
_nme /= _dist
else:
#_dist = np.sqrt(float(label.shape[2]*label.shape[3]))
_dist = np.sqrt(np.sum(np.square(record[5] - record[4])))
#print(_dist)
_nme /= _dist
nme.append(_nme)
return np.mean(nme)
def update(self, labels, preds):
self.count += 1
label = labels[0].asnumpy()
pred_label = preds[-1].asnumpy()
nme = self.cal_nme(label, pred_label)
#print('nme', nme)
#nme = np.mean(nme)
self.sum_metric += np.mean(nme)
self.num_inst += 1.0
| insightface/alignment/heatmap/metric.py/0 | {
"file_path": "insightface/alignment/heatmap/metric.py",
"repo_id": "insightface",
"token_count": 2514
} | 89 |
# Training performance report on NVIDIA A10
[NVIDIA A10 Tensor Core GPU](https://www.nvidia.com/en-us/data-center/products/a10-gpu/)
We can use the A10 to train deep learning models thanks to its FP16 and TF32 support.
## Test Server Spec
| Key | Value |
| ------------ | ------------------------------------------------ |
| System | ServMax G408-X2 Rackmountable Server |
| CPU | 2 x Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz |
| Memory | 384GB, 12 x Samsung 32GB DDR4-2933 |
| GPU | 8 x NVIDIA A10 22GB |
| Cooling | 2x Customized GPU Kit for GPU support FAN-1909L2 |
| Hard Drive | Intel SSD S4500 1.9TB/SATA/TLC/2.5" |
| OS | Ubuntu 16.04.7 LTS |
| Installation | CUDA 11.1, cuDNN 8.0.5 |
| Installation | Python 3.7.10 |
| Installation | PyTorch 1.9.0 (conda) |
This server was donated by [AMAX](https://www.amaxchina.com/), many thanks!
## Experiments on arcface_torch
We report the training speed in the following table. Please also note that:
1. The training dataset is in mxnet record format and located on an SSD hard drive.
2. The embedding size is set to 512 in all experiments.
3. We use a large dataset containing about 618K identities to simulate real-world use cases.
| Dataset | Classes | Backbone | Batch-size | FP16 | TF32 | Samples/sec |
| ----------- | ------- | ----------- | ---------- | ---- | ---- | ----------- |
| WebFace600K | 618K | IResNet-50 | 1024 | × | × | ~2040 |
| WebFace600K | 618K | IResNet-50 | 1024 | × | √ | ~2255 |
| WebFace600K | 618K | IResNet-50 | 1024 | √ | × | ~3300 |
| WebFace600K | 618K | IResNet-50 | 1024 | √ | √ | ~3360 |
| WebFace600K | 618K | IResNet-50 | 2048 | √ | √ | ~3940 |
| WebFace600K | 618K | IResNet-100 | 1024 | √ | √ | ~2210 |
| WebFace600K | 618K | IResNet-180 | 1024 | √ | √ | ~1410 |
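The FP16 and TF32 columns above correspond to standard PyTorch switches. The snippet below is a minimal sketch of how they are typically enabled; it is not the arcface_torch training script itself, and `model`, `opt`, `batch`, `label` and `loss_fn` are placeholders.
```python
import torch
# TF32 (available on Ampere GPUs such as the A10): allow TF32 matmuls and cuDNN convolutions.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
# FP16: automatic mixed precision with a gradient scaler.
scaler = torch.cuda.amp.GradScaler()
def train_step(model, opt, batch, label, loss_fn):
    opt.zero_grad()
    with torch.cuda.amp.autocast():   # forward pass runs in mixed precision
        loss = loss_fn(model(batch), label)
    scaler.scale(loss).backward()     # scale the loss to avoid FP16 underflow
    scaler.step(opt)
    scaler.update()
    return loss.detach()
```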
| insightface/benchmarks/train/nvidia_a10.md/0 | {
"file_path": "insightface/benchmarks/train/nvidia_a10.md",
"repo_id": "insightface",
"token_count": 1072
} | 90 |
import sys
from torch.utils.data import Dataset, DataLoader
import os
import os.path as osp
import glob
import numpy as np
import random
import cv2
import pickle as pkl
import json
import h5py
import torch
from scipy.io import loadmat
class LSPDataset(Dataset):
def __init__(self):
super(LSPDataset, self).__init__()
filename = "../data/joints.mat"
self.joints = loadmat(filename)['joints']
self.joints = np.transpose(self.joints, (2, 1, 0))
# 1. right ankle 2. right knee 3. right hip
# 4. left hip 5. left knee 6. left ankle
# 7. right wrist 8. right elbow 9. right shoulder
# 10.left shoulder 11. left elbow 12. left wrist
# 13. neck # 14. headtop
self.original_joints = self.joints.copy()
self.joints[..., 0] = (self.joints[..., 0] - 55.0) / 55.0
self.joints[..., 1] = (self.joints[..., 1] - 90.0) / 90.0
thorax = (self.joints[:, 8:9] + self.joints[:, 9:10]) / 2
pelvis = (self.joints[:, 2:3] + self.joints[:, 3:4]) / 2
spine = (thorax + pelvis) / 2
self.joints = np.concatenate((self.joints, thorax, pelvis, spine), axis=1)
lsp2h36m_indices = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 12, 13]
self.joints = self.joints[:, lsp2h36m_indices].astype(np.float32)[..., :2]
self.joints = self.joints - self.joints[:, 13:14]
self.joints[:, 13:14] = 1e-5
factor = np.linalg.norm(self.joints[:, 13:14] - self.joints[:, -1:], axis=2).mean()
self.joints = self.joints / factor / 10.0
thorax = (self.original_joints[:, 8:9] + self.original_joints[:, 9:10]) / 2
pelvis = (self.original_joints[:, 2:3] + self.original_joints[:, 3:4]) / 2
spine = (thorax + pelvis) / 2
self.original_joints = np.concatenate((self.original_joints, thorax, pelvis, spine), axis=1)
self.original_joints = self.original_joints[:, lsp2h36m_indices].astype(np.float32)[..., :2]
for index in range(len(self.joints)):
if index + 1 not in [1003, 1120, 1262, 1273, 1312, 1379, 1639, 1723, 1991, 209, 387, 879]:
continue
with open("demo_input/" + str(index + 1) + ".pkl", "wb") as f:
pkl.dump({'joints_2d': self.joints[index], "original_joints_2d": self.original_joints[index]}, f)
def __getitem__(self, index):
return self.joints[index], self.original_joints[index]
def __len__(self):
return len(self.joints)
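
# --- Usage sketch (illustrative, not part of the original file) ---
# Assuming ../data/joints.mat exists and a demo_input/ directory is present
# (__init__ dumps a few demo pickles there), the dataset can be wrapped in a
# standard DataLoader; each batch yields normalized and original 2D joints.
if __name__ == "__main__":
    dataset = LSPDataset()
    loader = DataLoader(dataset, batch_size=4, shuffle=False)
    joints_2d, original_joints_2d = next(iter(loader))
    print(joints_2d.shape, original_joints_2d.shape)  # (4, 17, 2) each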
| insightface/body/human_pose/ambiguity_aware/lib/dataloader/lsp.py/0 | {
"file_path": "insightface/body/human_pose/ambiguity_aware/lib/dataloader/lsp.py",
"repo_id": "insightface",
"token_count": 1224
} | 91 |
## Challenges
<div align="left">
<img src="https://insightface.ai/assets/img/custom/logo3.jpg" width="240"/>
</div>
## Introduction
These benchmarks are maintained by [InsightFace](https://insightface.ai)
<div align="left">
<img src="https://insightface.ai/assets/img/custom/thumb_ifrt.png" width="480"/>
</div>
## Supported Benchmarks
- [MFR-Ongoing](mfr) (Ongoing version of iccv21-mfr)
- [MFR21 (ICCVW'2021)](iccv21-mfr)
- [LFR19 (ICCVW'2019)](iccv19-lfr)
| insightface/challenges/README.md/0 | {
"file_path": "insightface/challenges/README.md",
"repo_id": "insightface",
"token_count": 192
} | 92 |
Simplified Chinese | [English](README_en.md)
# Face Detection Model
* [1. Introduction](#简介)
* [2. Model Zoo](#模型库)
* [3. Installation](#安装)
* [4. Data Preparation](#数据准备)
* [5. Configuration](#参数配置)
* [6. Training and Evaluation](#训练与评估)
  * [6.1 Training](#训练)
  * [6.2 Evaluation on the WIDER-FACE dataset](#评估)
  * [6.3 Inference and Deployment](#推理部署)
  * [6.4 Improving Inference Speed](#推理速度提升)
  * [6.5 Face Detection Demo](#人脸检测demo)
* [7. References](#参考文献)
<a name="简介"></a>
## 1. Introduction
`Arcface-Paddle` is an open-source deep face detection and recognition toolkit powered by PaddlePaddle. `Arcface-Paddle` currently provides three pre-trained models, including `BlazeFace` for face detection, and `ArcFace` and `MobileFace` for face recognition.
- This part covers face detection and is developed on top of PaddleDetection.
- For face recognition, please refer to: [Face recognition](../../recognition/arcface_paddle/README_cn.md).
- For whl package inference and deployment based on PaddleInference, please refer to: [Whl package inference and deployment](https://github.com/littletomatodonkey/insight-face-paddle).
<a name="模型库"></a>
## 2. Model Zoo
### mAP on the WIDER-FACE dataset
| Model structure | Input size | Images/GPU | Epochs | Easy/Medium/Hard Set | CPU latency | GPU latency | Model size (MB) | Pretrained model | Inference model | Config |
|:------------:|:--------:|:----:|:-------:|:-------:|:-------:|:---------:|:----------:|:---------:|:---------:|:--------:|
| BlazeFace-FPN-SSH | 640 | 8 | 1000 | 0.9187 / 0.8979 / 0.8168 | 31.7ms | 5.6ms | 0.646 |[download](https://paddledet.bj.bcebos.com/models/blazeface_fpn_ssh_1000e.pdparams) | [download](https://paddle-model-ecology.bj.bcebos.com/model/insight-face/blazeface_fpn_ssh_1000e_v1.0_infer.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1/configs/face_detection/blazeface_fpn_ssh_1000e.yml) |
| RetinaFace | 480x640 | - | - | - / - / 0.8250 | 182.0ms | 17.4ms | 1.680 | - | - | - |
**Notes:**
- We use a multi-scale evaluation strategy to obtain the mAP on the `Easy/Medium/Hard Set`. Please refer to [Evaluation on the WIDER-FACE dataset](#评估) for details.
- For speed measurement we use a 640x640 input resolution on an Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz with the number of CPU threads set to 5. Please refer to [Improving inference speed](#推理速度提升) for more details.
- The speed test code of `RetinaFace` is adapted from: [../retinaface/README.md](../retinaface/README.md).
- Test environment:
- CPU: Intel(R) Xeon(R) Gold 6184 CPU @ 2.40GHz
- GPU: a single NVIDIA Tesla V100
<a name="安装"></a>
## 3. Installation
Please refer to the [installation tutorial](../../recognition/arcface_paddle/install_ch.md) to install PaddlePaddle and PaddleDetection.
<a name="数据准备"></a>
## 4. Data Preparation
We use the [WIDER-FACE dataset](http://shuoyang1213.me/WIDERFACE/) for training and model testing. The official website provides a detailed introduction to the data.
- WIDER-FACE data source:
Load a dataset of type `wider_face` with the following directory structure:
```
dataset/wider_face/
├── wider_face_split
│ ├── wider_face_train_bbx_gt.txt
│ ├── wider_face_val_bbx_gt.txt
├── WIDER_train
│ ├── images
│ │ ├── 0--Parade
│ │ │ ├── 0_Parade_marchingband_1_100.jpg
│ │ │ ├── 0_Parade_marchingband_1_381.jpg
│ │ │ │ ...
│ │ ├── 10--People_Marching
│ │ │ ...
├── WIDER_val
│ ├── images
│ │ ├── 0--Parade
│ │ │ ├── 0_Parade_marchingband_1_1004.jpg
│ │ │ ├── 0_Parade_marchingband_1_1045.jpg
│ │ │ │ ...
│ │ ├── 10--People_Marching
│ │ │ ...
```
- Download the dataset manually:
To download the WIDER-FACE dataset, run the following commands:
```
cd dataset/wider_face && ./download_wider_face.sh
```
<a name="参数配置"></a>
## 5. Configuration
We use the `configs/face_detection/blazeface_fpn_ssh_1000e.yml` configuration for training. A summary of the config file is as follows:
```yaml
_BASE_: [
'../datasets/wider_face.yml',
'../runtime.yml',
'_base_/optimizer_1000e.yml',
'_base_/blazeface_fpn.yml',
'_base_/face_reader.yml',
]
weights: output/blazeface_fpn_ssh_1000e/model_final
multi_scale_eval: True
```
The `blazeface_fpn_ssh_1000e.yml` config depends on other config files, in this example:
```
wider_face.yml: mainly describes the paths of the training and validation data
runtime.yml: mainly describes common runtime parameters, such as whether to use the GPU and how often (in epochs) to save checkpoints
optimizer_1000e.yml: mainly describes the learning rate and optimizer settings
blazeface_fpn.yml: mainly describes the model and the backbone network
face_reader.yml: mainly describes the data reader configuration, such as batch size and the number of concurrent loading workers, as well as the preprocessing applied after reading, such as resize and data augmentation
```
Modify the above files according to your actual situation, such as the dataset path and batch size.
The configuration of the base model can be found in `configs/face_detection/_base_/blazeface.yml`;
the improved model adds FPN and SSH neck structures, whose configuration can be found in `configs/face_detection/_base_/blazeface_fpn.yml`. FPN and SSH can be configured as needed, as follows:
```yaml
BlazeNet:
blaze_filters: [[24, 24], [24, 24], [24, 48, 2], [48, 48], [48, 48]]
double_blaze_filters: [[48, 24, 96, 2], [96, 24, 96], [96, 24, 96],
[96, 24, 96, 2], [96, 24, 96], [96, 24, 96]]
  act: hard_swish # activation function of the BlazeBlock in the backbone; the base model uses relu, while hard_swish is required when adding FPN and SSH
BlazeNeck:
  neck_type : fpn_ssh # options: only_fpn, only_ssh, fpn_ssh
in_channel: [96,96]
```
<a name="训练与评估"></a>
## 6. Training and Evaluation
<a name="训练"></a>
### 6.1 Training
First, download the pretrained model file:
```bash
wget https://paddledet.bj.bcebos.com/models/pretrained/blazenet_pretrain.pdparams
```
PaddleDetection provides single-GPU and multi-GPU training modes to meet different training needs.
* Single-GPU training
```bash
export CUDA_VISIBLE_DEVICES=0 # not needed on Windows and Mac
python tools/train.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml -o pretrain_weight=blazenet_pretrain
```
* Multi-GPU training
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3 # not needed on Windows and Mac
python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml -o pretrain_weight=blazenet_pretrain
```
* Resume training
If training is interrupted for some reason, it can be resumed with the -r option:
```bash
export CUDA_VISIBLE_DEVICES=0 # not needed on Windows and Mac
python tools/train.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml -r output/blazeface_fpn_ssh_1000e/100
```
* Training strategy
`BlazeFace` is trained on 4 GPUs with `batch_size=32` per GPU (a total `batch_size` of 128), a learning rate of 0.002, and 1000 epochs.
**Note:** The face detection model does not currently support evaluation during training.
<a name="评估"></a>
### 6.2 Evaluation on the WIDER-FACE dataset
- Step 1: Evaluate and generate the result files:
```shell
python -u tools/eval.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml \
-o weights=output/blazeface_fpn_ssh_1000e/model_final \
multi_scale_eval=True BBoxPostProcess.nms.score_threshold=0.1
```
Set `multi_scale_eval=True` to perform multi-scale evaluation. After the evaluation finishes, test results in txt format will be generated in `output/pred`.
- Step 2: Download the official evaluation script and ground-truth files:
```
wget http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/eval_script/eval_tools.zip
unzip eval_tools.zip && rm -f eval_tools.zip
```
- Step 3: Start the evaluation
Method 1: Python evaluation.
```bash
git clone https://github.com/wondervictor/WiderFace-Evaluation.git
cd WiderFace-Evaluation
# compile
python3 setup.py build_ext --inplace
# start the evaluation
python3 evaluation.py -p /path/to/PaddleDetection/output/pred -g /path/to/eval_tools/ground_truth
```
Method 2: MATLAB evaluation.
```bash
# modify the result path and the curve legend name in `eval_tools/wider_eval.m`:
pred_dir = './pred';
legend_name = 'Paddle-BlazeFace';
# `wider_eval.m` is the main entry of the evaluation module. Run the following command:
matlab -nodesktop -nosplash -nojvm -r "run wider_eval.m;quit;"
```
<a name="推理部署"></a>
### 6.3 Inference and Deployment
The model files saved during training contain both the forward and backward passes, while backpropagation is not needed for industrial deployment, so the model has to be exported to the format required for deployment.
PaddleDetection provides the `tools/export_model.py` script to export the model:
```bash
python tools/export_model.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml --output_dir=./inference_model \
-o weights=output/blazeface_fpn_ssh_1000e/best_model BBoxPostProcess.nms.score_threshold=0.1
```
The inference model will be exported to the `inference_model/blazeface_fpn_ssh_1000e` directory as `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info` and `model.pdmodel`. If no output directory is specified, the model is exported to `output_inference`.
* Here the NMS post-processing `score_threshold` is changed to 0.1, because the GPU inference speed improves greatly while the mAP is almost unaffected. For more documentation on model export, please refer to the [model export documentation](https://github.com/PaddlePaddle/PaddleDetection/deploy/EXPORT_MODEL.md).
PaddleDetection supports multiple deployment options, including PaddleInference, PaddleServing and PaddleLite, covering server, mobile and embedded platforms, with complete Python and C++ deployment solutions.
* Here we take Python as an example to show how to deploy the model with PaddleInference:
```bash
python deploy/python/infer.py --model_dir=./inference_model/blazeface_fpn_ssh_1000e --image_file=demo/road554.png --use_gpu=True
```
* `infer.py` also provides a rich set of interfaces for running prediction on video files and cameras. Please refer to [Python inference and deployment](https://github.com/PaddlePaddle/PaddleDetection/deploy/python.md) for more details.
* For more documentation on inference and deployment, please refer to the [deployment documentation](https://github.com/PaddlePaddle/PaddleDetection/deploy/README.md).
<a name="推理速度提升"></a>
### 6.4 Improving Inference Speed
To reproduce the speed metrics we report, please modify the input size in the inference config file `./inference_model/blazeface_fpn_ssh_1000e/infer_cfg.yml`, as shown below:
```yaml
mode: fluid
draw_threshold: 0.5
metric: WiderFace
arch: Face
min_subgraph_size: 3
Preprocess:
- is_scale: false
mean:
- 123
- 117
- 104
std:
- 127.502231
- 127.502231
- 127.502231
type: NormalizeImage
- interp: 1
keep_ratio: false
target_size:
- 640
- 640
type: Resize
- type: Permute
label_list:
- face
```
If you want faster inference on CPU, you can install [paddlepaddle_gpu-0.0.0](https://paddle-wheel.bj.bcebos.com/develop-cpu-mkl/paddlepaddle-0.0.0-cp37-cp37m-linux_x86_64.whl) (the MKL-DNN dependency) to enable MKL-DNN accelerated inference.
```bash
# speed test on GPU:
python deploy/python/infer.py --model_dir=./inference_model/blazeface_fpn_ssh_1000e --image_dir=./path/images --run_benchmark=True --use_gpu=True
# speed test on CPU:
# download the paddle whl package
wget https://paddle-wheel.bj.bcebos.com/develop-cpu-mkl/paddlepaddle-0.0.0-cp37-cp37m-linux_x86_64.whl
# install paddlepaddle_gpu-0.0.0
pip install paddlepaddle-0.0.0-cp37-cp37m-linux_x86_64.whl
# run inference
python deploy/python/infer.py --model_dir=./inference_model/blazeface_fpn_ssh_1000e --image_dir=./path/images --enable_mkldnn=True --run_benchmark=True --cpu_threads=5
```
<a name="人脸检测demo"></a>
### 6.5 Face Detection Demo
This section shows how to run face detection with the provided BlazeFace model.
First, download the test image and the font file.
```bash
# download the demo image for face detection
wget https://raw.githubusercontent.com/littletomatodonkey/insight-face-paddle/main/demo/friends/query/friends1.jpg
# download the font used for visualization
wget https://raw.githubusercontent.com/littletomatodonkey/insight-face-paddle/main/SourceHanSansCN-Medium.otf
```
The demo image is shown below.
<div align="center">
<img src="https://raw.githubusercontent.com/littletomatodonkey/insight-face-paddle/main/demo/friends/query/friends1.jpg" width = "800" />
</div>
An example detection command is shown below.
```shell
python3.7 test_blazeface.py --input=friends1.jpg --output="./output"
```
The final visualization result is saved in the `output` directory, as shown below.
<div align="center">
<img src="https://raw.githubusercontent.com/littletomatodonkey/insight-face-paddle/main/demo/friends/output/friends1.jpg" width = "800" />
</div>
For more information about parameter descriptions, index gallery construction, face recognition and whl package deployment, please refer to: [Whl package inference and deployment](https://github.com/littletomatodonkey/insight-face-paddle).
<a name="参考文献"></a>
## 7. References
```
@misc{long2020ppyolo,
title={PP-YOLO: An Effective and Efficient Implementation of Object Detector},
author={Xiang Long and Kaipeng Deng and Guanzhong Wang and Yang Zhang and Qingqing Dang and Yuan Gao and Hui Shen and Jianguo Ren and Shumin Han and Errui Ding and Shilei Wen},
year={2020},
eprint={2007.12099},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{ppdet2019,
title={PaddleDetection, Object detection and instance segmentation toolkit based on PaddlePaddle.},
author={PaddlePaddle Authors},
howpublished = {\url{https://github.com/PaddlePaddle/PaddleDetection}},
year={2019}
}
@article{bazarevsky2019blazeface,
title={BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs},
author={Valentin Bazarevsky and Yury Kartynnik and Andrey Vakunov and Karthik Raveendran and Matthias Grundmann},
year={2019},
eprint={1907.05047},
archivePrefix={arXiv}
}
```
| insightface/detection/blazeface_paddle/README_cn.md/0 | {
"file_path": "insightface/detection/blazeface_paddle/README_cn.md",
"repo_id": "insightface",
"token_count": 7587
} | 93 |
"""
RPN:
data =
{'data': [num_images, c, h, w],
'im_info': [num_images, 4] (optional)}
label =
{'gt_boxes': [num_boxes, 5] (optional),
'label': [batch_size, 1] <- [batch_size, num_anchors, feat_height, feat_width],
'bbox_target': [batch_size, num_anchors, feat_height, feat_width],
'bbox_weight': [batch_size, num_anchors, feat_height, feat_width]}
"""
from __future__ import print_function
import sys
import logging
import datetime
import numpy as np
import numpy.random as npr
from ..logger import logger
from ..config import config
from .image import get_image, tensor_vstack, get_crop_image
from ..processing.generate_anchor import generate_anchors, anchors_plane
from ..processing.bbox_transform import bbox_overlaps, bbox_transform, landmark_transform
STAT = {0: 0, 8: 0, 16: 0, 32: 0}
def get_rpn_testbatch(roidb):
"""
return a dict of testbatch
:param roidb: ['image', 'flipped']
:return: data, label, im_info
"""
assert len(roidb) == 1, 'Single batch only'
imgs, roidb = get_image(roidb)
im_array = imgs[0]
im_info = np.array([roidb[0]['im_info']], dtype=np.float32)
data = {'data': im_array, 'im_info': im_info}
label = {}
return data, label, im_info
def get_rpn_batch(roidb):
"""
prototype for rpn batch: data, im_info, gt_boxes
:param roidb: ['image', 'flipped'] + ['gt_boxes', 'boxes', 'gt_classes']
:return: data, label
"""
assert len(roidb) == 1, 'Single batch only'
imgs, roidb = get_image(roidb)
im_array = imgs[0]
im_info = np.array([roidb[0]['im_info']], dtype=np.float32)
# gt boxes: (x1, y1, x2, y2, cls)
if roidb[0]['gt_classes'].size > 0:
gt_inds = np.where(roidb[0]['gt_classes'] != 0)[0]
gt_boxes = np.empty((roidb[0]['boxes'].shape[0], 5), dtype=np.float32)
gt_boxes[:, 0:4] = roidb[0]['boxes'][gt_inds, :]
gt_boxes[:, 4] = roidb[0]['gt_classes'][gt_inds]
else:
gt_boxes = np.empty((0, 5), dtype=np.float32)
data = {'data': im_array, 'im_info': im_info}
label = {'gt_boxes': gt_boxes}
return data, label
def get_crop_batch(roidb):
"""
prototype for rpn batch: data, im_info, gt_boxes
:param roidb: ['image', 'flipped'] + ['gt_boxes', 'boxes', 'gt_classes']
:return: data, label
"""
#assert len(roidb) == 1, 'Single batch only'
data_list = []
label_list = []
imgs, roidb = get_crop_image(roidb)
assert len(imgs) == len(roidb)
for i in range(len(imgs)):
im_array = imgs[i]
im_info = np.array([roidb[i]['im_info']], dtype=np.float32)
# gt boxes: (x1, y1, x2, y2, cls)
if roidb[i]['gt_classes'].size > 0:
gt_inds = np.where(roidb[i]['gt_classes'] != 0)[0]
gt_boxes = np.empty((roidb[i]['boxes'].shape[0], 5),
dtype=np.float32)
gt_boxes[:, 0:4] = roidb[i]['boxes'][gt_inds, :]
gt_boxes[:, 4] = roidb[i]['gt_classes'][gt_inds]
if config.USE_BLUR:
gt_blur = roidb[i]['blur']
if config.FACE_LANDMARK:
#gt_landmarks = np.empty((roidb[i]['landmarks'].shape[0], 11), dtype=np.float32)
gt_landmarks = roidb[i]['landmarks'][gt_inds, :, :]
if config.HEAD_BOX:
gt_boxes_head = np.empty((roidb[i]['boxes_head'].shape[0], 5),
dtype=np.float32)
gt_boxes_head[:, 0:4] = roidb[i]['boxes_head'][gt_inds, :]
gt_boxes_head[:, 4] = roidb[i]['gt_classes'][gt_inds]
else:
gt_boxes = np.empty((0, 5), dtype=np.float32)
if config.USE_BLUR:
gt_blur = np.empty((0, ), dtype=np.float32)
if config.FACE_LANDMARK:
gt_landmarks = np.empty((0, 5, 3), dtype=np.float32)
if config.HEAD_BOX:
gt_boxes_head = np.empty((0, 5), dtype=np.float32)
data = {'data': im_array, 'im_info': im_info}
label = {'gt_boxes': gt_boxes}
if config.USE_BLUR:
label['gt_blur'] = gt_blur
if config.FACE_LANDMARK:
label['gt_landmarks'] = gt_landmarks
if config.HEAD_BOX:
label['gt_boxes_head'] = gt_boxes_head
data_list.append(data)
label_list.append(label)
return data_list, label_list
def assign_anchor_fpn(feat_shape,
gt_label,
im_info,
landmark=False,
prefix='face',
select_stride=0):
"""
assign ground truth boxes to anchor positions
:param feat_shape: infer output shape
:param gt_boxes: assign ground truth
:param im_info: filter out anchors overlapped with edges
:return: tuple
labels: of shape (batch_size, 1) <- (batch_size, num_anchors, feat_height, feat_width)
bbox_targets: of shape (batch_size, num_anchors * 4, feat_height, feat_width)
bbox_weights: mark the assigned anchors
"""
def _unmap(data, count, inds, fill=0):
"""" unmap a subset inds of data into original data of size count """
if len(data.shape) == 1:
ret = np.empty((count, ), dtype=np.float32)
ret.fill(fill)
ret[inds] = data
else:
ret = np.empty((count, ) + data.shape[1:], dtype=np.float32)
ret.fill(fill)
ret[inds, :] = data
return ret
global STAT
DEBUG = False
im_info = im_info[0]
gt_boxes = gt_label['gt_boxes']
# clean up boxes
nonneg = np.where(gt_boxes[:, 4] != -1)[0]
gt_boxes = gt_boxes[nonneg]
if config.USE_BLUR:
gt_blur = gt_label['gt_blur']
gt_blur = gt_blur[nonneg]
if landmark:
gt_landmarks = gt_label['gt_landmarks']
gt_landmarks = gt_landmarks[nonneg]
assert gt_boxes.shape[0] == gt_landmarks.shape[0]
#scales = np.array(scales, dtype=np.float32)
feat_strides = config.RPN_FEAT_STRIDE
bbox_pred_len = 4
landmark_pred_len = 10
if config.USE_BLUR:
gt_boxes[:, 4] = gt_blur
bbox_pred_len = 5
if config.USE_OCCLUSION:
landmark_pred_len = 15
anchors_list = []
anchors_num_list = []
inds_inside_list = []
feat_infos = []
A_list = []
for i in range(len(feat_strides)):
stride = feat_strides[i]
sstride = str(stride)
base_size = config.RPN_ANCHOR_CFG[sstride]['BASE_SIZE']
allowed_border = config.RPN_ANCHOR_CFG[sstride]['ALLOWED_BORDER']
ratios = config.RPN_ANCHOR_CFG[sstride]['RATIOS']
scales = config.RPN_ANCHOR_CFG[sstride]['SCALES']
base_anchors = generate_anchors(base_size=base_size,
ratios=list(ratios),
scales=np.array(scales,
dtype=np.float32),
stride=stride,
dense_anchor=config.DENSE_ANCHOR)
num_anchors = base_anchors.shape[0]
feat_height, feat_width = feat_shape[i][-2:]
feat_stride = feat_strides[i]
feat_infos.append([feat_height, feat_width])
A = num_anchors
A_list.append(A)
K = feat_height * feat_width
all_anchors = anchors_plane(feat_height, feat_width, feat_stride,
base_anchors)
all_anchors = all_anchors.reshape((K * A, 4))
#print('anchor0', stride, all_anchors[0])
total_anchors = int(K * A)
anchors_num_list.append(total_anchors)
# only keep anchors inside the image
inds_inside = np.where(
(all_anchors[:, 0] >= -allowed_border)
& (all_anchors[:, 1] >= -allowed_border)
& (all_anchors[:, 2] < im_info[1] + allowed_border)
& (all_anchors[:, 3] < im_info[0] + allowed_border))[0]
if DEBUG:
print('total_anchors', total_anchors)
print('inds_inside', len(inds_inside))
# keep only inside anchors
anchors = all_anchors[inds_inside, :]
#print('AA', anchors.shape, len(inds_inside))
anchors_list.append(anchors)
inds_inside_list.append(inds_inside)
# Concat anchors from each level
anchors = np.concatenate(anchors_list)
for i in range(1, len(inds_inside_list)):
inds_inside_list[i] = inds_inside_list[i] + sum(anchors_num_list[:i])
inds_inside = np.concatenate(inds_inside_list)
total_anchors = sum(anchors_num_list)
#print('total_anchors', anchors.shape[0], len(inds_inside), file=sys.stderr)
# label: 1 is positive, 0 is negative, -1 is dont care
labels = np.empty((len(inds_inside), ), dtype=np.float32)
labels.fill(-1)
#print('BB', anchors.shape, len(inds_inside))
#print('gt_boxes', gt_boxes.shape, file=sys.stderr)
if gt_boxes.size > 0:
# overlap between the anchors and the gt boxes
# overlaps (ex, gt)
overlaps = bbox_overlaps(anchors.astype(np.float),
gt_boxes.astype(np.float))
argmax_overlaps = overlaps.argmax(axis=1)
#print('AAA', argmax_overlaps.shape)
max_overlaps = overlaps[np.arange(len(inds_inside)), argmax_overlaps]
gt_argmax_overlaps = overlaps.argmax(axis=0)
gt_max_overlaps = overlaps[gt_argmax_overlaps,
np.arange(overlaps.shape[1])]
gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0]
if not config.TRAIN.RPN_CLOBBER_POSITIVES:
# assign bg labels first so that positive labels can clobber them
labels[max_overlaps < config.TRAIN.RPN_NEGATIVE_OVERLAP] = 0
# fg label: for each gt, anchor with highest overlap
if config.TRAIN.RPN_FORCE_POSITIVE:
labels[gt_argmax_overlaps] = 1
# fg label: above threshold IoU
labels[max_overlaps >= config.TRAIN.RPN_POSITIVE_OVERLAP] = 1
if config.TRAIN.RPN_CLOBBER_POSITIVES:
# assign bg labels last so that negative labels can clobber positives
labels[max_overlaps < config.TRAIN.RPN_NEGATIVE_OVERLAP] = 0
else:
labels[:] = 0
fg_inds = np.where(labels == 1)[0]
#print('fg count', len(fg_inds))
# subsample positive labels if we have too many
if config.TRAIN.RPN_ENABLE_OHEM == 0:
fg_inds = np.where(labels == 1)[0]
num_fg = int(config.TRAIN.RPN_FG_FRACTION *
config.TRAIN.RPN_BATCH_SIZE)
if len(fg_inds) > num_fg:
disable_inds = npr.choice(fg_inds,
size=(len(fg_inds) - num_fg),
replace=False)
if DEBUG:
disable_inds = fg_inds[:(len(fg_inds) - num_fg)]
labels[disable_inds] = -1
# subsample negative labels if we have too many
num_bg = config.TRAIN.RPN_BATCH_SIZE - np.sum(labels == 1)
bg_inds = np.where(labels == 0)[0]
if len(bg_inds) > num_bg:
disable_inds = npr.choice(bg_inds,
size=(len(bg_inds) - num_bg),
replace=False)
if DEBUG:
disable_inds = bg_inds[:(len(bg_inds) - num_bg)]
labels[disable_inds] = -1
#fg_inds = np.where(labels == 1)[0]
#num_fg = len(fg_inds)
#num_bg = num_fg*int(1.0/config.TRAIN.RPN_FG_FRACTION-1)
#bg_inds = np.where(labels == 0)[0]
#if len(bg_inds) > num_bg:
# disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False)
# if DEBUG:
# disable_inds = bg_inds[:(len(bg_inds) - num_bg)]
# labels[disable_inds] = -1
else:
fg_inds = np.where(labels == 1)[0]
num_fg = len(fg_inds)
bg_inds = np.where(labels == 0)[0]
num_bg = len(bg_inds)
#print('anchor stat', num_fg, num_bg)
bbox_targets = np.zeros((len(inds_inside), bbox_pred_len),
dtype=np.float32)
if gt_boxes.size > 0:
#print('GT', gt_boxes.shape, gt_boxes[argmax_overlaps, :4].shape)
bbox_targets[:, :] = bbox_transform(anchors,
gt_boxes[argmax_overlaps, :])
#bbox_targets[:,4] = gt_blur
bbox_weights = np.zeros((len(inds_inside), bbox_pred_len),
dtype=np.float32)
#bbox_weights[labels == 1, :] = np.array(config.TRAIN.RPN_BBOX_WEIGHTS)
bbox_weights[labels == 1, 0:4] = 1.0
if bbox_pred_len > 4:
bbox_weights[labels == 1, 4:bbox_pred_len] = 0.1
if landmark:
landmark_targets = np.zeros((len(inds_inside), landmark_pred_len),
dtype=np.float32)
#landmark_weights = np.zeros((len(inds_inside), 10), dtype=np.float32)
landmark_weights = np.zeros((len(inds_inside), landmark_pred_len),
dtype=np.float32)
#landmark_weights[labels == 1, :] = np.array(config.TRAIN.RPN_LANDMARK_WEIGHTS)
if landmark_pred_len == 10:
landmark_weights[labels == 1, :] = 1.0
elif landmark_pred_len == 15:
v = [1.0, 1.0, 0.1] * 5
assert len(v) == 15
landmark_weights[labels == 1, :] = np.array(v)
else:
assert False
#TODO here
if gt_landmarks.size > 0:
#print('AAA',argmax_overlaps)
a_landmarks = gt_landmarks[argmax_overlaps, :, :]
landmark_targets[:] = landmark_transform(anchors, a_landmarks)
invalid = np.where(a_landmarks[:, 0, 2] < 0.0)[0]
#assert len(invalid)==0
#landmark_weights[invalid, :] = np.array(config.TRAIN.RPN_INVALID_LANDMARK_WEIGHTS)
landmark_weights[invalid, :] = 0.0
#if DEBUG:
# _sums = bbox_targets[labels == 1, :].sum(axis=0)
# _squared_sums = (bbox_targets[labels == 1, :] ** 2).sum(axis=0)
# _counts = np.sum(labels == 1)
# means = _sums / (_counts + 1e-14)
# stds = np.sqrt(_squared_sums / _counts - means ** 2)
# print 'means', means
# print 'stdevs', stds
# map up to original set of anchors
#print(labels.shape, total_anchors, inds_inside.shape, inds_inside[0], inds_inside[-1])
labels = _unmap(labels, total_anchors, inds_inside, fill=-1)
bbox_targets = _unmap(bbox_targets, total_anchors, inds_inside, fill=0)
bbox_weights = _unmap(bbox_weights, total_anchors, inds_inside, fill=0)
if landmark:
landmark_targets = _unmap(landmark_targets,
total_anchors,
inds_inside,
fill=0)
landmark_weights = _unmap(landmark_weights,
total_anchors,
inds_inside,
fill=0)
#print('CC', anchors.shape, len(inds_inside))
#if DEBUG:
# if gt_boxes.size > 0:
# print 'rpn: max max_overlaps', np.max(max_overlaps)
# print 'rpn: num_positives', np.sum(labels == 1)
# print 'rpn: num_negatives', np.sum(labels == 0)
# _fg_sum = np.sum(labels == 1)
# _bg_sum = np.sum(labels == 0)
# _count = 1
# print 'rpn: num_positive avg', _fg_sum / _count
# print 'rpn: num_negative avg', _bg_sum / _count
    # reshape
label_list = list()
bbox_target_list = list()
bbox_weight_list = list()
if landmark:
landmark_target_list = list()
landmark_weight_list = list()
anchors_num_range = [0] + anchors_num_list
label = {}
for i in range(len(feat_strides)):
stride = feat_strides[i]
feat_height, feat_width = feat_infos[i]
A = A_list[i]
_label = labels[sum(anchors_num_range[:i +
1]):sum(anchors_num_range[:i +
1]) +
anchors_num_range[i + 1]]
if select_stride > 0 and stride != select_stride:
#print('set', stride, select_stride)
_label[:] = -1
#print('_label', _label.shape, select_stride)
#_fg_inds = np.where(_label == 1)[0]
#n_fg = len(_fg_inds)
#STAT[0]+=1
#STAT[stride]+=n_fg
#if STAT[0]%100==0:
# print('rpn_stat', STAT, file=sys.stderr)
bbox_target = bbox_targets[sum(anchors_num_range[:i + 1]
):sum(anchors_num_range[:i + 1]) +
anchors_num_range[i + 1]]
bbox_weight = bbox_weights[sum(anchors_num_range[:i + 1]
):sum(anchors_num_range[:i + 1]) +
anchors_num_range[i + 1]]
if landmark:
landmark_target = landmark_targets[
sum(anchors_num_range[:i + 1]):sum(anchors_num_range[:i + 1]) +
anchors_num_range[i + 1]]
landmark_weight = landmark_weights[
sum(anchors_num_range[:i + 1]):sum(anchors_num_range[:i + 1]) +
anchors_num_range[i + 1]]
_label = _label.reshape(
(1, feat_height, feat_width, A)).transpose(0, 3, 1, 2)
_label = _label.reshape((1, A * feat_height * feat_width))
bbox_target = bbox_target.reshape(
(1, feat_height * feat_width,
A * bbox_pred_len)).transpose(0, 2, 1)
bbox_weight = bbox_weight.reshape(
(1, feat_height * feat_width, A * bbox_pred_len)).transpose(
(0, 2, 1))
label['%s_label_stride%d' % (prefix, stride)] = _label
label['%s_bbox_target_stride%d' % (prefix, stride)] = bbox_target
label['%s_bbox_weight_stride%d' % (prefix, stride)] = bbox_weight
if landmark:
landmark_target = landmark_target.reshape(
(1, feat_height * feat_width,
A * landmark_pred_len)).transpose(0, 2, 1)
landmark_weight = landmark_weight.reshape(
(1, feat_height * feat_width,
A * landmark_pred_len)).transpose((0, 2, 1))
label['%s_landmark_target_stride%d' %
(prefix, stride)] = landmark_target
label['%s_landmark_weight_stride%d' %
(prefix, stride)] = landmark_weight
#print('in_rpn', stride,_label.shape, bbox_target.shape, bbox_weight.shape, file=sys.stderr)
label_list.append(_label)
#print('DD', _label.shape)
bbox_target_list.append(bbox_target)
bbox_weight_list.append(bbox_weight)
if landmark:
landmark_target_list.append(landmark_target)
landmark_weight_list.append(landmark_weight)
label_concat = np.concatenate(label_list, axis=1)
bbox_target_concat = np.concatenate(bbox_target_list, axis=2)
bbox_weight_concat = np.concatenate(bbox_weight_list, axis=2)
#fg_inds = np.where(label_concat[0] == 1)[0]
#print('fg_inds_in_rpn2', fg_inds, file=sys.stderr)
label.update({
'%s_label' % prefix: label_concat,
'%s_bbox_target' % prefix: bbox_target_concat,
'%s_bbox_weight' % prefix: bbox_weight_concat
})
if landmark:
landmark_target_concat = np.concatenate(landmark_target_list, axis=2)
landmark_weight_concat = np.concatenate(landmark_weight_list, axis=2)
label['%s_landmark_target' % prefix] = landmark_target_concat
label['%s_landmark_weight' % prefix] = landmark_weight_concat
return label
class AA:
def __init__(self, feat_shape):
self.feat_shape = feat_shape
feat_strides = config.RPN_FEAT_STRIDE
anchors_list = []
anchors_num_list = []
inds_inside_list = []
feat_infos = []
A_list = []
DEBUG = False
for i in range(len(feat_strides)):
stride = feat_strides[i]
sstride = str(stride)
base_size = config.RPN_ANCHOR_CFG[sstride]['BASE_SIZE']
allowed_border = config.RPN_ANCHOR_CFG[sstride]['ALLOWED_BORDER']
ratios = config.RPN_ANCHOR_CFG[sstride]['RATIOS']
scales = config.RPN_ANCHOR_CFG[sstride]['SCALES']
base_anchors = generate_anchors(base_size=base_size,
ratios=list(ratios),
scales=np.array(scales,
dtype=np.float32),
stride=stride,
dense_anchor=config.DENSE_ANCHOR)
num_anchors = base_anchors.shape[0]
feat_height, feat_width = feat_shape[i][-2:]
feat_stride = feat_strides[i]
feat_infos.append([feat_height, feat_width])
A = num_anchors
A_list.append(A)
K = feat_height * feat_width
all_anchors = anchors_plane(feat_height, feat_width, feat_stride,
base_anchors)
all_anchors = all_anchors.reshape((K * A, 4))
#print('anchor0', stride, all_anchors[0])
total_anchors = int(K * A)
anchors_num_list.append(total_anchors)
# only keep anchors inside the image
inds_inside = np.where(
(all_anchors[:, 0] >= -allowed_border)
& (all_anchors[:, 1] >= -allowed_border)
& (all_anchors[:, 2] < config.SCALES[0][1] + allowed_border) &
(all_anchors[:, 3] < config.SCALES[0][1] + allowed_border))[0]
if DEBUG:
print('total_anchors', total_anchors)
print('inds_inside', len(inds_inside))
# keep only inside anchors
anchors = all_anchors[inds_inside, :]
#print('AA', anchors.shape, len(inds_inside))
anchors_list.append(anchors)
inds_inside_list.append(inds_inside)
anchors = np.concatenate(anchors_list)
for i in range(1, len(inds_inside_list)):
inds_inside_list[i] = inds_inside_list[i] + sum(
anchors_num_list[:i])
inds_inside = np.concatenate(inds_inside_list)
#self.anchors_list = anchors_list
#self.inds_inside_list = inds_inside_list
self.anchors = anchors
self.inds_inside = inds_inside
self.anchors_num_list = anchors_num_list
self.feat_infos = feat_infos
self.A_list = A_list
self._times = [0.0, 0.0, 0.0, 0.0]
@staticmethod
def _unmap(data, count, inds, fill=0):
"""" unmap a subset inds of data into original data of size count """
if len(data.shape) == 1:
ret = np.empty((count, ), dtype=np.float32)
ret.fill(fill)
ret[inds] = data
else:
ret = np.empty((count, ) + data.shape[1:], dtype=np.float32)
ret.fill(fill)
ret[inds, :] = data
return ret
def assign_anchor_fpn(self,
gt_label,
im_info,
landmark=False,
prefix='face',
select_stride=0):
#ta = datetime.datetime.now()
        DEBUG = False
        im_info = im_info[0]
gt_boxes = gt_label['gt_boxes']
# clean up boxes
nonneg = np.where(gt_boxes[:, 4] != -1)[0]
gt_boxes = gt_boxes[nonneg]
if config.USE_BLUR:
gt_blur = gt_label['gt_blur']
gt_blur = gt_blur[nonneg]
if landmark:
gt_landmarks = gt_label['gt_landmarks']
gt_landmarks = gt_landmarks[nonneg]
assert gt_boxes.shape[0] == gt_landmarks.shape[0]
#scales = np.array(scales, dtype=np.float32)
feat_strides = config.RPN_FEAT_STRIDE
bbox_pred_len = 4
landmark_pred_len = 10
if config.USE_BLUR:
gt_boxes[:, 4] = gt_blur
bbox_pred_len = 5
if config.USE_OCCLUSION:
landmark_pred_len = 15
#anchors_list = self.anchors_list
#inds_inside_list = self.inds_inside_list
anchors = self.anchors
inds_inside = self.inds_inside
anchors_num_list = self.anchors_num_list
feat_infos = self.feat_infos
A_list = self.A_list
total_anchors = sum(anchors_num_list)
#print('total_anchors', anchors.shape[0], len(inds_inside), file=sys.stderr)
# label: 1 is positive, 0 is negative, -1 is dont care
labels = np.empty((len(inds_inside), ), dtype=np.float32)
labels.fill(-1)
#print('BB', anchors.shape, len(inds_inside))
#print('gt_boxes', gt_boxes.shape, file=sys.stderr)
#tb = datetime.datetime.now()
#self._times[0] += (tb-ta).total_seconds()
#ta = datetime.datetime.now()
if gt_boxes.size > 0:
# overlap between the anchors and the gt boxes
# overlaps (ex, gt)
overlaps = bbox_overlaps(anchors.astype(np.float),
gt_boxes.astype(np.float))
argmax_overlaps = overlaps.argmax(axis=1)
#print('AAA', argmax_overlaps.shape)
max_overlaps = overlaps[np.arange(len(inds_inside)),
argmax_overlaps]
gt_argmax_overlaps = overlaps.argmax(axis=0)
gt_max_overlaps = overlaps[gt_argmax_overlaps,
np.arange(overlaps.shape[1])]
gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0]
if not config.TRAIN.RPN_CLOBBER_POSITIVES:
# assign bg labels first so that positive labels can clobber them
labels[max_overlaps < config.TRAIN.RPN_NEGATIVE_OVERLAP] = 0
# fg label: for each gt, anchor with highest overlap
if config.TRAIN.RPN_FORCE_POSITIVE:
labels[gt_argmax_overlaps] = 1
# fg label: above threshold IoU
labels[max_overlaps >= config.TRAIN.RPN_POSITIVE_OVERLAP] = 1
if config.TRAIN.RPN_CLOBBER_POSITIVES:
# assign bg labels last so that negative labels can clobber positives
labels[max_overlaps < config.TRAIN.RPN_NEGATIVE_OVERLAP] = 0
else:
labels[:] = 0
fg_inds = np.where(labels == 1)[0]
#print('fg count', len(fg_inds))
# subsample positive labels if we have too many
if config.TRAIN.RPN_ENABLE_OHEM == 0:
fg_inds = np.where(labels == 1)[0]
num_fg = int(config.TRAIN.RPN_FG_FRACTION *
config.TRAIN.RPN_BATCH_SIZE)
if len(fg_inds) > num_fg:
disable_inds = npr.choice(fg_inds,
size=(len(fg_inds) - num_fg),
replace=False)
if DEBUG:
disable_inds = fg_inds[:(len(fg_inds) - num_fg)]
labels[disable_inds] = -1
# subsample negative labels if we have too many
num_bg = config.TRAIN.RPN_BATCH_SIZE - np.sum(labels == 1)
bg_inds = np.where(labels == 0)[0]
if len(bg_inds) > num_bg:
disable_inds = npr.choice(bg_inds,
size=(len(bg_inds) - num_bg),
replace=False)
if DEBUG:
disable_inds = bg_inds[:(len(bg_inds) - num_bg)]
labels[disable_inds] = -1
#fg_inds = np.where(labels == 1)[0]
#num_fg = len(fg_inds)
#num_bg = num_fg*int(1.0/config.TRAIN.RPN_FG_FRACTION-1)
#bg_inds = np.where(labels == 0)[0]
#if len(bg_inds) > num_bg:
# disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False)
# if DEBUG:
# disable_inds = bg_inds[:(len(bg_inds) - num_bg)]
# labels[disable_inds] = -1
else:
fg_inds = np.where(labels == 1)[0]
num_fg = len(fg_inds)
bg_inds = np.where(labels == 0)[0]
num_bg = len(bg_inds)
#print('anchor stat', num_fg, num_bg)
bbox_targets = np.zeros((len(inds_inside), bbox_pred_len),
dtype=np.float32)
if gt_boxes.size > 0:
#print('GT', gt_boxes.shape, gt_boxes[argmax_overlaps, :4].shape)
bbox_targets[:, :] = bbox_transform(anchors,
gt_boxes[argmax_overlaps, :])
#bbox_targets[:,4] = gt_blur
#tb = datetime.datetime.now()
#self._times[1] += (tb-ta).total_seconds()
#ta = datetime.datetime.now()
bbox_weights = np.zeros((len(inds_inside), bbox_pred_len),
dtype=np.float32)
#bbox_weights[labels == 1, :] = np.array(config.TRAIN.RPN_BBOX_WEIGHTS)
bbox_weights[labels == 1, 0:4] = 1.0
if bbox_pred_len > 4:
bbox_weights[labels == 1, 4:bbox_pred_len] = 0.1
if landmark:
landmark_targets = np.zeros((len(inds_inside), landmark_pred_len),
dtype=np.float32)
#landmark_weights = np.zeros((len(inds_inside), 10), dtype=np.float32)
landmark_weights = np.zeros((len(inds_inside), landmark_pred_len),
dtype=np.float32)
#landmark_weights[labels == 1, :] = np.array(config.TRAIN.RPN_LANDMARK_WEIGHTS)
if landmark_pred_len == 10:
landmark_weights[labels == 1, :] = 1.0
elif landmark_pred_len == 15:
v = [1.0, 1.0, 0.1] * 5
assert len(v) == 15
landmark_weights[labels == 1, :] = np.array(v)
else:
assert False
#TODO here
if gt_landmarks.size > 0:
#print('AAA',argmax_overlaps)
a_landmarks = gt_landmarks[argmax_overlaps, :, :]
landmark_targets[:] = landmark_transform(anchors, a_landmarks)
invalid = np.where(a_landmarks[:, 0, 2] < 0.0)[0]
#assert len(invalid)==0
#landmark_weights[invalid, :] = np.array(config.TRAIN.RPN_INVALID_LANDMARK_WEIGHTS)
landmark_weights[invalid, :] = 0.0
#tb = datetime.datetime.now()
#self._times[2] += (tb-ta).total_seconds()
#ta = datetime.datetime.now()
#if DEBUG:
# _sums = bbox_targets[labels == 1, :].sum(axis=0)
# _squared_sums = (bbox_targets[labels == 1, :] ** 2).sum(axis=0)
# _counts = np.sum(labels == 1)
# means = _sums / (_counts + 1e-14)
# stds = np.sqrt(_squared_sums / _counts - means ** 2)
# print 'means', means
# print 'stdevs', stds
# map up to original set of anchors
#print(labels.shape, total_anchors, inds_inside.shape, inds_inside[0], inds_inside[-1])
labels = AA._unmap(labels, total_anchors, inds_inside, fill=-1)
bbox_targets = AA._unmap(bbox_targets,
total_anchors,
inds_inside,
fill=0)
bbox_weights = AA._unmap(bbox_weights,
total_anchors,
inds_inside,
fill=0)
if landmark:
landmark_targets = AA._unmap(landmark_targets,
total_anchors,
inds_inside,
fill=0)
landmark_weights = AA._unmap(landmark_weights,
total_anchors,
inds_inside,
fill=0)
#print('CC', anchors.shape, len(inds_inside))
bbox_targets[:,
0::4] = bbox_targets[:, 0::4] / config.TRAIN.BBOX_STDS[0]
bbox_targets[:,
1::4] = bbox_targets[:, 1::4] / config.TRAIN.BBOX_STDS[1]
bbox_targets[:,
2::4] = bbox_targets[:, 2::4] / config.TRAIN.BBOX_STDS[2]
bbox_targets[:,
3::4] = bbox_targets[:, 3::4] / config.TRAIN.BBOX_STDS[3]
        if landmark:
            landmark_targets /= config.TRAIN.LANDMARK_STD
#print('applied STD')
#if DEBUG:
# if gt_boxes.size > 0:
# print 'rpn: max max_overlaps', np.max(max_overlaps)
# print 'rpn: num_positives', np.sum(labels == 1)
# print 'rpn: num_negatives', np.sum(labels == 0)
# _fg_sum = np.sum(labels == 1)
# _bg_sum = np.sum(labels == 0)
# _count = 1
# print 'rpn: num_positive avg', _fg_sum / _count
# print 'rpn: num_negative avg', _bg_sum / _count
        # reshape
label_list = list()
bbox_target_list = list()
bbox_weight_list = list()
if landmark:
landmark_target_list = list()
landmark_weight_list = list()
anchors_num_range = [0] + anchors_num_list
label = {}
for i in range(len(feat_strides)):
stride = feat_strides[i]
feat_height, feat_width = feat_infos[i]
A = A_list[i]
_label = labels[sum(anchors_num_range[:i + 1]
):sum(anchors_num_range[:i + 1]) +
anchors_num_range[i + 1]]
if select_stride > 0 and stride != select_stride:
#print('set', stride, select_stride)
_label[:] = -1
#print('_label', _label.shape, select_stride)
#_fg_inds = np.where(_label == 1)[0]
#n_fg = len(_fg_inds)
#STAT[0]+=1
#STAT[stride]+=n_fg
#if STAT[0]%100==0:
# print('rpn_stat', STAT, file=sys.stderr)
bbox_target = bbox_targets[sum(anchors_num_range[:i + 1]
):sum(anchors_num_range[:i + 1]) +
anchors_num_range[i + 1]]
bbox_weight = bbox_weights[sum(anchors_num_range[:i + 1]
):sum(anchors_num_range[:i + 1]) +
anchors_num_range[i + 1]]
if landmark:
landmark_target = landmark_targets[
sum(anchors_num_range[:i +
1]):sum(anchors_num_range[:i + 1]) +
anchors_num_range[i + 1]]
landmark_weight = landmark_weights[
sum(anchors_num_range[:i +
1]):sum(anchors_num_range[:i + 1]) +
anchors_num_range[i + 1]]
_label = _label.reshape(
(1, feat_height, feat_width, A)).transpose(0, 3, 1, 2)
_label = _label.reshape((1, A * feat_height * feat_width))
bbox_target = bbox_target.reshape(
(1, feat_height * feat_width,
A * bbox_pred_len)).transpose(0, 2, 1)
bbox_weight = bbox_weight.reshape(
(1, feat_height * feat_width, A * bbox_pred_len)).transpose(
(0, 2, 1))
label['%s_label_stride%d' % (prefix, stride)] = _label
label['%s_bbox_target_stride%d' % (prefix, stride)] = bbox_target
label['%s_bbox_weight_stride%d' % (prefix, stride)] = bbox_weight
if landmark:
landmark_target = landmark_target.reshape(
(1, feat_height * feat_width,
A * landmark_pred_len)).transpose(0, 2, 1)
landmark_weight = landmark_weight.reshape(
(1, feat_height * feat_width,
A * landmark_pred_len)).transpose((0, 2, 1))
label['%s_landmark_target_stride%d' %
(prefix, stride)] = landmark_target
label['%s_landmark_weight_stride%d' %
(prefix, stride)] = landmark_weight
#print('in_rpn', stride,_label.shape, bbox_target.shape, bbox_weight.shape, file=sys.stderr)
label_list.append(_label)
#print('DD', _label.shape)
bbox_target_list.append(bbox_target)
bbox_weight_list.append(bbox_weight)
if landmark:
landmark_target_list.append(landmark_target)
landmark_weight_list.append(landmark_weight)
label_concat = np.concatenate(label_list, axis=1)
bbox_target_concat = np.concatenate(bbox_target_list, axis=2)
bbox_weight_concat = np.concatenate(bbox_weight_list, axis=2)
#fg_inds = np.where(label_concat[0] == 1)[0]
#print('fg_inds_in_rpn2', fg_inds, file=sys.stderr)
label.update({
'%s_label' % prefix: label_concat,
'%s_bbox_target' % prefix: bbox_target_concat,
'%s_bbox_weight' % prefix: bbox_weight_concat
})
if landmark:
landmark_target_concat = np.concatenate(landmark_target_list,
axis=2)
landmark_weight_concat = np.concatenate(landmark_weight_list,
axis=2)
label['%s_landmark_target' % prefix] = landmark_target_concat
label['%s_landmark_weight' % prefix] = landmark_weight_concat
#tb = datetime.datetime.now()
#self._times[3] += (tb-ta).total_seconds()
#ta = datetime.datetime.now()
#print(self._times)
return label
| insightface/detection/retinaface/rcnn/io/rpn.py/0 | {
"file_path": "insightface/detection/retinaface/rcnn/io/rpn.py",
"repo_id": "insightface",
"token_count": 20905
} | 94 |
/**************************************************************************
* Microsoft COCO Toolbox. version 2.0
* Data, paper, and tutorials available at: http://mscoco.org/
* Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
* Licensed under the Simplified BSD License [see coco/license.txt]
**************************************************************************/
#pragma once
typedef unsigned int uint;
typedef unsigned long siz;
typedef unsigned char byte;
typedef double* BB;
typedef struct { siz h, w, m; uint *cnts; } RLE;
/* Initialize/destroy RLE. */
void rleInit( RLE *R, siz h, siz w, siz m, uint *cnts );
void rleFree( RLE *R );
/* Initialize/destroy RLE array. */
void rlesInit( RLE **R, siz n );
void rlesFree( RLE **R, siz n );
/* Encode binary masks using RLE. */
void rleEncode( RLE *R, const byte *mask, siz h, siz w, siz n );
/* Decode binary masks encoded via RLE. */
void rleDecode( const RLE *R, byte *mask, siz n );
/* Compute union or intersection of encoded masks. */
void rleMerge( const RLE *R, RLE *M, siz n, int intersect );
/* Compute area of encoded masks. */
void rleArea( const RLE *R, siz n, uint *a );
/* Compute intersection over union between masks. */
void rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o );
/* Compute non-maximum suppression between bounding masks */
void rleNms( RLE *dt, siz n, uint *keep, double thr );
/* Compute intersection over union between bounding boxes. */
void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o );
/* Compute non-maximum suppression between bounding boxes */
void bbNms( BB dt, siz n, uint *keep, double thr );
/* Get bounding boxes surrounding encoded masks. */
void rleToBbox( const RLE *R, BB bb, siz n );
/* Convert bounding boxes to encoded masks. */
void rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n );
/* Convert polygon to encoded mask. */
void rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w );
/* Get compressed string representation of encoded mask. */
char* rleToString( const RLE *R );
/* Convert from compressed string representation of encoded mask. */
void rleFrString( RLE *R, char *s, siz h, siz w );
| insightface/detection/retinaface/rcnn/pycocotools/maskApi.h/0 | {
"file_path": "insightface/detection/retinaface/rcnn/pycocotools/maskApi.h",
"repo_id": "insightface",
"token_count": 733
} | 95 |
from __future__ import print_function
import argparse
import logging
import pprint
import mxnet as mx
from ..config import config, default, generate_config
from ..symbol import *
from ..core import callback, metric
from ..core.loader import AnchorLoaderFPN
from ..core.module import MutableModule
from ..utils.load_data import load_gt_roidb, merge_roidb, filter_roidb
from ..utils.load_model import load_param
def train_rpn(network, dataset, image_set, root_path, dataset_path, frequent,
kvstore, work_load_list, no_flip, no_shuffle, resume, ctx,
pretrained, epoch, prefix, begin_epoch, end_epoch, train_shared,
lr, lr_step):
# set up logger
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# setup config
assert config.TRAIN.BATCH_IMAGES == 1
# load symbol
sym = eval('get_' + network + '_rpn')()
feat_sym = []
for stride in config.RPN_FEAT_STRIDE:
feat_sym.append(sym.get_internals()['rpn_cls_score_stride%s_output' %
stride])
# setup multi-gpu
batch_size = len(ctx)
input_batch_size = config.TRAIN.BATCH_IMAGES * batch_size
# print config
pprint.pprint(config)
# load dataset and prepare imdb for training
image_sets = [iset for iset in image_set.split('+')]
roidbs = [
load_gt_roidb(dataset,
image_set,
root_path,
dataset_path,
flip=not no_flip) for image_set in image_sets
]
roidb = merge_roidb(roidbs)
roidb = filter_roidb(roidb)
# load training data
#train_data = AnchorLoaderFPN(feat_sym, roidb, batch_size=input_batch_size, shuffle=not no_shuffle,
# ctx=ctx, work_load_list=work_load_list,
# feat_stride=config.RPN_FEAT_STRIDE, anchor_scales=config.ANCHOR_SCALES,
# anchor_ratios=config.ANCHOR_RATIOS, aspect_grouping=config.TRAIN.ASPECT_GROUPING,
# allowed_border=9999)
train_data = AnchorLoaderFPN(feat_sym,
roidb,
batch_size=input_batch_size,
shuffle=not no_shuffle,
ctx=ctx,
work_load_list=work_load_list)
# infer max shape
max_data_shape = [('data', (input_batch_size, 3,
max([v[0] for v in config.SCALES]),
max([v[1] for v in config.SCALES])))]
max_data_shape, max_label_shape = train_data.infer_shape(max_data_shape)
    print('providing maximum shape', max_data_shape, max_label_shape)
# infer shape
data_shape_dict = dict(train_data.provide_data + train_data.provide_label)
arg_shape, out_shape, aux_shape = sym.infer_shape(**data_shape_dict)
arg_shape_dict = dict(zip(sym.list_arguments(), arg_shape))
out_shape_dict = zip(sym.list_outputs(), out_shape)
aux_shape_dict = dict(zip(sym.list_auxiliary_states(), aux_shape))
    print('output shape')
pprint.pprint(out_shape_dict)
# load and initialize params
if resume:
arg_params, aux_params = load_param(prefix, begin_epoch, convert=True)
else:
arg_params, aux_params = load_param(pretrained, epoch, convert=True)
init = mx.init.Xavier(factor_type="in",
rnd_type='gaussian',
magnitude=2)
init_internal = mx.init.Normal(sigma=0.01)
for k in sym.list_arguments():
if k in data_shape_dict:
continue
if k not in arg_params:
            print('init', k)
arg_params[k] = mx.nd.zeros(shape=arg_shape_dict[k])
if not k.endswith('bias'):
init_internal(k, arg_params[k])
for k in sym.list_auxiliary_states():
if k not in aux_params:
            print('init', k)
aux_params[k] = mx.nd.zeros(shape=aux_shape_dict[k])
init(k, aux_params[k])
# check parameter shapes
for k in sym.list_arguments():
if k in data_shape_dict:
continue
assert k in arg_params, k + ' not initialized'
assert arg_params[k].shape == arg_shape_dict[k], \
'shape inconsistent for ' + k + ' inferred ' + str(arg_shape_dict[k]) + ' provided ' + str(arg_params[k].shape)
for k in sym.list_auxiliary_states():
assert k in aux_params, k + ' not initialized'
assert aux_params[k].shape == aux_shape_dict[k], \
'shape inconsistent for ' + k + ' inferred ' + str(aux_shape_dict[k]) + ' provided ' + str(aux_params[k].shape)
# create solver
data_names = [k[0] for k in train_data.provide_data]
label_names = [k[0] for k in train_data.provide_label]
if train_shared:
fixed_param_prefix = config.FIXED_PARAMS_SHARED
else:
fixed_param_prefix = config.FIXED_PARAMS
mod = MutableModule(sym,
data_names=data_names,
label_names=label_names,
logger=logger,
context=ctx,
work_load_list=work_load_list,
max_data_shapes=max_data_shape,
max_label_shapes=max_label_shape,
fixed_param_prefix=fixed_param_prefix)
# decide training params
# metric
eval_metric = metric.RPNAccMetric()
cls_metric = metric.RPNLogLossMetric()
bbox_metric = metric.RPNL1LossMetric()
eval_metrics = mx.metric.CompositeEvalMetric()
for child_metric in [eval_metric, cls_metric, bbox_metric]:
eval_metrics.add(child_metric)
# callback
batch_end_callback = []
batch_end_callback.append(
mx.callback.Speedometer(train_data.batch_size, frequent=frequent))
epoch_end_callback = mx.callback.do_checkpoint(prefix)
# decide learning rate
base_lr = lr
lr_factor = 0.1
lr_epoch = [int(epoch) for epoch in lr_step.split(',')]
lr_epoch_diff = [
epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch
]
lr = base_lr * (lr_factor**(len(lr_epoch) - len(lr_epoch_diff)))
lr_iters = [
int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff
]
    print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters)
lr_scheduler = mx.lr_scheduler.MultiFactorScheduler(lr_iters, lr_factor)
# optimizer
optimizer_params = {
'momentum': 0.9,
'wd': 0.0001,
'learning_rate': lr,
'lr_scheduler': lr_scheduler,
'rescale_grad': (1.0 / batch_size),
'clip_gradient': 5
}
# train
mod.fit(train_data,
eval_metric=eval_metrics,
epoch_end_callback=epoch_end_callback,
batch_end_callback=batch_end_callback,
kvstore=kvstore,
optimizer='sgd',
optimizer_params=optimizer_params,
arg_params=arg_params,
aux_params=aux_params,
begin_epoch=begin_epoch,
num_epoch=end_epoch)
def parse_args():
parser = argparse.ArgumentParser(
description='Train a Region Proposal Network')
# general
parser.add_argument('--network',
help='network name',
default=default.network,
type=str)
parser.add_argument('--dataset',
help='dataset name',
default=default.dataset,
type=str)
args, rest = parser.parse_known_args()
generate_config(args.network, args.dataset)
parser.add_argument('--image_set',
help='image_set name',
default=default.image_set,
type=str)
parser.add_argument('--root_path',
help='output data folder',
default=default.root_path,
type=str)
parser.add_argument('--dataset_path',
help='dataset path',
default=default.dataset_path,
type=str)
# training
parser.add_argument('--frequent',
help='frequency of logging',
default=default.frequent,
type=int)
parser.add_argument('--kvstore',
help='the kv-store type',
default=default.kvstore,
type=str)
parser.add_argument('--work_load_list',
help='work load for different devices',
default=None,
type=list)
parser.add_argument('--no_flip',
help='disable flip images',
action='store_true')
parser.add_argument('--no_shuffle',
help='disable random shuffle',
action='store_true')
parser.add_argument('--resume',
help='continue training',
action='store_true')
# rpn
parser.add_argument('--gpus',
help='GPU device to train with',
default='0',
type=str)
parser.add_argument('--pretrained',
help='pretrained model prefix',
default=default.pretrained,
type=str)
parser.add_argument('--pretrained_epoch',
help='pretrained model epoch',
default=default.pretrained_epoch,
type=int)
parser.add_argument('--prefix',
help='new model prefix',
default=default.rpn_prefix,
type=str)
parser.add_argument('--begin_epoch',
help='begin epoch of training',
default=0,
type=int)
parser.add_argument('--end_epoch',
help='end epoch of training',
default=default.rpn_epoch,
type=int)
parser.add_argument('--lr',
help='base learning rate',
default=default.rpn_lr,
type=float)
parser.add_argument('--lr_step',
help='learning rate steps (in epoch)',
default=default.rpn_lr_step,
type=str)
parser.add_argument('--train_shared',
help='second round train shared params',
action='store_true')
args = parser.parse_args()
return args
def main():
args = parse_args()
    print('Called with argument:', args)
ctx = [mx.gpu(int(i)) for i in args.gpus.split(',')]
train_rpn(args.network,
args.dataset,
args.image_set,
args.root_path,
args.dataset_path,
args.frequent,
args.kvstore,
args.work_load_list,
args.no_flip,
args.no_shuffle,
args.resume,
ctx,
args.pretrained,
args.pretrained_epoch,
args.prefix,
args.begin_epoch,
args.end_epoch,
train_shared=args.train_shared,
lr=args.lr,
lr_step=args.lr_step)
if __name__ == '__main__':
main()
| insightface/detection/retinaface/rcnn/tools/train_rpn.py/0 | {
"file_path": "insightface/detection/retinaface/rcnn/tools/train_rpn.py",
"repo_id": "insightface",
"token_count": 6176
} | 96 |
# dataset settings
dataset_type = 'WIDERFaceDataset'
data_root = 'data/WIDERFace/'
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile', to_float32=True),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='PhotoMetricDistortion',
brightness_delta=32,
contrast_range=(0.5, 1.5),
saturation_range=(0.5, 1.5),
hue_delta=18),
dict(
type='Expand',
mean=img_norm_cfg['mean'],
to_rgb=img_norm_cfg['to_rgb'],
ratio_range=(1, 4)),
dict(
type='MinIoURandomCrop',
min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
min_crop_size=0.3),
dict(type='Resize', img_scale=(300, 300), keep_ratio=False),
dict(type='Normalize', **img_norm_cfg),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(300, 300),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=False),
dict(type='Normalize', **img_norm_cfg),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
])
]
data = dict(
samples_per_gpu=60,
workers_per_gpu=2,
train=dict(
type='RepeatDataset',
times=2,
dataset=dict(
type=dataset_type,
ann_file=data_root + 'train.txt',
img_prefix=data_root + 'WIDER_train/',
min_size=17,
pipeline=train_pipeline)),
val=dict(
type=dataset_type,
ann_file=data_root + 'val.txt',
img_prefix=data_root + 'WIDER_val/',
pipeline=test_pipeline),
test=dict(
type=dataset_type,
ann_file=data_root + 'val.txt',
img_prefix=data_root + 'WIDER_val/',
pipeline=test_pipeline))
| insightface/detection/scrfd/configs/_base_/datasets/wider_face.py/0 | {
"file_path": "insightface/detection/scrfd/configs/_base_/datasets/wider_face.py",
"repo_id": "insightface",
"token_count": 1019
} | 97 |
import torch
def images_to_levels(target, num_levels):
"""Convert targets by image to targets by feature level.
[target_img0, target_img1] -> [target_level0, target_level1, ...]
"""
target = torch.stack(target, 0)
level_targets = []
start = 0
for n in num_levels:
end = start + n
# level_targets.append(target[:, start:end].squeeze(0))
level_targets.append(target[:, start:end])
start = end
return level_targets
def anchor_inside_flags(flat_anchors,
valid_flags,
img_shape,
allowed_border=0):
"""Check whether the anchors are inside the border.
Args:
flat_anchors (torch.Tensor): Flatten anchors, shape (n, 4).
valid_flags (torch.Tensor): An existing valid flags of anchors.
img_shape (tuple(int)): Shape of current image.
allowed_border (int, optional): The border to allow the valid anchor.
Defaults to 0.
Returns:
torch.Tensor: Flags indicating whether the anchors are inside a \
valid range.
"""
img_h, img_w = img_shape[:2]
if allowed_border >= 0:
inside_flags = valid_flags & \
(flat_anchors[:, 0] >= -allowed_border) & \
(flat_anchors[:, 1] >= -allowed_border) & \
(flat_anchors[:, 2] < img_w + allowed_border) & \
(flat_anchors[:, 3] < img_h + allowed_border)
else:
inside_flags = valid_flags
return inside_flags
def calc_region(bbox, ratio, featmap_size=None):
"""Calculate a proportional bbox region.
    The bbox center is kept fixed; each side is moved towards the center by
    `ratio` of the box size, so the new width and height are
    (1 - 2 * ratio) * w and (1 - 2 * ratio) * h.
Args:
bbox (Tensor): Bboxes to calculate regions, shape (n, 4).
ratio (float): Ratio of the output region.
featmap_size (tuple): Feature map size used for clipping the boundary.
Returns:
tuple: x1, y1, x2, y2
"""
x1 = torch.round((1 - ratio) * bbox[0] + ratio * bbox[2]).long()
y1 = torch.round((1 - ratio) * bbox[1] + ratio * bbox[3]).long()
x2 = torch.round(ratio * bbox[0] + (1 - ratio) * bbox[2]).long()
y2 = torch.round(ratio * bbox[1] + (1 - ratio) * bbox[3]).long()
if featmap_size is not None:
x1 = x1.clamp(min=0, max=featmap_size[1])
y1 = y1.clamp(min=0, max=featmap_size[0])
x2 = x2.clamp(min=0, max=featmap_size[1])
y2 = y2.clamp(min=0, max=featmap_size[0])
return (x1, y1, x2, y2)
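
# --- Illustrative sketch (not part of the original file) ---
# calc_region keeps the bbox center fixed and moves each side inwards by
# `ratio` of the box size, so width/height shrink to (1 - 2 * ratio) of the
# original; the optional featmap_size clips the result.
if __name__ == '__main__':
    bbox = torch.tensor([0., 0., 100., 100.])
    print(calc_region(bbox, ratio=0.2))
    # -> (tensor(20), tensor(20), tensor(80), tensor(80))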
| insightface/detection/scrfd/mmdet/core/anchor/utils.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/core/anchor/utils.py",
"repo_id": "insightface",
"token_count": 1128
} | 98 |
import numpy as np
import torch
from ..builder import BBOX_CODERS
from .base_bbox_coder import BaseBBoxCoder
@BBOX_CODERS.register_module()
class DeltaXYWHBBoxCoder(BaseBBoxCoder):
"""Delta XYWH BBox coder.
Following the practice in `R-CNN <https://arxiv.org/abs/1311.2524>`_,
this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and
decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2).
Args:
target_means (Sequence[float]): Denormalizing means of target for
delta coordinates
target_stds (Sequence[float]): Denormalizing standard deviation of
target for delta coordinates
clip_border (bool, optional): Whether clip the objects outside the
border of the image. Defaults to True.
"""
def __init__(self,
target_means=(0., 0., 0., 0.),
target_stds=(1., 1., 1., 1.),
clip_border=True):
super(BaseBBoxCoder, self).__init__()
self.means = target_means
self.stds = target_stds
self.clip_border = clip_border
def encode(self, bboxes, gt_bboxes):
"""Get box regression transformation deltas that can be used to
transform the ``bboxes`` into the ``gt_bboxes``.
Args:
bboxes (torch.Tensor): Source boxes, e.g., object proposals.
gt_bboxes (torch.Tensor): Target of the transformation, e.g.,
ground-truth boxes.
Returns:
torch.Tensor: Box transformation deltas
"""
assert bboxes.size(0) == gt_bboxes.size(0)
assert bboxes.size(-1) == gt_bboxes.size(-1) == 4
encoded_bboxes = bbox2delta(bboxes, gt_bboxes, self.means, self.stds)
return encoded_bboxes
def decode(self,
bboxes,
pred_bboxes,
max_shape=None,
wh_ratio_clip=16 / 1000):
"""Apply transformation `pred_bboxes` to `boxes`.
Args:
boxes (torch.Tensor): Basic boxes.
            pred_bboxes (torch.Tensor): Encoded boxes with shape (N, 4) or
                (N, 4 * num_classes).
max_shape (tuple[int], optional): Maximum shape of boxes.
Defaults to None.
wh_ratio_clip (float, optional): The allowed ratio between
width and height.
Returns:
torch.Tensor: Decoded boxes.
"""
assert pred_bboxes.size(0) == bboxes.size(0)
decoded_bboxes = delta2bbox(bboxes, pred_bboxes, self.means, self.stds,
max_shape, wh_ratio_clip, self.clip_border)
return decoded_bboxes
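# --- Usage sketch (added for illustration, not part of the original file) ---
# Encoding proposals against ground-truth boxes and decoding the deltas back
# should recover the ground truth up to floating point error. The boxes are
# arbitrary example values.
def _demo_delta_xywh_coder():
    coder = DeltaXYWHBBoxCoder()
    proposals = torch.tensor([[0., 0., 10., 10.]])
    gt = torch.tensor([[2., 2., 12., 14.]])
    deltas = coder.encode(proposals, gt)
    decoded = coder.decode(proposals, deltas, max_shape=(20, 20))
    return torch.allclose(decoded, gt, atol=1e-4)  # expected: True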
def bbox2delta(proposals, gt, means=(0., 0., 0., 0.), stds=(1., 1., 1., 1.)):
"""Compute deltas of proposals w.r.t. gt.
We usually compute the deltas of x, y, w, h of proposals w.r.t ground
truth bboxes to get regression target.
This is the inverse function of :func:`delta2bbox`.
Args:
proposals (Tensor): Boxes to be transformed, shape (N, ..., 4)
gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4)
means (Sequence[float]): Denormalizing means for delta coordinates
stds (Sequence[float]): Denormalizing standard deviation for delta
coordinates
Returns:
Tensor: deltas with shape (N, 4), where columns represent dx, dy,
dw, dh.
"""
assert proposals.size() == gt.size()
proposals = proposals.float()
gt = gt.float()
px = (proposals[..., 0] + proposals[..., 2]) * 0.5
py = (proposals[..., 1] + proposals[..., 3]) * 0.5
pw = proposals[..., 2] - proposals[..., 0]
ph = proposals[..., 3] - proposals[..., 1]
gx = (gt[..., 0] + gt[..., 2]) * 0.5
gy = (gt[..., 1] + gt[..., 3]) * 0.5
gw = gt[..., 2] - gt[..., 0]
gh = gt[..., 3] - gt[..., 1]
dx = (gx - px) / pw
dy = (gy - py) / ph
dw = torch.log(gw / pw)
dh = torch.log(gh / ph)
deltas = torch.stack([dx, dy, dw, dh], dim=-1)
means = deltas.new_tensor(means).unsqueeze(0)
stds = deltas.new_tensor(stds).unsqueeze(0)
deltas = deltas.sub_(means).div_(stds)
return deltas
def delta2bbox(rois,
deltas,
means=(0., 0., 0., 0.),
stds=(1., 1., 1., 1.),
max_shape=None,
wh_ratio_clip=16 / 1000,
clip_border=True):
"""Apply deltas to shift/scale base boxes.
Typically the rois are anchor or proposed bounding boxes and the deltas are
network outputs used to shift/scale those boxes.
This is the inverse function of :func:`bbox2delta`.
Args:
rois (Tensor): Boxes to be transformed. Has shape (N, 4)
deltas (Tensor): Encoded offsets with respect to each roi.
Has shape (N, 4 * num_classes). Note N = num_anchors * W * H when
rois is a grid of anchors. Offset encoding follows [1]_.
means (Sequence[float]): Denormalizing means for delta coordinates
stds (Sequence[float]): Denormalizing standard deviation for delta
coordinates
        max_shape (tuple[int, int]): Maximum bounds for boxes, specified as (H, W).
wh_ratio_clip (float): Maximum aspect ratio for boxes.
clip_border (bool, optional): Whether clip the objects outside the
border of the image. Defaults to True.
Returns:
Tensor: Boxes with shape (N, 4), where columns represent
tl_x, tl_y, br_x, br_y.
References:
.. [1] https://arxiv.org/abs/1311.2524
Example:
>>> rois = torch.Tensor([[ 0., 0., 1., 1.],
>>> [ 0., 0., 1., 1.],
>>> [ 0., 0., 1., 1.],
>>> [ 5., 5., 5., 5.]])
>>> deltas = torch.Tensor([[ 0., 0., 0., 0.],
>>> [ 1., 1., 1., 1.],
>>> [ 0., 0., 2., -1.],
>>> [ 0.7, -1.9, -0.5, 0.3]])
>>> delta2bbox(rois, deltas, max_shape=(32, 32))
tensor([[0.0000, 0.0000, 1.0000, 1.0000],
[0.1409, 0.1409, 2.8591, 2.8591],
[0.0000, 0.3161, 4.1945, 0.6839],
[5.0000, 5.0000, 5.0000, 5.0000]])
"""
means = deltas.new_tensor(means).view(1, -1).repeat(1, deltas.size(1) // 4)
stds = deltas.new_tensor(stds).view(1, -1).repeat(1, deltas.size(1) // 4)
denorm_deltas = deltas * stds + means
dx = denorm_deltas[:, 0::4]
dy = denorm_deltas[:, 1::4]
dw = denorm_deltas[:, 2::4]
dh = denorm_deltas[:, 3::4]
max_ratio = np.abs(np.log(wh_ratio_clip))
dw = dw.clamp(min=-max_ratio, max=max_ratio)
dh = dh.clamp(min=-max_ratio, max=max_ratio)
# Compute center of each roi
px = ((rois[:, 0] + rois[:, 2]) * 0.5).unsqueeze(1).expand_as(dx)
py = ((rois[:, 1] + rois[:, 3]) * 0.5).unsqueeze(1).expand_as(dy)
# Compute width/height of each roi
pw = (rois[:, 2] - rois[:, 0]).unsqueeze(1).expand_as(dw)
ph = (rois[:, 3] - rois[:, 1]).unsqueeze(1).expand_as(dh)
# Use exp(network energy) to enlarge/shrink each roi
gw = pw * dw.exp()
gh = ph * dh.exp()
# Use network energy to shift the center of each roi
gx = px + pw * dx
gy = py + ph * dy
# Convert center-xy/width/height to top-left, bottom-right
x1 = gx - gw * 0.5
y1 = gy - gh * 0.5
x2 = gx + gw * 0.5
y2 = gy + gh * 0.5
if clip_border and max_shape is not None:
x1 = x1.clamp(min=0, max=max_shape[1])
y1 = y1.clamp(min=0, max=max_shape[0])
x2 = x2.clamp(min=0, max=max_shape[1])
y2 = y2.clamp(min=0, max=max_shape[0])
bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view(deltas.size())
return bboxes
| insightface/detection/scrfd/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py",
"repo_id": "insightface",
"token_count": 3794
} | 99 |
import torch
from ..builder import BBOX_SAMPLERS
from .base_sampler import BaseSampler
@BBOX_SAMPLERS.register_module()
class RandomSampler(BaseSampler):
"""Random sampler.
Args:
num (int): Number of samples
pos_fraction (float): Fraction of positive samples
        neg_pos_ub (int, optional): Upper bound number of negative and
positive samples. Defaults to -1.
add_gt_as_proposals (bool, optional): Whether to add ground truth
boxes as proposals. Defaults to True.
"""
def __init__(self,
num,
pos_fraction,
neg_pos_ub=-1,
add_gt_as_proposals=True,
**kwargs):
from mmdet.core.bbox import demodata
super(RandomSampler, self).__init__(num, pos_fraction, neg_pos_ub,
add_gt_as_proposals)
self.rng = demodata.ensure_rng(kwargs.get('rng', None))
def random_choice(self, gallery, num):
"""Random select some elements from the gallery.
If `gallery` is a Tensor, the returned indices will be a Tensor;
        If `gallery` is an ndarray or list, the returned indices will be an
        ndarray.
Args:
gallery (Tensor | ndarray | list): indices pool.
num (int): expected sample num.
Returns:
Tensor or ndarray: sampled indices.
"""
assert len(gallery) >= num
is_tensor = isinstance(gallery, torch.Tensor)
if not is_tensor:
if torch.cuda.is_available():
device = torch.cuda.current_device()
else:
device = 'cpu'
gallery = torch.tensor(gallery, dtype=torch.long, device=device)
perm = torch.randperm(gallery.numel(), device=gallery.device)[:num]
rand_inds = gallery[perm]
if not is_tensor:
rand_inds = rand_inds.cpu().numpy()
return rand_inds
def _sample_pos(self, assign_result, num_expected, **kwargs):
"""Randomly sample some positive samples."""
pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False)
if pos_inds.numel() != 0:
pos_inds = pos_inds.squeeze(1)
if pos_inds.numel() <= num_expected:
return pos_inds
else:
return self.random_choice(pos_inds, num_expected)
def _sample_neg(self, assign_result, num_expected, **kwargs):
"""Randomly sample some negative samples."""
neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False)
if neg_inds.numel() != 0:
neg_inds = neg_inds.squeeze(1)
if len(neg_inds) <= num_expected:
return neg_inds
else:
return self.random_choice(neg_inds, num_expected)
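# --- Usage sketch (added for illustration, not part of the original file) ---
# random_choice accepts a Tensor, ndarray or list and draws `num` unique
# indices from it; the gallery below is a made-up index pool and the sampler
# arguments are typical but arbitrary values.
def _demo_random_choice():
    sampler = RandomSampler(num=8, pos_fraction=0.5)
    gallery = torch.arange(100)
    picked = sampler.random_choice(gallery, 8)
    assert picked.numel() == 8
    return picked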
| insightface/detection/scrfd/mmdet/core/bbox/samplers/random_sampler.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/core/bbox/samplers/random_sampler.py",
"repo_id": "insightface",
"token_count": 1336
} | 100 |
import numpy as np
import torch
from torch.nn.modules.utils import _pair
def mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list,
cfg):
"""Compute mask target for positive proposals in multiple images.
Args:
pos_proposals_list (list[Tensor]): Positive proposals in multiple
images.
pos_assigned_gt_inds_list (list[Tensor]): Assigned GT indices for each
positive proposals.
gt_masks_list (list[:obj:`BaseInstanceMasks`]): Ground truth masks of
each image.
cfg (dict): Config dict that specifies the mask size.
Returns:
list[Tensor]: Mask target of each image.
"""
cfg_list = [cfg for _ in range(len(pos_proposals_list))]
mask_targets = map(mask_target_single, pos_proposals_list,
pos_assigned_gt_inds_list, gt_masks_list, cfg_list)
mask_targets = list(mask_targets)
if len(mask_targets) > 0:
mask_targets = torch.cat(mask_targets)
return mask_targets
def mask_target_single(pos_proposals, pos_assigned_gt_inds, gt_masks, cfg):
"""Compute mask target for each positive proposal in the image.
Args:
pos_proposals (Tensor): Positive proposals.
pos_assigned_gt_inds (Tensor): Assigned GT inds of positive proposals.
gt_masks (:obj:`BaseInstanceMasks`): GT masks in the format of Bitmap
or Polygon.
cfg (dict): Config dict that indicate the mask size.
Returns:
Tensor: Mask target of each positive proposals in the image.
"""
device = pos_proposals.device
mask_size = _pair(cfg.mask_size)
num_pos = pos_proposals.size(0)
if num_pos > 0:
proposals_np = pos_proposals.cpu().numpy()
maxh, maxw = gt_masks.height, gt_masks.width
proposals_np[:, [0, 2]] = np.clip(proposals_np[:, [0, 2]], 0, maxw)
proposals_np[:, [1, 3]] = np.clip(proposals_np[:, [1, 3]], 0, maxh)
pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy()
mask_targets = gt_masks.crop_and_resize(
proposals_np, mask_size, device=device,
inds=pos_assigned_gt_inds).to_ndarray()
mask_targets = torch.from_numpy(mask_targets).float().to(device)
else:
mask_targets = pos_proposals.new_zeros((0, ) + mask_size)
return mask_targets
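# --- Usage sketch (added for illustration, not part of the original file) ---
# Minimal single-image example. BitmapMasks and mask_size=28 are the usual
# mmdet choices, but the box/mask values are made up, and the call needs a
# full mmcv build because crop_and_resize uses the roi_align op.
def _demo_mask_target_single():
    from mmcv import Config
    from mmdet.core.mask.structures import BitmapMasks
    gt_masks = BitmapMasks(np.ones((1, 32, 32), dtype=np.uint8), 32, 32)
    pos_proposals = torch.tensor([[4., 4., 20., 20.]])
    pos_assigned_gt_inds = torch.tensor([0])
    cfg = Config(dict(mask_size=28))
    # Returns a float tensor of shape (1, 28, 28).
    return mask_target_single(pos_proposals, pos_assigned_gt_inds, gt_masks,
                              cfg)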
| insightface/detection/scrfd/mmdet/core/mask/mask_target.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/core/mask/mask_target.py",
"repo_id": "insightface",
"token_count": 1051
} | 101 |
import itertools
import logging
import os.path as osp
import tempfile
from collections import OrderedDict
import numpy as np
from mmcv.utils import print_log
from terminaltables import AsciiTable
from .builder import DATASETS
from .coco import CocoDataset
@DATASETS.register_module()
class LVISV05Dataset(CocoDataset):
CLASSES = (
'acorn', 'aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock',
'alcohol', 'alligator', 'almond', 'ambulance', 'amplifier', 'anklet',
'antenna', 'apple', 'apple_juice', 'applesauce', 'apricot', 'apron',
'aquarium', 'armband', 'armchair', 'armoire', 'armor', 'artichoke',
'trash_can', 'ashtray', 'asparagus', 'atomizer', 'avocado', 'award',
'awning', 'ax', 'baby_buggy', 'basketball_backboard', 'backpack',
'handbag', 'suitcase', 'bagel', 'bagpipe', 'baguet', 'bait', 'ball',
'ballet_skirt', 'balloon', 'bamboo', 'banana', 'Band_Aid', 'bandage',
'bandanna', 'banjo', 'banner', 'barbell', 'barge', 'barrel',
'barrette', 'barrow', 'baseball_base', 'baseball', 'baseball_bat',
'baseball_cap', 'baseball_glove', 'basket', 'basketball_hoop',
'basketball', 'bass_horn', 'bat_(animal)', 'bath_mat', 'bath_towel',
'bathrobe', 'bathtub', 'batter_(food)', 'battery', 'beachball', 'bead',
'beaker', 'bean_curd', 'beanbag', 'beanie', 'bear', 'bed',
'bedspread', 'cow', 'beef_(food)', 'beeper', 'beer_bottle', 'beer_can',
'beetle', 'bell', 'bell_pepper', 'belt', 'belt_buckle', 'bench',
'beret', 'bib', 'Bible', 'bicycle', 'visor', 'binder', 'binoculars',
'bird', 'birdfeeder', 'birdbath', 'birdcage', 'birdhouse',
'birthday_cake', 'birthday_card', 'biscuit_(bread)', 'pirate_flag',
'black_sheep', 'blackboard', 'blanket', 'blazer', 'blender', 'blimp',
'blinker', 'blueberry', 'boar', 'gameboard', 'boat', 'bobbin',
'bobby_pin', 'boiled_egg', 'bolo_tie', 'deadbolt', 'bolt', 'bonnet',
'book', 'book_bag', 'bookcase', 'booklet', 'bookmark',
'boom_microphone', 'boot', 'bottle', 'bottle_opener', 'bouquet',
'bow_(weapon)', 'bow_(decorative_ribbons)', 'bow-tie', 'bowl',
'pipe_bowl', 'bowler_hat', 'bowling_ball', 'bowling_pin',
'boxing_glove', 'suspenders', 'bracelet', 'brass_plaque', 'brassiere',
'bread-bin', 'breechcloth', 'bridal_gown', 'briefcase',
'bristle_brush', 'broccoli', 'broach', 'broom', 'brownie',
'brussels_sprouts', 'bubble_gum', 'bucket', 'horse_buggy', 'bull',
'bulldog', 'bulldozer', 'bullet_train', 'bulletin_board',
'bulletproof_vest', 'bullhorn', 'corned_beef', 'bun', 'bunk_bed',
'buoy', 'burrito', 'bus_(vehicle)', 'business_card', 'butcher_knife',
'butter', 'butterfly', 'button', 'cab_(taxi)', 'cabana', 'cabin_car',
'cabinet', 'locker', 'cake', 'calculator', 'calendar', 'calf',
'camcorder', 'camel', 'camera', 'camera_lens', 'camper_(vehicle)',
'can', 'can_opener', 'candelabrum', 'candle', 'candle_holder',
'candy_bar', 'candy_cane', 'walking_cane', 'canister', 'cannon',
'canoe', 'cantaloup', 'canteen', 'cap_(headwear)', 'bottle_cap',
'cape', 'cappuccino', 'car_(automobile)', 'railcar_(part_of_a_train)',
'elevator_car', 'car_battery', 'identity_card', 'card', 'cardigan',
'cargo_ship', 'carnation', 'horse_carriage', 'carrot', 'tote_bag',
'cart', 'carton', 'cash_register', 'casserole', 'cassette', 'cast',
'cat', 'cauliflower', 'caviar', 'cayenne_(spice)', 'CD_player',
'celery', 'cellular_telephone', 'chain_mail', 'chair', 'chaise_longue',
'champagne', 'chandelier', 'chap', 'checkbook', 'checkerboard',
'cherry', 'chessboard', 'chest_of_drawers_(furniture)',
'chicken_(animal)', 'chicken_wire', 'chickpea', 'Chihuahua',
'chili_(vegetable)', 'chime', 'chinaware', 'crisp_(potato_chip)',
'poker_chip', 'chocolate_bar', 'chocolate_cake', 'chocolate_milk',
'chocolate_mousse', 'choker', 'chopping_board', 'chopstick',
'Christmas_tree', 'slide', 'cider', 'cigar_box', 'cigarette',
'cigarette_case', 'cistern', 'clarinet', 'clasp', 'cleansing_agent',
'clementine', 'clip', 'clipboard', 'clock', 'clock_tower',
'clothes_hamper', 'clothespin', 'clutch_bag', 'coaster', 'coat',
'coat_hanger', 'coatrack', 'cock', 'coconut', 'coffee_filter',
'coffee_maker', 'coffee_table', 'coffeepot', 'coil', 'coin',
'colander', 'coleslaw', 'coloring_material', 'combination_lock',
'pacifier', 'comic_book', 'computer_keyboard', 'concrete_mixer',
'cone', 'control', 'convertible_(automobile)', 'sofa_bed', 'cookie',
'cookie_jar', 'cooking_utensil', 'cooler_(for_food)',
'cork_(bottle_plug)', 'corkboard', 'corkscrew', 'edible_corn',
'cornbread', 'cornet', 'cornice', 'cornmeal', 'corset',
'romaine_lettuce', 'costume', 'cougar', 'coverall', 'cowbell',
'cowboy_hat', 'crab_(animal)', 'cracker', 'crape', 'crate', 'crayon',
'cream_pitcher', 'credit_card', 'crescent_roll', 'crib', 'crock_pot',
'crossbar', 'crouton', 'crow', 'crown', 'crucifix', 'cruise_ship',
'police_cruiser', 'crumb', 'crutch', 'cub_(animal)', 'cube',
'cucumber', 'cufflink', 'cup', 'trophy_cup', 'cupcake', 'hair_curler',
'curling_iron', 'curtain', 'cushion', 'custard', 'cutting_tool',
'cylinder', 'cymbal', 'dachshund', 'dagger', 'dartboard',
'date_(fruit)', 'deck_chair', 'deer', 'dental_floss', 'desk',
'detergent', 'diaper', 'diary', 'die', 'dinghy', 'dining_table', 'tux',
'dish', 'dish_antenna', 'dishrag', 'dishtowel', 'dishwasher',
'dishwasher_detergent', 'diskette', 'dispenser', 'Dixie_cup', 'dog',
'dog_collar', 'doll', 'dollar', 'dolphin', 'domestic_ass', 'eye_mask',
'doorbell', 'doorknob', 'doormat', 'doughnut', 'dove', 'dragonfly',
'drawer', 'underdrawers', 'dress', 'dress_hat', 'dress_suit',
'dresser', 'drill', 'drinking_fountain', 'drone', 'dropper',
'drum_(musical_instrument)', 'drumstick', 'duck', 'duckling',
'duct_tape', 'duffel_bag', 'dumbbell', 'dumpster', 'dustpan',
'Dutch_oven', 'eagle', 'earphone', 'earplug', 'earring', 'easel',
'eclair', 'eel', 'egg', 'egg_roll', 'egg_yolk', 'eggbeater',
'eggplant', 'electric_chair', 'refrigerator', 'elephant', 'elk',
'envelope', 'eraser', 'escargot', 'eyepatch', 'falcon', 'fan',
'faucet', 'fedora', 'ferret', 'Ferris_wheel', 'ferry', 'fig_(fruit)',
'fighter_jet', 'figurine', 'file_cabinet', 'file_(tool)', 'fire_alarm',
'fire_engine', 'fire_extinguisher', 'fire_hose', 'fireplace',
'fireplug', 'fish', 'fish_(food)', 'fishbowl', 'fishing_boat',
'fishing_rod', 'flag', 'flagpole', 'flamingo', 'flannel', 'flash',
'flashlight', 'fleece', 'flip-flop_(sandal)', 'flipper_(footwear)',
'flower_arrangement', 'flute_glass', 'foal', 'folding_chair',
'food_processor', 'football_(American)', 'football_helmet',
'footstool', 'fork', 'forklift', 'freight_car', 'French_toast',
'freshener', 'frisbee', 'frog', 'fruit_juice', 'fruit_salad',
'frying_pan', 'fudge', 'funnel', 'futon', 'gag', 'garbage',
'garbage_truck', 'garden_hose', 'gargle', 'gargoyle', 'garlic',
'gasmask', 'gazelle', 'gelatin', 'gemstone', 'giant_panda',
'gift_wrap', 'ginger', 'giraffe', 'cincture',
'glass_(drink_container)', 'globe', 'glove', 'goat', 'goggles',
'goldfish', 'golf_club', 'golfcart', 'gondola_(boat)', 'goose',
'gorilla', 'gourd', 'surgical_gown', 'grape', 'grasshopper', 'grater',
'gravestone', 'gravy_boat', 'green_bean', 'green_onion', 'griddle',
'grillroom', 'grinder_(tool)', 'grits', 'grizzly', 'grocery_bag',
'guacamole', 'guitar', 'gull', 'gun', 'hair_spray', 'hairbrush',
'hairnet', 'hairpin', 'ham', 'hamburger', 'hammer', 'hammock',
'hamper', 'hamster', 'hair_dryer', 'hand_glass', 'hand_towel',
'handcart', 'handcuff', 'handkerchief', 'handle', 'handsaw',
'hardback_book', 'harmonium', 'hat', 'hatbox', 'hatch', 'veil',
'headband', 'headboard', 'headlight', 'headscarf', 'headset',
'headstall_(for_horses)', 'hearing_aid', 'heart', 'heater',
'helicopter', 'helmet', 'heron', 'highchair', 'hinge', 'hippopotamus',
'hockey_stick', 'hog', 'home_plate_(baseball)', 'honey', 'fume_hood',
'hook', 'horse', 'hose', 'hot-air_balloon', 'hotplate', 'hot_sauce',
'hourglass', 'houseboat', 'hummingbird', 'hummus', 'polar_bear',
'icecream', 'popsicle', 'ice_maker', 'ice_pack', 'ice_skate',
'ice_tea', 'igniter', 'incense', 'inhaler', 'iPod',
'iron_(for_clothing)', 'ironing_board', 'jacket', 'jam', 'jean',
'jeep', 'jelly_bean', 'jersey', 'jet_plane', 'jewelry', 'joystick',
'jumpsuit', 'kayak', 'keg', 'kennel', 'kettle', 'key', 'keycard',
'kilt', 'kimono', 'kitchen_sink', 'kitchen_table', 'kite', 'kitten',
'kiwi_fruit', 'knee_pad', 'knife', 'knight_(chess_piece)',
'knitting_needle', 'knob', 'knocker_(on_a_door)', 'koala', 'lab_coat',
'ladder', 'ladle', 'ladybug', 'lamb_(animal)', 'lamb-chop', 'lamp',
'lamppost', 'lampshade', 'lantern', 'lanyard', 'laptop_computer',
'lasagna', 'latch', 'lawn_mower', 'leather', 'legging_(clothing)',
'Lego', 'lemon', 'lemonade', 'lettuce', 'license_plate', 'life_buoy',
'life_jacket', 'lightbulb', 'lightning_rod', 'lime', 'limousine',
'linen_paper', 'lion', 'lip_balm', 'lipstick', 'liquor', 'lizard',
'Loafer_(type_of_shoe)', 'log', 'lollipop', 'lotion',
'speaker_(stero_equipment)', 'loveseat', 'machine_gun', 'magazine',
'magnet', 'mail_slot', 'mailbox_(at_home)', 'mallet', 'mammoth',
'mandarin_orange', 'manger', 'manhole', 'map', 'marker', 'martini',
'mascot', 'mashed_potato', 'masher', 'mask', 'mast',
'mat_(gym_equipment)', 'matchbox', 'mattress', 'measuring_cup',
'measuring_stick', 'meatball', 'medicine', 'melon', 'microphone',
'microscope', 'microwave_oven', 'milestone', 'milk', 'minivan',
'mint_candy', 'mirror', 'mitten', 'mixer_(kitchen_tool)', 'money',
'monitor_(computer_equipment) computer_monitor', 'monkey', 'motor',
'motor_scooter', 'motor_vehicle', 'motorboat', 'motorcycle',
'mound_(baseball)', 'mouse_(animal_rodent)',
'mouse_(computer_equipment)', 'mousepad', 'muffin', 'mug', 'mushroom',
'music_stool', 'musical_instrument', 'nailfile', 'nameplate', 'napkin',
'neckerchief', 'necklace', 'necktie', 'needle', 'nest', 'newsstand',
'nightshirt', 'nosebag_(for_animals)', 'noseband_(for_animals)',
'notebook', 'notepad', 'nut', 'nutcracker', 'oar', 'octopus_(food)',
'octopus_(animal)', 'oil_lamp', 'olive_oil', 'omelet', 'onion',
'orange_(fruit)', 'orange_juice', 'oregano', 'ostrich', 'ottoman',
'overalls_(clothing)', 'owl', 'packet', 'inkpad', 'pad', 'paddle',
'padlock', 'paintbox', 'paintbrush', 'painting', 'pajamas', 'palette',
'pan_(for_cooking)', 'pan_(metal_container)', 'pancake', 'pantyhose',
'papaya', 'paperclip', 'paper_plate', 'paper_towel', 'paperback_book',
'paperweight', 'parachute', 'parakeet', 'parasail_(sports)',
'parchment', 'parka', 'parking_meter', 'parrot',
'passenger_car_(part_of_a_train)', 'passenger_ship', 'passport',
'pastry', 'patty_(food)', 'pea_(food)', 'peach', 'peanut_butter',
'pear', 'peeler_(tool_for_fruit_and_vegetables)', 'pegboard',
'pelican', 'pen', 'pencil', 'pencil_box', 'pencil_sharpener',
'pendulum', 'penguin', 'pennant', 'penny_(coin)', 'pepper',
'pepper_mill', 'perfume', 'persimmon', 'baby', 'pet', 'petfood',
'pew_(church_bench)', 'phonebook', 'phonograph_record', 'piano',
'pickle', 'pickup_truck', 'pie', 'pigeon', 'piggy_bank', 'pillow',
'pin_(non_jewelry)', 'pineapple', 'pinecone', 'ping-pong_ball',
'pinwheel', 'tobacco_pipe', 'pipe', 'pistol', 'pita_(bread)',
'pitcher_(vessel_for_liquid)', 'pitchfork', 'pizza', 'place_mat',
'plate', 'platter', 'playing_card', 'playpen', 'pliers',
'plow_(farm_equipment)', 'pocket_watch', 'pocketknife',
'poker_(fire_stirring_tool)', 'pole', 'police_van', 'polo_shirt',
'poncho', 'pony', 'pool_table', 'pop_(soda)', 'portrait',
'postbox_(public)', 'postcard', 'poster', 'pot', 'flowerpot', 'potato',
'potholder', 'pottery', 'pouch', 'power_shovel', 'prawn', 'printer',
'projectile_(weapon)', 'projector', 'propeller', 'prune', 'pudding',
'puffer_(fish)', 'puffin', 'pug-dog', 'pumpkin', 'puncher', 'puppet',
'puppy', 'quesadilla', 'quiche', 'quilt', 'rabbit', 'race_car',
'racket', 'radar', 'radiator', 'radio_receiver', 'radish', 'raft',
'rag_doll', 'raincoat', 'ram_(animal)', 'raspberry', 'rat',
'razorblade', 'reamer_(juicer)', 'rearview_mirror', 'receipt',
'recliner', 'record_player', 'red_cabbage', 'reflector',
'remote_control', 'rhinoceros', 'rib_(food)', 'rifle', 'ring',
'river_boat', 'road_map', 'robe', 'rocking_chair', 'roller_skate',
'Rollerblade', 'rolling_pin', 'root_beer',
'router_(computer_equipment)', 'rubber_band', 'runner_(carpet)',
'plastic_bag', 'saddle_(on_an_animal)', 'saddle_blanket', 'saddlebag',
'safety_pin', 'sail', 'salad', 'salad_plate', 'salami',
'salmon_(fish)', 'salmon_(food)', 'salsa', 'saltshaker',
'sandal_(type_of_shoe)', 'sandwich', 'satchel', 'saucepan', 'saucer',
'sausage', 'sawhorse', 'saxophone', 'scale_(measuring_instrument)',
'scarecrow', 'scarf', 'school_bus', 'scissors', 'scoreboard',
'scrambled_eggs', 'scraper', 'scratcher', 'screwdriver',
'scrubbing_brush', 'sculpture', 'seabird', 'seahorse', 'seaplane',
'seashell', 'seedling', 'serving_dish', 'sewing_machine', 'shaker',
'shampoo', 'shark', 'sharpener', 'Sharpie', 'shaver_(electric)',
'shaving_cream', 'shawl', 'shears', 'sheep', 'shepherd_dog',
'sherbert', 'shield', 'shirt', 'shoe', 'shopping_bag', 'shopping_cart',
'short_pants', 'shot_glass', 'shoulder_bag', 'shovel', 'shower_head',
'shower_curtain', 'shredder_(for_paper)', 'sieve', 'signboard', 'silo',
'sink', 'skateboard', 'skewer', 'ski', 'ski_boot', 'ski_parka',
'ski_pole', 'skirt', 'sled', 'sleeping_bag', 'sling_(bandage)',
'slipper_(footwear)', 'smoothie', 'snake', 'snowboard', 'snowman',
'snowmobile', 'soap', 'soccer_ball', 'sock', 'soda_fountain',
'carbonated_water', 'sofa', 'softball', 'solar_array', 'sombrero',
'soup', 'soup_bowl', 'soupspoon', 'sour_cream', 'soya_milk',
'space_shuttle', 'sparkler_(fireworks)', 'spatula', 'spear',
'spectacles', 'spice_rack', 'spider', 'sponge', 'spoon', 'sportswear',
'spotlight', 'squirrel', 'stapler_(stapling_machine)', 'starfish',
'statue_(sculpture)', 'steak_(food)', 'steak_knife',
'steamer_(kitchen_appliance)', 'steering_wheel', 'stencil',
'stepladder', 'step_stool', 'stereo_(sound_system)', 'stew', 'stirrer',
'stirrup', 'stockings_(leg_wear)', 'stool', 'stop_sign', 'brake_light',
'stove', 'strainer', 'strap', 'straw_(for_drinking)', 'strawberry',
'street_sign', 'streetlight', 'string_cheese', 'stylus', 'subwoofer',
'sugar_bowl', 'sugarcane_(plant)', 'suit_(clothing)', 'sunflower',
'sunglasses', 'sunhat', 'sunscreen', 'surfboard', 'sushi', 'mop',
'sweat_pants', 'sweatband', 'sweater', 'sweatshirt', 'sweet_potato',
'swimsuit', 'sword', 'syringe', 'Tabasco_sauce', 'table-tennis_table',
'table', 'table_lamp', 'tablecloth', 'tachometer', 'taco', 'tag',
'taillight', 'tambourine', 'army_tank', 'tank_(storage_vessel)',
'tank_top_(clothing)', 'tape_(sticky_cloth_or_paper)', 'tape_measure',
'tapestry', 'tarp', 'tartan', 'tassel', 'tea_bag', 'teacup',
'teakettle', 'teapot', 'teddy_bear', 'telephone', 'telephone_booth',
'telephone_pole', 'telephoto_lens', 'television_camera',
'television_set', 'tennis_ball', 'tennis_racket', 'tequila',
'thermometer', 'thermos_bottle', 'thermostat', 'thimble', 'thread',
'thumbtack', 'tiara', 'tiger', 'tights_(clothing)', 'timer', 'tinfoil',
'tinsel', 'tissue_paper', 'toast_(food)', 'toaster', 'toaster_oven',
'toilet', 'toilet_tissue', 'tomato', 'tongs', 'toolbox', 'toothbrush',
'toothpaste', 'toothpick', 'cover', 'tortilla', 'tow_truck', 'towel',
'towel_rack', 'toy', 'tractor_(farm_equipment)', 'traffic_light',
'dirt_bike', 'trailer_truck', 'train_(railroad_vehicle)', 'trampoline',
'tray', 'tree_house', 'trench_coat', 'triangle_(musical_instrument)',
'tricycle', 'tripod', 'trousers', 'truck', 'truffle_(chocolate)',
'trunk', 'vat', 'turban', 'turkey_(bird)', 'turkey_(food)', 'turnip',
'turtle', 'turtleneck_(clothing)', 'typewriter', 'umbrella',
'underwear', 'unicycle', 'urinal', 'urn', 'vacuum_cleaner', 'valve',
'vase', 'vending_machine', 'vent', 'videotape', 'vinegar', 'violin',
'vodka', 'volleyball', 'vulture', 'waffle', 'waffle_iron', 'wagon',
'wagon_wheel', 'walking_stick', 'wall_clock', 'wall_socket', 'wallet',
'walrus', 'wardrobe', 'wasabi', 'automatic_washer', 'watch',
'water_bottle', 'water_cooler', 'water_faucet', 'water_filter',
'water_heater', 'water_jug', 'water_gun', 'water_scooter', 'water_ski',
'water_tower', 'watering_can', 'watermelon', 'weathervane', 'webcam',
'wedding_cake', 'wedding_ring', 'wet_suit', 'wheel', 'wheelchair',
'whipped_cream', 'whiskey', 'whistle', 'wick', 'wig', 'wind_chime',
'windmill', 'window_box_(for_plants)', 'windshield_wiper', 'windsock',
'wine_bottle', 'wine_bucket', 'wineglass', 'wing_chair',
'blinder_(for_horses)', 'wok', 'wolf', 'wooden_spoon', 'wreath',
'wrench', 'wristband', 'wristlet', 'yacht', 'yak', 'yogurt',
'yoke_(animal_equipment)', 'zebra', 'zucchini')
def load_annotations(self, ann_file):
"""Load annotation from lvis style annotation file.
Args:
ann_file (str): Path of annotation file.
Returns:
list[dict]: Annotation info from LVIS api.
"""
try:
import lvis
assert lvis.__version__ >= '10.5.3'
from lvis import LVIS
except AssertionError:
raise AssertionError('Incompatible version of lvis is installed. '
'Run pip uninstall lvis first. Then run pip '
'install mmlvis to install open-mmlab forked '
'lvis. ')
except ImportError:
raise ImportError('Package lvis is not installed. Please run pip '
'install mmlvis to install open-mmlab forked '
'lvis.')
self.coco = LVIS(ann_file)
assert not self.custom_classes, 'LVIS custom classes is not supported'
self.cat_ids = self.coco.get_cat_ids()
self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)}
self.img_ids = self.coco.get_img_ids()
data_infos = []
for i in self.img_ids:
info = self.coco.load_imgs([i])[0]
if info['file_name'].startswith('COCO'):
                # Convert from the COCO 2014 file naming convention of
# COCO_[train/val/test]2014_000000000000.jpg to the 2017
# naming convention of 000000000000.jpg
# (LVIS v1 will fix this naming issue)
info['filename'] = info['file_name'][-16:]
else:
info['filename'] = info['file_name']
data_infos.append(info)
return data_infos
def evaluate(self,
results,
metric='bbox',
logger=None,
jsonfile_prefix=None,
classwise=False,
proposal_nums=(100, 300, 1000),
iou_thrs=np.arange(0.5, 0.96, 0.05)):
"""Evaluation in LVIS protocol.
Args:
results (list[list | tuple]): Testing results of the dataset.
metric (str | list[str]): Metrics to be evaluated. Options are
'bbox', 'segm', 'proposal', 'proposal_fast'.
logger (logging.Logger | str | None): Logger used for printing
related information during evaluation. Default: None.
            jsonfile_prefix (str | None): The prefix of the output json files,
                including the file path. If not specified, a temp file will be
                created. Default: None.
            classwise (bool): Whether to evaluate the AP for each class.
proposal_nums (Sequence[int]): Proposal number used for evaluating
recalls, such as recall@100, recall@1000.
Default: (100, 300, 1000).
iou_thrs (Sequence[float]): IoU threshold used for evaluating
recalls. If set to a list, the average recall of all IoUs will
                also be computed. Default: np.arange(0.5, 0.96, 0.05).
Returns:
dict[str, float]: LVIS style metrics.
"""
try:
import lvis
assert lvis.__version__ >= '10.5.3'
from lvis import LVISResults, LVISEval
except AssertionError:
raise AssertionError('Incompatible version of lvis is installed. '
'Run pip uninstall lvis first. Then run pip '
'install mmlvis to install open-mmlab forked '
'lvis. ')
except ImportError:
raise ImportError('Package lvis is not installed. Please run pip '
'install mmlvis to install open-mmlab forked '
'lvis.')
assert isinstance(results, list), 'results must be a list'
assert len(results) == len(self), (
'The length of results is not equal to the dataset len: {} != {}'.
format(len(results), len(self)))
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError('metric {} is not supported'.format(metric))
if jsonfile_prefix is None:
tmp_dir = tempfile.TemporaryDirectory()
jsonfile_prefix = osp.join(tmp_dir.name, 'results')
else:
tmp_dir = None
result_files = self.results2json(results, jsonfile_prefix)
eval_results = OrderedDict()
# get original api
lvis_gt = self.coco
for metric in metrics:
msg = 'Evaluating {}...'.format(metric)
if logger is None:
msg = '\n' + msg
print_log(msg, logger=logger)
if metric == 'proposal_fast':
ar = self.fast_eval_recall(
results, proposal_nums, iou_thrs, logger='silent')
log_msg = []
for i, num in enumerate(proposal_nums):
eval_results['AR@{}'.format(num)] = ar[i]
log_msg.append('\nAR@{}\t{:.4f}'.format(num, ar[i]))
log_msg = ''.join(log_msg)
print_log(log_msg, logger=logger)
continue
if metric not in result_files:
raise KeyError('{} is not in results'.format(metric))
try:
lvis_dt = LVISResults(lvis_gt, result_files[metric])
except IndexError:
print_log(
'The testing results of the whole dataset is empty.',
logger=logger,
level=logging.ERROR)
break
iou_type = 'bbox' if metric == 'proposal' else metric
lvis_eval = LVISEval(lvis_gt, lvis_dt, iou_type)
lvis_eval.params.imgIds = self.img_ids
if metric == 'proposal':
lvis_eval.params.useCats = 0
lvis_eval.params.maxDets = list(proposal_nums)
lvis_eval.evaluate()
lvis_eval.accumulate()
lvis_eval.summarize()
for k, v in lvis_eval.get_results().items():
if k.startswith('AR'):
val = float('{:.3f}'.format(float(v)))
eval_results[k] = val
else:
lvis_eval.evaluate()
lvis_eval.accumulate()
lvis_eval.summarize()
lvis_results = lvis_eval.get_results()
if classwise: # Compute per-category AP
# Compute per-category AP
# from https://github.com/facebookresearch/detectron2/
precisions = lvis_eval.eval['precision']
# precision: (iou, recall, cls, area range, max dets)
assert len(self.cat_ids) == precisions.shape[2]
results_per_category = []
for idx, catId in enumerate(self.cat_ids):
# area range index 0: all area ranges
# max dets index -1: typically 100 per image
nm = self.coco.load_cats(catId)[0]
precision = precisions[:, :, idx, 0, -1]
precision = precision[precision > -1]
if precision.size:
ap = np.mean(precision)
else:
ap = float('nan')
results_per_category.append(
(f'{nm["name"]}', f'{float(ap):0.3f}'))
num_columns = min(6, len(results_per_category) * 2)
results_flatten = list(
itertools.chain(*results_per_category))
headers = ['category', 'AP'] * (num_columns // 2)
results_2d = itertools.zip_longest(*[
results_flatten[i::num_columns]
for i in range(num_columns)
])
table_data = [headers]
table_data += [result for result in results_2d]
table = AsciiTable(table_data)
print_log('\n' + table.table, logger=logger)
for k, v in lvis_results.items():
if k.startswith('AP'):
key = '{}_{}'.format(metric, k)
val = float('{:.3f}'.format(float(v)))
eval_results[key] = val
ap_summary = ' '.join([
'{}:{:.3f}'.format(k, float(v))
for k, v in lvis_results.items() if k.startswith('AP')
])
eval_results['{}_mAP_copypaste'.format(metric)] = ap_summary
lvis_eval.print_results()
if tmp_dir is not None:
tmp_dir.cleanup()
return eval_results
LVISDataset = LVISV05Dataset
DATASETS.register_module(name='LVISDataset', module=LVISDataset)
@DATASETS.register_module()
class LVISV1Dataset(LVISDataset):
CLASSES = (
'aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock', 'alcohol',
'alligator', 'almond', 'ambulance', 'amplifier', 'anklet', 'antenna',
'apple', 'applesauce', 'apricot', 'apron', 'aquarium',
'arctic_(type_of_shoe)', 'armband', 'armchair', 'armoire', 'armor',
'artichoke', 'trash_can', 'ashtray', 'asparagus', 'atomizer',
'avocado', 'award', 'awning', 'ax', 'baboon', 'baby_buggy',
'basketball_backboard', 'backpack', 'handbag', 'suitcase', 'bagel',
'bagpipe', 'baguet', 'bait', 'ball', 'ballet_skirt', 'balloon',
'bamboo', 'banana', 'Band_Aid', 'bandage', 'bandanna', 'banjo',
'banner', 'barbell', 'barge', 'barrel', 'barrette', 'barrow',
'baseball_base', 'baseball', 'baseball_bat', 'baseball_cap',
'baseball_glove', 'basket', 'basketball', 'bass_horn', 'bat_(animal)',
'bath_mat', 'bath_towel', 'bathrobe', 'bathtub', 'batter_(food)',
'battery', 'beachball', 'bead', 'bean_curd', 'beanbag', 'beanie',
'bear', 'bed', 'bedpan', 'bedspread', 'cow', 'beef_(food)', 'beeper',
'beer_bottle', 'beer_can', 'beetle', 'bell', 'bell_pepper', 'belt',
'belt_buckle', 'bench', 'beret', 'bib', 'Bible', 'bicycle', 'visor',
'billboard', 'binder', 'binoculars', 'bird', 'birdfeeder', 'birdbath',
'birdcage', 'birdhouse', 'birthday_cake', 'birthday_card',
'pirate_flag', 'black_sheep', 'blackberry', 'blackboard', 'blanket',
'blazer', 'blender', 'blimp', 'blinker', 'blouse', 'blueberry',
'gameboard', 'boat', 'bob', 'bobbin', 'bobby_pin', 'boiled_egg',
'bolo_tie', 'deadbolt', 'bolt', 'bonnet', 'book', 'bookcase',
'booklet', 'bookmark', 'boom_microphone', 'boot', 'bottle',
'bottle_opener', 'bouquet', 'bow_(weapon)', 'bow_(decorative_ribbons)',
'bow-tie', 'bowl', 'pipe_bowl', 'bowler_hat', 'bowling_ball', 'box',
'boxing_glove', 'suspenders', 'bracelet', 'brass_plaque', 'brassiere',
'bread-bin', 'bread', 'breechcloth', 'bridal_gown', 'briefcase',
'broccoli', 'broach', 'broom', 'brownie', 'brussels_sprouts',
'bubble_gum', 'bucket', 'horse_buggy', 'bull', 'bulldog', 'bulldozer',
'bullet_train', 'bulletin_board', 'bulletproof_vest', 'bullhorn',
'bun', 'bunk_bed', 'buoy', 'burrito', 'bus_(vehicle)', 'business_card',
'butter', 'butterfly', 'button', 'cab_(taxi)', 'cabana', 'cabin_car',
'cabinet', 'locker', 'cake', 'calculator', 'calendar', 'calf',
'camcorder', 'camel', 'camera', 'camera_lens', 'camper_(vehicle)',
'can', 'can_opener', 'candle', 'candle_holder', 'candy_bar',
'candy_cane', 'walking_cane', 'canister', 'canoe', 'cantaloup',
'canteen', 'cap_(headwear)', 'bottle_cap', 'cape', 'cappuccino',
'car_(automobile)', 'railcar_(part_of_a_train)', 'elevator_car',
'car_battery', 'identity_card', 'card', 'cardigan', 'cargo_ship',
'carnation', 'horse_carriage', 'carrot', 'tote_bag', 'cart', 'carton',
'cash_register', 'casserole', 'cassette', 'cast', 'cat', 'cauliflower',
'cayenne_(spice)', 'CD_player', 'celery', 'cellular_telephone',
'chain_mail', 'chair', 'chaise_longue', 'chalice', 'chandelier',
'chap', 'checkbook', 'checkerboard', 'cherry', 'chessboard',
'chicken_(animal)', 'chickpea', 'chili_(vegetable)', 'chime',
'chinaware', 'crisp_(potato_chip)', 'poker_chip', 'chocolate_bar',
'chocolate_cake', 'chocolate_milk', 'chocolate_mousse', 'choker',
'chopping_board', 'chopstick', 'Christmas_tree', 'slide', 'cider',
'cigar_box', 'cigarette', 'cigarette_case', 'cistern', 'clarinet',
'clasp', 'cleansing_agent', 'cleat_(for_securing_rope)', 'clementine',
'clip', 'clipboard', 'clippers_(for_plants)', 'cloak', 'clock',
'clock_tower', 'clothes_hamper', 'clothespin', 'clutch_bag', 'coaster',
'coat', 'coat_hanger', 'coatrack', 'cock', 'cockroach',
'cocoa_(beverage)', 'coconut', 'coffee_maker', 'coffee_table',
'coffeepot', 'coil', 'coin', 'colander', 'coleslaw',
'coloring_material', 'combination_lock', 'pacifier', 'comic_book',
'compass', 'computer_keyboard', 'condiment', 'cone', 'control',
'convertible_(automobile)', 'sofa_bed', 'cooker', 'cookie',
'cooking_utensil', 'cooler_(for_food)', 'cork_(bottle_plug)',
'corkboard', 'corkscrew', 'edible_corn', 'cornbread', 'cornet',
'cornice', 'cornmeal', 'corset', 'costume', 'cougar', 'coverall',
'cowbell', 'cowboy_hat', 'crab_(animal)', 'crabmeat', 'cracker',
'crape', 'crate', 'crayon', 'cream_pitcher', 'crescent_roll', 'crib',
'crock_pot', 'crossbar', 'crouton', 'crow', 'crowbar', 'crown',
'crucifix', 'cruise_ship', 'police_cruiser', 'crumb', 'crutch',
'cub_(animal)', 'cube', 'cucumber', 'cufflink', 'cup', 'trophy_cup',
'cupboard', 'cupcake', 'hair_curler', 'curling_iron', 'curtain',
'cushion', 'cylinder', 'cymbal', 'dagger', 'dalmatian', 'dartboard',
'date_(fruit)', 'deck_chair', 'deer', 'dental_floss', 'desk',
'detergent', 'diaper', 'diary', 'die', 'dinghy', 'dining_table', 'tux',
'dish', 'dish_antenna', 'dishrag', 'dishtowel', 'dishwasher',
'dishwasher_detergent', 'dispenser', 'diving_board', 'Dixie_cup',
'dog', 'dog_collar', 'doll', 'dollar', 'dollhouse', 'dolphin',
'domestic_ass', 'doorknob', 'doormat', 'doughnut', 'dove', 'dragonfly',
'drawer', 'underdrawers', 'dress', 'dress_hat', 'dress_suit',
'dresser', 'drill', 'drone', 'dropper', 'drum_(musical_instrument)',
'drumstick', 'duck', 'duckling', 'duct_tape', 'duffel_bag', 'dumbbell',
'dumpster', 'dustpan', 'eagle', 'earphone', 'earplug', 'earring',
'easel', 'eclair', 'eel', 'egg', 'egg_roll', 'egg_yolk', 'eggbeater',
'eggplant', 'electric_chair', 'refrigerator', 'elephant', 'elk',
'envelope', 'eraser', 'escargot', 'eyepatch', 'falcon', 'fan',
'faucet', 'fedora', 'ferret', 'Ferris_wheel', 'ferry', 'fig_(fruit)',
'fighter_jet', 'figurine', 'file_cabinet', 'file_(tool)', 'fire_alarm',
'fire_engine', 'fire_extinguisher', 'fire_hose', 'fireplace',
'fireplug', 'first-aid_kit', 'fish', 'fish_(food)', 'fishbowl',
'fishing_rod', 'flag', 'flagpole', 'flamingo', 'flannel', 'flap',
'flash', 'flashlight', 'fleece', 'flip-flop_(sandal)',
'flipper_(footwear)', 'flower_arrangement', 'flute_glass', 'foal',
'folding_chair', 'food_processor', 'football_(American)',
'football_helmet', 'footstool', 'fork', 'forklift', 'freight_car',
'French_toast', 'freshener', 'frisbee', 'frog', 'fruit_juice',
'frying_pan', 'fudge', 'funnel', 'futon', 'gag', 'garbage',
'garbage_truck', 'garden_hose', 'gargle', 'gargoyle', 'garlic',
'gasmask', 'gazelle', 'gelatin', 'gemstone', 'generator',
'giant_panda', 'gift_wrap', 'ginger', 'giraffe', 'cincture',
'glass_(drink_container)', 'globe', 'glove', 'goat', 'goggles',
'goldfish', 'golf_club', 'golfcart', 'gondola_(boat)', 'goose',
'gorilla', 'gourd', 'grape', 'grater', 'gravestone', 'gravy_boat',
'green_bean', 'green_onion', 'griddle', 'grill', 'grits', 'grizzly',
'grocery_bag', 'guitar', 'gull', 'gun', 'hairbrush', 'hairnet',
'hairpin', 'halter_top', 'ham', 'hamburger', 'hammer', 'hammock',
'hamper', 'hamster', 'hair_dryer', 'hand_glass', 'hand_towel',
'handcart', 'handcuff', 'handkerchief', 'handle', 'handsaw',
'hardback_book', 'harmonium', 'hat', 'hatbox', 'veil', 'headband',
'headboard', 'headlight', 'headscarf', 'headset',
'headstall_(for_horses)', 'heart', 'heater', 'helicopter', 'helmet',
'heron', 'highchair', 'hinge', 'hippopotamus', 'hockey_stick', 'hog',
'home_plate_(baseball)', 'honey', 'fume_hood', 'hook', 'hookah',
'hornet', 'horse', 'hose', 'hot-air_balloon', 'hotplate', 'hot_sauce',
'hourglass', 'houseboat', 'hummingbird', 'hummus', 'polar_bear',
'icecream', 'popsicle', 'ice_maker', 'ice_pack', 'ice_skate',
'igniter', 'inhaler', 'iPod', 'iron_(for_clothing)', 'ironing_board',
'jacket', 'jam', 'jar', 'jean', 'jeep', 'jelly_bean', 'jersey',
'jet_plane', 'jewel', 'jewelry', 'joystick', 'jumpsuit', 'kayak',
'keg', 'kennel', 'kettle', 'key', 'keycard', 'kilt', 'kimono',
'kitchen_sink', 'kitchen_table', 'kite', 'kitten', 'kiwi_fruit',
'knee_pad', 'knife', 'knitting_needle', 'knob', 'knocker_(on_a_door)',
'koala', 'lab_coat', 'ladder', 'ladle', 'ladybug', 'lamb_(animal)',
'lamb-chop', 'lamp', 'lamppost', 'lampshade', 'lantern', 'lanyard',
'laptop_computer', 'lasagna', 'latch', 'lawn_mower', 'leather',
'legging_(clothing)', 'Lego', 'legume', 'lemon', 'lemonade', 'lettuce',
'license_plate', 'life_buoy', 'life_jacket', 'lightbulb',
'lightning_rod', 'lime', 'limousine', 'lion', 'lip_balm', 'liquor',
'lizard', 'log', 'lollipop', 'speaker_(stero_equipment)', 'loveseat',
'machine_gun', 'magazine', 'magnet', 'mail_slot', 'mailbox_(at_home)',
'mallard', 'mallet', 'mammoth', 'manatee', 'mandarin_orange', 'manger',
'manhole', 'map', 'marker', 'martini', 'mascot', 'mashed_potato',
'masher', 'mask', 'mast', 'mat_(gym_equipment)', 'matchbox',
'mattress', 'measuring_cup', 'measuring_stick', 'meatball', 'medicine',
'melon', 'microphone', 'microscope', 'microwave_oven', 'milestone',
'milk', 'milk_can', 'milkshake', 'minivan', 'mint_candy', 'mirror',
'mitten', 'mixer_(kitchen_tool)', 'money',
'monitor_(computer_equipment) computer_monitor', 'monkey', 'motor',
'motor_scooter', 'motor_vehicle', 'motorcycle', 'mound_(baseball)',
'mouse_(computer_equipment)', 'mousepad', 'muffin', 'mug', 'mushroom',
'music_stool', 'musical_instrument', 'nailfile', 'napkin',
'neckerchief', 'necklace', 'necktie', 'needle', 'nest', 'newspaper',
'newsstand', 'nightshirt', 'nosebag_(for_animals)',
'noseband_(for_animals)', 'notebook', 'notepad', 'nut', 'nutcracker',
'oar', 'octopus_(food)', 'octopus_(animal)', 'oil_lamp', 'olive_oil',
'omelet', 'onion', 'orange_(fruit)', 'orange_juice', 'ostrich',
'ottoman', 'oven', 'overalls_(clothing)', 'owl', 'packet', 'inkpad',
'pad', 'paddle', 'padlock', 'paintbrush', 'painting', 'pajamas',
'palette', 'pan_(for_cooking)', 'pan_(metal_container)', 'pancake',
'pantyhose', 'papaya', 'paper_plate', 'paper_towel', 'paperback_book',
'paperweight', 'parachute', 'parakeet', 'parasail_(sports)', 'parasol',
'parchment', 'parka', 'parking_meter', 'parrot',
'passenger_car_(part_of_a_train)', 'passenger_ship', 'passport',
'pastry', 'patty_(food)', 'pea_(food)', 'peach', 'peanut_butter',
'pear', 'peeler_(tool_for_fruit_and_vegetables)', 'wooden_leg',
'pegboard', 'pelican', 'pen', 'pencil', 'pencil_box',
'pencil_sharpener', 'pendulum', 'penguin', 'pennant', 'penny_(coin)',
'pepper', 'pepper_mill', 'perfume', 'persimmon', 'person', 'pet',
'pew_(church_bench)', 'phonebook', 'phonograph_record', 'piano',
'pickle', 'pickup_truck', 'pie', 'pigeon', 'piggy_bank', 'pillow',
'pin_(non_jewelry)', 'pineapple', 'pinecone', 'ping-pong_ball',
'pinwheel', 'tobacco_pipe', 'pipe', 'pistol', 'pita_(bread)',
'pitcher_(vessel_for_liquid)', 'pitchfork', 'pizza', 'place_mat',
'plate', 'platter', 'playpen', 'pliers', 'plow_(farm_equipment)',
'plume', 'pocket_watch', 'pocketknife', 'poker_(fire_stirring_tool)',
'pole', 'polo_shirt', 'poncho', 'pony', 'pool_table', 'pop_(soda)',
'postbox_(public)', 'postcard', 'poster', 'pot', 'flowerpot', 'potato',
'potholder', 'pottery', 'pouch', 'power_shovel', 'prawn', 'pretzel',
'printer', 'projectile_(weapon)', 'projector', 'propeller', 'prune',
'pudding', 'puffer_(fish)', 'puffin', 'pug-dog', 'pumpkin', 'puncher',
'puppet', 'puppy', 'quesadilla', 'quiche', 'quilt', 'rabbit',
'race_car', 'racket', 'radar', 'radiator', 'radio_receiver', 'radish',
'raft', 'rag_doll', 'raincoat', 'ram_(animal)', 'raspberry', 'rat',
'razorblade', 'reamer_(juicer)', 'rearview_mirror', 'receipt',
'recliner', 'record_player', 'reflector', 'remote_control',
'rhinoceros', 'rib_(food)', 'rifle', 'ring', 'river_boat', 'road_map',
'robe', 'rocking_chair', 'rodent', 'roller_skate', 'Rollerblade',
'rolling_pin', 'root_beer', 'router_(computer_equipment)',
'rubber_band', 'runner_(carpet)', 'plastic_bag',
'saddle_(on_an_animal)', 'saddle_blanket', 'saddlebag', 'safety_pin',
'sail', 'salad', 'salad_plate', 'salami', 'salmon_(fish)',
'salmon_(food)', 'salsa', 'saltshaker', 'sandal_(type_of_shoe)',
'sandwich', 'satchel', 'saucepan', 'saucer', 'sausage', 'sawhorse',
'saxophone', 'scale_(measuring_instrument)', 'scarecrow', 'scarf',
'school_bus', 'scissors', 'scoreboard', 'scraper', 'screwdriver',
'scrubbing_brush', 'sculpture', 'seabird', 'seahorse', 'seaplane',
'seashell', 'sewing_machine', 'shaker', 'shampoo', 'shark',
'sharpener', 'Sharpie', 'shaver_(electric)', 'shaving_cream', 'shawl',
'shears', 'sheep', 'shepherd_dog', 'sherbert', 'shield', 'shirt',
'shoe', 'shopping_bag', 'shopping_cart', 'short_pants', 'shot_glass',
'shoulder_bag', 'shovel', 'shower_head', 'shower_cap',
'shower_curtain', 'shredder_(for_paper)', 'signboard', 'silo', 'sink',
'skateboard', 'skewer', 'ski', 'ski_boot', 'ski_parka', 'ski_pole',
'skirt', 'skullcap', 'sled', 'sleeping_bag', 'sling_(bandage)',
'slipper_(footwear)', 'smoothie', 'snake', 'snowboard', 'snowman',
'snowmobile', 'soap', 'soccer_ball', 'sock', 'sofa', 'softball',
'solar_array', 'sombrero', 'soup', 'soup_bowl', 'soupspoon',
'sour_cream', 'soya_milk', 'space_shuttle', 'sparkler_(fireworks)',
'spatula', 'spear', 'spectacles', 'spice_rack', 'spider', 'crawfish',
'sponge', 'spoon', 'sportswear', 'spotlight', 'squid_(food)',
'squirrel', 'stagecoach', 'stapler_(stapling_machine)', 'starfish',
'statue_(sculpture)', 'steak_(food)', 'steak_knife', 'steering_wheel',
'stepladder', 'step_stool', 'stereo_(sound_system)', 'stew', 'stirrer',
'stirrup', 'stool', 'stop_sign', 'brake_light', 'stove', 'strainer',
'strap', 'straw_(for_drinking)', 'strawberry', 'street_sign',
'streetlight', 'string_cheese', 'stylus', 'subwoofer', 'sugar_bowl',
'sugarcane_(plant)', 'suit_(clothing)', 'sunflower', 'sunglasses',
'sunhat', 'surfboard', 'sushi', 'mop', 'sweat_pants', 'sweatband',
'sweater', 'sweatshirt', 'sweet_potato', 'swimsuit', 'sword',
'syringe', 'Tabasco_sauce', 'table-tennis_table', 'table',
'table_lamp', 'tablecloth', 'tachometer', 'taco', 'tag', 'taillight',
'tambourine', 'army_tank', 'tank_(storage_vessel)',
'tank_top_(clothing)', 'tape_(sticky_cloth_or_paper)', 'tape_measure',
'tapestry', 'tarp', 'tartan', 'tassel', 'tea_bag', 'teacup',
'teakettle', 'teapot', 'teddy_bear', 'telephone', 'telephone_booth',
'telephone_pole', 'telephoto_lens', 'television_camera',
'television_set', 'tennis_ball', 'tennis_racket', 'tequila',
'thermometer', 'thermos_bottle', 'thermostat', 'thimble', 'thread',
'thumbtack', 'tiara', 'tiger', 'tights_(clothing)', 'timer', 'tinfoil',
'tinsel', 'tissue_paper', 'toast_(food)', 'toaster', 'toaster_oven',
'toilet', 'toilet_tissue', 'tomato', 'tongs', 'toolbox', 'toothbrush',
'toothpaste', 'toothpick', 'cover', 'tortilla', 'tow_truck', 'towel',
'towel_rack', 'toy', 'tractor_(farm_equipment)', 'traffic_light',
'dirt_bike', 'trailer_truck', 'train_(railroad_vehicle)', 'trampoline',
'tray', 'trench_coat', 'triangle_(musical_instrument)', 'tricycle',
'tripod', 'trousers', 'truck', 'truffle_(chocolate)', 'trunk', 'vat',
'turban', 'turkey_(food)', 'turnip', 'turtle', 'turtleneck_(clothing)',
'typewriter', 'umbrella', 'underwear', 'unicycle', 'urinal', 'urn',
'vacuum_cleaner', 'vase', 'vending_machine', 'vent', 'vest',
'videotape', 'vinegar', 'violin', 'vodka', 'volleyball', 'vulture',
'waffle', 'waffle_iron', 'wagon', 'wagon_wheel', 'walking_stick',
'wall_clock', 'wall_socket', 'wallet', 'walrus', 'wardrobe',
'washbasin', 'automatic_washer', 'watch', 'water_bottle',
'water_cooler', 'water_faucet', 'water_heater', 'water_jug',
'water_gun', 'water_scooter', 'water_ski', 'water_tower',
'watering_can', 'watermelon', 'weathervane', 'webcam', 'wedding_cake',
'wedding_ring', 'wet_suit', 'wheel', 'wheelchair', 'whipped_cream',
'whistle', 'wig', 'wind_chime', 'windmill', 'window_box_(for_plants)',
'windshield_wiper', 'windsock', 'wine_bottle', 'wine_bucket',
'wineglass', 'blinder_(for_horses)', 'wok', 'wolf', 'wooden_spoon',
'wreath', 'wrench', 'wristband', 'wristlet', 'yacht', 'yogurt',
'yoke_(animal_equipment)', 'zebra', 'zucchini')
def load_annotations(self, ann_file):
try:
import lvis
assert lvis.__version__ >= '10.5.3'
from lvis import LVIS
except AssertionError:
raise AssertionError('Incompatible version of lvis is installed. '
'Run pip uninstall lvis first. Then run pip '
'install mmlvis to install open-mmlab forked '
'lvis. ')
except ImportError:
raise ImportError('Package lvis is not installed. Please run pip '
'install mmlvis to install open-mmlab forked '
'lvis.')
self.coco = LVIS(ann_file)
assert not self.custom_classes, 'LVIS custom classes is not supported'
self.cat_ids = self.coco.get_cat_ids()
self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)}
self.img_ids = self.coco.get_img_ids()
data_infos = []
for i in self.img_ids:
info = self.coco.load_imgs([i])[0]
# coco_url is used in LVISv1 instead of file_name
# e.g. http://images.cocodataset.org/train2017/000000391895.jpg
            # train/val split is specified in the url
info['filename'] = info['coco_url'].replace(
'http://images.cocodataset.org/', '')
data_infos.append(info)
return data_infos
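# --- Usage sketch (added for illustration, not part of the original file) ---
# A minimal dataset config as it would appear in an mmdet config file. The
# annotation/image paths and the one-step pipeline are placeholders; a real
# config would use the full training or test pipeline.
_EXAMPLE_LVIS_V1_CFG = dict(
    type='LVISV1Dataset',
    ann_file='data/lvis_v1/annotations/lvis_v1_val.json',
    img_prefix='data/lvis_v1/',
    pipeline=[dict(type='LoadImageFromFile')])
# The dict can be built with mmdet.datasets.build_dataset(), and evaluation
# then goes through LVISV1Dataset.evaluate(results, metric='bbox').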
| insightface/detection/scrfd/mmdet/datasets/lvis.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/datasets/lvis.py",
"repo_id": "insightface",
"token_count": 22131
} | 102 |
import os.path as osp
import xml.etree.ElementTree as ET
import mmcv
import numpy as np
from PIL import Image
from .builder import DATASETS
from .custom import CustomDataset
@DATASETS.register_module()
class XMLDataset(CustomDataset):
"""XML dataset for detection.
Args:
min_size (int | float, optional): The minimum size of bounding
boxes in the images. If the size of a bounding box is less than
            ``min_size``, it would be added to the ignored field.
"""
def __init__(self, min_size=None, **kwargs):
super(XMLDataset, self).__init__(**kwargs)
self.cat2label = {cat: i for i, cat in enumerate(self.CLASSES)}
self.min_size = min_size
def load_annotations(self, ann_file):
"""Load annotation from XML style ann_file.
Args:
ann_file (str): Path of XML file.
Returns:
list[dict]: Annotation info from XML file.
"""
data_infos = []
img_ids = mmcv.list_from_file(ann_file)
for img_id in img_ids:
filename = f'JPEGImages/{img_id}.jpg'
xml_path = osp.join(self.img_prefix, 'Annotations',
f'{img_id}.xml')
tree = ET.parse(xml_path)
root = tree.getroot()
size = root.find('size')
width = 0
height = 0
if size is not None:
width = int(size.find('width').text)
height = int(size.find('height').text)
else:
img_path = osp.join(self.img_prefix, 'JPEGImages',
'{}.jpg'.format(img_id))
img = Image.open(img_path)
width, height = img.size
data_infos.append(
dict(id=img_id, filename=filename, width=width, height=height))
return data_infos
def _filter_imgs(self, min_size=32):
"""Filter images too small or without annotation."""
valid_inds = []
for i, img_info in enumerate(self.data_infos):
if min(img_info['width'], img_info['height']) < min_size:
continue
if self.filter_empty_gt:
img_id = img_info['id']
xml_path = osp.join(self.img_prefix, 'Annotations',
f'{img_id}.xml')
tree = ET.parse(xml_path)
root = tree.getroot()
for obj in root.findall('object'):
name = obj.find('name').text
if name in self.CLASSES:
valid_inds.append(i)
break
else:
valid_inds.append(i)
return valid_inds
def get_ann_info(self, idx):
"""Get annotation from XML file by index.
Args:
idx (int): Index of data.
Returns:
dict: Annotation info of specified index.
"""
img_id = self.data_infos[idx]['id']
xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml')
tree = ET.parse(xml_path)
root = tree.getroot()
bboxes = []
labels = []
bboxes_ignore = []
labels_ignore = []
for obj in root.findall('object'):
name = obj.find('name').text
if name not in self.CLASSES:
continue
label = self.cat2label[name]
difficult = int(obj.find('difficult').text)
bnd_box = obj.find('bndbox')
# TODO: check whether it is necessary to use int
# Coordinates may be float type
bbox = [
int(float(bnd_box.find('xmin').text)),
int(float(bnd_box.find('ymin').text)),
int(float(bnd_box.find('xmax').text)),
int(float(bnd_box.find('ymax').text))
]
ignore = False
if self.min_size:
assert not self.test_mode
w = bbox[2] - bbox[0]
h = bbox[3] - bbox[1]
if w < self.min_size or h < self.min_size:
ignore = True
if difficult or ignore:
bboxes_ignore.append(bbox)
labels_ignore.append(label)
else:
bboxes.append(bbox)
labels.append(label)
if not bboxes:
bboxes = np.zeros((0, 4))
labels = np.zeros((0, ))
else:
bboxes = np.array(bboxes, ndmin=2) - 1
labels = np.array(labels)
if not bboxes_ignore:
bboxes_ignore = np.zeros((0, 4))
labels_ignore = np.zeros((0, ))
else:
bboxes_ignore = np.array(bboxes_ignore, ndmin=2) - 1
labels_ignore = np.array(labels_ignore)
ann = dict(
bboxes=bboxes.astype(np.float32),
labels=labels.astype(np.int64),
bboxes_ignore=bboxes_ignore.astype(np.float32),
labels_ignore=labels_ignore.astype(np.int64))
return ann
def get_cat_ids(self, idx):
"""Get category ids in XML file by index.
Args:
idx (int): Index of data.
Returns:
list[int]: All categories in the image of specified index.
"""
cat_ids = []
img_id = self.data_infos[idx]['id']
xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml')
tree = ET.parse(xml_path)
root = tree.getroot()
for obj in root.findall('object'):
name = obj.find('name').text
if name not in self.CLASSES:
continue
label = self.cat2label[name]
cat_ids.append(label)
return cat_ids
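# --- Usage sketch (added for illustration, not part of the original file) ---
# XMLDataset is normally subclassed with a concrete class list, in the same
# way mmdet's VOCDataset does; the class names below are placeholders.
@DATASETS.register_module()
class _DemoXMLDataset(XMLDataset):
    CLASSES = ('face', 'person')  # hypothetical label set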
| insightface/detection/scrfd/mmdet/datasets/xml_style.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/datasets/xml_style.py",
"repo_id": "insightface",
"token_count": 3049
} | 103 |
from mmcv.utils import Registry, build_from_cfg
from torch import nn
BACKBONES = Registry('backbone')
NECKS = Registry('neck')
ROI_EXTRACTORS = Registry('roi_extractor')
SHARED_HEADS = Registry('shared_head')
HEADS = Registry('head')
LOSSES = Registry('loss')
DETECTORS = Registry('detector')
def build(cfg, registry, default_args=None):
"""Build a module.
Args:
        cfg (dict, list[dict]): The config of modules; it is either a dict
or a list of configs.
registry (:obj:`Registry`): A registry the module belongs to.
default_args (dict, optional): Default arguments to build the module.
Defaults to None.
Returns:
nn.Module: A built nn module.
"""
if isinstance(cfg, list):
modules = [
build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg
]
return nn.Sequential(*modules)
else:
return build_from_cfg(cfg, registry, default_args)
def build_backbone(cfg):
"""Build backbone."""
return build(cfg, BACKBONES)
def build_neck(cfg):
"""Build neck."""
return build(cfg, NECKS)
def build_roi_extractor(cfg):
"""Build roi extractor."""
return build(cfg, ROI_EXTRACTORS)
def build_shared_head(cfg):
"""Build shared head."""
return build(cfg, SHARED_HEADS)
def build_head(cfg):
"""Build head."""
return build(cfg, HEADS)
def build_loss(cfg):
"""Build loss."""
return build(cfg, LOSSES)
def build_detector(cfg, train_cfg=None, test_cfg=None):
"""Build detector."""
return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
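# --- Usage sketch (added for illustration, not part of the original file) ---
# Typical use from a config file. The config path is hypothetical and the
# sketch assumes the mmdet 2.x layout used by this repo, where train_cfg and
# test_cfg sit next to the model dict in the config.
def _demo_build_detector():
    from mmcv import Config
    cfg = Config.fromfile('configs/scrfd/scrfd_500m.py')  # hypothetical path
    return build_detector(cfg.model, train_cfg=cfg.train_cfg,
                          test_cfg=cfg.test_cfg)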
| insightface/detection/scrfd/mmdet/models/builder.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/models/builder.py",
"repo_id": "insightface",
"token_count": 648
} | 104 |
import torch
import torch.nn as nn
from mmcv.cnn import bias_init_with_prob, normal_init
from mmcv.ops import DeformConv2d, MaskedConv2d
from mmcv.runner import force_fp32
from mmdet.core import (anchor_inside_flags, build_anchor_generator,
build_assigner, build_bbox_coder, build_sampler,
calc_region, images_to_levels, multi_apply,
multiclass_nms, unmap)
from ..builder import HEADS, build_loss
from .anchor_head import AnchorHead
class FeatureAdaption(nn.Module):
"""Feature Adaption Module.
Feature Adaption Module is implemented based on DCN v1.
It uses anchor shape prediction rather than feature map to
predict offsets of deform conv layer.
Args:
in_channels (int): Number of channels in the input feature map.
out_channels (int): Number of channels in the output feature map.
kernel_size (int): Deformable conv kernel size.
deform_groups (int): Deformable conv group size.
"""
def __init__(self,
in_channels,
out_channels,
kernel_size=3,
deform_groups=4):
super(FeatureAdaption, self).__init__()
offset_channels = kernel_size * kernel_size * 2
self.conv_offset = nn.Conv2d(
2, deform_groups * offset_channels, 1, bias=False)
self.conv_adaption = DeformConv2d(
in_channels,
out_channels,
kernel_size=kernel_size,
padding=(kernel_size - 1) // 2,
deform_groups=deform_groups)
self.relu = nn.ReLU(inplace=True)
def init_weights(self):
normal_init(self.conv_offset, std=0.1)
normal_init(self.conv_adaption, std=0.01)
def forward(self, x, shape):
offset = self.conv_offset(shape.detach())
x = self.relu(self.conv_adaption(x, offset))
return x
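# --- Usage sketch (added for illustration, not part of the original file) ---
# Shape check for FeatureAdaption: the 2-channel `shape` input is the anchor
# w/h prediction. DeformConv2d generally needs a CUDA-enabled mmcv build, so
# the demo moves everything to the GPU; all sizes are arbitrary.
def _demo_feature_adaption():
    fa = FeatureAdaption(256, 256, kernel_size=3, deform_groups=4).cuda()
    fa.init_weights()
    feat = torch.rand(1, 256, 32, 32).cuda()
    shape_pred = torch.rand(1, 2, 32, 32).cuda()
    return fa(feat, shape_pred).shape  # torch.Size([1, 256, 32, 32])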
@HEADS.register_module()
class GuidedAnchorHead(AnchorHead):
"""Guided-Anchor-based head (GA-RPN, GA-RetinaNet, etc.).
This GuidedAnchorHead will predict high-quality feature guided
anchors and locations where anchors will be kept in inference.
There are mainly 3 categories of bounding-boxes.
    - Sampled anchors (9 per location by default) used for target
      assignment (approxes).
    - The square boxes that the predicted anchors are based on (squares).
    - Guided anchors.
Please refer to https://arxiv.org/abs/1901.03278 for more details.
Args:
num_classes (int): Number of classes.
in_channels (int): Number of channels in the input feature map.
feat_channels (int): Number of hidden channels.
approx_anchor_generator (dict): Config dict for approx generator
square_anchor_generator (dict): Config dict for square generator
anchor_coder (dict): Config dict for anchor coder
bbox_coder (dict): Config dict for bbox coder
        deform_groups (int): Group number of DCN in
FeatureAdaption module.
loc_filter_thr (float): Threshold to filter out unconcerned regions.
loss_loc (dict): Config of location loss.
loss_shape (dict): Config of anchor shape loss.
loss_cls (dict): Config of classification loss.
loss_bbox (dict): Config of bbox regression loss.
"""
def __init__(
self,
num_classes,
in_channels,
feat_channels=256,
approx_anchor_generator=dict(
type='AnchorGenerator',
octave_base_scale=8,
scales_per_octave=3,
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
square_anchor_generator=dict(
type='AnchorGenerator',
ratios=[1.0],
scales=[8],
strides=[4, 8, 16, 32, 64]),
anchor_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]
),
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]
),
reg_decoded_bbox=False,
deform_groups=4,
loc_filter_thr=0.01,
train_cfg=None,
test_cfg=None,
loss_loc=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=1.0),
loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0),
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
loss_weight=1.0)): # yapf: disable
super(AnchorHead, self).__init__()
self.in_channels = in_channels
self.num_classes = num_classes
self.feat_channels = feat_channels
self.deform_groups = deform_groups
self.loc_filter_thr = loc_filter_thr
# build approx_anchor_generator and square_anchor_generator
assert (approx_anchor_generator['octave_base_scale'] ==
square_anchor_generator['scales'][0])
assert (approx_anchor_generator['strides'] ==
square_anchor_generator['strides'])
self.approx_anchor_generator = build_anchor_generator(
approx_anchor_generator)
self.square_anchor_generator = build_anchor_generator(
square_anchor_generator)
self.approxs_per_octave = self.approx_anchor_generator \
.num_base_anchors[0]
self.reg_decoded_bbox = reg_decoded_bbox
# one anchor per location
self.num_anchors = 1
self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False)
self.loc_focal_loss = loss_loc['type'] in ['FocalLoss']
self.sampling = loss_cls['type'] not in ['FocalLoss']
self.ga_sampling = train_cfg is not None and hasattr(
train_cfg, 'ga_sampler')
if self.use_sigmoid_cls:
self.cls_out_channels = self.num_classes
else:
self.cls_out_channels = self.num_classes + 1
# build bbox_coder
self.anchor_coder = build_bbox_coder(anchor_coder)
self.bbox_coder = build_bbox_coder(bbox_coder)
# build losses
self.loss_loc = build_loss(loss_loc)
self.loss_shape = build_loss(loss_shape)
self.loss_cls = build_loss(loss_cls)
self.loss_bbox = build_loss(loss_bbox)
self.train_cfg = train_cfg
self.test_cfg = test_cfg
if self.train_cfg:
self.assigner = build_assigner(self.train_cfg.assigner)
# use PseudoSampler when sampling is False
if self.sampling and hasattr(self.train_cfg, 'sampler'):
sampler_cfg = self.train_cfg.sampler
else:
sampler_cfg = dict(type='PseudoSampler')
self.sampler = build_sampler(sampler_cfg, context=self)
self.ga_assigner = build_assigner(self.train_cfg.ga_assigner)
if self.ga_sampling:
ga_sampler_cfg = self.train_cfg.ga_sampler
else:
ga_sampler_cfg = dict(type='PseudoSampler')
self.ga_sampler = build_sampler(ga_sampler_cfg, context=self)
self.fp16_enabled = False
self._init_layers()
def _init_layers(self):
self.relu = nn.ReLU(inplace=True)
self.conv_loc = nn.Conv2d(self.in_channels, 1, 1)
self.conv_shape = nn.Conv2d(self.in_channels, self.num_anchors * 2, 1)
self.feature_adaption = FeatureAdaption(
self.in_channels,
self.feat_channels,
kernel_size=3,
deform_groups=self.deform_groups)
self.conv_cls = MaskedConv2d(self.feat_channels,
self.num_anchors * self.cls_out_channels,
1)
self.conv_reg = MaskedConv2d(self.feat_channels, self.num_anchors * 4,
1)
def init_weights(self):
normal_init(self.conv_cls, std=0.01)
normal_init(self.conv_reg, std=0.01)
bias_cls = bias_init_with_prob(0.01)
normal_init(self.conv_loc, std=0.01, bias=bias_cls)
normal_init(self.conv_shape, std=0.01)
self.feature_adaption.init_weights()
def forward_single(self, x):
loc_pred = self.conv_loc(x)
shape_pred = self.conv_shape(x)
x = self.feature_adaption(x, shape_pred)
# masked conv is only used during inference for speed-up
if not self.training:
mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr
else:
mask = None
cls_score = self.conv_cls(x, mask)
bbox_pred = self.conv_reg(x, mask)
return cls_score, bbox_pred, shape_pred, loc_pred
def forward(self, feats):
return multi_apply(self.forward_single, feats)
def get_sampled_approxs(self, featmap_sizes, img_metas, device='cuda'):
"""Get sampled approxs and inside flags according to feature map sizes.
Args:
featmap_sizes (list[tuple]): Multi-level feature map sizes.
img_metas (list[dict]): Image meta info.
device (torch.device | str): device for returned tensors
Returns:
tuple: approxes of each image, inside flags of each image
"""
num_imgs = len(img_metas)
# since feature map sizes of all images are the same, we only compute
# approxes for one time
multi_level_approxs = self.approx_anchor_generator.grid_anchors(
featmap_sizes, device=device)
approxs_list = [multi_level_approxs for _ in range(num_imgs)]
# for each image, we compute inside flags of multi level approxes
inside_flag_list = []
for img_id, img_meta in enumerate(img_metas):
multi_level_flags = []
multi_level_approxs = approxs_list[img_id]
# obtain valid flags for each approx first
multi_level_approx_flags = self.approx_anchor_generator \
.valid_flags(featmap_sizes,
img_meta['pad_shape'],
device=device)
for i, flags in enumerate(multi_level_approx_flags):
approxs = multi_level_approxs[i]
inside_flags_list = []
                for j in range(self.approxs_per_octave):
                    split_valid_flags = flags[j::self.approxs_per_octave]
                    split_approxs = approxs[j::self.approxs_per_octave, :]
inside_flags = anchor_inside_flags(
split_approxs, split_valid_flags,
img_meta['img_shape'][:2],
self.train_cfg.allowed_border)
inside_flags_list.append(inside_flags)
# inside_flag for a position is true if any anchor in this
# position is true
inside_flags = (
torch.stack(inside_flags_list, 0).sum(dim=0) > 0)
multi_level_flags.append(inside_flags)
inside_flag_list.append(multi_level_flags)
return approxs_list, inside_flag_list
def get_anchors(self,
featmap_sizes,
shape_preds,
loc_preds,
img_metas,
use_loc_filter=False,
device='cuda'):
"""Get squares according to feature map sizes and guided anchors.
Args:
featmap_sizes (list[tuple]): Multi-level feature map sizes.
shape_preds (list[tensor]): Multi-level shape predictions.
loc_preds (list[tensor]): Multi-level location predictions.
img_metas (list[dict]): Image meta info.
use_loc_filter (bool): Use loc filter or not.
device (torch.device | str): device for returned tensors
Returns:
tuple: square approxs of each image, guided anchors of each image,
loc masks of each image
"""
num_imgs = len(img_metas)
num_levels = len(featmap_sizes)
# since feature map sizes of all images are the same, we only compute
# squares for one time
multi_level_squares = self.square_anchor_generator.grid_anchors(
featmap_sizes, device=device)
squares_list = [multi_level_squares for _ in range(num_imgs)]
# for each image, we compute multi level guided anchors
guided_anchors_list = []
loc_mask_list = []
for img_id, img_meta in enumerate(img_metas):
multi_level_guided_anchors = []
multi_level_loc_mask = []
for i in range(num_levels):
squares = squares_list[img_id][i]
shape_pred = shape_preds[i][img_id]
loc_pred = loc_preds[i][img_id]
guided_anchors, loc_mask = self._get_guided_anchors_single(
squares,
shape_pred,
loc_pred,
use_loc_filter=use_loc_filter)
multi_level_guided_anchors.append(guided_anchors)
multi_level_loc_mask.append(loc_mask)
guided_anchors_list.append(multi_level_guided_anchors)
loc_mask_list.append(multi_level_loc_mask)
return squares_list, guided_anchors_list, loc_mask_list
def _get_guided_anchors_single(self,
squares,
shape_pred,
loc_pred,
use_loc_filter=False):
"""Get guided anchors and loc masks for a single level.
Args:
            squares (tensor): Squares of a single level.
            shape_pred (tensor): Shape predictions of a single level.
            loc_pred (tensor): Loc predictions of a single level.
            use_loc_filter (bool): Use loc filter or not.
Returns:
tuple: guided anchors, location masks
"""
# calculate location filtering mask
loc_pred = loc_pred.sigmoid().detach()
if use_loc_filter:
loc_mask = loc_pred >= self.loc_filter_thr
else:
loc_mask = loc_pred >= 0.0
mask = loc_mask.permute(1, 2, 0).expand(-1, -1, self.num_anchors)
mask = mask.contiguous().view(-1)
# calculate guided anchors
squares = squares[mask]
anchor_deltas = shape_pred.permute(1, 2, 0).contiguous().view(
-1, 2).detach()[mask]
bbox_deltas = anchor_deltas.new_full(squares.size(), 0)
bbox_deltas[:, 2:] = anchor_deltas
guided_anchors = self.anchor_coder.decode(
squares, bbox_deltas, wh_ratio_clip=1e-6)
return guided_anchors, mask
def ga_loc_targets(self, gt_bboxes_list, featmap_sizes):
"""Compute location targets for guided anchoring.
Each feature map is divided into positive, negative and ignore regions.
- positive regions: target 1, weight 1
- ignore regions: target 0, weight 0
- negative regions: target 0, weight 0.1
Args:
gt_bboxes_list (list[Tensor]): Gt bboxes of each image.
featmap_sizes (list[tuple]): Multi level sizes of each feature
maps.
Returns:
tuple
"""
anchor_scale = self.approx_anchor_generator.octave_base_scale
anchor_strides = self.approx_anchor_generator.strides
# Currently only supports same stride in x and y direction.
for stride in anchor_strides:
assert (stride[0] == stride[1])
anchor_strides = [stride[0] for stride in anchor_strides]
center_ratio = self.train_cfg.center_ratio
ignore_ratio = self.train_cfg.ignore_ratio
img_per_gpu = len(gt_bboxes_list)
num_lvls = len(featmap_sizes)
r1 = (1 - center_ratio) / 2
r2 = (1 - ignore_ratio) / 2
all_loc_targets = []
all_loc_weights = []
all_ignore_map = []
for lvl_id in range(num_lvls):
h, w = featmap_sizes[lvl_id]
loc_targets = torch.zeros(
img_per_gpu,
1,
h,
w,
device=gt_bboxes_list[0].device,
dtype=torch.float32)
loc_weights = torch.full_like(loc_targets, -1)
ignore_map = torch.zeros_like(loc_targets)
all_loc_targets.append(loc_targets)
all_loc_weights.append(loc_weights)
all_ignore_map.append(ignore_map)
for img_id in range(img_per_gpu):
gt_bboxes = gt_bboxes_list[img_id]
scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) *
(gt_bboxes[:, 3] - gt_bboxes[:, 1]))
min_anchor_size = scale.new_full(
(1, ), float(anchor_scale * anchor_strides[0]))
# assign gt bboxes to different feature levels w.r.t. their scales
target_lvls = torch.floor(
torch.log2(scale) - torch.log2(min_anchor_size) + 0.5)
target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long()
for gt_id in range(gt_bboxes.size(0)):
lvl = target_lvls[gt_id].item()
# rescaled to corresponding feature map
gt_ = gt_bboxes[gt_id, :4] / anchor_strides[lvl]
# calculate ignore regions
ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region(
gt_, r2, featmap_sizes[lvl])
# calculate positive (center) regions
ctr_x1, ctr_y1, ctr_x2, ctr_y2 = calc_region(
gt_, r1, featmap_sizes[lvl])
all_loc_targets[lvl][img_id, 0, ctr_y1:ctr_y2 + 1,
ctr_x1:ctr_x2 + 1] = 1
all_loc_weights[lvl][img_id, 0, ignore_y1:ignore_y2 + 1,
ignore_x1:ignore_x2 + 1] = 0
all_loc_weights[lvl][img_id, 0, ctr_y1:ctr_y2 + 1,
ctr_x1:ctr_x2 + 1] = 1
# calculate ignore map on nearby low level feature
if lvl > 0:
d_lvl = lvl - 1
# rescaled to corresponding feature map
gt_ = gt_bboxes[gt_id, :4] / anchor_strides[d_lvl]
ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region(
gt_, r2, featmap_sizes[d_lvl])
all_ignore_map[d_lvl][img_id, 0, ignore_y1:ignore_y2 + 1,
ignore_x1:ignore_x2 + 1] = 1
# calculate ignore map on nearby high level feature
if lvl < num_lvls - 1:
u_lvl = lvl + 1
# rescaled to corresponding feature map
gt_ = gt_bboxes[gt_id, :4] / anchor_strides[u_lvl]
ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region(
gt_, r2, featmap_sizes[u_lvl])
all_ignore_map[u_lvl][img_id, 0, ignore_y1:ignore_y2 + 1,
ignore_x1:ignore_x2 + 1] = 1
for lvl_id in range(num_lvls):
# ignore negative regions w.r.t. ignore map
all_loc_weights[lvl_id][(all_loc_weights[lvl_id] < 0)
& (all_ignore_map[lvl_id] > 0)] = 0
# set negative regions with weight 0.1
all_loc_weights[lvl_id][all_loc_weights[lvl_id] < 0] = 0.1
# loc average factor to balance loss
loc_avg_factor = sum(
[t.size(0) * t.size(-1) * t.size(-2)
for t in all_loc_targets]) / 200
return all_loc_targets, all_loc_weights, loc_avg_factor
def _ga_shape_target_single(self,
flat_approxs,
inside_flags,
flat_squares,
gt_bboxes,
gt_bboxes_ignore,
img_meta,
unmap_outputs=True):
"""Compute guided anchoring targets.
This function returns sampled anchors and gt bboxes directly
        rather than calculating regression targets.
Args:
            flat_approxs (Tensor): flat approxs of a single image,
                shape (approxs_per_octave * n, 4)
            inside_flags (Tensor): inside flags of a single image,
                shape (n, ).
            flat_squares (Tensor): flat squares of a single image,
                shape (n, 4)
            gt_bboxes (Tensor): Ground truth bboxes of a single image.
            gt_bboxes_ignore (Tensor): Ground truth bboxes to be ignored.
            img_meta (dict): Meta info of a single image.
            unmap_outputs (bool): unmap outputs or not.
Returns:
tuple
"""
if not inside_flags.any():
return (None, ) * 5
# assign gt and sample anchors
expand_inside_flags = inside_flags[:, None].expand(
-1, self.approxs_per_octave).reshape(-1)
approxs = flat_approxs[expand_inside_flags, :]
squares = flat_squares[inside_flags, :]
assign_result = self.ga_assigner.assign(approxs, squares,
self.approxs_per_octave,
gt_bboxes, gt_bboxes_ignore)
sampling_result = self.ga_sampler.sample(assign_result, squares,
gt_bboxes)
bbox_anchors = torch.zeros_like(squares)
bbox_gts = torch.zeros_like(squares)
bbox_weights = torch.zeros_like(squares)
pos_inds = sampling_result.pos_inds
neg_inds = sampling_result.neg_inds
if len(pos_inds) > 0:
bbox_anchors[pos_inds, :] = sampling_result.pos_bboxes
bbox_gts[pos_inds, :] = sampling_result.pos_gt_bboxes
bbox_weights[pos_inds, :] = 1.0
# map up to original set of anchors
if unmap_outputs:
num_total_anchors = flat_squares.size(0)
bbox_anchors = unmap(bbox_anchors, num_total_anchors, inside_flags)
bbox_gts = unmap(bbox_gts, num_total_anchors, inside_flags)
bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags)
return (bbox_anchors, bbox_gts, bbox_weights, pos_inds, neg_inds)
def ga_shape_targets(self,
approx_list,
inside_flag_list,
square_list,
gt_bboxes_list,
img_metas,
gt_bboxes_ignore_list=None,
unmap_outputs=True):
"""Compute guided anchoring targets.
Args:
approx_list (list[list]): Multi level approxs of each image.
inside_flag_list (list[list]): Multi level inside flags of each
image.
square_list (list[list]): Multi level squares of each image.
gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.
img_metas (list[dict]): Meta info of each image.
gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes.
unmap_outputs (bool): unmap outputs or not.
Returns:
tuple
"""
num_imgs = len(img_metas)
assert len(approx_list) == len(inside_flag_list) == len(
square_list) == num_imgs
# anchor number of multi levels
num_level_squares = [squares.size(0) for squares in square_list[0]]
# concat all level anchors and flags to a single tensor
inside_flag_flat_list = []
approx_flat_list = []
square_flat_list = []
for i in range(num_imgs):
assert len(square_list[i]) == len(inside_flag_list[i])
inside_flag_flat_list.append(torch.cat(inside_flag_list[i]))
approx_flat_list.append(torch.cat(approx_list[i]))
square_flat_list.append(torch.cat(square_list[i]))
# compute targets for each image
if gt_bboxes_ignore_list is None:
gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
(all_bbox_anchors, all_bbox_gts, all_bbox_weights, pos_inds_list,
neg_inds_list) = multi_apply(
self._ga_shape_target_single,
approx_flat_list,
inside_flag_flat_list,
square_flat_list,
gt_bboxes_list,
gt_bboxes_ignore_list,
img_metas,
unmap_outputs=unmap_outputs)
# no valid anchors
if any([bbox_anchors is None for bbox_anchors in all_bbox_anchors]):
return None
# sampled anchors of all images
num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list])
num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list])
# split targets to a list w.r.t. multiple levels
bbox_anchors_list = images_to_levels(all_bbox_anchors,
num_level_squares)
bbox_gts_list = images_to_levels(all_bbox_gts, num_level_squares)
bbox_weights_list = images_to_levels(all_bbox_weights,
num_level_squares)
return (bbox_anchors_list, bbox_gts_list, bbox_weights_list,
num_total_pos, num_total_neg)
def loss_shape_single(self, shape_pred, bbox_anchors, bbox_gts,
anchor_weights, anchor_total_num):
shape_pred = shape_pred.permute(0, 2, 3, 1).contiguous().view(-1, 2)
bbox_anchors = bbox_anchors.contiguous().view(-1, 4)
bbox_gts = bbox_gts.contiguous().view(-1, 4)
anchor_weights = anchor_weights.contiguous().view(-1, 4)
bbox_deltas = bbox_anchors.new_full(bbox_anchors.size(), 0)
bbox_deltas[:, 2:] += shape_pred
# filter out negative samples to speed-up weighted_bounded_iou_loss
inds = torch.nonzero(
anchor_weights[:, 0] > 0, as_tuple=False).squeeze(1)
bbox_deltas_ = bbox_deltas[inds]
bbox_anchors_ = bbox_anchors[inds]
bbox_gts_ = bbox_gts[inds]
anchor_weights_ = anchor_weights[inds]
pred_anchors_ = self.anchor_coder.decode(
bbox_anchors_, bbox_deltas_, wh_ratio_clip=1e-6)
loss_shape = self.loss_shape(
pred_anchors_,
bbox_gts_,
anchor_weights_,
avg_factor=anchor_total_num)
return loss_shape
def loss_loc_single(self, loc_pred, loc_target, loc_weight,
loc_avg_factor):
loss_loc = self.loss_loc(
loc_pred.reshape(-1, 1),
loc_target.reshape(-1).long(),
loc_weight.reshape(-1),
avg_factor=loc_avg_factor)
return loss_loc
@force_fp32(
apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds'))
def loss(self,
cls_scores,
bbox_preds,
shape_preds,
loc_preds,
gt_bboxes,
gt_labels,
img_metas,
gt_bboxes_ignore=None):
featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
assert len(featmap_sizes) == self.approx_anchor_generator.num_levels
device = cls_scores[0].device
# get loc targets
loc_targets, loc_weights, loc_avg_factor = self.ga_loc_targets(
gt_bboxes, featmap_sizes)
# get sampled approxes
approxs_list, inside_flag_list = self.get_sampled_approxs(
featmap_sizes, img_metas, device=device)
# get squares and guided anchors
squares_list, guided_anchors_list, _ = self.get_anchors(
featmap_sizes, shape_preds, loc_preds, img_metas, device=device)
# get shape targets
shape_targets = self.ga_shape_targets(approxs_list, inside_flag_list,
squares_list, gt_bboxes,
img_metas)
if shape_targets is None:
return None
(bbox_anchors_list, bbox_gts_list, anchor_weights_list, anchor_fg_num,
anchor_bg_num) = shape_targets
anchor_total_num = (
anchor_fg_num if not self.ga_sampling else anchor_fg_num +
anchor_bg_num)
# get anchor targets
label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
cls_reg_targets = self.get_targets(
guided_anchors_list,
inside_flag_list,
gt_bboxes,
img_metas,
gt_bboxes_ignore_list=gt_bboxes_ignore,
gt_labels_list=gt_labels,
label_channels=label_channels)
if cls_reg_targets is None:
return None
(labels_list, label_weights_list, bbox_targets_list, bbox_weights_list,
num_total_pos, num_total_neg) = cls_reg_targets
num_total_samples = (
num_total_pos + num_total_neg if self.sampling else num_total_pos)
# anchor number of multi levels
num_level_anchors = [
anchors.size(0) for anchors in guided_anchors_list[0]
]
# concat all level anchors to a single tensor
concat_anchor_list = []
for i in range(len(guided_anchors_list)):
concat_anchor_list.append(torch.cat(guided_anchors_list[i]))
all_anchor_list = images_to_levels(concat_anchor_list,
num_level_anchors)
# get classification and bbox regression losses
losses_cls, losses_bbox = multi_apply(
self.loss_single,
cls_scores,
bbox_preds,
all_anchor_list,
labels_list,
label_weights_list,
bbox_targets_list,
bbox_weights_list,
num_total_samples=num_total_samples)
# get anchor location loss
losses_loc = []
for i in range(len(loc_preds)):
loss_loc = self.loss_loc_single(
loc_preds[i],
loc_targets[i],
loc_weights[i],
loc_avg_factor=loc_avg_factor)
losses_loc.append(loss_loc)
# get anchor shape loss
losses_shape = []
for i in range(len(shape_preds)):
loss_shape = self.loss_shape_single(
shape_preds[i],
bbox_anchors_list[i],
bbox_gts_list[i],
anchor_weights_list[i],
anchor_total_num=anchor_total_num)
losses_shape.append(loss_shape)
return dict(
loss_cls=losses_cls,
loss_bbox=losses_bbox,
loss_shape=losses_shape,
loss_loc=losses_loc)
@force_fp32(
apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds'))
def get_bboxes(self,
cls_scores,
bbox_preds,
shape_preds,
loc_preds,
img_metas,
cfg=None,
rescale=False):
assert len(cls_scores) == len(bbox_preds) == len(shape_preds) == len(
loc_preds)
num_levels = len(cls_scores)
featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
device = cls_scores[0].device
# get guided anchors
_, guided_anchors, loc_masks = self.get_anchors(
featmap_sizes,
shape_preds,
loc_preds,
img_metas,
use_loc_filter=not self.training,
device=device)
result_list = []
for img_id in range(len(img_metas)):
cls_score_list = [
cls_scores[i][img_id].detach() for i in range(num_levels)
]
bbox_pred_list = [
bbox_preds[i][img_id].detach() for i in range(num_levels)
]
guided_anchor_list = [
guided_anchors[img_id][i].detach() for i in range(num_levels)
]
loc_mask_list = [
loc_masks[img_id][i].detach() for i in range(num_levels)
]
img_shape = img_metas[img_id]['img_shape']
scale_factor = img_metas[img_id]['scale_factor']
proposals = self._get_bboxes_single(cls_score_list, bbox_pred_list,
guided_anchor_list,
loc_mask_list, img_shape,
scale_factor, cfg, rescale)
result_list.append(proposals)
return result_list
def _get_bboxes_single(self,
cls_scores,
bbox_preds,
mlvl_anchors,
mlvl_masks,
img_shape,
scale_factor,
cfg,
rescale=False):
cfg = self.test_cfg if cfg is None else cfg
assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors)
mlvl_bboxes = []
mlvl_scores = []
for cls_score, bbox_pred, anchors, mask in zip(cls_scores, bbox_preds,
mlvl_anchors,
mlvl_masks):
assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
# if no location is kept, end.
if mask.sum() == 0:
continue
# reshape scores and bbox_pred
cls_score = cls_score.permute(1, 2,
0).reshape(-1, self.cls_out_channels)
if self.use_sigmoid_cls:
scores = cls_score.sigmoid()
else:
scores = cls_score.softmax(-1)
bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4)
# filter scores, bbox_pred w.r.t. mask.
# anchors are filtered in get_anchors() beforehand.
scores = scores[mask, :]
bbox_pred = bbox_pred[mask, :]
if scores.dim() == 0:
anchors = anchors.unsqueeze(0)
scores = scores.unsqueeze(0)
bbox_pred = bbox_pred.unsqueeze(0)
# filter anchors, bbox_pred, scores w.r.t. scores
nms_pre = cfg.get('nms_pre', -1)
if nms_pre > 0 and scores.shape[0] > nms_pre:
if self.use_sigmoid_cls:
max_scores, _ = scores.max(dim=1)
else:
# remind that we set FG labels to [0, num_class-1]
# since mmdet v2.0
# BG cat_id: num_class
max_scores, _ = scores[:, :-1].max(dim=1)
_, topk_inds = max_scores.topk(nms_pre)
anchors = anchors[topk_inds, :]
bbox_pred = bbox_pred[topk_inds, :]
scores = scores[topk_inds, :]
bboxes = self.bbox_coder.decode(
anchors, bbox_pred, max_shape=img_shape)
mlvl_bboxes.append(bboxes)
mlvl_scores.append(scores)
mlvl_bboxes = torch.cat(mlvl_bboxes)
if rescale:
mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor)
mlvl_scores = torch.cat(mlvl_scores)
if self.use_sigmoid_cls:
# Add a dummy background class to the backend when using sigmoid
# remind that we set FG labels to [0, num_class-1] since mmdet v2.0
# BG cat_id: num_class
padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1)
mlvl_scores = torch.cat([mlvl_scores, padding], dim=1)
# multi class NMS
det_bboxes, det_labels = multiclass_nms(mlvl_bboxes, mlvl_scores,
cfg.score_thr, cfg.nms,
cfg.max_per_img)
return det_bboxes, det_labels
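

# ---------------------------------------------------------------------------
# Standalone sketch (not part of the original module): a simplified, hedged
# re-implementation of the masking step in `_get_guided_anchors_single`, using
# plain tensors so it can run without building the full head.  The final
# coder.decode call is omitted; only the loc-filter mask and the (dw, dh)
# delta assembly are reproduced.
if __name__ == '__main__':
    h, w, num_anchors, thr = 4, 4, 1, 0.5
    squares = torch.rand(h * w * num_anchors, 4)           # square anchors
    shape_pred = torch.randn(2, h, w)                       # (dw, dh) per cell
    loc_pred = torch.randn(1, h, w)                         # objectness logits

    loc_mask = loc_pred.sigmoid() >= thr
    mask = loc_mask.permute(1, 2, 0).expand(-1, -1, num_anchors).reshape(-1)

    kept_squares = squares[mask]
    anchor_deltas = shape_pred.permute(1, 2, 0).reshape(-1, 2)[mask]
    bbox_deltas = anchor_deltas.new_zeros(kept_squares.size(0), 4)
    bbox_deltas[:, 2:] = anchor_deltas                      # only w/h adapted
    print('kept', kept_squares.size(0), 'of', h * w * num_anchors, 'locations')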
| insightface/detection/scrfd/mmdet/models/dense_heads/guided_anchor_head.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/models/dense_heads/guided_anchor_head.py",
"repo_id": "insightface",
"token_count": 19308
} | 105 |
# Copyright (c) 2019 Western Digital Corporation or its affiliates.
import warnings
import torch
import torch.nn as nn
import torch.nn.functional as F
from mmcv.cnn import ConvModule, normal_init
from mmcv.runner import force_fp32
from mmdet.core import (build_anchor_generator, build_assigner,
build_bbox_coder, build_sampler, images_to_levels,
multi_apply, multiclass_nms)
from ..builder import HEADS, build_loss
from .base_dense_head import BaseDenseHead
from .dense_test_mixins import BBoxTestMixin
@HEADS.register_module()
class YOLOV3Head(BaseDenseHead, BBoxTestMixin):
"""YOLOV3Head Paper link: https://arxiv.org/abs/1804.02767.
Args:
num_classes (int): The number of object classes (w/o background)
in_channels (List[int]): Number of input channels per scale.
out_channels (List[int]): The number of output channels per scale
before the final 1x1 layer. Default: (1024, 512, 256).
anchor_generator (dict): Config dict for anchor generator
bbox_coder (dict): Config of bounding box coder.
featmap_strides (List[int]): The stride of each scale.
Should be in descending order. Default: (32, 16, 8).
        one_hot_smoother (float): Set a non-zero value to enable label
            smoothing. Default: 0.
conv_cfg (dict): Config dict for convolution layer. Default: None.
norm_cfg (dict): Dictionary to construct and config norm layer.
Default: dict(type='BN', requires_grad=True)
act_cfg (dict): Config dict for activation layer.
Default: dict(type='LeakyReLU', negative_slope=0.1).
loss_cls (dict): Config of classification loss.
loss_conf (dict): Config of confidence loss.
loss_xy (dict): Config of xy coordinate loss.
loss_wh (dict): Config of wh coordinate loss.
train_cfg (dict): Training config of YOLOV3 head. Default: None.
test_cfg (dict): Testing config of YOLOV3 head. Default: None.
"""
def __init__(self,
num_classes,
in_channels,
out_channels=(1024, 512, 256),
anchor_generator=dict(
type='YOLOAnchorGenerator',
base_sizes=[[(116, 90), (156, 198), (373, 326)],
[(30, 61), (62, 45), (59, 119)],
[(10, 13), (16, 30), (33, 23)]],
strides=[32, 16, 8]),
bbox_coder=dict(type='YOLOBBoxCoder'),
featmap_strides=[32, 16, 8],
one_hot_smoother=0.,
conv_cfg=None,
norm_cfg=dict(type='BN', requires_grad=True),
act_cfg=dict(type='LeakyReLU', negative_slope=0.1),
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=True,
loss_weight=1.0),
loss_conf=dict(
type='CrossEntropyLoss',
use_sigmoid=True,
loss_weight=1.0),
loss_xy=dict(
type='CrossEntropyLoss',
use_sigmoid=True,
loss_weight=1.0),
loss_wh=dict(type='MSELoss', loss_weight=1.0),
train_cfg=None,
test_cfg=None):
super(YOLOV3Head, self).__init__()
# Check params
assert (len(in_channels) == len(out_channels) == len(featmap_strides))
self.num_classes = num_classes
self.in_channels = in_channels
self.out_channels = out_channels
self.featmap_strides = featmap_strides
self.train_cfg = train_cfg
self.test_cfg = test_cfg
if self.train_cfg:
self.assigner = build_assigner(self.train_cfg.assigner)
if hasattr(self.train_cfg, 'sampler'):
sampler_cfg = self.train_cfg.sampler
else:
sampler_cfg = dict(type='PseudoSampler')
self.sampler = build_sampler(sampler_cfg, context=self)
self.one_hot_smoother = one_hot_smoother
self.conv_cfg = conv_cfg
self.norm_cfg = norm_cfg
self.act_cfg = act_cfg
self.bbox_coder = build_bbox_coder(bbox_coder)
self.anchor_generator = build_anchor_generator(anchor_generator)
self.loss_cls = build_loss(loss_cls)
self.loss_conf = build_loss(loss_conf)
self.loss_xy = build_loss(loss_xy)
self.loss_wh = build_loss(loss_wh)
# usually the numbers of anchors for each level are the same
# except SSD detectors
self.num_anchors = self.anchor_generator.num_base_anchors[0]
assert len(
self.anchor_generator.num_base_anchors) == len(featmap_strides)
self._init_layers()
@property
def num_levels(self):
return len(self.featmap_strides)
@property
def num_attrib(self):
"""int: number of attributes in pred_map, bboxes (4) +
objectness (1) + num_classes"""
return 5 + self.num_classes
def _init_layers(self):
self.convs_bridge = nn.ModuleList()
self.convs_pred = nn.ModuleList()
for i in range(self.num_levels):
conv_bridge = ConvModule(
self.in_channels[i],
self.out_channels[i],
3,
padding=1,
conv_cfg=self.conv_cfg,
norm_cfg=self.norm_cfg,
act_cfg=self.act_cfg)
conv_pred = nn.Conv2d(self.out_channels[i],
self.num_anchors * self.num_attrib, 1)
self.convs_bridge.append(conv_bridge)
self.convs_pred.append(conv_pred)
def init_weights(self):
"""Initialize weights of the head."""
for m in self.convs_pred:
normal_init(m, std=0.01)
def forward(self, feats):
"""Forward features from the upstream network.
Args:
feats (tuple[Tensor]): Features from the upstream network, each is
a 4D-tensor.
Returns:
            tuple[Tensor]: A tuple of multi-level prediction maps, each a
                4D-tensor of shape
                (batch_size, num_anchors * (5 + num_classes), height, width).
"""
assert len(feats) == self.num_levels
pred_maps = []
for i in range(self.num_levels):
x = feats[i]
x = self.convs_bridge[i](x)
pred_map = self.convs_pred[i](x)
pred_maps.append(pred_map)
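        # Descriptive note (not original): the trailing comma in the return
        # below wraps the per-level maps in a one-element tuple, so that
        # downstream calls of the form ``self.loss(*outs, gt_bboxes, ...)``
        # receive all maps as the single ``pred_maps`` argument.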
return tuple(pred_maps),
@force_fp32(apply_to=('pred_maps', ))
def get_bboxes(self,
pred_maps,
img_metas,
cfg=None,
rescale=False,
with_nms=True):
"""Transform network output for a batch into bbox predictions.
Args:
pred_maps (list[Tensor]): Raw predictions for a batch of images.
img_metas (list[dict]): Meta information of each image, e.g.,
image size, scaling factor, etc.
cfg (mmcv.Config | None): Test / postprocessing configuration,
if None, test_cfg would be used. Default: None.
rescale (bool): If True, return boxes in original image space.
Default: False.
with_nms (bool): If True, do nms before return boxes.
Default: True.
Returns:
list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple.
The first item is an (n, 5) tensor, where the first 4 columns
are bounding box positions (tl_x, tl_y, br_x, br_y) and the
5-th column is a score between 0 and 1. The second item is a
(n,) tensor where each item is the predicted class label of the
corresponding box.
"""
result_list = []
num_levels = len(pred_maps)
for img_id in range(len(img_metas)):
pred_maps_list = [
pred_maps[i][img_id].detach() for i in range(num_levels)
]
scale_factor = img_metas[img_id]['scale_factor']
proposals = self._get_bboxes_single(pred_maps_list, scale_factor,
cfg, rescale, with_nms)
result_list.append(proposals)
return result_list
def _get_bboxes_single(self,
pred_maps_list,
scale_factor,
cfg,
rescale=False,
with_nms=True):
"""Transform outputs for a single batch item into bbox predictions.
Args:
pred_maps_list (list[Tensor]): Prediction maps for different scales
of each single image in the batch.
scale_factor (ndarray): Scale factor of the image arrange as
(w_scale, h_scale, w_scale, h_scale).
cfg (mmcv.Config | None): Test / postprocessing configuration,
if None, test_cfg would be used.
rescale (bool): If True, return boxes in original image space.
Default: False.
with_nms (bool): If True, do nms before return boxes.
Default: True.
Returns:
tuple(Tensor):
det_bboxes (Tensor): BBox predictions in shape (n, 5), where
the first 4 columns are bounding box positions
(tl_x, tl_y, br_x, br_y) and the 5-th column is a score
between 0 and 1.
det_labels (Tensor): A (n,) tensor where each item is the
predicted class label of the corresponding box.
"""
cfg = self.test_cfg if cfg is None else cfg
assert len(pred_maps_list) == self.num_levels
multi_lvl_bboxes = []
multi_lvl_cls_scores = []
multi_lvl_conf_scores = []
num_levels = len(pred_maps_list)
featmap_sizes = [
pred_maps_list[i].shape[-2:] for i in range(num_levels)
]
multi_lvl_anchors = self.anchor_generator.grid_anchors(
featmap_sizes, pred_maps_list[0][0].device)
for i in range(self.num_levels):
# get some key info for current scale
pred_map = pred_maps_list[i]
stride = self.featmap_strides[i]
# (h, w, num_anchors*num_attrib) -> (h*w*num_anchors, num_attrib)
pred_map = pred_map.permute(1, 2, 0).reshape(-1, self.num_attrib)
pred_map[..., :2] = torch.sigmoid(pred_map[..., :2])
bbox_pred = self.bbox_coder.decode(multi_lvl_anchors[i],
pred_map[..., :4], stride)
# conf and cls
conf_pred = torch.sigmoid(pred_map[..., 4]).view(-1)
cls_pred = torch.sigmoid(pred_map[..., 5:]).view(
-1, self.num_classes) # Cls pred one-hot.
# Filtering out all predictions with conf < conf_thr
conf_thr = cfg.get('conf_thr', -1)
if conf_thr > 0:
# add as_tuple=False for compatibility in Pytorch 1.6
# flatten would create a Reshape op with constant values,
# and raise RuntimeError when doing inference in ONNX Runtime
# with a different input image (#4221).
conf_inds = conf_pred.ge(conf_thr).nonzero(
as_tuple=False).squeeze(1)
bbox_pred = bbox_pred[conf_inds, :]
cls_pred = cls_pred[conf_inds, :]
conf_pred = conf_pred[conf_inds]
# Get top-k prediction
nms_pre = cfg.get('nms_pre', -1)
if 0 < nms_pre < conf_pred.size(0) and (
not torch.onnx.is_in_onnx_export()):
_, topk_inds = conf_pred.topk(nms_pre)
bbox_pred = bbox_pred[topk_inds, :]
cls_pred = cls_pred[topk_inds, :]
conf_pred = conf_pred[topk_inds]
# Save the result of current scale
multi_lvl_bboxes.append(bbox_pred)
multi_lvl_cls_scores.append(cls_pred)
multi_lvl_conf_scores.append(conf_pred)
# Merge the results of different scales together
multi_lvl_bboxes = torch.cat(multi_lvl_bboxes)
multi_lvl_cls_scores = torch.cat(multi_lvl_cls_scores)
multi_lvl_conf_scores = torch.cat(multi_lvl_conf_scores)
if with_nms and (multi_lvl_conf_scores.size(0) == 0):
return torch.zeros((0, 5)), torch.zeros((0, ))
if rescale:
multi_lvl_bboxes /= multi_lvl_bboxes.new_tensor(scale_factor)
# In mmdet 2.x, the class_id for background is num_classes.
# i.e., the last column.
padding = multi_lvl_cls_scores.new_zeros(multi_lvl_cls_scores.shape[0],
1)
multi_lvl_cls_scores = torch.cat([multi_lvl_cls_scores, padding],
dim=1)
# Support exporting to onnx without nms
if with_nms and cfg.get('nms', None) is not None:
det_bboxes, det_labels = multiclass_nms(
multi_lvl_bboxes,
multi_lvl_cls_scores,
cfg.score_thr,
cfg.nms,
cfg.max_per_img,
score_factors=multi_lvl_conf_scores)
return det_bboxes, det_labels
else:
return (multi_lvl_bboxes, multi_lvl_cls_scores,
multi_lvl_conf_scores)
@force_fp32(apply_to=('pred_maps', ))
def loss(self,
pred_maps,
gt_bboxes,
gt_labels,
img_metas,
gt_bboxes_ignore=None):
"""Compute loss of the head.
Args:
pred_maps (list[Tensor]): Prediction map for each scale level,
shape (N, num_anchors * num_attrib, H, W)
gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
gt_labels (list[Tensor]): class indices corresponding to each box
img_metas (list[dict]): Meta information of each image, e.g.,
image size, scaling factor, etc.
gt_bboxes_ignore (None | list[Tensor]): specify which bounding
boxes can be ignored when computing the loss.
Returns:
dict[str, Tensor]: A dictionary of loss components.
"""
num_imgs = len(img_metas)
device = pred_maps[0][0].device
featmap_sizes = [
pred_maps[i].shape[-2:] for i in range(self.num_levels)
]
multi_level_anchors = self.anchor_generator.grid_anchors(
featmap_sizes, device)
anchor_list = [multi_level_anchors for _ in range(num_imgs)]
responsible_flag_list = []
for img_id in range(len(img_metas)):
responsible_flag_list.append(
self.anchor_generator.responsible_flags(
featmap_sizes, gt_bboxes[img_id], device))
target_maps_list, neg_maps_list = self.get_targets(
anchor_list, responsible_flag_list, gt_bboxes, gt_labels)
losses_cls, losses_conf, losses_xy, losses_wh = multi_apply(
self.loss_single, pred_maps, target_maps_list, neg_maps_list)
return dict(
loss_cls=losses_cls,
loss_conf=losses_conf,
loss_xy=losses_xy,
loss_wh=losses_wh)
def loss_single(self, pred_map, target_map, neg_map):
"""Compute loss of a single image from a batch.
Args:
pred_map (Tensor): Raw predictions for a single level.
target_map (Tensor): The Ground-Truth target for a single level.
neg_map (Tensor): The negative masks for a single level.
Returns:
tuple:
loss_cls (Tensor): Classification loss.
loss_conf (Tensor): Confidence loss.
loss_xy (Tensor): Regression loss of x, y coordinate.
loss_wh (Tensor): Regression loss of w, h coordinate.
"""
num_imgs = len(pred_map)
pred_map = pred_map.permute(0, 2, 3,
1).reshape(num_imgs, -1, self.num_attrib)
neg_mask = neg_map.float()
pos_mask = target_map[..., 4]
pos_and_neg_mask = neg_mask + pos_mask
pos_mask = pos_mask.unsqueeze(dim=-1)
if torch.max(pos_and_neg_mask) > 1.:
warnings.warn('There is overlap between pos and neg sample.')
pos_and_neg_mask = pos_and_neg_mask.clamp(min=0., max=1.)
pred_xy = pred_map[..., :2]
pred_wh = pred_map[..., 2:4]
pred_conf = pred_map[..., 4]
pred_label = pred_map[..., 5:]
target_xy = target_map[..., :2]
target_wh = target_map[..., 2:4]
target_conf = target_map[..., 4]
target_label = target_map[..., 5:]
loss_cls = self.loss_cls(pred_label, target_label, weight=pos_mask)
loss_conf = self.loss_conf(
pred_conf, target_conf, weight=pos_and_neg_mask)
loss_xy = self.loss_xy(pred_xy, target_xy, weight=pos_mask)
loss_wh = self.loss_wh(pred_wh, target_wh, weight=pos_mask)
return loss_cls, loss_conf, loss_xy, loss_wh
def get_targets(self, anchor_list, responsible_flag_list, gt_bboxes_list,
gt_labels_list):
"""Compute target maps for anchors in multiple images.
Args:
anchor_list (list[list[Tensor]]): Multi level anchors of each
image. The outer list indicates images, and the inner list
corresponds to feature levels of the image. Each element of
the inner list is a tensor of shape (num_total_anchors, 4).
responsible_flag_list (list[list[Tensor]]): Multi level responsible
flags of each image. Each element is a tensor of shape
(num_total_anchors, )
gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.
gt_labels_list (list[Tensor]): Ground truth labels of each box.
Returns:
tuple: Usually returns a tuple containing learning targets.
- target_map_list (list[Tensor]): Target map of each level.
- neg_map_list (list[Tensor]): Negative map of each level.
"""
num_imgs = len(anchor_list)
# anchor number of multi levels
num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]]
results = multi_apply(self._get_targets_single, anchor_list,
responsible_flag_list, gt_bboxes_list,
gt_labels_list)
all_target_maps, all_neg_maps = results
assert num_imgs == len(all_target_maps) == len(all_neg_maps)
target_maps_list = images_to_levels(all_target_maps, num_level_anchors)
neg_maps_list = images_to_levels(all_neg_maps, num_level_anchors)
return target_maps_list, neg_maps_list
def _get_targets_single(self, anchors, responsible_flags, gt_bboxes,
gt_labels):
"""Generate matching bounding box prior and converted GT.
Args:
anchors (list[Tensor]): Multi-level anchors of the image.
responsible_flags (list[Tensor]): Multi-level responsible flags of
anchors
gt_bboxes (Tensor): Ground truth bboxes of single image.
gt_labels (Tensor): Ground truth labels of single image.
Returns:
tuple:
                target_map (Tensor): Prediction target map of each
scale level, shape (num_total_anchors,
5+num_classes)
neg_map (Tensor): Negative map of each scale level,
shape (num_total_anchors,)
"""
anchor_strides = []
for i in range(len(anchors)):
anchor_strides.append(
torch.tensor(self.featmap_strides[i],
device=gt_bboxes.device).repeat(len(anchors[i])))
concat_anchors = torch.cat(anchors)
concat_responsible_flags = torch.cat(responsible_flags)
anchor_strides = torch.cat(anchor_strides)
assert len(anchor_strides) == len(concat_anchors) == \
len(concat_responsible_flags)
assign_result = self.assigner.assign(concat_anchors,
concat_responsible_flags,
gt_bboxes)
sampling_result = self.sampler.sample(assign_result, concat_anchors,
gt_bboxes)
target_map = concat_anchors.new_zeros(
concat_anchors.size(0), self.num_attrib)
target_map[sampling_result.pos_inds, :4] = self.bbox_coder.encode(
sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes,
anchor_strides[sampling_result.pos_inds])
target_map[sampling_result.pos_inds, 4] = 1
gt_labels_one_hot = F.one_hot(
gt_labels, num_classes=self.num_classes).float()
if self.one_hot_smoother != 0: # label smooth
gt_labels_one_hot = gt_labels_one_hot * (
1 - self.one_hot_smoother
) + self.one_hot_smoother / self.num_classes
target_map[sampling_result.pos_inds, 5:] = gt_labels_one_hot[
sampling_result.pos_assigned_gt_inds]
neg_map = concat_anchors.new_zeros(
concat_anchors.size(0), dtype=torch.uint8)
neg_map[sampling_result.neg_inds] = 1
return target_map, neg_map
def aug_test(self, feats, img_metas, rescale=False):
"""Test function with test time augmentation.
Args:
feats (list[Tensor]): the outer list indicates test-time
augmentations and inner Tensor should have a shape NxCxHxW,
which contains features for all images in the batch.
img_metas (list[list[dict]]): the outer list indicates test-time
augs (multiscale, flip, etc.) and the inner list indicates
images in a batch. each dict has image information.
rescale (bool, optional): Whether to rescale the results.
Defaults to False.
Returns:
list[ndarray]: bbox results of each class
"""
return self.aug_test_bboxes(feats, img_metas, rescale=rescale)
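

# ---------------------------------------------------------------------------
# Standalone sketch (not part of the original module): a hedged illustration of
# the per-level prediction-map layout used in `_get_bboxes_single`.  Each of
# the ``num_anchors * (5 + num_classes)`` channels is flattened to one row per
# anchor, then split into xy / wh / objectness / class scores.
if __name__ == '__main__':
    num_classes, num_anchors, h, w = 80, 3, 13, 13
    num_attrib = 5 + num_classes
    pred_map = torch.randn(num_anchors * num_attrib, h, w)

    flat = pred_map.permute(1, 2, 0).reshape(-1, num_attrib)
    xy = torch.sigmoid(flat[..., :2])        # offsets inside each grid cell
    wh = flat[..., 2:4]                      # raw sizes, decoded against anchors later
    conf = torch.sigmoid(flat[..., 4])       # objectness
    cls_scores = torch.sigmoid(flat[..., 5:])
    print(flat.shape, xy.shape, conf.shape, cls_scores.shape)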
| insightface/detection/scrfd/mmdet/models/dense_heads/yolo_head.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/models/dense_heads/yolo_head.py",
"repo_id": "insightface",
"token_count": 11422
} | 106 |
from ..builder import DETECTORS
from .two_stage import TwoStageDetector
@DETECTORS.register_module()
class MaskScoringRCNN(TwoStageDetector):
"""Mask Scoring RCNN.
https://arxiv.org/abs/1903.00241
"""
def __init__(self,
backbone,
rpn_head,
roi_head,
train_cfg,
test_cfg,
neck=None,
pretrained=None):
super(MaskScoringRCNN, self).__init__(
backbone=backbone,
neck=neck,
rpn_head=rpn_head,
roi_head=roi_head,
train_cfg=train_cfg,
test_cfg=test_cfg,
pretrained=pretrained)
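

# Descriptive note (not original code): the mask-scoring specific logic (the
# MaskIoU head and its loss) typically lives in the configured ``roi_head``
# (e.g. ``MaskScoringRoIHead``); this class only gives the standard two-stage
# pipeline a dedicated name.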
| insightface/detection/scrfd/mmdet/models/detectors/mask_scoring_rcnn.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/models/detectors/mask_scoring_rcnn.py",
"repo_id": "insightface",
"token_count": 395
} | 107 |
import torch
import torch.nn as nn
import torch.nn.functional as F
from ..builder import LOSSES
def ae_loss_per_image(tl_preds, br_preds, match):
"""Associative Embedding Loss in one image.
    Associative Embedding Loss includes two parts: a pull loss and a push
    loss. The pull loss makes embedding vectors from the same object closer
    to each other. The push loss distinguishes embedding vectors from
    different objects and makes the gap between them large enough.
During computing, usually there are 3 cases:
- no object in image: both pull loss and push loss will be 0.
- one object in image: push loss will be 0 and pull loss is computed
      by the two corners of the only object.
- more than one objects in image: pull loss is computed by corner pairs
from each object, push loss is computed by each object with all
other objects. We use confusion matrix with 0 in diagonal to
compute the push loss.
Args:
        tl_preds (tensor): Embedding feature map of the top-left corner.
        br_preds (tensor): Embedding feature map of the bottom-right corner.
        match (list): Downsampled corner coordinate pairs of each ground
            truth box.
"""
tl_list, br_list, me_list = [], [], []
if len(match) == 0: # no object in image
pull_loss = tl_preds.sum() * 0.
push_loss = tl_preds.sum() * 0.
else:
for m in match:
[tl_y, tl_x], [br_y, br_x] = m
tl_e = tl_preds[:, tl_y, tl_x].view(-1, 1)
br_e = br_preds[:, br_y, br_x].view(-1, 1)
tl_list.append(tl_e)
br_list.append(br_e)
me_list.append((tl_e + br_e) / 2.0)
tl_list = torch.cat(tl_list)
br_list = torch.cat(br_list)
me_list = torch.cat(me_list)
assert tl_list.size() == br_list.size()
# N is object number in image, M is dimension of embedding vector
N, M = tl_list.size()
pull_loss = (tl_list - me_list).pow(2) + (br_list - me_list).pow(2)
pull_loss = pull_loss.sum() / N
margin = 1 # exp setting of CornerNet, details in section 3.3 of paper
# confusion matrix of push loss
conf_mat = me_list.expand((N, N, M)).permute(1, 0, 2) - me_list
conf_weight = 1 - torch.eye(N).type_as(me_list)
conf_mat = conf_weight * (margin - conf_mat.sum(-1).abs())
if N > 1: # more than one object in current image
push_loss = F.relu(conf_mat).sum() / (N * (N - 1))
else:
push_loss = tl_preds.sum() * 0.
return pull_loss, push_loss
@LOSSES.register_module()
class AssociativeEmbeddingLoss(nn.Module):
"""Associative Embedding Loss.
More details can be found in
`Associative Embedding <https://arxiv.org/abs/1611.05424>`_ and
`CornerNet <https://arxiv.org/abs/1808.01244>`_ .
Code is modified from `kp_utils.py <https://github.com/princeton-vl/CornerNet/blob/master/models/py_utils/kp_utils.py#L180>`_ # noqa: E501
Args:
pull_weight (float): Loss weight for corners from same object.
push_weight (float): Loss weight for corners from different object.
"""
def __init__(self, pull_weight=0.25, push_weight=0.25):
super(AssociativeEmbeddingLoss, self).__init__()
self.pull_weight = pull_weight
self.push_weight = push_weight
def forward(self, pred, target, match):
"""Forward function."""
batch = pred.size(0)
pull_all, push_all = 0.0, 0.0
for i in range(batch):
pull, push = ae_loss_per_image(pred[i], target[i], match[i])
pull_all += self.pull_weight * pull
push_all += self.push_weight * push
return pull_all, push_all
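

# ---------------------------------------------------------------------------
# Illustrative usage (not part of the original module): a minimal, hedged
# example of `ae_loss_per_image` on a 1-channel embedding map with two
# objects, matching the single-dimension embedding used by CornerNet.
if __name__ == '__main__':
    tl_emb = torch.rand(1, 8, 8)   # top-left corner embedding map (C=1, H, W)
    br_emb = torch.rand(1, 8, 8)   # bottom-right corner embedding map
    # Downsampled [y, x] corner pairs for two ground-truth boxes.
    match = [[[1, 1], [4, 5]], [[2, 6], [6, 7]]]
    pull, push = ae_loss_per_image(tl_emb, br_emb, match)
    print('pull:', float(pull), 'push:', float(push))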
| insightface/detection/scrfd/mmdet/models/losses/ae_loss.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/models/losses/ae_loss.py",
"repo_id": "insightface",
"token_count": 1613
} | 108 |
import warnings
import torch.nn as nn
import torch.nn.functional as F
from mmcv.cnn import ConvModule, xavier_init
from mmcv.runner import auto_fp16
from ..builder import NECKS
@NECKS.register_module()
class FPN(nn.Module):
r"""Feature Pyramid Network.
This is an implementation of paper `Feature Pyramid Networks for Object
Detection <https://arxiv.org/abs/1612.03144>`_.
Args:
in_channels (List[int]): Number of input channels per scale.
out_channels (int): Number of output channels (used at each scale)
num_outs (int): Number of output scales.
start_level (int): Index of the start input backbone level used to
build the feature pyramid. Default: 0.
end_level (int): Index of the end input backbone level (exclusive) to
build the feature pyramid. Default: -1, which means the last level.
add_extra_convs (bool | str): If bool, it decides whether to add conv
layers on top of the original feature maps. Default to False.
If True, its actual mode is specified by `extra_convs_on_inputs`.
If str, it specifies the source feature map of the extra convs.
Only the following options are allowed
- 'on_input': Last feat map of neck inputs (i.e. backbone feature).
- 'on_lateral': Last feature map after lateral convs.
- 'on_output': The last output feature map after fpn convs.
extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs
on the original feature from the backbone. If True,
it is equivalent to `add_extra_convs='on_input'`. If False, it is
equivalent to set `add_extra_convs='on_output'`. Default to True.
relu_before_extra_convs (bool): Whether to apply relu before the extra
conv. Default: False.
no_norm_on_lateral (bool): Whether to apply norm on lateral.
Default: False.
conv_cfg (dict): Config dict for convolution layer. Default: None.
norm_cfg (dict): Config dict for normalization layer. Default: None.
act_cfg (str): Config dict for activation layer in ConvModule.
Default: None.
upsample_cfg (dict): Config dict for interpolate layer.
Default: `dict(mode='nearest')`
Example:
>>> import torch
>>> in_channels = [2, 3, 5, 7]
>>> scales = [340, 170, 84, 43]
>>> inputs = [torch.rand(1, c, s, s)
... for c, s in zip(in_channels, scales)]
>>> self = FPN(in_channels, 11, len(in_channels)).eval()
>>> outputs = self.forward(inputs)
>>> for i in range(len(outputs)):
... print(f'outputs[{i}].shape = {outputs[i].shape}')
outputs[0].shape = torch.Size([1, 11, 340, 340])
outputs[1].shape = torch.Size([1, 11, 170, 170])
outputs[2].shape = torch.Size([1, 11, 84, 84])
outputs[3].shape = torch.Size([1, 11, 43, 43])
"""
def __init__(self,
in_channels,
out_channels,
num_outs,
start_level=0,
end_level=-1,
add_extra_convs=False,
extra_convs_on_inputs=True,
relu_before_extra_convs=False,
no_norm_on_lateral=False,
conv_cfg=None,
norm_cfg=None,
act_cfg=None,
upsample_cfg=dict(mode='nearest')):
super(FPN, self).__init__()
assert isinstance(in_channels, list)
self.in_channels = in_channels
self.out_channels = out_channels
self.num_ins = len(in_channels)
self.num_outs = num_outs
self.relu_before_extra_convs = relu_before_extra_convs
self.no_norm_on_lateral = no_norm_on_lateral
self.fp16_enabled = False
self.upsample_cfg = upsample_cfg.copy()
if end_level == -1:
self.backbone_end_level = self.num_ins
assert num_outs >= self.num_ins - start_level
else:
# if end_level < inputs, no extra level is allowed
self.backbone_end_level = end_level
assert end_level <= len(in_channels)
assert num_outs == end_level - start_level
self.start_level = start_level
self.end_level = end_level
self.add_extra_convs = add_extra_convs
assert isinstance(add_extra_convs, (str, bool))
if isinstance(add_extra_convs, str):
# Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
elif add_extra_convs: # True
if extra_convs_on_inputs:
# TODO: deprecate `extra_convs_on_inputs`
warnings.simplefilter('once')
warnings.warn(
'"extra_convs_on_inputs" will be deprecated in v2.9.0,'
'Please use "add_extra_convs"', DeprecationWarning)
self.add_extra_convs = 'on_input'
else:
self.add_extra_convs = 'on_output'
self.lateral_convs = nn.ModuleList()
self.fpn_convs = nn.ModuleList()
for i in range(self.start_level, self.backbone_end_level):
l_conv = ConvModule(
in_channels[i],
out_channels,
1,
conv_cfg=conv_cfg,
norm_cfg=norm_cfg if not self.no_norm_on_lateral else None,
act_cfg=act_cfg,
inplace=False)
fpn_conv = ConvModule(
out_channels,
out_channels,
3,
padding=1,
conv_cfg=conv_cfg,
norm_cfg=norm_cfg,
act_cfg=act_cfg,
inplace=False)
self.lateral_convs.append(l_conv)
self.fpn_convs.append(fpn_conv)
# add extra conv layers (e.g., RetinaNet)
extra_levels = num_outs - self.backbone_end_level + self.start_level
if self.add_extra_convs and extra_levels >= 1:
for i in range(extra_levels):
if i == 0 and self.add_extra_convs == 'on_input':
in_channels = self.in_channels[self.backbone_end_level - 1]
else:
in_channels = out_channels
extra_fpn_conv = ConvModule(
in_channels,
out_channels,
3,
stride=2,
padding=1,
conv_cfg=conv_cfg,
norm_cfg=norm_cfg,
act_cfg=act_cfg,
inplace=False)
self.fpn_convs.append(extra_fpn_conv)
# default init_weights for conv(msra) and norm in ConvModule
def init_weights(self):
"""Initialize the weights of FPN module."""
for m in self.modules():
if isinstance(m, nn.Conv2d):
xavier_init(m, distribution='uniform')
@auto_fp16()
def forward(self, inputs):
"""Forward function."""
assert len(inputs) == len(self.in_channels)
# build laterals
laterals = [
lateral_conv(inputs[i + self.start_level])
for i, lateral_conv in enumerate(self.lateral_convs)
]
# build top-down path
used_backbone_levels = len(laterals)
for i in range(used_backbone_levels - 1, 0, -1):
# In some cases, fixing `scale factor` (e.g. 2) is preferred, but
# it cannot co-exist with `size` in `F.interpolate`.
if 'scale_factor' in self.upsample_cfg:
laterals[i - 1] += F.interpolate(laterals[i],
**self.upsample_cfg)
else:
prev_shape = laterals[i - 1].shape[2:]
laterals[i - 1] += F.interpolate(
laterals[i], size=prev_shape, **self.upsample_cfg)
# build outputs
# part 1: from original levels
outs = [
self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
]
# part 2: add extra levels
if self.num_outs > len(outs):
# use max pool to get more levels on top of outputs
# (e.g., Faster R-CNN, Mask R-CNN)
if not self.add_extra_convs:
for i in range(self.num_outs - used_backbone_levels):
outs.append(F.max_pool2d(outs[-1], 1, stride=2))
# add conv layers on top of original feature maps (RetinaNet)
else:
if self.add_extra_convs == 'on_input':
extra_source = inputs[self.backbone_end_level - 1]
elif self.add_extra_convs == 'on_lateral':
extra_source = laterals[-1]
elif self.add_extra_convs == 'on_output':
extra_source = outs[-1]
else:
raise NotImplementedError
outs.append(self.fpn_convs[used_backbone_levels](extra_source))
for i in range(used_backbone_levels + 1, self.num_outs):
if self.relu_before_extra_convs:
outs.append(self.fpn_convs[i](F.relu(outs[-1])))
else:
outs.append(self.fpn_convs[i](outs[-1]))
return tuple(outs)
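

# ---------------------------------------------------------------------------
# Illustrative usage (not part of the original module): a hedged sketch of the
# ``add_extra_convs='on_input'`` setting, where the fifth output level is
# produced by a stride-2 conv on the last backbone input (RetinaNet-style).
if __name__ == '__main__':
    import torch

    in_channels = [2, 3, 5, 7]
    scales = [64, 32, 16, 8]
    inputs = [torch.rand(1, c, s, s) for c, s in zip(in_channels, scales)]
    fpn = FPN(in_channels, 8, num_outs=5, add_extra_convs='on_input').eval()
    fpn.init_weights()
    outs = fpn(inputs)
    for i, out in enumerate(outs):
        # expect 5 levels: 64, 32, 16, 8 and an extra 4x4 map
        print(f'outs[{i}].shape = {out.shape}')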
| insightface/detection/scrfd/mmdet/models/necks/fpn.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/models/necks/fpn.py",
"repo_id": "insightface",
"token_count": 4776
} | 109 |
import torch
import torch.nn as nn
from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, build_assigner,
build_sampler, merge_aug_bboxes, merge_aug_masks,
multiclass_nms)
from ..builder import HEADS, build_head, build_roi_extractor
from .base_roi_head import BaseRoIHead
from .test_mixins import BBoxTestMixin, MaskTestMixin
@HEADS.register_module()
class CascadeRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin):
"""Cascade roi head including one bbox head and one mask head.
https://arxiv.org/abs/1712.00726
"""
def __init__(self,
num_stages,
stage_loss_weights,
bbox_roi_extractor=None,
bbox_head=None,
mask_roi_extractor=None,
mask_head=None,
shared_head=None,
train_cfg=None,
test_cfg=None):
assert bbox_roi_extractor is not None
assert bbox_head is not None
assert shared_head is None, \
'Shared head is not supported in Cascade RCNN anymore'
self.num_stages = num_stages
self.stage_loss_weights = stage_loss_weights
super(CascadeRoIHead, self).__init__(
bbox_roi_extractor=bbox_roi_extractor,
bbox_head=bbox_head,
mask_roi_extractor=mask_roi_extractor,
mask_head=mask_head,
shared_head=shared_head,
train_cfg=train_cfg,
test_cfg=test_cfg)
def init_bbox_head(self, bbox_roi_extractor, bbox_head):
"""Initialize box head and box roi extractor.
Args:
bbox_roi_extractor (dict): Config of box roi extractor.
bbox_head (dict): Config of box in box head.
"""
self.bbox_roi_extractor = nn.ModuleList()
self.bbox_head = nn.ModuleList()
if not isinstance(bbox_roi_extractor, list):
bbox_roi_extractor = [
bbox_roi_extractor for _ in range(self.num_stages)
]
if not isinstance(bbox_head, list):
bbox_head = [bbox_head for _ in range(self.num_stages)]
assert len(bbox_roi_extractor) == len(bbox_head) == self.num_stages
for roi_extractor, head in zip(bbox_roi_extractor, bbox_head):
self.bbox_roi_extractor.append(build_roi_extractor(roi_extractor))
self.bbox_head.append(build_head(head))
def init_mask_head(self, mask_roi_extractor, mask_head):
"""Initialize mask head and mask roi extractor.
Args:
mask_roi_extractor (dict): Config of mask roi extractor.
mask_head (dict): Config of mask in mask head.
"""
self.mask_head = nn.ModuleList()
if not isinstance(mask_head, list):
mask_head = [mask_head for _ in range(self.num_stages)]
assert len(mask_head) == self.num_stages
for head in mask_head:
self.mask_head.append(build_head(head))
if mask_roi_extractor is not None:
self.share_roi_extractor = False
self.mask_roi_extractor = nn.ModuleList()
if not isinstance(mask_roi_extractor, list):
mask_roi_extractor = [
mask_roi_extractor for _ in range(self.num_stages)
]
assert len(mask_roi_extractor) == self.num_stages
for roi_extractor in mask_roi_extractor:
self.mask_roi_extractor.append(
build_roi_extractor(roi_extractor))
else:
self.share_roi_extractor = True
self.mask_roi_extractor = self.bbox_roi_extractor
def init_assigner_sampler(self):
"""Initialize assigner and sampler for each stage."""
self.bbox_assigner = []
self.bbox_sampler = []
if self.train_cfg is not None:
for idx, rcnn_train_cfg in enumerate(self.train_cfg):
self.bbox_assigner.append(
build_assigner(rcnn_train_cfg.assigner))
self.current_stage = idx
self.bbox_sampler.append(
build_sampler(rcnn_train_cfg.sampler, context=self))
def init_weights(self, pretrained):
"""Initialize the weights in head.
Args:
pretrained (str, optional): Path to pre-trained weights.
Defaults to None.
"""
if self.with_shared_head:
self.shared_head.init_weights(pretrained=pretrained)
for i in range(self.num_stages):
if self.with_bbox:
self.bbox_roi_extractor[i].init_weights()
self.bbox_head[i].init_weights()
if self.with_mask:
if not self.share_roi_extractor:
self.mask_roi_extractor[i].init_weights()
self.mask_head[i].init_weights()
def forward_dummy(self, x, proposals):
"""Dummy forward function."""
# bbox head
outs = ()
rois = bbox2roi([proposals])
if self.with_bbox:
for i in range(self.num_stages):
bbox_results = self._bbox_forward(i, x, rois)
outs = outs + (bbox_results['cls_score'],
bbox_results['bbox_pred'])
# mask heads
if self.with_mask:
mask_rois = rois[:100]
for i in range(self.num_stages):
mask_results = self._mask_forward(i, x, mask_rois)
outs = outs + (mask_results['mask_pred'], )
return outs
def _bbox_forward(self, stage, x, rois):
"""Box head forward function used in both training and testing."""
bbox_roi_extractor = self.bbox_roi_extractor[stage]
bbox_head = self.bbox_head[stage]
bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs],
rois)
# do not support caffe_c4 model anymore
cls_score, bbox_pred = bbox_head(bbox_feats)
bbox_results = dict(
cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats)
return bbox_results
def _bbox_forward_train(self, stage, x, sampling_results, gt_bboxes,
gt_labels, rcnn_train_cfg):
"""Run forward function and calculate loss for box head in training."""
rois = bbox2roi([res.bboxes for res in sampling_results])
bbox_results = self._bbox_forward(stage, x, rois)
bbox_targets = self.bbox_head[stage].get_targets(
sampling_results, gt_bboxes, gt_labels, rcnn_train_cfg)
loss_bbox = self.bbox_head[stage].loss(bbox_results['cls_score'],
bbox_results['bbox_pred'], rois,
*bbox_targets)
bbox_results.update(
loss_bbox=loss_bbox, rois=rois, bbox_targets=bbox_targets)
return bbox_results
def _mask_forward(self, stage, x, rois):
"""Mask head forward function used in both training and testing."""
mask_roi_extractor = self.mask_roi_extractor[stage]
mask_head = self.mask_head[stage]
mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs],
rois)
# do not support caffe_c4 model anymore
mask_pred = mask_head(mask_feats)
mask_results = dict(mask_pred=mask_pred)
return mask_results
def _mask_forward_train(self,
stage,
x,
sampling_results,
gt_masks,
rcnn_train_cfg,
bbox_feats=None):
"""Run forward function and calculate loss for mask head in
training."""
pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results])
mask_results = self._mask_forward(stage, x, pos_rois)
mask_targets = self.mask_head[stage].get_targets(
sampling_results, gt_masks, rcnn_train_cfg)
pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results])
loss_mask = self.mask_head[stage].loss(mask_results['mask_pred'],
mask_targets, pos_labels)
mask_results.update(loss_mask=loss_mask)
return mask_results
def forward_train(self,
x,
img_metas,
proposal_list,
gt_bboxes,
gt_labels,
gt_bboxes_ignore=None,
gt_masks=None):
"""
Args:
x (list[Tensor]): list of multi-level img features.
img_metas (list[dict]): list of image info dict where each dict
has: 'img_shape', 'scale_factor', 'flip', and may also contain
'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
For details on the values of these keys see
`mmdet/datasets/pipelines/formatting.py:Collect`.
proposals (list[Tensors]): list of region proposals.
gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
gt_labels (list[Tensor]): class indices corresponding to each box
gt_bboxes_ignore (None | list[Tensor]): specify which bounding
boxes can be ignored when computing the loss.
gt_masks (None | Tensor) : true segmentation masks for each box
used if the architecture supports a segmentation task.
Returns:
dict[str, Tensor]: a dictionary of loss components
"""
losses = dict()
for i in range(self.num_stages):
self.current_stage = i
rcnn_train_cfg = self.train_cfg[i]
lw = self.stage_loss_weights[i]
# assign gts and sample proposals
sampling_results = []
if self.with_bbox or self.with_mask:
bbox_assigner = self.bbox_assigner[i]
bbox_sampler = self.bbox_sampler[i]
num_imgs = len(img_metas)
if gt_bboxes_ignore is None:
gt_bboxes_ignore = [None for _ in range(num_imgs)]
for j in range(num_imgs):
assign_result = bbox_assigner.assign(
proposal_list[j], gt_bboxes[j], gt_bboxes_ignore[j],
gt_labels[j])
sampling_result = bbox_sampler.sample(
assign_result,
proposal_list[j],
gt_bboxes[j],
gt_labels[j],
feats=[lvl_feat[j][None] for lvl_feat in x])
sampling_results.append(sampling_result)
# bbox head forward and loss
bbox_results = self._bbox_forward_train(i, x, sampling_results,
gt_bboxes, gt_labels,
rcnn_train_cfg)
for name, value in bbox_results['loss_bbox'].items():
losses[f's{i}.{name}'] = (
value * lw if 'loss' in name else value)
# mask head forward and loss
if self.with_mask:
mask_results = self._mask_forward_train(
i, x, sampling_results, gt_masks, rcnn_train_cfg,
bbox_results['bbox_feats'])
for name, value in mask_results['loss_mask'].items():
losses[f's{i}.{name}'] = (
value * lw if 'loss' in name else value)
# refine bboxes
if i < self.num_stages - 1:
pos_is_gts = [res.pos_is_gt for res in sampling_results]
# bbox_targets is a tuple
roi_labels = bbox_results['bbox_targets'][0]
with torch.no_grad():
roi_labels = torch.where(
roi_labels == self.bbox_head[i].num_classes,
bbox_results['cls_score'][:, :-1].argmax(1),
roi_labels)
proposal_list = self.bbox_head[i].refine_bboxes(
bbox_results['rois'], roi_labels,
bbox_results['bbox_pred'], pos_is_gts, img_metas)
return losses
def simple_test(self, x, proposal_list, img_metas, rescale=False):
"""Test without augmentation."""
assert self.with_bbox, 'Bbox head must be implemented.'
num_imgs = len(proposal_list)
img_shapes = tuple(meta['img_shape'] for meta in img_metas)
ori_shapes = tuple(meta['ori_shape'] for meta in img_metas)
scale_factors = tuple(meta['scale_factor'] for meta in img_metas)
# "ms" in variable names means multi-stage
ms_bbox_result = {}
ms_segm_result = {}
ms_scores = []
rcnn_test_cfg = self.test_cfg
rois = bbox2roi(proposal_list)
for i in range(self.num_stages):
bbox_results = self._bbox_forward(i, x, rois)
# split batch bbox prediction back to each image
cls_score = bbox_results['cls_score']
bbox_pred = bbox_results['bbox_pred']
num_proposals_per_img = tuple(
len(proposals) for proposals in proposal_list)
rois = rois.split(num_proposals_per_img, 0)
cls_score = cls_score.split(num_proposals_per_img, 0)
if isinstance(bbox_pred, torch.Tensor):
bbox_pred = bbox_pred.split(num_proposals_per_img, 0)
else:
bbox_pred = self.bbox_head[i].bbox_pred_split(
bbox_pred, num_proposals_per_img)
ms_scores.append(cls_score)
if i < self.num_stages - 1:
bbox_label = [s[:, :-1].argmax(dim=1) for s in cls_score]
rois = torch.cat([
self.bbox_head[i].regress_by_class(rois[j], bbox_label[j],
bbox_pred[j],
img_metas[j])
for j in range(num_imgs)
])
# average scores of each image by stages
cls_score = [
sum([score[i] for score in ms_scores]) / float(len(ms_scores))
for i in range(num_imgs)
]
# apply bbox post-processing to each image individually
det_bboxes = []
det_labels = []
for i in range(num_imgs):
det_bbox, det_label = self.bbox_head[-1].get_bboxes(
rois[i],
cls_score[i],
bbox_pred[i],
img_shapes[i],
scale_factors[i],
rescale=rescale,
cfg=rcnn_test_cfg)
det_bboxes.append(det_bbox)
det_labels.append(det_label)
if torch.onnx.is_in_onnx_export():
return det_bboxes, det_labels
bbox_results = [
bbox2result(det_bboxes[i], det_labels[i],
self.bbox_head[-1].num_classes)
for i in range(num_imgs)
]
ms_bbox_result['ensemble'] = bbox_results
if self.with_mask:
if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes):
mask_classes = self.mask_head[-1].num_classes
segm_results = [[[] for _ in range(mask_classes)]
for _ in range(num_imgs)]
else:
if rescale and not isinstance(scale_factors[0], float):
scale_factors = [
torch.from_numpy(scale_factor).to(det_bboxes[0].device)
for scale_factor in scale_factors
]
_bboxes = [
det_bboxes[i][:, :4] *
scale_factors[i] if rescale else det_bboxes[i][:, :4]
for i in range(len(det_bboxes))
]
mask_rois = bbox2roi(_bboxes)
num_mask_rois_per_img = tuple(
_bbox.size(0) for _bbox in _bboxes)
aug_masks = []
for i in range(self.num_stages):
mask_results = self._mask_forward(i, x, mask_rois)
mask_pred = mask_results['mask_pred']
# split batch mask prediction back to each image
mask_pred = mask_pred.split(num_mask_rois_per_img, 0)
aug_masks.append(
[m.sigmoid().cpu().numpy() for m in mask_pred])
# apply mask post-processing to each image individually
segm_results = []
for i in range(num_imgs):
if det_bboxes[i].shape[0] == 0:
segm_results.append(
[[]
for _ in range(self.mask_head[-1].num_classes)])
else:
aug_mask = [mask[i] for mask in aug_masks]
merged_masks = merge_aug_masks(
aug_mask, [[img_metas[i]]] * self.num_stages,
rcnn_test_cfg)
segm_result = self.mask_head[-1].get_seg_masks(
merged_masks, _bboxes[i], det_labels[i],
rcnn_test_cfg, ori_shapes[i], scale_factors[i],
rescale)
segm_results.append(segm_result)
ms_segm_result['ensemble'] = segm_results
if self.with_mask:
results = list(
zip(ms_bbox_result['ensemble'], ms_segm_result['ensemble']))
else:
results = ms_bbox_result['ensemble']
return results
def aug_test(self, features, proposal_list, img_metas, rescale=False):
"""Test with augmentations.
If rescale is False, then returned bboxes and masks will fit the scale
of imgs[0].
"""
rcnn_test_cfg = self.test_cfg
aug_bboxes = []
aug_scores = []
for x, img_meta in zip(features, img_metas):
# only one image in the batch
img_shape = img_meta[0]['img_shape']
scale_factor = img_meta[0]['scale_factor']
flip = img_meta[0]['flip']
flip_direction = img_meta[0]['flip_direction']
proposals = bbox_mapping(proposal_list[0][:, :4], img_shape,
scale_factor, flip, flip_direction)
# "ms" in variable names means multi-stage
ms_scores = []
rois = bbox2roi([proposals])
for i in range(self.num_stages):
bbox_results = self._bbox_forward(i, x, rois)
ms_scores.append(bbox_results['cls_score'])
if i < self.num_stages - 1:
bbox_label = bbox_results['cls_score'][:, :-1].argmax(
dim=1)
rois = self.bbox_head[i].regress_by_class(
rois, bbox_label, bbox_results['bbox_pred'],
img_meta[0])
cls_score = sum(ms_scores) / float(len(ms_scores))
bboxes, scores = self.bbox_head[-1].get_bboxes(
rois,
cls_score,
bbox_results['bbox_pred'],
img_shape,
scale_factor,
rescale=False,
cfg=None)
aug_bboxes.append(bboxes)
aug_scores.append(scores)
# after merging, bboxes will be rescaled to the original image size
merged_bboxes, merged_scores = merge_aug_bboxes(
aug_bboxes, aug_scores, img_metas, rcnn_test_cfg)
det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores,
rcnn_test_cfg.score_thr,
rcnn_test_cfg.nms,
rcnn_test_cfg.max_per_img)
bbox_result = bbox2result(det_bboxes, det_labels,
self.bbox_head[-1].num_classes)
if self.with_mask:
if det_bboxes.shape[0] == 0:
segm_result = [[[]
for _ in range(self.mask_head[-1].num_classes)]
]
else:
aug_masks = []
aug_img_metas = []
for x, img_meta in zip(features, img_metas):
img_shape = img_meta[0]['img_shape']
scale_factor = img_meta[0]['scale_factor']
flip = img_meta[0]['flip']
flip_direction = img_meta[0]['flip_direction']
_bboxes = bbox_mapping(det_bboxes[:, :4], img_shape,
scale_factor, flip, flip_direction)
mask_rois = bbox2roi([_bboxes])
for i in range(self.num_stages):
mask_results = self._mask_forward(i, x, mask_rois)
aug_masks.append(
mask_results['mask_pred'].sigmoid().cpu().numpy())
aug_img_metas.append(img_meta)
merged_masks = merge_aug_masks(aug_masks, aug_img_metas,
self.test_cfg)
ori_shape = img_metas[0][0]['ori_shape']
segm_result = self.mask_head[-1].get_seg_masks(
merged_masks,
det_bboxes,
det_labels,
rcnn_test_cfg,
ori_shape,
scale_factor=1.0,
rescale=False)
return [(bbox_result, segm_result)]
else:
return [bbox_result]
| insightface/detection/scrfd/mmdet/models/roi_heads/cascade_roi_head.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/models/roi_heads/cascade_roi_head.py",
"repo_id": "insightface",
"token_count": 12330
} | 110 |
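As a usage note for the head above: in mmdet-style configs this class is built from a `roi_head` dict whose keys mirror the `__init__` arguments, and (per `init_bbox_head`/`init_mask_head`) a single `bbox_roi_extractor`/`bbox_head`/`mask_head` dict is replicated across all stages, while a list supplies per-stage settings. The sketch below is illustrative only — the sub-module configs are placeholders and must match modules actually registered in mmdet:
```
# Hypothetical config sketch for CascadeRoIHead; sub-dict contents are placeholders.
roi_head = dict(
    type='CascadeRoIHead',
    num_stages=3,
    stage_loss_weights=[1.0, 0.5, 0.25],
    # a single dict here is repeated for all three stages by init_bbox_head()
    bbox_roi_extractor=dict(type='SingleRoIExtractor'),  # plus roi_layer/out_channels/...
    # a list gives explicit per-stage heads instead
    bbox_head=[dict(type='Shared2FCBBoxHead') for _ in range(3)],
    mask_roi_extractor=None,
    mask_head=None)
```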
from .generic_roi_extractor import GenericRoIExtractor
from .single_level_roi_extractor import SingleRoIExtractor
__all__ = [
'SingleRoIExtractor',
'GenericRoIExtractor',
]
| insightface/detection/scrfd/mmdet/models/roi_heads/roi_extractors/__init__.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/models/roi_heads/roi_extractors/__init__.py",
"repo_id": "insightface",
"token_count": 65
} | 111 |
from .collect_env import collect_env
from .logger import get_root_logger
__all__ = ['get_root_logger', 'collect_env']
| insightface/detection/scrfd/mmdet/utils/__init__.py/0 | {
"file_path": "insightface/detection/scrfd/mmdet/utils/__init__.py",
"repo_id": "insightface",
"token_count": 41
} | 112 |
import os
import json
import os.path as osp
import io
import torch
import numpy as np
from mmcv import Config
from mmdet.models import build_detector
from mmcv.cnn import get_model_complexity_info
def get_flops(cfg, input_shape):
model = build_detector(
cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
if torch.cuda.is_available():
model.cuda()
model.eval()
if hasattr(model, 'forward_dummy'):
model.forward = model.forward_dummy
else:
raise NotImplementedError(
            'FLOPs counter is currently not supported with {}'.
format(model.__class__.__name__))
buf = io.StringIO()
all_flops, params = get_model_complexity_info(model, input_shape, print_per_layer_stat=True, as_strings=False, ost=buf)
buf = buf.getvalue()
lines = buf.split("\n")
names = ['(stem)', '(layer1)', '(layer2)', '(layer3)', '(layer4)', '(neck)', '(bbox_head)']
name_ptr = 0
line_num = 0
_flops = []
while name_ptr<len(names):
line = lines[line_num].strip()
name = names[name_ptr]
if line.startswith(name):
flops = float(lines[line_num+1].split(',')[2].strip().split(' ')[0])
_flops.append(flops)
name_ptr+=1
line_num+=1
backbone_flops = _flops[:-2]
neck_flops = _flops[-2]
head_flops = _flops[-1]
return all_flops/1e9, backbone_flops, neck_flops, head_flops
def get_stat(result_dir, group, prefix, idx):
curr_dir = osp.join(result_dir, group, "%s_%d"%(prefix, idx))
aps_file = osp.join(curr_dir, 'aps')
aps = []
if osp.exists(aps_file):
with open(aps_file, 'r') as f:
aps = [float(x) for x in f.readline().strip().split(',')]
cfg_file = osp.join('configs', group, '%s_%d.py'%(prefix, idx))
cfg = Config.fromfile(cfg_file)
all_flops, backbone_flops, neck_flops, head_flops = get_flops(cfg, (3,480,640))
return aps, all_flops, backbone_flops, neck_flops, head_flops
result_dir = './wouts'
group = 'scrfdgen2.5g'
prefix = group
idx_from = 0
idx_to = 320
outf = open(osp.join(result_dir, "%s.txt"%prefix), 'w')
for idx in range(idx_from, idx_to):
aps, all_flops, backbone_flops, neck_flops, head_flops = get_stat(result_dir, group, prefix, idx)
backbone_ratio = np.sum(backbone_flops) / all_flops
neck_ratio = neck_flops / all_flops
head_ratio = head_flops / all_flops
print(idx, aps, all_flops, backbone_flops, backbone_ratio, neck_ratio, head_ratio)
name = "%s_%d"%(prefix, idx)
data = dict(name=name, backbone_flops=backbone_flops, neck_flops=neck_flops, head_flops=head_flops, all_flops=all_flops, aps=aps)
data = json.dumps(data)
outf.write(data)
outf.write("\n")
outf.close()
| insightface/detection/scrfd/search_tools/search_stat.py/0 | {
"file_path": "insightface/detection/scrfd/search_tools/search_stat.py",
"repo_id": "insightface",
"token_count": 1255
} | 113 |
import argparse
import os
import warnings
import mmcv
import torch
from mmcv import Config, DictAction
from mmcv.cnn import fuse_conv_bn
from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
from mmcv.runner import (get_dist_info, init_dist, load_checkpoint,
wrap_fp16_model)
from mmdet.apis import multi_gpu_test, single_gpu_test
from mmdet.datasets import (build_dataloader, build_dataset,
replace_ImageToTensor)
from mmdet.models import build_detector
def parse_args():
parser = argparse.ArgumentParser(
description='MMDet test (and eval) a model')
parser.add_argument('config', help='test config file path')
parser.add_argument('checkpoint', help='checkpoint file')
parser.add_argument('--out', help='output result file in pickle format')
parser.add_argument(
'--fuse-conv-bn',
action='store_true',
        help='Whether to fuse conv and bn, this will slightly increase '
        'the inference speed')
parser.add_argument(
'--format-only',
action='store_true',
        help='Format the output results without performing evaluation. It is '
'useful when you want to format the result to a specific format and '
'submit it to the test server')
parser.add_argument(
'--eval',
type=str,
nargs='+',
help='evaluation metrics, which depends on the dataset, e.g., "bbox",'
' "segm", "proposal" for COCO, and "mAP", "recall" for PASCAL VOC')
parser.add_argument('--show', action='store_true', help='show results')
parser.add_argument(
'--show-dir', help='directory where painted images will be saved')
parser.add_argument(
'--show-score-thr',
type=float,
default=0.3,
help='score threshold (default: 0.3)')
parser.add_argument(
'--gpu-collect',
action='store_true',
help='whether to use gpu to collect results.')
parser.add_argument(
'--tmpdir',
help='tmp directory used for collecting results from multiple '
'workers, available when gpu-collect is not specified')
parser.add_argument(
'--cfg-options',
nargs='+',
action=DictAction,
help='override some settings in the used config, the key-value pair '
'in xxx=yyy format will be merged into config file.')
parser.add_argument(
'--options',
nargs='+',
action=DictAction,
help='custom options for evaluation, the key-value pair in xxx=yyy '
'format will be kwargs for dataset.evaluate() function (deprecate), '
'change to --eval-options instead.')
parser.add_argument(
'--eval-options',
nargs='+',
action=DictAction,
help='custom options for evaluation, the key-value pair in xxx=yyy '
'format will be kwargs for dataset.evaluate() function')
parser.add_argument(
'--launcher',
choices=['none', 'pytorch', 'slurm', 'mpi'],
default='none',
help='job launcher')
parser.add_argument('--local_rank', type=int, default=0)
args = parser.parse_args()
if 'LOCAL_RANK' not in os.environ:
os.environ['LOCAL_RANK'] = str(args.local_rank)
if args.options and args.eval_options:
raise ValueError(
'--options and --eval-options cannot be both '
'specified, --options is deprecated in favor of --eval-options')
if args.options:
warnings.warn('--options is deprecated in favor of --eval-options')
args.eval_options = args.options
return args
def main():
args = parse_args()
assert args.out or args.eval or args.format_only or args.show \
or args.show_dir, \
('Please specify at least one operation (save/eval/format/show the '
'results / save the results) with the argument "--out", "--eval"'
', "--format-only", "--show" or "--show-dir"')
if args.eval and args.format_only:
raise ValueError('--eval and --format_only cannot be both specified')
if args.out is not None and not args.out.endswith(('.pkl', '.pickle')):
raise ValueError('The output file must be a pkl file.')
cfg = Config.fromfile(args.config)
if args.cfg_options is not None:
cfg.merge_from_dict(args.cfg_options)
# import modules from string list.
if cfg.get('custom_imports', None):
from mmcv.utils import import_modules_from_strings
import_modules_from_strings(**cfg['custom_imports'])
# set cudnn_benchmark
if cfg.get('cudnn_benchmark', False):
torch.backends.cudnn.benchmark = True
cfg.model.pretrained = None
if cfg.model.get('neck'):
if isinstance(cfg.model.neck, list):
for neck_cfg in cfg.model.neck:
if neck_cfg.get('rfp_backbone'):
if neck_cfg.rfp_backbone.get('pretrained'):
neck_cfg.rfp_backbone.pretrained = None
elif cfg.model.neck.get('rfp_backbone'):
if cfg.model.neck.rfp_backbone.get('pretrained'):
cfg.model.neck.rfp_backbone.pretrained = None
# in case the test dataset is concatenated
if isinstance(cfg.data.test, dict):
cfg.data.test.test_mode = True
elif isinstance(cfg.data.test, list):
for ds_cfg in cfg.data.test:
ds_cfg.test_mode = True
# init distributed env first, since logger depends on the dist info.
if args.launcher == 'none':
distributed = False
else:
distributed = True
init_dist(args.launcher, **cfg.dist_params)
# build the dataloader
samples_per_gpu = cfg.data.test.pop('samples_per_gpu', 1)
if samples_per_gpu > 1:
# Replace 'ImageToTensor' to 'DefaultFormatBundle'
cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)
dataset = build_dataset(cfg.data.test)
data_loader = build_dataloader(
dataset,
samples_per_gpu=samples_per_gpu,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=distributed,
shuffle=False)
# build the model and load checkpoint
model = build_detector(cfg.model, train_cfg=None, test_cfg=cfg.test_cfg)
fp16_cfg = cfg.get('fp16', None)
if fp16_cfg is not None:
wrap_fp16_model(model)
checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu')
if args.fuse_conv_bn:
model = fuse_conv_bn(model)
    # old versions did not save class info in checkpoints, this workaround is
# for backward compatibility
if 'CLASSES' in checkpoint['meta']:
model.CLASSES = checkpoint['meta']['CLASSES']
else:
model.CLASSES = dataset.CLASSES
if not distributed:
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader, args.show, args.show_dir,
args.show_score_thr)
else:
model = MMDistributedDataParallel(
model.cuda(),
device_ids=[torch.cuda.current_device()],
broadcast_buffers=False)
outputs = multi_gpu_test(model, data_loader, args.tmpdir,
args.gpu_collect)
rank, _ = get_dist_info()
if rank == 0:
if args.out:
print(f'\nwriting results to {args.out}')
mmcv.dump(outputs, args.out)
kwargs = {} if args.eval_options is None else args.eval_options
if args.format_only:
dataset.format_results(outputs, **kwargs)
if args.eval:
eval_kwargs = cfg.get('evaluation', {}).copy()
# hard-code way to remove EvalHook args
for key in [
'interval', 'tmpdir', 'start', 'gpu_collect', 'save_best',
'rule'
]:
eval_kwargs.pop(key, None)
eval_kwargs.update(dict(metric=args.eval, **kwargs))
print(dataset.evaluate(outputs, **eval_kwargs))
if __name__ == '__main__':
main()
| insightface/detection/scrfd/tools/test.py/0 | {
"file_path": "insightface/detection/scrfd/tools/test.py",
"repo_id": "insightface",
"token_count": 3466
} | 114 |
# InsightFace Model Zoo
:bell: **ALL models are available for non-commercial research purposes only.**
## 0. Python Package models
To check the details of the insightface python package, please see [here](../python-package).
To install: ``pip install -U insightface``
To use a specific model pack:
```
model_pack_name = 'buffalo_l'
app = FaceAnalysis(name=model_pack_name)
```
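A slightly fuller usage sketch (the sample image path and `det_size` are illustrative; use `ctx_id=-1` to fall back to CPU):
```
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name='buffalo_l')
app.prepare(ctx_id=0, det_size=(640, 640))
img = cv2.imread('t1.jpg')
faces = app.get(img)
for face in faces:
    print(face.bbox, face.det_score, face.normed_embedding.shape)
```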
Name in **bold** is the default model pack in latest version.
| Name | Detection Model | Recognition Model | Alignment | Attributes | Model-Size |
| -------------- | --------------- | ------------------- | ------------ | ---------- | ---------- |
| antelopev2 | RetinaFace-10GF | ResNet100@Glint360K | 2d106 & 3d68 | Gender&Age | 407MB |
| **buffalo_l** | RetinaFace-10GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 326MB |
| buffalo_m | RetinaFace-2.5GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 313MB |
| buffalo_s | RetinaFace-500MF | MBF@WebFace600K | 2d106 & 3d68 | Gender&Age | 159MB |
| buffalo_sc | RetinaFace-500MF | MBF@WebFace600K | - | - | 16MB |
### Recognition accuracy of python library model packs:
| Name | MR-ALL | African | Caucasian | South Asian | East Asian | LFW | CFP-FP | AgeDB-30 | IJB-C(E4) |
| :-------- | ------ | ------- | --------- | ----------- | ---------- | ------ | ------ | -------- | --------- |
| buffalo_l | 91.25 | 90.29 | 94.70 | 93.16 | 74.96 | 99.83 | 99.33 | 98.23 | 97.25 |
| buffalo_s | 71.87 | 69.45 | 80.45 | 73.39 | 51.03 | 99.70 | 98.00 | 96.58 | 95.02 |
*buffalo_m has the same accuracy as buffalo_l.*
*buffalo_sc has the same accuracy as buffalo_s.*
(Note that almost all ONNX models in our model_zoo can be called by the python library.)
## 1. Face Recognition models.
### Definition:
The default training loss is margin-based softmax unless otherwise specified.
``MFN``: MobileFaceNet
``MS1MV2``: MS1M-ArcFace
``MS1MV3``: MS1M-RetinaFace
``MS1M_MegaFace``: MS1MV2+MegaFace_train
``_pfc``: using Partial FC, with sample-ratio=0.1
``MegaFace``: MegaFace identification test, with gallery=1e6.
``IJBC``: IJBC 1:1 test, under FAR<=1e-4.
``BDrive``: BaiduDrive
``GDrive``: GoogleDrive
### List of models by MXNet and PaddlePaddle:
| Backbone | Dataset | Method | LFW | CFP-FP | AgeDB-30 | MegaFace | Link. |
| -------- | ------- | ------- | ----- | ------ | -------- | -------- | ------------------------------------------------------------ |
| R100 (mxnet) | MS1MV2 | ArcFace | 99.77 | 98.27 | 98.28 | 98.47 | [BDrive](https://pan.baidu.com/s/1wuRTf2YIsKt76TxFufsRNA), [GDrive](https://drive.google.com/file/d/1Hc5zUfBATaXUgcU2haUNa7dcaZSw95h2/view?usp=sharing) |
| MFN (mxnet) | MS1MV1 | ArcFace | 99.50 | 88.94 | 95.91 | - | [BDrive](https://pan.baidu.com/s/1If28BkHde4fiuweJrbicVA), [GDrive](https://drive.google.com/file/d/1RHyJIeYuHduVDDBTn3ffpYEZoXWRamWI/view?usp=sharing) |
| MFN (paddle) | MS1MV2 | ArcFace | 99.45 | 93.43 | 96.13 | - | [pretrained model](https://paddle-model-ecology.bj.bcebos.com/model/insight-face/MobileFaceNet_128_v1.0_pretrained.tar), [inference model](https://paddle-model-ecology.bj.bcebos.com/model/insight-face/mobileface_v1.0_infer.tar) |
| iResNet50 (paddle) | MS1MV2 | ArcFace | 99.73 | 97.43 | 97.88 | - | [pretrained model](https://paddle-model-ecology.bj.bcebos.com/model/insight-face/arcface_iresnet50_v1.0_pretrained.tar), [inference model](https://paddle-model-ecology.bj.bcebos.com/model/insight-face/arcface_iresnet50_v1.0_infer.tar) |
### List of models by various depth IResNet and training datasets:
| Backbone | Dataset | MR-ALL | African | Caucasian | South Asian | East Asian | Link(onnx) |
|----------|-----------|--------|---------|-----------|-------------|------------|-----------------------------------------------------------------------|
| R100 | Casia | 42.735 | 39.666 | 53.933 | 47.807 | 21.572 | [GDrive](https://drive.google.com/file/d/1WOrOK-qZO5FcagscCI3td6nnABUPPepD/view?usp=sharing) |
| R100 | MS1MV2 | 80.725 | 79.117 | 87.176 | 85.501 | 55.807 | [GDrive](https://drive.google.com/file/d/1772DTho9EG047KNUIv2lop2e7EobiCFn/view?usp=sharing) |
| R18 | MS1MV3 | 68.326 | 62.613 | 75.125 | 70.213 | 43.859 | [GDrive](https://drive.google.com/file/d/1dWZb0SLcdzr-toUzsVZ1zogn9dEIW1Dk/view?usp=sharing) |
| R34 | MS1MV3 | 77.365 | 71.644 | 83.291 | 80.084 | 53.712 | [GDrive](https://drive.google.com/file/d/1ON6ImX-AigDKAi4pelFPf12vkJVyGFKl/view?usp=sharing) |
| R50 | MS1MV3 | 80.533 | 75.488 | 86.115 | 84.305 | 57.352 | [GDrive](https://drive.google.com/file/d/1FPldzmZ6jHfaC-R-jLkxvQRP-cLgxjCT/view?usp=sharing) |
| R100 | MS1MV3 | 84.312 | 81.083 | 89.040 | 88.082 | 62.193 | [GDrive](https://drive.google.com/file/d/1fZOfvfnavFYjzfFoKTh5j1YDcS8KCnio/view?usp=sharing) |
| R18 | Glint360K | 72.074 | 68.230 | 80.575 | 75.852 | 47.831 | [GDrive](https://drive.google.com/file/d/1Z0eoO1Wqv32K8TdFHKqrlrxv46_W4390/view?usp=sharing) |
| R34 | Glint360K | 83.015 | 79.907 | 88.620 | 86.815 | 60.604 | [GDrive](https://drive.google.com/file/d/1G1oeLkp_b3JA_z4wGs62RdLpg-u_Ov2Y/view?usp=sharing) |
| R50 | Glint360K | 87.077 | 85.272 | 91.617 | 90.541 | 66.813 | [GDrive](https://drive.google.com/file/d/1MpRhM76OQ6cTzpr2ZSpHp2_CP19Er4PI/view?usp=sharing) |
| R100 | Glint360K | 90.659 | 89.488 | 94.285 | 93.434 | 72.528 | [GDrive](https://drive.google.com/file/d/1Gh8C-bwl2B90RDrvKJkXafvZC3q4_H_z/view?usp=sharing) |
### List of models by IResNet-50 and different training datasets:
| Dataset | MR-ALL | African | Caucasian | South Asian | East Asian | LFW | CFP-FP | AgeDB-30 | IJB-C(E4) | Link(onnx) |
| :-------- | ------ | ------- | ---- | ------ | -------- | ----- | ------ | -------- | --------- | --- |
| CASIA | 36.794 | 42.550 | 55.825 | 49.618 | 19.611 | 99.450| 95.214 | 94.900 | 87.220 | [GDrive](https://drive.google.com/file/d/1km-cVFvUAPU1UumLLi1fIRasdg6VA-vM/view?usp=sharing) |
| CASIA_pfc | 37.107 | 38.934 | 53.823 | 48.674 | 19.927 | 99.367| 95.429 | 94.600 | 84.970 | [GDrive](https://drive.google.com/file/d/1z8linstTZopL5Yy7NOUgVVtgzGtsu1LM/view?usp=sharing) |
| VGG2 | 38.578 | 35.259 | 54.304 | 44.081 | 24.095 | 99.550| 97.410 | 95.080 | 91.220 | [GDrive](https://drive.google.com/file/d/1UwyVIDSNDkHKClBANrWi8qpMU4nXizT6/view?usp=sharing) |
| VGG2_pfc | 40.673 | 36.767 | 60.180 | 49.039 | 24.255 | 99.683| 98.529 | 95.400 | 92.490 | [GDrive](https://drive.google.com/file/d/1uW0EsctVyPklSyXMXF39AniIhSRXCRtp/view?usp=sharing) |
| GlintAsia | 62.663 | 49.531 | 64.829 | 57.984 | 61.743 | 99.583| 93.186 | 95.400 | 91.500 | [GDrive](https://drive.google.com/file/d/1IyXh7m1HMwTZw4B5N1WMPIsN-S9kdS95/view?usp=sharing) |
| GlintAsia_pfc | 63.149 | 50.366 | 65.227 | 57.936 | 61.820 | 99.650| 93.029 | 95.233 | 91.140 | [GDrive](https://drive.google.com/file/d/1CTjalggNucgPkmpFi5ij-NGG1Fy9sL5r/view?usp=sharing) |
| MS1MV2 | 77.696 | 74.596 | 84.126 | 82.041 | 51.105 | 99.833| 98.083 | 98.083 | 96.140 | [GDrive](https://drive.google.com/file/d/1rd4kbiXtXBTWE8nP7p4OTv_CAp2FUa1i/view?usp=sharing) |
| MS1MV2_pfc | 77.738 | 74.728 | 84.883 | 82.798 | 52.507 | 99.783| 98.071 | 98.017 | 96.080 | [GDrive](https://drive.google.com/file/d/1ryrXenGQa-EGyk64mVaG136ihNUBmNMW/view?usp=sharing) |
| MS1M_MegaFace | 78.372 | 74.138 | 82.251 | 77.223 | 60.203 | 99.750| 97.557 | 97.400 | 95.350 | [GDrive](https://drive.google.com/file/d/1c2JG0StcTMDrL4ywz3qWTN_9io3lo_ER/view?usp=sharing) |
| MS1M_MegaFace_pfc | 78.773 | 73.690 | 82.947 | 78.793 | 57.566 | 99.800| 97.870 | 97.733 | 95.400 | [GDrive](https://drive.google.com/file/d/1BnG48LS_HIvYlSbSnP_LzpO3xjx0_rpu/view?usp=sharing) |
| MS1MV3 | 82.522 | 77.172 | 87.028 | 86.006 | 60.625 | 99.800| 98.529 | 98.267 | 96.580 | [GDrive](https://drive.google.com/file/d/1Tqorubgcl0qfjbjEM_Y9EDmjG5tCWzbr/view?usp=sharing) |
| MS1MV3_pfc | 81.683 | 78.126 | 87.286 | 85.542 | 58.925 | 99.800| 98.443 | 98.167 | 96.430 | [GDrive](https://drive.google.com/file/d/15jrHCqhEmoSZ93kKL9orVMhbKfNWAhp-/view?usp=sharing) |
| Glint360k | 86.789 | 84.749 | 91.414 | 90.088 | 66.168 | 99.817| 99.143 | 98.450 | 97.130 | [GDrive](https://drive.google.com/file/d/1gnt6P3jaiwfevV4hreWHPu0Mive5VRyP/view?usp=sharing) |
| Glint360k_pfc | 87.077 | 85.272 | 91.616 | 90.541 | 66.813 | 99.817| 99.143 | 98.450 | 97.020 | [GDrive](https://drive.google.com/file/d/164o2Ct42tyJdQjckeMJH2-7KTXolu-EP/view?usp=sharing) |
| WebFace600K | 90.566 | 89.355 | 94.177 | 92.358 | 73.852 | 99.800| 99.200 | 98.100 | 97.120 | [GDrive](https://drive.google.com/file/d/1N0GL-8ehw_bz2eZQWz2b0A5XBdXdxZhg/view?usp=sharing) |
| WebFace600K_pfc | 89.951 | 89.301 | 94.016 | 92.381 | 73.007 | 99.817| 99.143 | 98.117 | 97.010 | [GDrive](https://drive.google.com/file/d/11TASXssTnwLY1ZqKlRjsJiV-1nWu9pDY/view?usp=sharing) |
| Average | 69.247 | 65.908 | 77.121 | 72.819 | 52.014 | 99.706| 97.374 | 96.962 | 93.925 | |
| Average_pfc | 69.519 | 65.898 | 77.497 | 73.213 | 51.853 | 99.715| 97.457 | 96.965 | 93.818 | |
### List of models by MobileFaceNet and different training datasets:
**``FLOPS``:** 450M FLOPs
**``Model-Size``:** 13MB
| Dataset | MR-ALL | African | Caucasian | South Asian | East Asian | LFW | CFP-FP | AgeDB-30 | IJB-C(E4) | Link(onnx) |
| :-------- | ------ | ------- | ---- | ------ | -------- | ----- | ------ | -------- | --------- | --- |
| WebFace600K | 71.865 | 69.449 | 80.454 | 73.394 | 51.026 | 99.70 | 98.00 | 96.58 | 95.02 | - |
## 2. Face Detection models.
### 2.1 RetinaFace
In RetinaFace, mAP was evaluated with multi-scale testing.
``m025``: means MobileNet-0.25
| Implementation | Easy-Set | Medium-Set | Hard-Set | Link |
| ------------------------ | -------- | ---------- | -------- | ------------------------------------------------------------ |
| RetinaFace-R50 | 96.5 | 95.6 | 90.4 | [BDrive](https://pan.baidu.com/s/1C6nKq122gJxRhb37vK0_LQ), [GDrive](https://drive.google.com/file/d/1wm-6K688HQEx_H90UdAIuKv-NAsKBu85/view?usp=sharing) |
| RetinaFace-m025(yangfly) | - | - | 82.5 | [BDrive](https://pan.baidu.com/s/1P1ypO7VYUbNAezdvLm2m9w)(nzof), [GDrive](https://drive.google.com/drive/folders/1OTXuAUdkLVaf78iz63D1uqGLZi4LbPeL?usp=sharing) |
| BlazeFace-FPN-SSH (paddle) | 91.9 | 89.8 | 81.7 | [pretrained model](https://paddledet.bj.bcebos.com/models/blazeface_fpn_ssh_1000e.pdparams), [inference model](https://paddle-model-ecology.bj.bcebos.com/model/insight-face/blazeface_fpn_ssh_1000e_v1.0_infer.tar) |
### 2.2 SCRFD
In SCRFD, mAP was evaluated with single scale testing, VGA resolution.
``2.5G``: means the model costs ``2.5G`` FLOPs when the input image is in VGA(640x480) resolution.
``_KPS``: means this model can detect five facial keypoints.
| Name | Easy | Medium | Hard | FLOPs | Params(M) | Infer(ms) | Link(pth) |
| :------------: | ----- | ------ | ----- | ----- | --------- | --------- | ------------------------------------------------------------ |
| SCRFD_500M | 90.57 | 88.12 | 68.51 | 500M | 0.57 | 3.6 | [GDrive](https://drive.google.com/file/d/1OX0i_vWDp1Fp-ZynOUMZo-q1vB5g1pTN/view?usp=sharing) |
| SCRFD_1G | 92.38 | 90.57 | 74.80 | 1G | 0.64 | 4.1 | [GDrive](https://drive.google.com/file/d/1acd5wKjWnl1zMgS5YJBtCh13aWtw9dej/view?usp=sharing) |
| SCRFD_2.5G | 93.78 | 92.16 | 77.87 | 2.5G | 0.67 | 4.2 | [GDrive](https://drive.google.com/file/d/1wgg8GY2vyP3uUTaAKT0_MSpAPIhmDsCQ/view?usp=sharing) |
| SCRFD_10G | 95.16 | 93.87 | 83.05 | 10G | 3.86 | 4.9 | [GDrive](https://drive.google.com/file/d/1kUYa0s1XxLW37ZFRGeIfKNr9L_4ScpOg/view?usp=sharing) |
| SCRFD_34G | 96.06 | 94.92 | 85.29 | 34G | 9.80 | 11.7 | [GDrive](https://drive.google.com/file/d/1w9QOPilC9EhU0JgiVJoX0PLvfNSlm1XE/view?usp=sharing) |
| SCRFD_500M_KPS | 90.97 | 88.44 | 69.49 | 500M | 0.57 | 3.6 | [GDrive](https://drive.google.com/file/d/1TXvKmfLTTxtk7tMd2fEf-iWtAljlWDud/view?usp=sharing) |
| SCRFD_2.5G_KPS | 93.80 | 92.02 | 77.13 | 2.5G | 0.82 | 4.3 | [GDrive](https://drive.google.com/file/d/1KtOB9TocdPG9sk_S_-1QVG21y7OoLIIf/view?usp=sharing) |
| SCRFD_10G_KPS | 95.40 | 94.01 | 82.80 | 10G | 4.23 | 5.0 | [GDrive](https://drive.google.com/file/d/1-2uy0tgkenw6ZLxfKV1qVhmkb5Ep_5yx/view?usp=sharing) |
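A minimal detection sketch with the python package (a hedged illustration: the ONNX file name/path and input size are assumptions, and the model must already be downloaded locally):
```
import cv2
from insightface.model_zoo import get_model

detector = get_model('scrfd_10g_bnkps.onnx')        # path to a downloaded SCRFD_10G_KPS model
detector.prepare(ctx_id=0, input_size=(640, 640))   # VGA-like resolution, as used in the table
img = cv2.imread('t1.jpg')
bboxes, kpss = detector.detect(img)                 # Nx5 boxes with scores, Nx5x2 keypoints
```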
## 3. Face Alignment models.
### 3.1 2D Face Alignment
| Implementation | Points | Backbone | Params(M) | Link(onnx) |
| --------------------- | ------ | ------------- | --------- | ------------------------------------------------------------ |
| Coordinate-regression | 106 | MobileNet-0.5 | 1.2 | [GDrive](https://drive.google.com/file/d/1M5685m-bKnMCt0u2myJoEK5gUY3TDt_1/view?usp=sharing) |
### 3.2 3D Face Alignment
| Implementation | Points | Backbone | Params(M) | Link(onnx) |
| -------------- | ------ | --------- | --------- | ------------------------------------------------------------ |
| - | 68 | ResNet-50 | 34.2 | [GDrive](https://drive.google.com/file/d/1aJe5Rzoqrtf_a9U84E-V1b0rUi8-QbCI/view?usp=sharing) |
### 3.3 Dense Face Alignment
## 4. Face Attribute models.
### 4.1 Gender&Age
| Training-Set | Backbone | Params(M) | Link(onnx) |
| ------------ | -------------- | --------- | ------------------------------------------------------------ |
| CelebA | MobileNet-0.25 | 0.3 | [GDrive](https://drive.google.com/file/d/1Mm3TeUuaZOwmEMp0nGOddvgXCjpRodPU/view?usp=sharing) |
### 4.2 Expression
| insightface/model_zoo/README.md/0 | {
"file_path": "insightface/model_zoo/README.md",
"repo_id": "insightface",
"token_count": 7209
} | 115 |
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
"""
@Author : Qingping Zheng
@Contact : qingpingzheng2014@gmail.com
@File : miou.py
@Time : 10/01/21 00:00 PM
@Desc :
@License : Licensed under the Apache License, Version 2.0 (the "License");
@Copyright : Copyright 2022 The Authors. All Rights Reserved.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import cv2
import json
import numpy as np
import os
from collections import OrderedDict
from PIL import Image as PILImage
from utils.transforms import transform_parsing
LABELS = ['background', 'skin', 'nose', 'eye_g', 'l_eye', 'r_eye', \
'l_brow', 'r_brow', 'l_ear', 'r_ear', 'mouth', 'u_lip', \
'l_lip', 'hair', 'hat', 'ear_r', 'neck_l', 'neck', 'cloth']
def get_palette(num_cls):
""" Returns the color map for visualizing the segmentation mask.
Args:
num_cls: Number of classes
Returns:
The color map
"""
n = num_cls
palette = [0] * (n * 3)
for j in range(0, n):
lab = j
palette[j * 3 + 0] = 0
palette[j * 3 + 1] = 0
palette[j * 3 + 2] = 0
i = 0
while lab:
palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i))
palette[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i))
palette[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i))
i += 1
lab >>= 3
return palette
def get_confusion_matrix(gt_label, pred_label, num_classes):
"""
    Calculate the confusion matrix from the given ground-truth and predicted labels
    :param gt_label: the ground truth label
    :param pred_label: the predicted label
    :param num_classes: the number of classes
:return: the confusion matrix
"""
index = (gt_label * num_classes + pred_label).astype('int32')
label_count = np.bincount(index)
confusion_matrix = np.zeros((num_classes, num_classes))
for i_label in range(num_classes):
for i_pred_label in range(num_classes):
cur_index = i_label * num_classes + i_pred_label
if cur_index < len(label_count):
confusion_matrix[i_label, i_pred_label] = label_count[cur_index]
return confusion_matrix
def fast_histogram(a, b, na, nb):
'''
fast histogram calculation
---
* a, b: non negative label ids, a.shape == b.shape, a in [0, ... na-1], b in [0, ..., nb-1]
'''
assert a.shape == b.shape
assert np.all((a >= 0) & (a < na) & (b >= 0) & (b < nb))
# k = (a >= 0) & (a < na) & (b >= 0) & (b < nb)
hist = np.bincount(
nb * a.reshape([-1]).astype(int) + b.reshape([-1]).astype(int),
minlength=na * nb).reshape(na, nb)
assert np.sum(hist) == a.size
return hist
def _read_names(file_name):
label_names = []
for name in open(file_name, 'r'):
name = name.strip()
if len(name) > 0:
label_names.append(name)
return label_names
def _merge(*list_pairs):
a = []
b = []
for al, bl in list_pairs:
a += al
b += bl
return a, b
def compute_mean_ioU(preds, scales, centers, num_classes, datadir, input_size=[473, 473], dataset='val', reverse=False):
file_list_name = os.path.join(datadir, dataset + '_list.txt')
val_id = [line.split()[0][7:-4] for line in open(file_list_name).readlines()]
confusion_matrix = np.zeros((num_classes, num_classes))
label_names_file = os.path.join(datadir, 'label_names.txt')
gt_label_names = pred_label_names = _read_names(label_names_file)
assert gt_label_names[0] == pred_label_names[0] == 'bg'
hists = []
for i, im_name in enumerate(val_id):
gt_path = os.path.join(datadir, dataset, 'labels', im_name + '.png')
gt = cv2.imread(gt_path, cv2.IMREAD_GRAYSCALE)
h, w = gt.shape
pred_out = preds[i]
if scales is not None:
s = scales[i]
c = centers[i]
else:
s = None
c = None
pred_old = transform_parsing(pred_out, c, s, w, h, input_size)
gt = np.asarray(gt, dtype=np.int32)
pred = np.asarray(pred_old, dtype=np.int32)
ignore_index = gt != 255
gt = gt[ignore_index]
pred = pred[ignore_index]
hist = fast_histogram(gt, pred, len(gt_label_names), len(pred_label_names))
hists.append(hist)
confusion_matrix += get_confusion_matrix(gt, pred, num_classes)
hist_sum = np.sum(np.stack(hists, axis=0), axis=0)
eval_names = dict()
for label_name in gt_label_names:
gt_ind = gt_label_names.index(label_name)
pred_ind = pred_label_names.index(label_name)
eval_names[label_name] = ([gt_ind], [pred_ind])
if 'le' in eval_names and 're' in eval_names:
eval_names['eyes'] = _merge(eval_names['le'], eval_names['re'])
if 'lb' in eval_names and 'rb' in eval_names:
eval_names['brows'] = _merge(eval_names['lb'], eval_names['rb'])
if 'ulip' in eval_names and 'imouth' in eval_names and 'llip' in eval_names:
eval_names['mouth'] = _merge(
eval_names['ulip'], eval_names['imouth'], eval_names['llip'])
# Helen
if 'eyes' in eval_names and 'brows' in eval_names and 'nose' in eval_names and 'mouth' in eval_names:
eval_names['overall'] = _merge(
eval_names['eyes'], eval_names['brows'], eval_names['nose'], eval_names['mouth'])
pos = confusion_matrix.sum(1)
res = confusion_matrix.sum(0)
tp = np.diag(confusion_matrix)
pixel_accuracy = (tp.sum() / pos.sum()) * 100
mean_accuracy = ((tp / np.maximum(1.0, pos)).mean()) * 100
IoU_array = (tp / np.maximum(1.0, pos + res - tp))
IoU_array = IoU_array * 100
mean_IoU = IoU_array.mean()
print('Pixel accuracy: %f \n' % pixel_accuracy)
print('Mean accuracy: %f \n' % mean_accuracy)
print('Mean IU: %f \n' % mean_IoU)
mIoU_value = []
f1_value = []
mf1_value = []
for i, (label, iou) in enumerate(zip(LABELS, IoU_array)):
mIoU_value.append((label, iou))
mIoU_value.append(('Pixel accuracy', pixel_accuracy))
mIoU_value.append(('Mean accuracy', mean_accuracy))
mIoU_value.append(('Mean IU', mean_IoU))
mIoU_value = OrderedDict(mIoU_value)
for eval_name, (gt_inds, pred_inds) in eval_names.items():
A = hist_sum[gt_inds, :].sum()
B = hist_sum[:, pred_inds].sum()
intersected = hist_sum[gt_inds, :][:, pred_inds].sum()
f1 = 2 * intersected / (A + B)
if eval_name in gt_label_names[1:]:
mf1_value.append(f1)
f1_value.append((eval_name, f1))
f1_value.append(('Mean_F1', np.array(mf1_value).mean()))
f1_value = OrderedDict(f1_value)
return mIoU_value, f1_value
def write_results(preds, scales, centers, datadir, dataset, result_dir, input_size=[473, 473]):
palette = get_palette(20)
if not os.path.exists(result_dir):
os.makedirs(result_dir)
json_file = os.path.join(datadir, 'annotations', dataset + '.json')
with open(json_file) as data_file:
data_list = json.load(data_file)
data_list = data_list['root']
for item, pred_out, s, c in zip(data_list, preds, scales, centers):
im_name = item['im_name']
w = item['img_width']
h = item['img_height']
pred = transform_parsing(pred_out, c, s, w, h, input_size)
save_path = os.path.join(result_dir, im_name[:-4]+'.png')
output_im = PILImage.fromarray(np.asarray(pred, dtype=np.uint8))
output_im.putpalette(palette)
output_im.save(save_path)
def get_arguments():
"""Parse all the arguments provided from the CLI.
Returns:
A list of parsed arguments.
"""
parser = argparse.ArgumentParser(description="DeepLabLFOV NetworkEv")
parser.add_argument("--pred-path", type=str, default='',
help="Path to predicted segmentation.")
parser.add_argument("--gt-path", type=str, default='',
help="Path to the groundtruth dir.")
return parser.parse_args()
| insightface/parsing/dml_csr/utils/miou.py/0 | {
"file_path": "insightface/parsing/dml_csr/utils/miou.py",
"repo_id": "insightface",
"token_count": 3710
} | 116 |
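A small worked example of the bincount trick used by `fast_histogram` / `get_confusion_matrix` above (pure numpy, with made-up labels):
```
import numpy as np

gt   = np.array([0, 0, 1, 1, 2])
pred = np.array([0, 1, 1, 1, 0])
n = 3
hist = np.bincount(n * gt + pred, minlength=n * n).reshape(n, n)
# hist[i, j] counts pixels whose ground-truth class is i and predicted class is j:
# [[1 1 0]
#  [0 2 0]
#  [1 0 0]]
iou = np.diag(hist) / np.maximum(1.0, hist.sum(1) + hist.sum(0) - np.diag(hist))
```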
import cv2
import os
import os.path as osp
from pathlib import Path
class ImageCache:
data = {}
def get_image(name, to_rgb=False, use_cache=True):
key = (name, to_rgb)
if key in ImageCache.data:
return ImageCache.data[key]
images_dir = osp.join(Path(__file__).parent.absolute(), 'images')
ext_names = ['.jpg', '.png', '.jpeg']
image_file = None
for ext_name in ext_names:
_image_file = osp.join(images_dir, "%s%s"%(name, ext_name))
if osp.exists(_image_file):
image_file = _image_file
break
assert image_file is not None, '%s not found'%name
img = cv2.imread(image_file)
if to_rgb:
img = img[:,:,::-1]
if use_cache:
ImageCache.data[key] = img
return img
| insightface/python-package/insightface/data/image.py/0 | {
"file_path": "insightface/python-package/insightface/data/image.py",
"repo_id": "insightface",
"token_count": 350
} | 117 |
# -*- coding: utf-8 -*-
# @Organization : insightface.ai
# @Author : Jia Guo
# @Time : 2021-05-04
# @Function :
import os
import os.path as osp
import glob
import onnxruntime
from .arcface_onnx import *
from .retinaface import *
#from .scrfd import *
from .landmark import *
from .attribute import Attribute
from .inswapper import INSwapper
from ..utils import download_onnx
__all__ = ['get_model']
class PickableInferenceSession(onnxruntime.InferenceSession):
# This is a wrapper to make the current InferenceSession class pickable.
def __init__(self, model_path, **kwargs):
super().__init__(model_path, **kwargs)
self.model_path = model_path
def __getstate__(self):
return {'model_path': self.model_path}
def __setstate__(self, values):
model_path = values['model_path']
self.__init__(model_path)
class ModelRouter:
def __init__(self, onnx_file):
self.onnx_file = onnx_file
def get_model(self, **kwargs):
session = PickableInferenceSession(self.onnx_file, **kwargs)
print(f'Applied providers: {session._providers}, with options: {session._provider_options}')
inputs = session.get_inputs()
input_cfg = inputs[0]
input_shape = input_cfg.shape
outputs = session.get_outputs()
if len(outputs)>=5:
return RetinaFace(model_file=self.onnx_file, session=session)
elif input_shape[2]==192 and input_shape[3]==192:
return Landmark(model_file=self.onnx_file, session=session)
elif input_shape[2]==96 and input_shape[3]==96:
return Attribute(model_file=self.onnx_file, session=session)
elif len(inputs)==2 and input_shape[2]==128 and input_shape[3]==128:
return INSwapper(model_file=self.onnx_file, session=session)
elif input_shape[2]==input_shape[3] and input_shape[2]>=112 and input_shape[2]%16==0:
return ArcFaceONNX(model_file=self.onnx_file, session=session)
else:
#raise RuntimeError('error on model routing')
return None
def find_onnx_file(dir_path):
if not os.path.exists(dir_path):
return None
paths = glob.glob("%s/*.onnx" % dir_path)
if len(paths) == 0:
return None
paths = sorted(paths)
return paths[-1]
def get_default_providers():
return ['CUDAExecutionProvider', 'CPUExecutionProvider']
def get_default_provider_options():
return None
def get_model(name, **kwargs):
root = kwargs.get('root', '~/.insightface')
root = os.path.expanduser(root)
model_root = osp.join(root, 'models')
allow_download = kwargs.get('download', False)
download_zip = kwargs.get('download_zip', False)
if not name.endswith('.onnx'):
model_dir = os.path.join(model_root, name)
model_file = find_onnx_file(model_dir)
if model_file is None:
return None
else:
model_file = name
if not osp.exists(model_file) and allow_download:
model_file = download_onnx('models', model_file, root=root, download_zip=download_zip)
assert osp.exists(model_file), 'model_file %s should exist'%model_file
assert osp.isfile(model_file), 'model_file %s should be a file'%model_file
router = ModelRouter(model_file)
providers = kwargs.get('providers', get_default_providers())
provider_options = kwargs.get('provider_options', get_default_provider_options())
model = router.get_model(providers=providers, provider_options=provider_options)
return model
| insightface/python-package/insightface/model_zoo/model_zoo.py/0 | {
"file_path": "insightface/python-package/insightface/model_zoo/model_zoo.py",
"repo_id": "insightface",
"token_count": 1488
} | 118 |
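A hedged sketch of calling `get_model` above directly (the file path is an assumption — it must already exist on disk — and the routing outcome follows `ModelRouter.get_model`):
```
from insightface.model_zoo import get_model

# Either a pack name resolved under ~/.insightface/models/<name>, or a direct .onnx path.
rec = get_model('w600k_r50.onnx',                     # local path to a recognition model
                providers=['CPUExecutionProvider'])
# Routing is shape-based: a single 112x112 input gives ArcFaceONNX, 5+ outputs give
# RetinaFace, a 192x192 input gives Landmark, and a 96x96 input gives Attribute.
```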
'''
Functions for rendering a mesh (from 3D obj to 2D image).
Only rasterization rendering is used here.
Note that:
1. Generally, a render function includes camera, light and rasterization. There is no camera or light here (those are written in other files).
2. Generally, the input vertices are normalized to [-1, 1] and centered on [0, 0] (in world space).
   Here, the vertices use image coordinates, which are centered on [w/2, h/2] with the y-axis pointing in the opposite direction.
This means the render here only performs interpolation (to keep the input flexible).
Author: Yao Feng
Mail: yaofeng1995@gmail.com
'''
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from time import time
from .cython import mesh_core_cython
def rasterize_triangles(vertices, triangles, h, w):
'''
Args:
vertices: [nver, 3]
triangles: [ntri, 3]
h: height
w: width
Returns:
        depth_buffer: [h, w] saves the depth; here, the bigger the z, the nearer the point.
        triangle_buffer: [h, w] saves the triangle id (-1 for no triangle).
barycentric_weight: [h, w, 3] saves corresponding barycentric weight.
# Each triangle has 3 vertices & Each vertex has 3 coordinates x, y, z.
# h, w is the size of rendering
'''
# initial
    depth_buffer = np.zeros([h, w]) - 999999.  # set the initial z to the farthest position
    triangle_buffer = np.zeros([h, w], dtype = np.int32) - 1  # if tri id = -1, the pixel has no triangle correspondence
barycentric_weight = np.zeros([h, w, 3], dtype = np.float32) #
vertices = vertices.astype(np.float32).copy()
triangles = triangles.astype(np.int32).copy()
    mesh_core_cython.rasterize_triangles_core(
        vertices, triangles,
        depth_buffer, triangle_buffer, barycentric_weight,
        vertices.shape[0], triangles.shape[0],
        h, w)
    return depth_buffer, triangle_buffer, barycentric_weight
def render_colors(vertices, triangles, colors, h, w, c = 3, BG = None):
''' render mesh with colors
Args:
vertices: [nver, 3]
triangles: [ntri, 3]
colors: [nver, 3]
h: height
w: width
c: channel
BG: background image
Returns:
image: [h, w, c]. rendered image./rendering.
'''
# initial
if BG is None:
image = np.zeros((h, w, c), dtype = np.float32)
else:
assert BG.shape[0] == h and BG.shape[1] == w and BG.shape[2] == c
image = BG
depth_buffer = np.zeros([h, w], dtype = np.float32, order = 'C') - 999999.
# change orders. --> C-contiguous order(column major)
vertices = vertices.astype(np.float32).copy()
triangles = triangles.astype(np.int32).copy()
colors = colors.astype(np.float32).copy()
###
st = time()
mesh_core_cython.render_colors_core(
image, vertices, triangles,
colors,
depth_buffer,
vertices.shape[0], triangles.shape[0],
h, w, c)
return image
def render_texture(vertices, triangles, texture, tex_coords, tex_triangles, h, w, c = 3, mapping_type = 'nearest', BG = None):
''' render mesh with texture map
Args:
vertices: [3, nver]
triangles: [3, ntri]
texture: [tex_h, tex_w, 3]
tex_coords: [ntexcoords, 3]
tex_triangles: [ntri, 3]
h: height of rendering
w: width of rendering
c: channel
mapping_type: 'bilinear' or 'nearest'
'''
# initial
if BG is None:
image = np.zeros((h, w, c), dtype = np.float32)
else:
assert BG.shape[0] == h and BG.shape[1] == w and BG.shape[2] == c
image = BG
depth_buffer = np.zeros([h, w], dtype = np.float32, order = 'C') - 999999.
tex_h, tex_w, tex_c = texture.shape
if mapping_type == 'nearest':
mt = int(0)
elif mapping_type == 'bilinear':
mt = int(1)
else:
mt = int(0)
# -> C order
vertices = vertices.astype(np.float32).copy()
triangles = triangles.astype(np.int32).copy()
texture = texture.astype(np.float32).copy()
tex_coords = tex_coords.astype(np.float32).copy()
tex_triangles = tex_triangles.astype(np.int32).copy()
mesh_core_cython.render_texture_core(
image, vertices, triangles,
texture, tex_coords, tex_triangles,
depth_buffer,
vertices.shape[0], tex_coords.shape[0], triangles.shape[0],
h, w, c,
tex_h, tex_w, tex_c,
mt)
return image
| insightface/python-package/insightface/thirdparty/face3d/mesh/render.py/0 | {
"file_path": "insightface/python-package/insightface/thirdparty/face3d/mesh/render.py",
"repo_id": "insightface",
"token_count": 2031
} | 119 |
import cv2
import numpy as np
from skimage import transform as trans
arcface_dst = np.array(
[[38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
[41.5493, 92.3655], [70.7299, 92.2041]],
dtype=np.float32)
def estimate_norm(lmk, image_size=112,mode='arcface'):
assert lmk.shape == (5, 2)
assert image_size%112==0 or image_size%128==0
if image_size%112==0:
ratio = float(image_size)/112.0
diff_x = 0
else:
ratio = float(image_size)/128.0
diff_x = 8.0*ratio
dst = arcface_dst * ratio
dst[:,0] += diff_x
tform = trans.SimilarityTransform()
tform.estimate(lmk, dst)
M = tform.params[0:2, :]
return M
def norm_crop(img, landmark, image_size=112, mode='arcface'):
M = estimate_norm(landmark, image_size, mode)
warped = cv2.warpAffine(img, M, (image_size, image_size), borderValue=0.0)
return warped
def norm_crop2(img, landmark, image_size=112, mode='arcface'):
M = estimate_norm(landmark, image_size, mode)
warped = cv2.warpAffine(img, M, (image_size, image_size), borderValue=0.0)
return warped, M
def square_crop(im, S):
if im.shape[0] > im.shape[1]:
height = S
width = int(float(im.shape[1]) / im.shape[0] * S)
scale = float(S) / im.shape[0]
else:
width = S
height = int(float(im.shape[0]) / im.shape[1] * S)
scale = float(S) / im.shape[1]
resized_im = cv2.resize(im, (width, height))
det_im = np.zeros((S, S, 3), dtype=np.uint8)
det_im[:resized_im.shape[0], :resized_im.shape[1], :] = resized_im
return det_im, scale
def transform(data, center, output_size, scale, rotation):
scale_ratio = scale
rot = float(rotation) * np.pi / 180.0
#translation = (output_size/2-center[0]*scale_ratio, output_size/2-center[1]*scale_ratio)
t1 = trans.SimilarityTransform(scale=scale_ratio)
cx = center[0] * scale_ratio
cy = center[1] * scale_ratio
t2 = trans.SimilarityTransform(translation=(-1 * cx, -1 * cy))
t3 = trans.SimilarityTransform(rotation=rot)
t4 = trans.SimilarityTransform(translation=(output_size / 2,
output_size / 2))
t = t1 + t2 + t3 + t4
M = t.params[0:2]
cropped = cv2.warpAffine(data,
M, (output_size, output_size),
borderValue=0.0)
return cropped, M
def trans_points2d(pts, M):
new_pts = np.zeros(shape=pts.shape, dtype=np.float32)
for i in range(pts.shape[0]):
pt = pts[i]
new_pt = np.array([pt[0], pt[1], 1.], dtype=np.float32)
new_pt = np.dot(M, new_pt)
#print('new_pt', new_pt.shape, new_pt)
new_pts[i] = new_pt[0:2]
return new_pts
def trans_points3d(pts, M):
scale = np.sqrt(M[0][0] * M[0][0] + M[0][1] * M[0][1])
#print(scale)
new_pts = np.zeros(shape=pts.shape, dtype=np.float32)
for i in range(pts.shape[0]):
pt = pts[i]
new_pt = np.array([pt[0], pt[1], 1.], dtype=np.float32)
new_pt = np.dot(M, new_pt)
#print('new_pt', new_pt.shape, new_pt)
new_pts[i][0:2] = new_pt[0:2]
new_pts[i][2] = pts[i][2] * scale
return new_pts
def trans_points(pts, M):
if pts.shape[1] == 2:
return trans_points2d(pts, M)
else:
return trans_points3d(pts, M)
| insightface/python-package/insightface/utils/face_align.py/0 | {
"file_path": "insightface/python-package/insightface/utils/face_align.py",
"repo_id": "insightface",
"token_count": 1666
} | 120 |
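A minimal alignment sketch using `norm_crop` from the file above (a hedged illustration: the landmark values are made up and would normally come from a five-point face detector, in original-image coordinates):
```
import cv2
import numpy as np
from insightface.utils.face_align import norm_crop

img = cv2.imread('t1.jpg')
lmk = np.array([[38.0, 52.0], [74.0, 52.0], [56.0, 72.0],
                [42.0, 92.0], [71.0, 92.0]], dtype=np.float32)  # eyes, nose, mouth corners
aligned = norm_crop(img, lmk, image_size=112)  # 112x112 crop for ArcFace-style recognition
```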