apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/mkdocs.yml | # Project information
site_name: YOLO3D
site_url: https://ruhyadi.github.io/yolo3d-lightning
site_author: Didi Ruhyadi
site_description: >-
YOLO3D: 3D Object Detection with YOLO
# Repository
repo_name: ruhyadi/yolo3d-lightning
repo_url: https://github.com/ruhyadi/yolo3d-lightning
edit_uri: ""
# Copyright
copyright: Copyright © 2020 - 2022 Didi Ruhyadi
# Configuration
theme:
name: material
language: en
# Don't include MkDocs' JavaScript
include_search_page: false
search_index_only: true
features:
- content.code.annotate
# - content.tabs.link
# - header.autohide
# - navigation.expand
- navigation.indexes
# - navigation.instant
- navigation.sections
- navigation.tabs
# - navigation.tabs.sticky
- navigation.top
- navigation.tracking
- search.highlight
- search.share
- search.suggest
# - toc.integrate
palette:
- scheme: default
primary: white
accent: indigo
toggle:
icon: material/weather-night
name: Vampire Mode
- scheme: slate
primary: indigo
accent: blue
toggle:
icon: material/weather-sunny
name: Beware of Your Eyes
font:
text: Noto Serif
code: Noto Mono
favicon: assets/logo.png
logo: assets/logo.png
icon:
repo: fontawesome/brands/github
# Plugins
plugins:
# Customization
extra:
social:
- icon: fontawesome/brands/github
link: https://github.com/ruhyadi
- icon: fontawesome/brands/docker
link: https://hub.docker.com/r/ruhyadi
- icon: fontawesome/brands/twitter
link: https://twitter.com/
- icon: fontawesome/brands/linkedin
link: https://linkedin.com/in/didiruhyadi
- icon: fontawesome/brands/instagram
link: https://instagram.com/didiir_
extra_javascript:
- javascripts/mathjax.js
- https://polyfill.io/v3/polyfill.min.js?features=es6
- https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js
# Extensions
markdown_extensions:
- admonition
- abbr
- pymdownx.snippets
- attr_list
- def_list
- footnotes
- meta
- md_in_html
- toc:
permalink: true
- pymdownx.arithmatex:
generic: true
- pymdownx.betterem:
smart_enable: all
- pymdownx.caret
- pymdownx.details
- pymdownx.emoji:
emoji_index: !!python/name:materialx.emoji.twemoji
emoji_generator: !!python/name:materialx.emoji.to_svg
- pymdownx.highlight:
anchor_linenums: true
- pymdownx.inlinehilite
- pymdownx.keys
- pymdownx.magiclink:
repo_url_shorthand: true
user: squidfunk
repo: mkdocs-material
- pymdownx.mark
- pymdownx.smartsymbols
- pymdownx.superfences:
custom_fences:
- name: mermaid
class: mermaid
format: !!python/name:pymdownx.superfences.fence_code_format
- pymdownx.tabbed:
alternate_style: true
- pymdownx.tasklist:
custom_checkbox: true
- pymdownx.tilde
# Page tree
nav:
- Home:
- Home: index.md
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/LICENSE | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2021-2022 Megvii Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/requirements.txt | # --------- pytorch --------- #
torch>=1.8.0
torchvision>=0.9.1
pytorch-lightning==1.6.5
torchmetrics==0.9.2
# --------- hydra --------- #
hydra-core==1.2.0
hydra-colorlog==1.2.0
hydra-optuna-sweeper==1.2.0
# --------- loggers --------- #
# wandb
# neptune-client
# mlflow
# comet-ml
# --------- others --------- #
pyrootutils # standardizing the project root setup
pre-commit # hooks for applying linters on commit
rich # beautiful text formatting in terminal
pytest # tests
sh # for running bash commands in some tests
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/Makefile |
help: ## Show help
@grep -E '^[.a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
clean: ## Clean autogenerated files
rm -rf dist
find . -type f -name "*.DS_Store" -ls -delete
find . | grep -E "(__pycache__|\.pyc|\.pyo)" | xargs rm -rf
find . | grep -E ".pytest_cache" | xargs rm -rf
find . | grep -E ".ipynb_checkpoints" | xargs rm -rf
rm -f .coverage
clean-logs: ## Clean logs
rm -rf logs/**
format: ## Run pre-commit hooks
pre-commit run -a
sync: ## Merge changes from main branch to your current branch
git pull
git pull origin main
test: ## Run tests, excluding slow ones
pytest -k "not slow"
test-full: ## Run all tests
pytest
train: ## Train the model
python src/train.py
debug: ## Enter debugging mode with pdb
#
# tips:
# - use "import pdb; pdb.set_trace()" to set breakpoint
# - use "h" to print all commands
# - use "n" to execute the next line
# - use "c" to run until the breakpoint is hit
# - use "l" to print src code around current line, "ll" for full function code
# - docs: https://docs.python.org/3/library/pdb.html
#
python -m pdb src/train.py debug=default
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/convert.py | """ Convert checkpoint to model (.pt/.pth/.onnx) """
import torch
from torch.utils.data import Dataset, DataLoader
from pytorch_lightning import LightningModule
from src import utils
import dotenv
import hydra
from omegaconf import DictConfig
import os
# load environment variables from `.env` file if it exists
# recursively searches for `.env` in all folders starting from work dir
dotenv.load_dotenv(override=True)
log = utils.get_pylogger(__name__)
@hydra.main(config_path="configs/", config_name="convert.yaml")
def convert(config: DictConfig):
    # validate the requested conversion format
    assert config.get('convert_to') in ['pytorch', 'torchscript', 'onnx', 'tensorrt'], \
        "Please choose one of [pytorch, torchscript, onnx, tensorrt]"
# Init lightning model
log.info(f"Instantiating model <{config.model._target_}>")
model: LightningModule = hydra.utils.instantiate(config.model)
# regressor: LightningModule = hydra.utils.instantiate(config.model)
# regressor.load_state_dict(torch.load(config.get("regressor_weights"), map_location="cpu"))
# regressor.eval().to(config.get("device"))
# Convert relative ckpt path to absolute path if necessary
log.info(f"Load checkpoint <{config.get('checkpoint_dir')}>")
ckpt_path = config.get("checkpoint_dir")
if ckpt_path and not os.path.isabs(ckpt_path):
        ckpt_path = os.path.join(hydra.utils.get_original_cwd(), ckpt_path)
# load model checkpoint
model = model.load_from_checkpoint(ckpt_path)
model.cuda()
# input sample
input_sample = config.get('input_sample')
# Convert
if config.get('convert_to') == 'pytorch':
log.info("Convert to Pytorch (.pt)")
torch.save(model.state_dict(), f'{config.get("name")}.pt')
log.info(f"Saved model {config.get('name')}.pt")
if config.get('convert_to') == 'onnx':
log.info("Convert to ONNX (.onnx)")
model.cuda()
input_sample = torch.rand((1, 3, 224, 224), device=torch.device('cuda'))
model.to_onnx(f'{config.get("name")}.onnx', input_sample, export_params=True)
log.info(f"Saved model {config.get('name')}.onnx")
if __name__ == '__main__':
    convert()
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/pyproject.toml | [tool.pytest.ini_options]
addopts = [
"--color=yes",
"--durations=0",
"--strict-markers",
"--doctest-modules",
]
filterwarnings = [
"ignore::DeprecationWarning",
"ignore::UserWarning",
]
log_cli = "True"
markers = [
"slow: slow tests",
]
minversion = "6.0"
testpaths = "tests/"
[tool.coverage.report]
exclude_lines = [
"pragma: nocover",
"raise NotImplementedError",
"raise NotImplementedError()",
"if __name__ == .__main__.:",
]
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/README.md | <div align="center">
# YOLO3D: 3D Object Detection with YOLO
</div>
## Introduction
YOLO3D is inspired by [Mousavian et al.](https://arxiv.org/abs/1612.00496) in their paper **3D Bounding Box Estimation Using Deep Learning and Geometry**. YOLO3D takes a different approach: we use the 2D ground-truth labels in place of the first-stage detector's output, and feed those 2D boxes into the regressor model.
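As a minimal sketch of that idea (the helper below is illustrative only; the actual pipeline, including calibration handling, lives in `inference.py`):

```python
import cv2
import torch
from torchvision.transforms import transforms

# Illustrative two-stage flow: a 2D box (here taken from the ground-truth label,
# standing in for a first-stage detector) is cropped from the image and passed to
# the regressor, which predicts orientation bins, bin confidences, and 3D dimensions.
def regress_3d_params(regressor, img_bgr, box_2d, device="cuda"):
    (x1, y1), (x2, y2) = box_2d                           # 2D box corners
    crop = cv2.resize(img_bgr[y1:y2, x1:x2], (224, 224))
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.406, 0.456, 0.485],  # BGR-ordered stats, as in inference.py
                             std=[0.225, 0.224, 0.229]),
    ])
    batch = preprocess(crop).unsqueeze(0).to(device)      # shape (1, 3, 224, 224)
    with torch.no_grad():
        orient, conf, dim = regressor(batch)
    return orient, conf, dim
```

The predicted alpha angle and dimensions are then combined with the camera calibration (P2) to recover the full 3D box, as done in `inference.py`.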
## Quickstart
```bash
git clone git@github.com:ApolloAuto/apollo-model-yolo3d.git
```
### create env for YOLO3D
```shell
cd apollo-model-yolo3d
conda create -n apollo_yolo3d python=3.8 numpy
conda activate apollo_yolo3d
pip install -r requirements.txt
```
### datasets
Here we use the KITTI dataset for training. You can download it from the [official website](http://www.cvlibs.net/datasets/kitti/). After that, extract the dataset to `data/KITTI`.
```shell
ln -s /your/KITTI/path data/KITTI
```
```bash
├── data
│ └── KITTI
│ ├── calib
│   ├── image_2
│   └── label_2
```
modify the [datasplit](data/datasplit.py) script to split the train and val data as you like; a rough sketch of what such a script might do is shown after the command below.
```shell
cd data
python datasplit.py
```
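A rough sketch of such a split script (hypothetical; the real `data/datasplit.py` may differ in split ratio, paths, and output format):

```python
import random
from pathlib import Path

# Hypothetical sketch: collect KITTI image indices and write train/val splits to
# ImageSets/train.txt and ImageSets/val.txt (the val list is later consumed by
# evaluation, e.g. data/KITTI/ImageSets/val.txt).
def split_kitti(root="KITTI", val_ratio=0.2, seed=42):
    ids = sorted(p.stem for p in Path(root, "image_2").glob("*.png"))
    random.Random(seed).shuffle(ids)
    n_val = int(len(ids) * val_ratio)
    out = Path(root, "ImageSets")
    out.mkdir(exist_ok=True)
    (out / "val.txt").write_text("\n".join(sorted(ids[:n_val])) + "\n")
    (out / "train.txt").write_text("\n".join(sorted(ids[n_val:])) + "\n")

if __name__ == "__main__":
    split_kitti()
```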
### train
modify [train.yaml](configs/train.yaml) to train your model.
```shell
python src/train.py experiment=sample
```
> log path: /logs \
> model path: /weights
### convert
modify the [convert.yaml](configs/convert.yaml) file to convert the .ckpt checkpoint to a .pt model; see the sketch after the command below.
```shell
python convert.py
```
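Judging from the keys that `convert.py` reads, the config provides roughly the fields below (a hedged sketch with example values; check the actual [convert.yaml](configs/convert.yaml) for the real layout):

```yaml
# Fields read by convert.py via config.get(...); values here are examples only.
convert_to: pytorch                      # one of: pytorch, torchscript, onnx, tensorrt
checkpoint_dir: path/to/checkpoint.ckpt  # checkpoint to load (example path)
name: pytorch-kitti                      # output file name stem
input_sample: null                       # optional input sample for export
model:
  _target_: src.models.regressor.RegressorModel  # hypothetical target class
```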
### inference
In order to show the real inference ability of the model, we crop the image according to the ground-truth 2D boxes and use the crops as YOLO3D input. You can use the following command to plot the 3D results.
modify the [inference.yaml](configs/inference.yaml) file to change the .pt model path.
Setting **export_onnx=True** exports an ONNX model.
```shell
python inference.py \
source_dir=./data/KITTI \
detector.classes=6 \
regressor_weights=./weights/pytorch-kitti.pt \
export_onnx=False \
func=image
```
- source_dir: path of the dataset, containing the /image_2 and /label_2 folders
- detector.classes: KITTI object classes
- regressor_weights: path to your trained regressor model
- export_onnx: export an ONNX model for Apollo
> result path: /outputs
### evaluate
generate labels for the 3D results:
```shell
python inference.py \
source_dir=./data/KITTI \
detector.classes=6 \
regressor_weights=./weights/pytorch-kitti.pt \
export_onnx=False \
func=label
```
> result path: /data/KITTI/result
```bash
├── data
│ └── KITTI
│ ├── calib
│   ├── image_2
│   ├── label_2
│ └── result
```
modify label_path, result_path, and label_split_file in the run.sh script inside the [kitti_object_eval](kitti_object_eval) folder; with its help we can calculate mAP:
```shell
cd kitti_object_eval
sh run.sh
```
## Acknowledgement
- [yolo3d-lightning](https://github.com/ruhyadi/yolo3d-lightning)
- [skhadem/3D-BoundingBox](https://github.com/skhadem/3D-BoundingBox)
- [Mousavian et al.](https://arxiv.org/abs/1612.00496)
```
@misc{mousavian20173d,
title={3D Bounding Box Estimation Using Deep Learning and Geometry},
author={Arsalan Mousavian and Dragomir Anguelov and John Flynn and Jana Kosecka},
year={2017},
eprint={1612.00496},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/setup.py | #!/usr/bin/env python
from setuptools import find_packages, setup
setup(
name="src",
version="0.0.1",
description="Describe Your Cool Project",
author="",
author_email="",
url="https://github.com/user/project", # REPLACE WITH YOUR OWN GITHUB PROJECT LINK
install_requires=["pytorch-lightning", "hydra-core"],
packages=find_packages(),
)
apollo_public_repos | apollo_public_repos/apollo-model-yolo3d/inference.py | """ Inference Code """
from typing import List
from PIL import Image
import cv2
from glob import glob
import numpy as np
import torch
from torchvision.transforms import transforms
from pytorch_lightning import LightningModule
from src.utils import Calib
from src.utils.averages import ClassAverages
from src.utils.Math import compute_orientaion, recover_angle, translation_constraints
from src.utils.Plotting import Plot3DBoxBev
import dotenv
import hydra
from omegaconf import DictConfig
import os
import pyrootutils
import src.utils
from src.utils.utils import KITTIObject
import torch.onnx
from torch.onnx import OperatorExportTypes
log = src.utils.get_pylogger(__name__)
try:
import onnxruntime
import openvino.runtime as ov
except ImportError:
log.warning("ONNX and OpenVINO not installed")
dotenv.load_dotenv(override=True)
root = pyrootutils.setup_root(__file__, dotenv=True, pythonpath=True)
class Bbox:
def __init__(self, box_2d, label, h, w, l, tx, ty, tz, ry, alpha):
self.box_2d = box_2d
self.detected_class = label
self.w = w
self.h = h
self.l = l
self.tx = tx
self.ty = ty
self.tz = tz
self.ry = ry
self.alpha = alpha
def mkdir(path):
folder = os.path.exists(path)
if not folder:
os.makedirs(path)
print("--- creating new folder... ---")
print("--- finished ---")
else:
# print("--- pass to create new folder ---")
pass
def format_img(img, box_2d):
# transforms
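    # note: the mean/std below appear to be the ImageNet statistics in reversed (BGR)
    # channel order, matching the BGR images returned by cv2.imread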
normalize = transforms.Normalize(
mean=[0.406, 0.456, 0.485],
std=[0.225, 0.224, 0.229])
process = transforms.Compose([
transforms.ToTensor(),
normalize
])
# crop image
pt1, pt2 = box_2d[0], box_2d[1]
point_list1 = [pt1[0], pt1[1]]
point_list2 = [pt2[0], pt2[1]]
if point_list1[0] < 0:
point_list1[0] = 0
if point_list1[1] < 0:
point_list1[1] = 0
if point_list2[0] < 0:
point_list2[0] = 0
if point_list2[1] < 0:
point_list2[1] = 0
if point_list1[0] >= img.shape[1]:
point_list1[0] = img.shape[1] - 1
if point_list2[0] >= img.shape[1]:
point_list2[0] = img.shape[1] - 1
if point_list1[1] >= img.shape[0]:
point_list1[1] = img.shape[0] - 1
if point_list2[1] >= img.shape[0]:
point_list2[1] = img.shape[0] - 1
crop = img[point_list1[1]:point_list2[1]+1, point_list1[0]:point_list2[0]+1]
try:
cv2.imwrite('./tmp/img.jpg', img)
crop = cv2.resize(crop, (224, 224), interpolation=cv2.INTER_CUBIC)
cv2.imwrite('./tmp/demo.jpg', crop)
except cv2.error:
print("pt1 is ", pt1, " pt2 is ", pt2)
print("image shape is ", img.shape)
print("box_2d is ", box_2d)
# apply transform for batch
batch = process(crop)
return batch
def inference_label(config: DictConfig):
"""Inference function"""
# ONNX provider
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] \
if config.get("device") == "cuda" else ['CPUExecutionProvider']
# global calibration P2 matrix
P2 = Calib.get_P(config.get("calib_file"))
# dimension averages
class_averages = ClassAverages()
# initialize regressor model
if config.get("inference_type") == "pytorch":
# pytorch regressor model
log.info(f"Instantiating regressor <{config.model._target_}>")
regressor: LightningModule = hydra.utils.instantiate(config.model)
regressor.load_state_dict(torch.load(config.get("regressor_weights"), map_location="cpu"))
regressor.eval().to(config.get("device"))
elif config.get("inference_type") == "onnx":
# onnx regressor model
log.info(f"Instantiating ONNX regressor <{config.get('regressor_weights').split('/')[-1]}>")
regressor = onnxruntime.InferenceSession(config.get("regressor_weights"), providers=providers)
input_name = regressor.get_inputs()[0].name
elif config.get("inference_type") == "openvino":
# openvino regressor model
log.info(f"Instantiating OpenVINO regressor <{config.get('regressor_weights').split('/')[-1]}>")
core = ov.Core()
model = core.read_model(config.get("regressor_weights"))
regressor = core.compile_model(model, 'CPU')
infer_req = regressor.create_infer_request()
# initialize preprocessing transforms
log.info(f"Instantiating Preprocessing Transforms")
preprocess: List[torch.nn.Module] = []
if "augmentation" in config:
for _, conf in config.augmentation.items():
if "_target_" in conf:
preprocess.append(hydra.utils.instantiate(conf))
preprocess = transforms.Compose(preprocess)
# Create output directory
os.makedirs(config.get("output_dir"), exist_ok=True)
# loop thru images
imgs_path = sorted(glob(os.path.join(config.get("source_dir") + "/image_2", "*")))
image_id = 0
for img_path in imgs_path:
image_id += 1
print("\r", end="|")
print("now is saving : {} ".format(image_id) + "/ {}".format(len(imgs_path)) + " label")
# read gt image ./eval_kitti/image_2_val/
img_id = img_path[-10:-4]
# dt result
result_label_root_path = config.get("source_dir") + '/result/'
mkdir(result_label_root_path)
f = open(result_label_root_path + img_id + '.txt', 'w')
# read image
img = cv2.imread(img_path)
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
gt_label_root_path = config.get("source_dir") + '/label_2/'
gt_f = gt_label_root_path + img_id + '.txt'
dets = []
try:
with open(gt_f, 'r') as file:
content = file.readlines()
for i in range(len(content)):
gt = content[i].split()
top_left, bottom_right = (int(float(gt[4])), int(float(gt[5]))), (int(float(gt[6])), int(float(gt[7])))
bbox_2d = [top_left, bottom_right]
label = gt[0]
dets.append(Bbox(bbox_2d, label, float(gt[8]), float(gt[9]), float(gt[10]), float(gt[11]), float(gt[12]), float(gt[13]), float(gt[14]), float(gt[3])))
except:
continue
DIMENSION = []
# loop thru detections
for det in dets:
# initialize object container
obj = KITTIObject()
obj.name = det.detected_class
if(obj.name == 'DontCare'):
continue
if(obj.name == 'Misc'):
continue
if(obj.name == 'Person_sitting'):
continue
obj.truncation = float(0.00)
obj.occlusion = int(-1)
obj.xmin, obj.ymin, obj.xmax, obj.ymax = det.box_2d[0][0], det.box_2d[0][1], det.box_2d[1][0], det.box_2d[1][1]
crop = format_img(img, det.box_2d)
# # preprocess img with torch.transforms
crop = crop.reshape((1, *crop.shape)).to(config.get("device"))
            # regress orientation, bin confidence, and dimensions from the 2D crop
if config.get("inference_type") == "pytorch":
[orient, conf, dim] = regressor(crop)
orient = orient.cpu().detach().numpy()[0, :, :]
conf = conf.cpu().detach().numpy()[0, :]
dim = dim.cpu().detach().numpy()[0, :]
# dimension averages
try:
dim += class_averages.get_item(obj.name)
DIMENSION.append(dim)
except:
dim = DIMENSION[-1]
obj.alpha = recover_angle(orient, conf, 2)
obj.h, obj.w, obj.l = dim[0], dim[1], dim[2]
obj.rot_global, rot_local = compute_orientaion(P2, obj)
obj.tx, obj.ty, obj.tz = translation_constraints(P2, obj, rot_local)
# output prediction label
obj.score = 1.0
output_line = obj.member_to_list()
output_line = " ".join([str(i) for i in output_line])
f.write(output_line + '\n')
f.close()
def inference_image(config: DictConfig):
"""Inference function"""
# ONNX provider
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] \
if config.get("device") == "cuda" else ['CPUExecutionProvider']
# global calibration P2 matrix
P2 = Calib.get_P(config.get("calib_file"))
# dimension averages
class_averages = ClassAverages()
export_onnx = config.get("export_onnx")
# initialize regressor model
if config.get("inference_type") == "pytorch":
# pytorch regressor model
log.info(f"Instantiating regressor <{config.model._target_}>")
regressor: LightningModule = hydra.utils.instantiate(config.model)
regressor.load_state_dict(torch.load(config.get("regressor_weights"), map_location="cpu"))
regressor.eval().to(config.get("device"))
elif config.get("inference_type") == "onnx":
# onnx regressor model
log.info(f"Instantiating ONNX regressor <{config.get('regressor_weights').split('/')[-1]}>")
regressor = onnxruntime.InferenceSession(config.get("regressor_weights"), providers=providers)
input_name = regressor.get_inputs()[0].name
elif config.get("inference_type") == "openvino":
# openvino regressor model
log.info(f"Instantiating OpenVINO regressor <{config.get('regressor_weights').split('/')[-1]}>")
core = ov.Core()
model = core.read_model(config.get("regressor_weights"))
regressor = core.compile_model(model, 'CPU')
infer_req = regressor.create_infer_request()
# initialize preprocessing transforms
log.info(f"Instantiating Preprocessing Transforms")
preprocess: List[torch.nn.Module] = []
if "augmentation" in config:
for _, conf in config.augmentation.items():
if "_target_" in conf:
preprocess.append(hydra.utils.instantiate(conf))
preprocess = transforms.Compose(preprocess)
# Create output directory
os.makedirs(config.get("output_dir"), exist_ok=True)
imgs_path = sorted(glob(os.path.join(config.get("source_dir") + "/image_2", "*")))
image_id = 0
for img_path in imgs_path:
image_id += 1
print("\r", end="|")
print("now is saving : {} ".format(image_id) + "/ {}".format(len(imgs_path)) + " image")
# Initialize object and plotting modules
plot3dbev = Plot3DBoxBev(P2)
img_name = img_path.split("/")[-1].split(".")[0]
img = cv2.imread(img_path, cv2.IMREAD_COLOR)
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# check if image shape 1242 x 375
if img.shape != (375, 1242, 3):
# crop center of image to 1242 x 375
src_h, src_w, _ = img.shape
dst_h, dst_w = 375, 1242
dif_h, dif_w = src_h - dst_h, src_w - dst_w
img = img[dif_h // 2 : src_h - dif_h // 2, dif_w // 2 : src_w - dif_w // 2, :]
img_id = img_path[-10:-4]
# read image
img = cv2.imread(img_path)
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
gt_label_root_path = config.get("source_dir") + '/label_2/'
gt_f = gt_label_root_path + img_id + '.txt'
# use gt 2d result as output of first stage
dets = []
try:
with open(gt_f, 'r') as file:
content = file.readlines()
for i in range(len(content)):
gt = content[i].split()
top_left, bottom_right = (int(float(gt[4])), int(float(gt[5]))), (int(float(gt[6])), int(float(gt[7])))
bbox_2d = [top_left, bottom_right]
label = gt[0]
dets.append(Bbox(bbox_2d, label, float(gt[8]), float(gt[9]), float(gt[10]), float(gt[11]), float(gt[12]), float(gt[13]), float(gt[14]), float(gt[3])))
except:
continue
DIMENSION = []
for det in dets:
# initialize object container
obj = KITTIObject()
obj.name = det.detected_class
if(obj.name == 'DontCare'):
continue
if(obj.name == 'Misc'):
continue
if(obj.name == 'Person_sitting'):
continue
obj.truncation = float(0.00)
obj.occlusion = int(-1)
obj.xmin, obj.ymin, obj.xmax, obj.ymax = det.box_2d[0][0], det.box_2d[0][1], det.box_2d[1][0], det.box_2d[1][1]
crop = format_img(img, det.box_2d)
crop = crop.reshape((1, *crop.shape)).to(config.get("device"))
            # regress orientation, bin confidence, and dimensions from the 2D crop
if config.get("inference_type") == "pytorch":
[orient, conf, dim] = regressor(crop)
orient = orient.cpu().detach().numpy()[0, :, :]
conf = conf.cpu().detach().numpy()[0, :]
dim = dim.cpu().detach().numpy()[0, :]
if(export_onnx):
traced_script_module = torch.jit.trace(regressor, (crop))
traced_script_module.save("weights/yolo_libtorch_model_3d.pth")
onnx_model_save_path = "weights/yolo_onnx_model_3d.onnx"
# dynamic batch
# dynamic_axes = {"image": {0: "batch"},
# "orient": {0: "batch", 1: str(2), 2: str(2)}, # for multi batch
# "conf": {0: "batch"},
# "dim": {0: "batch"}}
if True:
torch.onnx.export(regressor, crop, onnx_model_save_path, opset_version=11,
verbose=False, export_params=True, operator_export_type=OperatorExportTypes.ONNX,
input_names=['image'], output_names=['orient','conf','dim']
# ,dynamic_axes=dynamic_axes
)
print("Please check onnx model in ", onnx_model_save_path)
import onnx
onnx_model = onnx.load(onnx_model_save_path)
# for dla&trt speedup
onnx_fp16_model_save_path = "weights/yolo_onnx_model_3d_fp16.onnx"
from onnxmltools.utils import float16_converter
trans_model = float16_converter.convert_float_to_float16(onnx_model,keep_io_types=True)
onnx.save_model(trans_model, onnx_fp16_model_save_path)
export_onnx = False # once
try:
dim += class_averages.get_item(obj.name)
DIMENSION.append(dim)
except:
dim = DIMENSION[-1]
obj.alpha = recover_angle(orient, conf, 2)
obj.h, obj.w, obj.l = dim[0], dim[1], dim[2]
obj.rot_global, rot_local = compute_orientaion(P2, obj)
obj.tx, obj.ty, obj.tz = translation_constraints(P2, obj, rot_local)
# output prediction label
output_line = obj.member_to_list()
output_line.append(1.0)
output_line = " ".join([str(i) for i in output_line]) + "\n"
# save results
if config.get("save_txt"):
with open(f"{config.get('output_dir')}/{img_name}.txt", "a") as f:
f.write(output_line)
if config.get("save_result"):
# dt
plot3dbev.plot(
img=img,
class_object=obj.name.lower(),
bbox=[obj.xmin, obj.ymin, obj.xmax, obj.ymax],
dim=[obj.h, obj.w, obj.l],
loc=[obj.tx, obj.ty, obj.tz],
rot_y=obj.rot_global,
gt=False
)
# gt
plot3dbev.plot(
img=img,
class_object=obj.name.lower(),
bbox=[obj.xmin, obj.ymin, obj.xmax, obj.ymax],
dim=[det.h, det.w, det.l],
loc=[det.tx, det.ty, det.tz],
rot_y=det.ry,
gt=True
)
# save images
if config.get("save_result"):
plot3dbev.save_plot(config.get("output_dir"), img_name)
def copy_eval_label():
label_path = './data/KITTI/ImageSets/val.txt'
label_root_path = './data/KITTI/label_2/'
label_save_path = './data/KITTI/label_2_val/'
# get all labels
label_files = []
sum_number = 0
from shutil import copyfile
with open(label_path, 'r') as file:
img_id = file.readlines()
for id in img_id:
label_path = label_root_path + id[:6] + '.txt'
copyfile(label_path, label_save_path + id[:6] + '.txt')
def copy_eval_image():
label_path = './data/KITTI/ImageSets/val.txt'
img_root_path = './data/KITTI/image_2/'
img_save_path = './data/KITTI/image_2_val'
# get all labels
label_files = []
sum_number = 0
with open(label_path, 'r') as file:
img_id = file.readlines()
for id in img_id:
img_path = img_root_path + id[:6] + '.png'
img = cv2.imread(img_path)
cv2.imwrite(f'{img_save_path}/{id[:6]}.png', img)
@hydra.main(version_base="1.2", config_path=root / "configs", config_name="inference.yaml")
def main(config: DictConfig):
if(config.get("func") == "image"):
# inference_image:
# inference for kitti bev and 3d image, without model
inference_image(config)
else:
# inference_label:
# for kitti gt label, predict without model
inference_label(config)
if __name__ == "__main__":
# # tools for copy target files
# copy_eval_label()
# copy_eval_image()
    main()
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/LICENSE | MIT License
Copyright (c) 2018
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/run.sh | python evaluate.py evaluate \
--label_path=/home/your/path/data/KITTI/label_2 \
--result_path=/home/your/path/data/KITTI/result \
--label_split_file=/home/your/path/data/KITTI/ImageSets/val.txt \
--current_class=0,1,2
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/README.md | # Note
This code is from [traveller59/kitti-object-eval-python](https://github.com/traveller59/kitti-object-eval-python)
# kitti-object-eval-python
Fast KITTI object detection evaluation in Python (finishes in less than 10 seconds); supports 2D/BEV/3D/AOS as well as COCO-style AP. If you use the command-line interface, numba needs some time to compile the JIT functions.
_WARNING_: The "coco" metric is not an official KITTI metric; only "AP (Average Precision)" is.
## Dependencies
Only Python 3.6+ is supported; you need `numpy`, `skimage`, `numba`, `fire`, and `scipy`. If you have Anaconda, just install `cudatoolkit` in Anaconda. Otherwise, please refer to this [page](https://github.com/numba/numba#custom-python-environments) to set up LLVM and CUDA for numba.
* Install by conda:
```
conda install -c numba cudatoolkit=x.x (8.0, 9.0, 10.0, depending on your environment)
```
## Usage
* commandline interface:
```
python evaluate.py evaluate --label_path=/path/to/your_gt_label_folder --result_path=/path/to/your_result_folder --label_split_file=/path/to/val.txt --current_class=0 --coco=False
```
* python interface:
```Python
import kitti_common as kitti
from eval import get_official_eval_result, get_coco_eval_result
def _read_imageset_file(path):
with open(path, 'r') as f:
lines = f.readlines()
return [int(line) for line in lines]
det_path = "/path/to/your_result_folder"
dt_annos = kitti.get_label_annos(det_path)
gt_path = "/path/to/your_gt_label_folder"
gt_split_file = "/path/to/val.txt" # from https://xiaozhichen.github.io/files/mv3d/imagesets.tar.gz
val_image_ids = _read_imageset_file(gt_split_file)
gt_annos = kitti.get_label_annos(gt_path, val_image_ids)
print(get_official_eval_result(gt_annos, dt_annos, 0)) # 6s in my computer
print(get_coco_eval_result(gt_annos, dt_annos, 0)) # 18s in my computer
```
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/kitti_common.py | import concurrent.futures as futures
import os
import pathlib
import re
from collections import OrderedDict
import numpy as np
from skimage import io
def get_image_index_str(img_idx):
return "{:06d}".format(img_idx)
def get_kitti_info_path(idx,
prefix,
info_type='image_2',
file_tail='.png',
training=True,
relative_path=True):
img_idx_str = get_image_index_str(idx)
img_idx_str += file_tail
prefix = pathlib.Path(prefix)
if training:
file_path = pathlib.Path('training') / info_type / img_idx_str
else:
file_path = pathlib.Path('testing') / info_type / img_idx_str
if not (prefix / file_path).exists():
raise ValueError("file not exist: {}".format(file_path))
if relative_path:
return str(file_path)
else:
return str(prefix / file_path)
def get_image_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'image_2', '.png', training,
relative_path)
def get_label_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'label_2', '.txt', training,
relative_path)
def get_velodyne_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'velodyne', '.bin', training,
relative_path)
def get_calib_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'calib', '.txt', training,
relative_path)
def _extend_matrix(mat):
mat = np.concatenate([mat, np.array([[0., 0., 0., 1.]])], axis=0)
return mat
def get_kitti_image_info(path,
training=True,
label_info=True,
velodyne=False,
calib=False,
image_ids=7481,
extend_matrix=True,
num_worker=8,
relative_path=True,
with_imageshape=True):
# image_infos = []
root_path = pathlib.Path(path)
if not isinstance(image_ids, list):
image_ids = list(range(image_ids))
def map_func(idx):
image_info = {'image_idx': idx}
annotations = None
if velodyne:
image_info['velodyne_path'] = get_velodyne_path(
idx, path, training, relative_path)
image_info['img_path'] = get_image_path(idx, path, training,
relative_path)
if with_imageshape:
img_path = image_info['img_path']
if relative_path:
img_path = str(root_path / img_path)
image_info['img_shape'] = np.array(
io.imread(img_path).shape[:2], dtype=np.int32)
if label_info:
label_path = get_label_path(idx, path, training, relative_path)
if relative_path:
label_path = str(root_path / label_path)
annotations = get_label_anno(label_path)
if calib:
calib_path = get_calib_path(
idx, path, training, relative_path=False)
with open(calib_path, 'r') as f:
lines = f.readlines()
P0 = np.array(
[float(info) for info in lines[0].split(' ')[1:13]]).reshape(
[3, 4])
P1 = np.array(
[float(info) for info in lines[1].split(' ')[1:13]]).reshape(
[3, 4])
P2 = np.array(
[float(info) for info in lines[2].split(' ')[1:13]]).reshape(
[3, 4])
P3 = np.array(
[float(info) for info in lines[3].split(' ')[1:13]]).reshape(
[3, 4])
if extend_matrix:
P0 = _extend_matrix(P0)
P1 = _extend_matrix(P1)
P2 = _extend_matrix(P2)
P3 = _extend_matrix(P3)
image_info['calib/P0'] = P0
image_info['calib/P1'] = P1
image_info['calib/P2'] = P2
image_info['calib/P3'] = P3
R0_rect = np.array([
float(info) for info in lines[4].split(' ')[1:10]
]).reshape([3, 3])
if extend_matrix:
rect_4x4 = np.zeros([4, 4], dtype=R0_rect.dtype)
rect_4x4[3, 3] = 1.
rect_4x4[:3, :3] = R0_rect
else:
rect_4x4 = R0_rect
image_info['calib/R0_rect'] = rect_4x4
Tr_velo_to_cam = np.array([
float(info) for info in lines[5].split(' ')[1:13]
]).reshape([3, 4])
Tr_imu_to_velo = np.array([
float(info) for info in lines[6].split(' ')[1:13]
]).reshape([3, 4])
if extend_matrix:
Tr_velo_to_cam = _extend_matrix(Tr_velo_to_cam)
Tr_imu_to_velo = _extend_matrix(Tr_imu_to_velo)
image_info['calib/Tr_velo_to_cam'] = Tr_velo_to_cam
image_info['calib/Tr_imu_to_velo'] = Tr_imu_to_velo
if annotations is not None:
image_info['annos'] = annotations
add_difficulty_to_annos(image_info)
return image_info
with futures.ThreadPoolExecutor(num_worker) as executor:
image_infos = executor.map(map_func, image_ids)
return list(image_infos)
def filter_kitti_anno(image_anno,
used_classes,
used_difficulty=None,
dontcare_iou=None):
if not isinstance(used_classes, (list, tuple)):
used_classes = [used_classes]
img_filtered_annotations = {}
relevant_annotation_indices = [
i for i, x in enumerate(image_anno['name']) if x in used_classes
]
for key in image_anno.keys():
img_filtered_annotations[key] = (
image_anno[key][relevant_annotation_indices])
if used_difficulty is not None:
relevant_annotation_indices = [
i for i, x in enumerate(img_filtered_annotations['difficulty'])
if x in used_difficulty
]
for key in image_anno.keys():
img_filtered_annotations[key] = (
img_filtered_annotations[key][relevant_annotation_indices])
if 'DontCare' in used_classes and dontcare_iou is not None:
dont_care_indices = [
i for i, x in enumerate(img_filtered_annotations['name'])
if x == 'DontCare'
]
# bounding box format [y_min, x_min, y_max, x_max]
all_boxes = img_filtered_annotations['bbox']
ious = iou(all_boxes, all_boxes[dont_care_indices])
# Remove all bounding boxes that overlap with a dontcare region.
if ious.size > 0:
boxes_to_remove = np.amax(ious, axis=1) > dontcare_iou
for key in image_anno.keys():
img_filtered_annotations[key] = (img_filtered_annotations[key][
np.logical_not(boxes_to_remove)])
return img_filtered_annotations
def filter_annos_low_score(image_annos, thresh):
new_image_annos = []
for anno in image_annos:
img_filtered_annotations = {}
relevant_annotation_indices = [
i for i, s in enumerate(anno['score']) if s >= thresh
]
for key in anno.keys():
img_filtered_annotations[key] = (
anno[key][relevant_annotation_indices])
new_image_annos.append(img_filtered_annotations)
return new_image_annos
def kitti_result_line(result_dict, precision=4):
prec_float = "{" + ":.{}f".format(precision) + "}"
res_line = []
all_field_default = OrderedDict([
('name', None),
('truncated', -1),
('occluded', -1),
('alpha', -10),
('bbox', None),
('dimensions', [-1, -1, -1]),
('location', [-1000, -1000, -1000]),
('rotation_y', -10),
('score', None),
])
res_dict = [(key, None) for key, val in all_field_default.items()]
res_dict = OrderedDict(res_dict)
for key, val in result_dict.items():
if all_field_default[key] is None and val is None:
raise ValueError("you must specify a value for {}".format(key))
res_dict[key] = val
for key, val in res_dict.items():
if key == 'name':
res_line.append(val)
elif key in ['truncated', 'alpha', 'rotation_y', 'score']:
if val is None:
res_line.append(str(all_field_default[key]))
else:
res_line.append(prec_float.format(val))
elif key == 'occluded':
if val is None:
res_line.append(str(all_field_default[key]))
else:
res_line.append('{}'.format(val))
elif key in ['bbox', 'dimensions', 'location']:
if val is None:
res_line += [str(v) for v in all_field_default[key]]
else:
res_line += [prec_float.format(v) for v in val]
else:
raise ValueError("unknown key. supported key:{}".format(
res_dict.keys()))
return ' '.join(res_line)
def add_difficulty_to_annos(info):
min_height = [40, 25,
25] # minimum height for evaluated groundtruth/detections
max_occlusion = [
0, 1, 2
] # maximum occlusion level of the groundtruth used for evaluation
max_trunc = [
0.15, 0.3, 0.5
] # maximum truncation level of the groundtruth used for evaluation
annos = info['annos']
dims = annos['dimensions'] # lhw format
bbox = annos['bbox']
height = bbox[:, 3] - bbox[:, 1]
occlusion = annos['occluded']
truncation = annos['truncated']
diff = []
    easy_mask = np.ones((len(dims), ), dtype=bool)  # np.bool was removed in recent NumPy
    moderate_mask = np.ones((len(dims), ), dtype=bool)
    hard_mask = np.ones((len(dims), ), dtype=bool)
i = 0
for h, o, t in zip(height, occlusion, truncation):
if o > max_occlusion[0] or h <= min_height[0] or t > max_trunc[0]:
easy_mask[i] = False
if o > max_occlusion[1] or h <= min_height[1] or t > max_trunc[1]:
moderate_mask[i] = False
if o > max_occlusion[2] or h <= min_height[2] or t > max_trunc[2]:
hard_mask[i] = False
i += 1
is_easy = easy_mask
is_moderate = np.logical_xor(easy_mask, moderate_mask)
is_hard = np.logical_xor(hard_mask, moderate_mask)
for i in range(len(dims)):
if is_easy[i]:
diff.append(0)
elif is_moderate[i]:
diff.append(1)
elif is_hard[i]:
diff.append(2)
else:
diff.append(-1)
annos["difficulty"] = np.array(diff, np.int32)
return diff
def get_label_anno(label_path):
annotations = {}
annotations.update({
'name': [],
'truncated': [],
'occluded': [],
'alpha': [],
'bbox': [],
'dimensions': [],
'location': [],
'rotation_y': []
})
with open(label_path, 'r') as f:
lines = f.readlines()
# if len(lines) == 0 or len(lines[0]) < 15:
# content = []
# else:
content = [line.strip().split(' ') for line in lines]
annotations['name'] = np.array([x[0] for x in content])
annotations['truncated'] = np.array([float(x[1]) for x in content])
annotations['occluded'] = np.array([int(x[2]) for x in content])
annotations['alpha'] = np.array([float(x[3]) for x in content])
annotations['bbox'] = np.array(
[[float(info) for info in x[4:8]] for x in content]).reshape(-1, 4)
# dimensions will convert hwl format to standard lhw(camera) format.
annotations['dimensions'] = np.array(
[[float(info) for info in x[8:11]] for x in content]).reshape(
-1, 3)[:, [2, 0, 1]]
annotations['location'] = np.array(
[[float(info) for info in x[11:14]] for x in content]).reshape(-1, 3)
annotations['rotation_y'] = np.array(
[float(x[14]) for x in content]).reshape(-1)
if len(content) != 0 and len(content[0]) == 16: # have score
annotations['score'] = np.array([float(x[15]) for x in content])
else:
annotations['score'] = np.zeros([len(annotations['bbox'])])
return annotations
def get_label_annos(label_folder, image_ids=None):
if image_ids is None:
filepaths = pathlib.Path(label_folder).glob('*.txt')
prog = re.compile(r'^\d{6}.txt$')
filepaths = filter(lambda f: prog.match(f.name), filepaths)
image_ids = [int(p.stem) for p in filepaths]
image_ids = sorted(image_ids)
if not isinstance(image_ids, list):
image_ids = list(range(image_ids))
annos = []
label_folder = pathlib.Path(label_folder)
for idx in image_ids:
image_idx = get_image_index_str(idx)
label_filename = label_folder / (image_idx + '.txt')
annos.append(get_label_anno(label_filename))
return annos
def area(boxes, add1=False):
"""Computes area of boxes.
Args:
boxes: Numpy array with shape [N, 4] holding N boxes
Returns:
a numpy array with shape [N*1] representing box areas
"""
if add1:
return (boxes[:, 2] - boxes[:, 0] + 1.0) * (
boxes[:, 3] - boxes[:, 1] + 1.0)
else:
return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
def intersection(boxes1, boxes2, add1=False):
"""Compute pairwise intersection areas between boxes.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes
boxes2: a numpy array with shape [M, 4] holding M boxes
Returns:
a numpy array with shape [N*M] representing pairwise intersection area
"""
[y_min1, x_min1, y_max1, x_max1] = np.split(boxes1, 4, axis=1)
[y_min2, x_min2, y_max2, x_max2] = np.split(boxes2, 4, axis=1)
all_pairs_min_ymax = np.minimum(y_max1, np.transpose(y_max2))
all_pairs_max_ymin = np.maximum(y_min1, np.transpose(y_min2))
if add1:
all_pairs_min_ymax += 1.0
intersect_heights = np.maximum(
np.zeros(all_pairs_max_ymin.shape),
all_pairs_min_ymax - all_pairs_max_ymin)
all_pairs_min_xmax = np.minimum(x_max1, np.transpose(x_max2))
all_pairs_max_xmin = np.maximum(x_min1, np.transpose(x_min2))
if add1:
all_pairs_min_xmax += 1.0
intersect_widths = np.maximum(
np.zeros(all_pairs_max_xmin.shape),
all_pairs_min_xmax - all_pairs_max_xmin)
return intersect_heights * intersect_widths
def iou(boxes1, boxes2, add1=False):
"""Computes pairwise intersection-over-union between box collections.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes.
        boxes2: a numpy array with shape [M, 4] holding M boxes.
Returns:
a numpy array with shape [N, M] representing pairwise iou scores.
"""
intersect = intersection(boxes1, boxes2, add1)
area1 = area(boxes1, add1)
area2 = area(boxes2, add1)
union = np.expand_dims(
area1, axis=1) + np.expand_dims(
area2, axis=0) - intersect
    return intersect / union
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/evaluate.py | import time
import fire
import kitti_common as kitti
from eval import get_official_eval_result, get_coco_eval_result
def _read_imageset_file(path):
with open(path, 'r') as f:
lines = f.readlines()
return [int(line) for line in lines]
def evaluate(label_path, # gt
result_path, # dt
label_split_file,
current_class=0, # 0: bbox, 1: bev, 2: 3d
coco=False,
score_thresh=-1):
dt_annos = kitti.get_label_annos(result_path)
# print("dt_annos[0] is ", dt_annos[0], " shape is ", len(dt_annos))
# if score_thresh > 0:
# dt_annos = kitti.filter_annos_low_score(dt_annos, score_thresh)
# val_image_ids = _read_imageset_file(label_split_file)
gt_annos = kitti.get_label_annos(label_path)
# print("gt_annos[0] is ", gt_annos[0], " shape is ", len(gt_annos))
if coco:
print(get_coco_eval_result(gt_annos, dt_annos, current_class))
else:
print("not coco")
print(get_official_eval_result(gt_annos, dt_annos, current_class))
if __name__ == '__main__':
fire.Fire()
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/eval.py | import io as sysio
import time
import numba
import numpy as np
from scipy.interpolate import interp1d
from rotate_iou import rotate_iou_gpu_eval
def get_mAP(prec):
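    # 11-point interpolated AP: `prec` holds precision at 41 recall sample points;
    # taking every 4th point gives 11 samples, and their mean (x100) is the AP.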
sums = 0
for i in range(0, len(prec), 4):
sums += prec[i]
return sums / 11 * 100
@numba.jit
def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41):
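    # Pick detection-score thresholds so that recall is sampled at roughly
    # num_sample_pts (41) evenly spaced operating points of the PR curve.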
scores.sort()
scores = scores[::-1]
current_recall = 0
thresholds = []
for i, score in enumerate(scores):
l_recall = (i + 1) / num_gt
if i < (len(scores) - 1):
r_recall = (i + 2) / num_gt
else:
r_recall = l_recall
if (((r_recall - current_recall) < (current_recall - l_recall))
and (i < (len(scores) - 1))):
continue
# recall = l_recall
thresholds.append(score)
current_recall += 1 / (num_sample_pts - 1.0)
# print(len(thresholds), len(scores), num_gt)
return thresholds
def clean_data(gt_anno, dt_anno, current_class, difficulty):
CLASS_NAMES = [
'car', 'pedestrian', 'cyclist', 'van', 'person_sitting', 'car',
'tractor', 'trailer'
]
MIN_HEIGHT = [40, 25, 25]
MAX_OCCLUSION = [0, 1, 2]
MAX_TRUNCATION = [0.15, 0.3, 0.5]
dc_bboxes, ignored_gt, ignored_dt = [], [], []
current_cls_name = CLASS_NAMES[current_class].lower()
num_gt = len(gt_anno["name"])
num_dt = len(dt_anno["name"])
num_valid_gt = 0
for i in range(num_gt):
bbox = gt_anno["bbox"][i]
gt_name = gt_anno["name"][i].lower()
height = bbox[3] - bbox[1]
valid_class = -1
if (gt_name == current_cls_name):
valid_class = 1
elif (current_cls_name == "Pedestrian".lower()
and "Person_sitting".lower() == gt_name):
valid_class = 0
elif (current_cls_name == "Car".lower() and "Van".lower() == gt_name):
valid_class = 0
else:
valid_class = -1
ignore = False
if ((gt_anno["occluded"][i] > MAX_OCCLUSION[difficulty])
or (gt_anno["truncated"][i] > MAX_TRUNCATION[difficulty])
or (height <= MIN_HEIGHT[difficulty])):
# if gt_anno["difficulty"][i] > difficulty or gt_anno["difficulty"][i] == -1:
ignore = True
if valid_class == 1 and not ignore:
ignored_gt.append(0)
num_valid_gt += 1
elif (valid_class == 0 or (ignore and (valid_class == 1))):
ignored_gt.append(1)
else:
ignored_gt.append(-1)
# for i in range(num_gt):
if gt_anno["name"][i] == "DontCare":
dc_bboxes.append(gt_anno["bbox"][i])
for i in range(num_dt):
if (dt_anno["name"][i].lower() == current_cls_name):
valid_class = 1
else:
valid_class = -1
height = abs(dt_anno["bbox"][i, 3] - dt_anno["bbox"][i, 1])
if height < MIN_HEIGHT[difficulty]:
ignored_dt.append(1)
elif valid_class == 1:
ignored_dt.append(0)
else:
ignored_dt.append(-1)
return num_valid_gt, ignored_gt, ignored_dt, dc_bboxes
@numba.jit(nopython=True)
def image_box_overlap(boxes, query_boxes, criterion=-1):
N = boxes.shape[0]
K = query_boxes.shape[0]
overlaps = np.zeros((N, K), dtype=boxes.dtype)
for k in range(K):
qbox_area = ((query_boxes[k, 2] - query_boxes[k, 0]) *
(query_boxes[k, 3] - query_boxes[k, 1]))
for n in range(N):
iw = (min(boxes[n, 2], query_boxes[k, 2]) - max(
boxes[n, 0], query_boxes[k, 0]))
if iw > 0:
ih = (min(boxes[n, 3], query_boxes[k, 3]) - max(
boxes[n, 1], query_boxes[k, 1]))
if ih > 0:
if criterion == -1:
ua = (
(boxes[n, 2] - boxes[n, 0]) *
(boxes[n, 3] - boxes[n, 1]) + qbox_area - iw * ih)
elif criterion == 0:
ua = ((boxes[n, 2] - boxes[n, 0]) *
(boxes[n, 3] - boxes[n, 1]))
elif criterion == 1:
ua = qbox_area
else:
ua = 1.0
overlaps[n, k] = iw * ih / ua
return overlaps
def bev_box_overlap(boxes, qboxes, criterion=-1):
riou = rotate_iou_gpu_eval(boxes, qboxes, criterion)
return riou
@numba.jit(nopython=True, parallel=True)
def d3_box_overlap_kernel(boxes,
qboxes,
rinc,
criterion=-1,
z_axis=1,
z_center=1.0):
"""
z_axis: the z (height) axis.
z_center: unified z (height) center of box.
"""
N, K = boxes.shape[0], qboxes.shape[0]
for i in range(N):
for j in range(K):
if rinc[i, j] > 0:
min_z = min(
boxes[i, z_axis] + boxes[i, z_axis + 3] * (1 - z_center),
qboxes[j, z_axis] + qboxes[j, z_axis + 3] * (1 - z_center))
max_z = max(
boxes[i, z_axis] - boxes[i, z_axis + 3] * z_center,
qboxes[j, z_axis] - qboxes[j, z_axis + 3] * z_center)
iw = min_z - max_z
if iw > 0:
area1 = boxes[i, 3] * boxes[i, 4] * boxes[i, 5]
area2 = qboxes[j, 3] * qboxes[j, 4] * qboxes[j, 5]
inc = iw * rinc[i, j]
if criterion == -1:
ua = (area1 + area2 - inc)
elif criterion == 0:
ua = area1
elif criterion == 1:
ua = area2
else:
ua = 1.0
rinc[i, j] = inc / ua
else:
rinc[i, j] = 0.0
def d3_box_overlap(boxes, qboxes, criterion=-1, z_axis=1, z_center=1.0):
"""kitti camera format z_axis=1.
"""
bev_axes = list(range(7))
bev_axes.pop(z_axis + 3)
bev_axes.pop(z_axis)
rinc = rotate_iou_gpu_eval(boxes[:, bev_axes], qboxes[:, bev_axes], 2)
d3_box_overlap_kernel(boxes, qboxes, rinc, criterion, z_axis, z_center)
return rinc
@numba.jit(nopython=True)
def compute_statistics_jit(overlaps,
gt_datas,
dt_datas,
ignored_gt,
ignored_det,
dc_bboxes,
metric,
min_overlap,
thresh=0,
compute_fp=False,
compute_aos=False):
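    # Greedy per-image matching of detections to ground truth at overlap above
    # min_overlap; returns tp, fp, fn, the AOS similarity sum, and the matched
    # detection scores (used later as candidate thresholds).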
det_size = dt_datas.shape[0]
gt_size = gt_datas.shape[0]
dt_scores = dt_datas[:, -1]
dt_alphas = dt_datas[:, 4]
gt_alphas = gt_datas[:, 4]
dt_bboxes = dt_datas[:, :4]
# gt_bboxes = gt_datas[:, :4]
assigned_detection = [False] * det_size
ignored_threshold = [False] * det_size
if compute_fp:
for i in range(det_size):
if (dt_scores[i] < thresh):
ignored_threshold[i] = True
NO_DETECTION = -10000000
tp, fp, fn, similarity = 0, 0, 0, 0
# thresholds = [0.0]
# delta = [0.0]
thresholds = np.zeros((gt_size, ))
thresh_idx = 0
delta = np.zeros((gt_size, ))
delta_idx = 0
for i in range(gt_size):
if ignored_gt[i] == -1:
continue
det_idx = -1
valid_detection = NO_DETECTION
max_overlap = 0
assigned_ignored_det = False
for j in range(det_size):
if (ignored_det[j] == -1):
continue
if (assigned_detection[j]):
continue
if (ignored_threshold[j]):
continue
overlap = overlaps[j, i]
dt_score = dt_scores[j]
if (not compute_fp and (overlap > min_overlap)
and dt_score > valid_detection):
det_idx = j
valid_detection = dt_score
elif (compute_fp and (overlap > min_overlap)
and (overlap > max_overlap or assigned_ignored_det)
and ignored_det[j] == 0):
max_overlap = overlap
det_idx = j
valid_detection = 1
assigned_ignored_det = False
elif (compute_fp and (overlap > min_overlap)
and (valid_detection == NO_DETECTION)
and ignored_det[j] == 1):
det_idx = j
valid_detection = 1
assigned_ignored_det = True
if (valid_detection == NO_DETECTION) and ignored_gt[i] == 0:
fn += 1
elif ((valid_detection != NO_DETECTION)
and (ignored_gt[i] == 1 or ignored_det[det_idx] == 1)):
assigned_detection[det_idx] = True
elif valid_detection != NO_DETECTION:
# only a tp add a threshold.
tp += 1
# thresholds.append(dt_scores[det_idx])
thresholds[thresh_idx] = dt_scores[det_idx]
thresh_idx += 1
if compute_aos:
# delta.append(gt_alphas[i] - dt_alphas[det_idx])
delta[delta_idx] = gt_alphas[i] - dt_alphas[det_idx]
delta_idx += 1
assigned_detection[det_idx] = True
if compute_fp:
for i in range(det_size):
if (not (assigned_detection[i] or ignored_det[i] == -1
or ignored_det[i] == 1 or ignored_threshold[i])):
fp += 1
nstuff = 0
if metric == 0:
overlaps_dt_dc = image_box_overlap(dt_bboxes, dc_bboxes, 0)
for i in range(dc_bboxes.shape[0]):
for j in range(det_size):
if (assigned_detection[j]):
continue
if (ignored_det[j] == -1 or ignored_det[j] == 1):
continue
if (ignored_threshold[j]):
continue
if overlaps_dt_dc[j, i] > min_overlap:
assigned_detection[j] = True
nstuff += 1
fp -= nstuff
if compute_aos:
tmp = np.zeros((fp + delta_idx, ))
# tmp = [0] * fp
for i in range(delta_idx):
tmp[i + fp] = (1.0 + np.cos(delta[i])) / 2.0
# tmp.append((1.0 + np.cos(delta[i])) / 2.0)
# assert len(tmp) == fp + tp
# assert len(delta) == tp
if tp > 0 or fp > 0:
similarity = np.sum(tmp)
else:
similarity = -1
return tp, fp, fn, similarity, thresholds[:thresh_idx]
def get_split_parts(num, num_part):
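    # Split `num` examples into `num_part` equally sized chunks (plus one chunk
    # for the remainder) so the pairwise IoU matrices can be built part by part.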
same_part = num // num_part
remain_num = num % num_part
if remain_num == 0:
return [same_part] * num_part
else:
return [same_part] * num_part + [remain_num]
@numba.jit(nopython=True)
def fused_compute_statistics(overlaps,
pr,
gt_nums,
dt_nums,
dc_nums,
gt_datas,
dt_datas,
dontcares,
ignored_gts,
ignored_dets,
metric,
min_overlap,
thresholds,
compute_aos=False):
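    # Re-run the matching for a whole part (several images) at every score
    # threshold and accumulate tp/fp/fn/similarity into `pr` [num_thresholds, 4].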
gt_num = 0
dt_num = 0
dc_num = 0
for i in range(gt_nums.shape[0]):
for t, thresh in enumerate(thresholds):
overlap = overlaps[dt_num:dt_num + dt_nums[i], gt_num:gt_num +
gt_nums[i]]
gt_data = gt_datas[gt_num:gt_num + gt_nums[i]]
dt_data = dt_datas[dt_num:dt_num + dt_nums[i]]
ignored_gt = ignored_gts[gt_num:gt_num + gt_nums[i]]
ignored_det = ignored_dets[dt_num:dt_num + dt_nums[i]]
dontcare = dontcares[dc_num:dc_num + dc_nums[i]]
tp, fp, fn, similarity, _ = compute_statistics_jit(
overlap,
gt_data,
dt_data,
ignored_gt,
ignored_det,
dontcare,
metric,
min_overlap=min_overlap,
thresh=thresh,
compute_fp=True,
compute_aos=compute_aos)
pr[t, 0] += tp
pr[t, 1] += fp
pr[t, 2] += fn
if similarity != -1:
pr[t, 3] += similarity
gt_num += gt_nums[i]
dt_num += dt_nums[i]
dc_num += dc_nums[i]
def calculate_iou_partly(gt_annos,
dt_annos,
metric,
num_parts=50,
z_axis=1,
z_center=1.0):
"""fast iou algorithm. this function can be used independently to
do result analysis.
Args:
gt_annos: dict, must from get_label_annos() in kitti_common.py
dt_annos: dict, must from get_label_annos() in kitti_common.py
metric: eval type. 0: bbox, 1: bev, 2: 3d
num_parts: int. a parameter for fast calculate algorithm
z_axis: height axis. kitti camera use 1, lidar use 2.
"""
assert len(gt_annos) == len(dt_annos)
total_dt_num = np.stack([len(a["name"]) for a in dt_annos], 0)
total_gt_num = np.stack([len(a["name"]) for a in gt_annos], 0)
num_examples = len(gt_annos)
split_parts = get_split_parts(num_examples, num_parts)
parted_overlaps = []
example_idx = 0
bev_axes = list(range(3))
bev_axes.pop(z_axis)
for num_part in split_parts:
gt_annos_part = gt_annos[example_idx:example_idx + num_part]
dt_annos_part = dt_annos[example_idx:example_idx + num_part]
if metric == 0:
gt_boxes = np.concatenate([a["bbox"] for a in gt_annos_part], 0)
dt_boxes = np.concatenate([a["bbox"] for a in dt_annos_part], 0)
overlap_part = image_box_overlap(gt_boxes, dt_boxes)
elif metric == 1:
loc = np.concatenate(
[a["location"][:, bev_axes] for a in gt_annos_part], 0)
dims = np.concatenate(
[a["dimensions"][:, bev_axes] for a in gt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in gt_annos_part], 0)
gt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
loc = np.concatenate(
[a["location"][:, bev_axes] for a in dt_annos_part], 0)
dims = np.concatenate(
[a["dimensions"][:, bev_axes] for a in dt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in dt_annos_part], 0)
dt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
overlap_part = bev_box_overlap(gt_boxes,
dt_boxes).astype(np.float64)
elif metric == 2:
loc = np.concatenate([a["location"] for a in gt_annos_part], 0)
dims = np.concatenate([a["dimensions"] for a in gt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in gt_annos_part], 0)
gt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
loc = np.concatenate([a["location"] for a in dt_annos_part], 0)
dims = np.concatenate([a["dimensions"] for a in dt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in dt_annos_part], 0)
dt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
overlap_part = d3_box_overlap(
gt_boxes, dt_boxes, z_axis=z_axis,
z_center=z_center).astype(np.float64)
else:
raise ValueError("unknown metric")
parted_overlaps.append(overlap_part)
example_idx += num_part
overlaps = []
example_idx = 0
for j, num_part in enumerate(split_parts):
gt_annos_part = gt_annos[example_idx:example_idx + num_part]
dt_annos_part = dt_annos[example_idx:example_idx + num_part]
gt_num_idx, dt_num_idx = 0, 0
for i in range(num_part):
gt_box_num = total_gt_num[example_idx + i]
dt_box_num = total_dt_num[example_idx + i]
overlaps.append(
parted_overlaps[j][gt_num_idx:gt_num_idx +
gt_box_num, dt_num_idx:dt_num_idx +
dt_box_num])
gt_num_idx += gt_box_num
dt_num_idx += dt_box_num
example_idx += num_part
return overlaps, parted_overlaps, total_gt_num, total_dt_num
def _prepare_data(gt_annos, dt_annos, current_class, difficulty):
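    # Per-image cleaning for one (class, difficulty): builds the ignore masks,
    # the DontCare boxes and the [bbox, alpha(, score)] arrays consumed by the
    # statistics kernels.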
gt_datas_list = []
dt_datas_list = []
total_dc_num = []
ignored_gts, ignored_dets, dontcares = [], [], []
total_num_valid_gt = 0
for i in range(len(gt_annos)):
rets = clean_data(gt_annos[i], dt_annos[i], current_class, difficulty)
num_valid_gt, ignored_gt, ignored_det, dc_bboxes = rets
ignored_gts.append(np.array(ignored_gt, dtype=np.int64))
ignored_dets.append(np.array(ignored_det, dtype=np.int64))
if len(dc_bboxes) == 0:
dc_bboxes = np.zeros((0, 4)).astype(np.float64)
else:
dc_bboxes = np.stack(dc_bboxes, 0).astype(np.float64)
total_dc_num.append(dc_bboxes.shape[0])
dontcares.append(dc_bboxes)
total_num_valid_gt += num_valid_gt
gt_datas = np.concatenate(
[gt_annos[i]["bbox"], gt_annos[i]["alpha"][..., np.newaxis]], 1)
dt_datas = np.concatenate([
dt_annos[i]["bbox"], dt_annos[i]["alpha"][..., np.newaxis],
dt_annos[i]["score"][..., np.newaxis]
], 1)
gt_datas_list.append(gt_datas)
dt_datas_list.append(dt_datas)
total_dc_num = np.stack(total_dc_num, axis=0)
return (gt_datas_list, dt_datas_list, ignored_gts, ignored_dets, dontcares,
total_dc_num, total_num_valid_gt)
def eval_class(gt_annos,
dt_annos,
current_classes,
difficultys,
metric,
min_overlaps,
compute_aos=False,
z_axis=1,
z_center=1.0,
num_parts=50):
"""Kitti eval. support 2d/bev/3d/aos eval. support 0.5:0.05:0.95 coco AP.
Args:
gt_annos: dict, must from get_label_annos() in kitti_common.py
dt_annos: dict, must from get_label_annos() in kitti_common.py
current_class: int, 0: car, 1: pedestrian, 2: cyclist
difficulty: int. eval difficulty, 0: easy, 1: normal, 2: hard
metric: eval type. 0: bbox, 1: bev, 2: 3d
min_overlap: float, min overlap. official:
[[0.7, 0.5, 0.5], [0.7, 0.5, 0.5], [0.7, 0.5, 0.5]]
format: [metric, class]. choose one from matrix above.
num_parts: int. a parameter for fast calculate algorithm
Returns:
dict of recall, precision and aos
"""
assert len(gt_annos) == len(dt_annos)
num_examples = len(gt_annos)
split_parts = get_split_parts(num_examples, num_parts)
rets = calculate_iou_partly(
dt_annos,
gt_annos,
metric,
num_parts,
z_axis=z_axis,
z_center=z_center)
overlaps, parted_overlaps, total_dt_num, total_gt_num = rets
N_SAMPLE_PTS = 41
num_minoverlap = len(min_overlaps)
num_class = len(current_classes)
num_difficulty = len(difficultys)
precision = np.zeros(
[num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
recall = np.zeros(
[num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
aos = np.zeros([num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
all_thresholds = np.zeros([num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
for m, current_class in enumerate(current_classes):
for l, difficulty in enumerate(difficultys):
rets = _prepare_data(gt_annos, dt_annos, current_class, difficulty)
(gt_datas_list, dt_datas_list, ignored_gts, ignored_dets,
dontcares, total_dc_num, total_num_valid_gt) = rets
for k, min_overlap in enumerate(min_overlaps[:, metric, m]):
thresholdss = []
for i in range(len(gt_annos)):
rets = compute_statistics_jit(
overlaps[i],
gt_datas_list[i],
dt_datas_list[i],
ignored_gts[i],
ignored_dets[i],
dontcares[i],
metric,
min_overlap=min_overlap,
thresh=0.0,
compute_fp=False)
tp, fp, fn, similarity, thresholds = rets
thresholdss += thresholds.tolist()
thresholdss = np.array(thresholdss)
thresholds = get_thresholds(thresholdss, total_num_valid_gt)
thresholds = np.array(thresholds)
all_thresholds[m, l, k, :len(thresholds)] = thresholds
pr = np.zeros([len(thresholds), 4])
idx = 0
for j, num_part in enumerate(split_parts):
gt_datas_part = np.concatenate(
gt_datas_list[idx:idx + num_part], 0)
dt_datas_part = np.concatenate(
dt_datas_list[idx:idx + num_part], 0)
dc_datas_part = np.concatenate(
dontcares[idx:idx + num_part], 0)
ignored_dets_part = np.concatenate(
ignored_dets[idx:idx + num_part], 0)
ignored_gts_part = np.concatenate(
ignored_gts[idx:idx + num_part], 0)
fused_compute_statistics(
parted_overlaps[j],
pr,
total_gt_num[idx:idx + num_part],
total_dt_num[idx:idx + num_part],
total_dc_num[idx:idx + num_part],
gt_datas_part,
dt_datas_part,
dc_datas_part,
ignored_gts_part,
ignored_dets_part,
metric,
min_overlap=min_overlap,
thresholds=thresholds,
compute_aos=compute_aos)
idx += num_part
for i in range(len(thresholds)):
precision[m, l, k, i] = pr[i, 0] / (pr[i, 0] + pr[i, 1])
if compute_aos:
aos[m, l, k, i] = pr[i, 3] / (pr[i, 0] + pr[i, 1])
for i in range(len(thresholds)):
precision[m, l, k, i] = np.max(
precision[m, l, k, i:], axis=-1)
if compute_aos:
aos[m, l, k, i] = np.max(aos[m, l, k, i:], axis=-1)
ret_dict = {
# "recall": recall, # [num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS]
"precision": precision,
"orientation": aos,
"thresholds": all_thresholds,
"min_overlaps": min_overlaps,
}
return ret_dict
def get_mAP_v2(prec):
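    # 11-point interpolated AP: sample precision at every 4th of the 41 recall
    # positions (0, 4, ..., 40), average, and express as a percentage.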
sums = 0
for i in range(0, prec.shape[-1], 4):
sums = sums + prec[..., i]
return sums / 11 * 100
def do_eval_v2(gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos=False,
               difficultys=(0, 1, 2),
z_axis=1,
z_center=1.0):
# min_overlaps: [num_minoverlap, metric, num_class]
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
0,
min_overlaps,
compute_aos,
z_axis=z_axis,
z_center=z_center)
# ret: [num_class, num_diff, num_minoverlap, num_sample_points]
mAP_bbox = get_mAP_v2(ret["precision"])
mAP_aos = None
if compute_aos:
mAP_aos = get_mAP_v2(ret["orientation"])
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
1,
min_overlaps,
z_axis=z_axis,
z_center=z_center)
mAP_bev = get_mAP_v2(ret["precision"])
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
2,
min_overlaps,
z_axis=z_axis,
z_center=z_center)
mAP_3d = get_mAP_v2(ret["precision"])
return mAP_bbox, mAP_bev, mAP_3d, mAP_aos
def do_eval_v3(gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos=False,
difficultys=(0, 1, 2),
z_axis=1,
z_center=1.0):
# min_overlaps: [num_minoverlap, metric, num_class]
types = ["bbox", "bev", "3d"]
metrics = {}
for i in range(3):
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
i,
min_overlaps,
compute_aos,
z_axis=z_axis,
z_center=z_center)
metrics[types[i]] = ret
return metrics
def do_coco_style_eval(gt_annos,
dt_annos,
current_classes,
overlap_ranges,
compute_aos,
z_axis=1,
z_center=1.0):
# overlap_ranges: [range, metric, num_class]
min_overlaps = np.zeros([10, *overlap_ranges.shape[1:]])
for i in range(overlap_ranges.shape[1]):
for j in range(overlap_ranges.shape[2]):
min_overlaps[:, i, j] = np.linspace(*overlap_ranges[:, i, j])
mAP_bbox, mAP_bev, mAP_3d, mAP_aos = do_eval_v2(
gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos,
z_axis=z_axis,
z_center=z_center)
# ret: [num_class, num_diff, num_minoverlap]
mAP_bbox = mAP_bbox.mean(-1)
mAP_bev = mAP_bev.mean(-1)
mAP_3d = mAP_3d.mean(-1)
if mAP_aos is not None:
mAP_aos = mAP_aos.mean(-1)
return mAP_bbox, mAP_bev, mAP_3d, mAP_aos
def print_str(value, *arg, sstream=None):
if sstream is None:
sstream = sysio.StringIO()
sstream.truncate(0)
sstream.seek(0)
print(value, *arg, file=sstream)
return sstream.getvalue()
def get_official_eval_result(gt_annos,
dt_annos,
current_classes,
difficultys=[0, 1, 2],
z_axis=1,
z_center=1.0):
"""
    gt_annos and dt_annos must contain the following keys:
[bbox, location, dimensions, rotation_y, score]
"""
overlap_mod = np.array([[0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7],
[0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7],
[0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7]])
overlap_easy = np.array([[0.5, 0.5, 0.5, 0.7, 0.5, 0.5, 0.5, 0.5],
[0.25, 0.25, 0.25, 0.5, 0.25, 0.5, 0.5, 0.5],
[0.25, 0.25, 0.25, 0.5, 0.25, 0.5, 0.5, 0.5]])
    min_overlaps = np.stack([overlap_mod, overlap_easy], axis=0)  # [2, 3, 8]
class_to_name = {
0: 'Car',
1: 'Pedestrian',
2: 'Cyclist',
3: 'Van',
4: 'Person_sitting',
5: 'car',
6: 'tractor',
7: 'trailer',
}
name_to_class = {v: n for n, v in class_to_name.items()}
if not isinstance(current_classes, (list, tuple)):
current_classes = [current_classes]
current_classes_int = []
for curcls in current_classes:
if isinstance(curcls, str):
current_classes_int.append(name_to_class[curcls])
else:
current_classes_int.append(curcls)
current_classes = current_classes_int
min_overlaps = min_overlaps[:, :, current_classes]
result = ''
# check whether alpha is valid
compute_aos = False
for anno in dt_annos:
if anno['alpha'].shape[0] != 0:
if anno['alpha'][0] != -10:
compute_aos = True
break
metrics = do_eval_v3(
gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos,
difficultys,
z_axis=z_axis,
z_center=z_center)
for j, curcls in enumerate(current_classes):
# mAP threshold array: [num_minoverlap, metric, class]
# mAP result: [num_class, num_diff, num_minoverlap]
for i in range(min_overlaps.shape[0]):
mAPbbox = get_mAP_v2(metrics["bbox"]["precision"][j, :, i])
mAPbbox = ", ".join(f"{v:.2f}" for v in mAPbbox)
mAPbev = get_mAP_v2(metrics["bev"]["precision"][j, :, i])
mAPbev = ", ".join(f"{v:.2f}" for v in mAPbev)
mAP3d = get_mAP_v2(metrics["3d"]["precision"][j, :, i])
mAP3d = ", ".join(f"{v:.2f}" for v in mAP3d)
result += print_str(
(f"{class_to_name[curcls]} "
"AP(Average Precision)@{:.2f}, {:.2f}, {:.2f}:".format(*min_overlaps[i, :, j])))
result += print_str(f"bbox AP:{mAPbbox}")
result += print_str(f"bev AP:{mAPbev}")
result += print_str(f"3d AP:{mAP3d}")
if compute_aos:
mAPaos = get_mAP_v2(metrics["bbox"]["orientation"][j, :, i])
mAPaos = ", ".join(f"{v:.2f}" for v in mAPaos)
result += print_str(f"aos AP:{mAPaos}")
return result
def get_coco_eval_result(gt_annos,
dt_annos,
current_classes,
z_axis=1,
z_center=1.0):
class_to_name = {
0: 'Car',
1: 'Pedestrian',
2: 'Cyclist',
3: 'Van',
4: 'Person_sitting',
5: 'car',
6: 'tractor',
7: 'trailer',
}
class_to_range = {
0: [0.5, 1.0, 0.05],
1: [0.25, 0.75, 0.05],
2: [0.25, 0.75, 0.05],
3: [0.5, 1.0, 0.05],
4: [0.25, 0.75, 0.05],
5: [0.5, 1.0, 0.05],
6: [0.5, 1.0, 0.05],
7: [0.5, 1.0, 0.05],
}
class_to_range = {
0: [0.5, 0.95, 10],
1: [0.25, 0.7, 10],
2: [0.25, 0.7, 10],
3: [0.5, 0.95, 10],
4: [0.25, 0.7, 10],
5: [0.5, 0.95, 10],
6: [0.5, 0.95, 10],
7: [0.5, 0.95, 10],
}
name_to_class = {v: n for n, v in class_to_name.items()}
if not isinstance(current_classes, (list, tuple)):
current_classes = [current_classes]
current_classes_int = []
for curcls in current_classes:
if isinstance(curcls, str):
current_classes_int.append(name_to_class[curcls])
else:
current_classes_int.append(curcls)
current_classes = current_classes_int
overlap_ranges = np.zeros([3, 3, len(current_classes)])
for i, curcls in enumerate(current_classes):
overlap_ranges[:, :, i] = np.array(
class_to_range[curcls])[:, np.newaxis]
result = ''
# check whether alpha is valid
compute_aos = False
for anno in dt_annos:
if anno['alpha'].shape[0] != 0:
if anno['alpha'][0] != -10:
compute_aos = True
break
mAPbbox, mAPbev, mAP3d, mAPaos = do_coco_style_eval(
gt_annos,
dt_annos,
current_classes,
overlap_ranges,
compute_aos,
z_axis=z_axis,
z_center=z_center)
for j, curcls in enumerate(current_classes):
# mAP threshold array: [num_minoverlap, metric, class]
# mAP result: [num_class, num_diff, num_minoverlap]
o_range = np.array(class_to_range[curcls])[[0, 2, 1]]
o_range[1] = (o_range[2] - o_range[0]) / (o_range[1] - 1)
result += print_str((f"{class_to_name[curcls]} "
"coco AP@{:.2f}:{:.2f}:{:.2f}:".format(*o_range)))
result += print_str((f"bbox AP:{mAPbbox[j, 0]:.2f}, "
f"{mAPbbox[j, 1]:.2f}, "
f"{mAPbbox[j, 2]:.2f}"))
result += print_str((f"bev AP:{mAPbev[j, 0]:.2f}, "
f"{mAPbev[j, 1]:.2f}, "
f"{mAPbev[j, 2]:.2f}"))
result += print_str((f"3d AP:{mAP3d[j, 0]:.2f}, "
f"{mAP3d[j, 1]:.2f}, "
f"{mAP3d[j, 2]:.2f}"))
if compute_aos:
result += print_str((f"aos AP:{mAPaos[j, 0]:.2f}, "
f"{mAPaos[j, 1]:.2f}, "
f"{mAPaos[j, 2]:.2f}"))
return result | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/kitti_object_eval/rotate_iou.py | #####################
# Based on https://github.com/hongzhenwang/RRPN-revise
# Licensed under The MIT License
# Author: yanyan, scrin@foxmail.com
#####################
import math
import numba
import numpy as np
from numba import cuda
@numba.jit(nopython=True)
def div_up(m, n):
return m // n + (m % n > 0)
@cuda.jit('(float32[:], float32[:], float32[:])', device=True, inline=True)
def trangle_area(a, b, c):
return ((a[0] - c[0]) * (b[1] - c[1]) - (a[1] - c[1]) *
(b[0] - c[0])) / 2.0
@cuda.jit('(float32[:], int32)', device=True, inline=True)
def area(int_pts, num_of_inter):
area_val = 0.0
for i in range(num_of_inter - 2):
area_val += abs(
trangle_area(int_pts[:2], int_pts[2 * i + 2:2 * i + 4],
int_pts[2 * i + 4:2 * i + 6]))
return area_val
@cuda.jit('(float32[:], int32)', device=True, inline=True)
def sort_vertex_in_convex_polygon(int_pts, num_of_inter):
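    # Order the intersection vertices by angle around their centroid (insertion
    # sort on a monotone scalar key derived from each normalized direction), so
    # that `area()` can accumulate the polygon area triangle by triangle.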
if num_of_inter > 0:
center = cuda.local.array((2, ), dtype=numba.float32)
center[:] = 0.0
for i in range(num_of_inter):
center[0] += int_pts[2 * i]
center[1] += int_pts[2 * i + 1]
center[0] /= num_of_inter
center[1] /= num_of_inter
v = cuda.local.array((2, ), dtype=numba.float32)
vs = cuda.local.array((16, ), dtype=numba.float32)
for i in range(num_of_inter):
v[0] = int_pts[2 * i] - center[0]
v[1] = int_pts[2 * i + 1] - center[1]
d = math.sqrt(v[0] * v[0] + v[1] * v[1])
v[0] = v[0] / d
v[1] = v[1] / d
if v[1] < 0:
v[0] = -2 - v[0]
vs[i] = v[0]
j = 0
temp = 0
for i in range(1, num_of_inter):
if vs[i - 1] > vs[i]:
temp = vs[i]
tx = int_pts[2 * i]
ty = int_pts[2 * i + 1]
j = i
while j > 0 and vs[j - 1] > temp:
vs[j] = vs[j - 1]
int_pts[j * 2] = int_pts[j * 2 - 2]
int_pts[j * 2 + 1] = int_pts[j * 2 - 1]
j -= 1
vs[j] = temp
int_pts[j * 2] = tx
int_pts[j * 2 + 1] = ty
@cuda.jit(
'(float32[:], float32[:], int32, int32, float32[:])',
device=True,
inline=True)
def line_segment_intersection(pts1, pts2, i, j, temp_pts):
A = cuda.local.array((2, ), dtype=numba.float32)
B = cuda.local.array((2, ), dtype=numba.float32)
C = cuda.local.array((2, ), dtype=numba.float32)
D = cuda.local.array((2, ), dtype=numba.float32)
A[0] = pts1[2 * i]
A[1] = pts1[2 * i + 1]
B[0] = pts1[2 * ((i + 1) % 4)]
B[1] = pts1[2 * ((i + 1) % 4) + 1]
C[0] = pts2[2 * j]
C[1] = pts2[2 * j + 1]
D[0] = pts2[2 * ((j + 1) % 4)]
D[1] = pts2[2 * ((j + 1) % 4) + 1]
BA0 = B[0] - A[0]
BA1 = B[1] - A[1]
DA0 = D[0] - A[0]
CA0 = C[0] - A[0]
DA1 = D[1] - A[1]
CA1 = C[1] - A[1]
acd = DA1 * CA0 > CA1 * DA0
bcd = (D[1] - B[1]) * (C[0] - B[0]) > (C[1] - B[1]) * (D[0] - B[0])
if acd != bcd:
abc = CA1 * BA0 > BA1 * CA0
abd = DA1 * BA0 > BA1 * DA0
if abc != abd:
DC0 = D[0] - C[0]
DC1 = D[1] - C[1]
ABBA = A[0] * B[1] - B[0] * A[1]
CDDC = C[0] * D[1] - D[0] * C[1]
DH = BA1 * DC0 - BA0 * DC1
Dx = ABBA * DC0 - BA0 * CDDC
Dy = ABBA * DC1 - BA1 * CDDC
temp_pts[0] = Dx / DH
temp_pts[1] = Dy / DH
return True
return False
@cuda.jit(
'(float32[:], float32[:], int32, int32, float32[:])',
device=True,
inline=True)
def line_segment_intersection_v1(pts1, pts2, i, j, temp_pts):
a = cuda.local.array((2, ), dtype=numba.float32)
b = cuda.local.array((2, ), dtype=numba.float32)
c = cuda.local.array((2, ), dtype=numba.float32)
d = cuda.local.array((2, ), dtype=numba.float32)
a[0] = pts1[2 * i]
a[1] = pts1[2 * i + 1]
b[0] = pts1[2 * ((i + 1) % 4)]
b[1] = pts1[2 * ((i + 1) % 4) + 1]
c[0] = pts2[2 * j]
c[1] = pts2[2 * j + 1]
d[0] = pts2[2 * ((j + 1) % 4)]
d[1] = pts2[2 * ((j + 1) % 4) + 1]
area_abc = trangle_area(a, b, c)
area_abd = trangle_area(a, b, d)
if area_abc * area_abd >= 0:
return False
area_cda = trangle_area(c, d, a)
area_cdb = area_cda + area_abc - area_abd
if area_cda * area_cdb >= 0:
return False
t = area_cda / (area_abd - area_abc)
dx = t * (b[0] - a[0])
dy = t * (b[1] - a[1])
temp_pts[0] = a[0] + dx
temp_pts[1] = a[1] + dy
return True
@cuda.jit('(float32, float32, float32[:])', device=True, inline=True)
def point_in_quadrilateral(pt_x, pt_y, corners):
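    # Point-in-rectangle test: project the vector AP onto the edge vectors AB
    # and AD and require both dot products to lie between 0 and the squared
    # edge lengths.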
ab0 = corners[2] - corners[0]
ab1 = corners[3] - corners[1]
ad0 = corners[6] - corners[0]
ad1 = corners[7] - corners[1]
ap0 = pt_x - corners[0]
ap1 = pt_y - corners[1]
abab = ab0 * ab0 + ab1 * ab1
abap = ab0 * ap0 + ab1 * ap1
adad = ad0 * ad0 + ad1 * ad1
adap = ad0 * ap0 + ad1 * ap1
return abab >= abap and abap >= 0 and adad >= adap and adap >= 0
@cuda.jit('(float32[:], float32[:], float32[:])', device=True, inline=True)
def quadrilateral_intersection(pts1, pts2, int_pts):
num_of_inter = 0
for i in range(4):
if point_in_quadrilateral(pts1[2 * i], pts1[2 * i + 1], pts2):
int_pts[num_of_inter * 2] = pts1[2 * i]
int_pts[num_of_inter * 2 + 1] = pts1[2 * i + 1]
num_of_inter += 1
if point_in_quadrilateral(pts2[2 * i], pts2[2 * i + 1], pts1):
int_pts[num_of_inter * 2] = pts2[2 * i]
int_pts[num_of_inter * 2 + 1] = pts2[2 * i + 1]
num_of_inter += 1
temp_pts = cuda.local.array((2, ), dtype=numba.float32)
for i in range(4):
for j in range(4):
has_pts = line_segment_intersection(pts1, pts2, i, j, temp_pts)
if has_pts:
int_pts[num_of_inter * 2] = temp_pts[0]
int_pts[num_of_inter * 2 + 1] = temp_pts[1]
num_of_inter += 1
return num_of_inter
@cuda.jit('(float32[:], float32[:])', device=True, inline=True)
def rbbox_to_corners(corners, rbbox):
# generate clockwise corners and rotate it clockwise
angle = rbbox[4]
a_cos = math.cos(angle)
a_sin = math.sin(angle)
center_x = rbbox[0]
center_y = rbbox[1]
x_d = rbbox[2]
y_d = rbbox[3]
corners_x = cuda.local.array((4, ), dtype=numba.float32)
corners_y = cuda.local.array((4, ), dtype=numba.float32)
corners_x[0] = -x_d / 2
corners_x[1] = -x_d / 2
corners_x[2] = x_d / 2
corners_x[3] = x_d / 2
corners_y[0] = -y_d / 2
corners_y[1] = y_d / 2
corners_y[2] = y_d / 2
corners_y[3] = -y_d / 2
for i in range(4):
corners[2 *
i] = a_cos * corners_x[i] + a_sin * corners_y[i] + center_x
corners[2 * i
+ 1] = -a_sin * corners_x[i] + a_cos * corners_y[i] + center_y
@cuda.jit('(float32[:], float32[:])', device=True, inline=True)
def inter(rbbox1, rbbox2):
corners1 = cuda.local.array((8, ), dtype=numba.float32)
corners2 = cuda.local.array((8, ), dtype=numba.float32)
intersection_corners = cuda.local.array((16, ), dtype=numba.float32)
rbbox_to_corners(corners1, rbbox1)
rbbox_to_corners(corners2, rbbox2)
num_intersection = quadrilateral_intersection(corners1, corners2,
intersection_corners)
sort_vertex_in_convex_polygon(intersection_corners, num_intersection)
# print(intersection_corners.reshape([-1, 2])[:num_intersection])
return area(intersection_corners, num_intersection)
@cuda.jit('(float32[:], float32[:], int32)', device=True, inline=True)
def devRotateIoUEval(rbox1, rbox2, criterion=-1):
area1 = rbox1[2] * rbox1[3]
area2 = rbox2[2] * rbox2[3]
area_inter = inter(rbox1, rbox2)
if criterion == -1:
return area_inter / (area1 + area2 - area_inter)
elif criterion == 0:
return area_inter / area1
elif criterion == 1:
return area_inter / area2
else:
return area_inter
@cuda.jit('(int64, int64, float32[:], float32[:], float32[:], int32)', fastmath=False)
def rotate_iou_kernel_eval(N, K, dev_boxes, dev_query_boxes, dev_iou, criterion=-1):
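    # Each CUDA block handles a 64x64 tile of the N x K overlap matrix: up to
    # 64 boxes and 64 query boxes are staged into shared memory, then each
    # thread computes the rotated overlaps of one box against the tile's
    # query boxes.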
threadsPerBlock = 8 * 8
row_start = cuda.blockIdx.x
col_start = cuda.blockIdx.y
tx = cuda.threadIdx.x
row_size = min(N - row_start * threadsPerBlock, threadsPerBlock)
col_size = min(K - col_start * threadsPerBlock, threadsPerBlock)
block_boxes = cuda.shared.array(shape=(64 * 5, ), dtype=numba.float32)
block_qboxes = cuda.shared.array(shape=(64 * 5, ), dtype=numba.float32)
dev_query_box_idx = threadsPerBlock * col_start + tx
dev_box_idx = threadsPerBlock * row_start + tx
if (tx < col_size):
block_qboxes[tx * 5 + 0] = dev_query_boxes[dev_query_box_idx * 5 + 0]
block_qboxes[tx * 5 + 1] = dev_query_boxes[dev_query_box_idx * 5 + 1]
block_qboxes[tx * 5 + 2] = dev_query_boxes[dev_query_box_idx * 5 + 2]
block_qboxes[tx * 5 + 3] = dev_query_boxes[dev_query_box_idx * 5 + 3]
block_qboxes[tx * 5 + 4] = dev_query_boxes[dev_query_box_idx * 5 + 4]
if (tx < row_size):
block_boxes[tx * 5 + 0] = dev_boxes[dev_box_idx * 5 + 0]
block_boxes[tx * 5 + 1] = dev_boxes[dev_box_idx * 5 + 1]
block_boxes[tx * 5 + 2] = dev_boxes[dev_box_idx * 5 + 2]
block_boxes[tx * 5 + 3] = dev_boxes[dev_box_idx * 5 + 3]
block_boxes[tx * 5 + 4] = dev_boxes[dev_box_idx * 5 + 4]
cuda.syncthreads()
if tx < row_size:
for i in range(col_size):
offset = row_start * threadsPerBlock * K + col_start * threadsPerBlock + tx * K + i
dev_iou[offset] = devRotateIoUEval(block_qboxes[i * 5:i * 5 + 5],
block_boxes[tx * 5:tx * 5 + 5], criterion)
def rotate_iou_gpu_eval(boxes, query_boxes, criterion=-1, device_id=0):
"""rotated box iou running in gpu. 500x faster than cpu version
(take 5ms in one example with numba.cuda code).
convert from [this project](
https://github.com/hongzhenwang/RRPN-revise/tree/master/lib/rotation).
Args:
boxes (float tensor: [N, 5]): rbboxes. format: centers, dims,
angles(clockwise when positive)
        query_boxes (float tensor: [K, 5]): rbboxes, same format as `boxes`.
        criterion (int, optional): -1 for IoU; 0 or 1 to normalize the
            intersection by only one of the two box areas. Defaults to -1.
        device_id (int, optional): CUDA device to run on. Defaults to 0.
    Returns:
        numpy.ndarray: [N, K] matrix of pairwise rotated overlaps.
"""
box_dtype = boxes.dtype
boxes = boxes.astype(np.float32)
query_boxes = query_boxes.astype(np.float32)
N = boxes.shape[0]
K = query_boxes.shape[0]
iou = np.zeros((N, K), dtype=np.float32)
if N == 0 or K == 0:
return iou
threadsPerBlock = 8 * 8
cuda.select_device(device_id)
blockspergrid = (div_up(N, threadsPerBlock), div_up(K, threadsPerBlock))
stream = cuda.stream()
with stream.auto_synchronize():
boxes_dev = cuda.to_device(boxes.reshape([-1]), stream)
query_boxes_dev = cuda.to_device(query_boxes.reshape([-1]), stream)
iou_dev = cuda.to_device(iou.reshape([-1]), stream)
rotate_iou_kernel_eval[blockspergrid, threadsPerBlock, stream](
N, K, boxes_dev, query_boxes_dev, iou_dev, criterion)
iou_dev.copy_to_host(iou.reshape([-1]), stream=stream)
return iou.astype(boxes.dtype) | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/weights/get_regressor_weights.py | """Get checkpoint from W&B"""
import wandb
run = wandb.init()
artifact = run.use_artifact('3ddetection/yolo3d-regressor/experiment-ckpts:v11', type='checkpoints')
artifact_dir = artifact.download() | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/test_configs.py | import hydra
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig
def test_train_config(cfg_train: DictConfig):
assert cfg_train
assert cfg_train.datamodule
assert cfg_train.model
assert cfg_train.trainer
HydraConfig().set_config(cfg_train)
hydra.utils.instantiate(cfg_train.datamodule)
hydra.utils.instantiate(cfg_train.model)
hydra.utils.instantiate(cfg_train.trainer)
def test_eval_config(cfg_eval: DictConfig):
assert cfg_eval
assert cfg_eval.datamodule
assert cfg_eval.model
assert cfg_eval.trainer
HydraConfig().set_config(cfg_eval)
hydra.utils.instantiate(cfg_eval.datamodule)
hydra.utils.instantiate(cfg_eval.model)
hydra.utils.instantiate(cfg_eval.trainer)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/conftest.py | import pyrootutils
import pytest
from hydra import compose, initialize
from hydra.core.global_hydra import GlobalHydra
from omegaconf import DictConfig, open_dict
@pytest.fixture(scope="package")
def cfg_train_global() -> DictConfig:
with initialize(version_base="1.2", config_path="../configs"):
cfg = compose(config_name="train.yaml", return_hydra_config=True, overrides=[])
# set defaults for all tests
with open_dict(cfg):
cfg.paths.root_dir = str(pyrootutils.find_root())
cfg.trainer.max_epochs = 1
cfg.trainer.limit_train_batches = 0.01
cfg.trainer.limit_val_batches = 0.1
cfg.trainer.limit_test_batches = 0.1
cfg.trainer.accelerator = "cpu"
cfg.trainer.devices = 1
cfg.datamodule.num_workers = 0
cfg.datamodule.pin_memory = False
cfg.extras.print_config = False
cfg.extras.enforce_tags = False
cfg.logger = None
return cfg
@pytest.fixture(scope="package")
def cfg_eval_global() -> DictConfig:
with initialize(version_base="1.2", config_path="../configs"):
cfg = compose(config_name="eval.yaml", return_hydra_config=True, overrides=["ckpt_path=."])
# set defaults for all tests
with open_dict(cfg):
cfg.paths.root_dir = str(pyrootutils.find_root())
cfg.trainer.max_epochs = 1
cfg.trainer.limit_test_batches = 0.1
cfg.trainer.accelerator = "cpu"
cfg.trainer.devices = 1
cfg.datamodule.num_workers = 0
cfg.datamodule.pin_memory = False
cfg.extras.print_config = False
cfg.extras.enforce_tags = False
cfg.logger = None
return cfg
# this is called by each test which uses `cfg_train` arg
# each test generates its own temporary logging path
@pytest.fixture(scope="function")
def cfg_train(cfg_train_global, tmp_path) -> DictConfig:
cfg = cfg_train_global.copy()
with open_dict(cfg):
cfg.paths.output_dir = str(tmp_path)
cfg.paths.log_dir = str(tmp_path)
yield cfg
GlobalHydra.instance().clear()
# this is called by each test which uses `cfg_eval` arg
# each test generates its own temporary logging path
@pytest.fixture(scope="function")
def cfg_eval(cfg_eval_global, tmp_path) -> DictConfig:
cfg = cfg_eval_global.copy()
with open_dict(cfg):
cfg.paths.output_dir = str(tmp_path)
cfg.paths.log_dir = str(tmp_path)
yield cfg
GlobalHydra.instance().clear()
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/test_train.py | import os
import pytest
from hydra.core.hydra_config import HydraConfig
from omegaconf import open_dict
from src.train import train
from tests.helpers.run_if import RunIf
def test_train_fast_dev_run(cfg_train):
"""Run for 1 train, val and test step."""
HydraConfig().set_config(cfg_train)
with open_dict(cfg_train):
cfg_train.trainer.fast_dev_run = True
cfg_train.trainer.accelerator = "cpu"
train(cfg_train)
@RunIf(min_gpus=1)
def test_train_fast_dev_run_gpu(cfg_train):
"""Run for 1 train, val and test step on GPU."""
HydraConfig().set_config(cfg_train)
with open_dict(cfg_train):
cfg_train.trainer.fast_dev_run = True
cfg_train.trainer.accelerator = "gpu"
train(cfg_train)
@RunIf(min_gpus=1)
@pytest.mark.slow
def test_train_epoch_gpu_amp(cfg_train):
"""Train 1 epoch on GPU with mixed-precision."""
HydraConfig().set_config(cfg_train)
with open_dict(cfg_train):
cfg_train.trainer.max_epochs = 1
cfg_train.trainer.accelerator = "cpu"
cfg_train.trainer.precision = 16
train(cfg_train)
@pytest.mark.slow
def test_train_epoch_double_val_loop(cfg_train):
"""Train 1 epoch with validation loop twice per epoch."""
HydraConfig().set_config(cfg_train)
with open_dict(cfg_train):
cfg_train.trainer.max_epochs = 1
cfg_train.trainer.val_check_interval = 0.5
train(cfg_train)
@pytest.mark.slow
def test_train_ddp_sim(cfg_train):
"""Simulate DDP (Distributed Data Parallel) on 2 CPU processes."""
HydraConfig().set_config(cfg_train)
with open_dict(cfg_train):
cfg_train.trainer.max_epochs = 2
cfg_train.trainer.accelerator = "cpu"
cfg_train.trainer.devices = 2
cfg_train.trainer.strategy = "ddp_spawn"
train(cfg_train)
@pytest.mark.slow
def test_train_resume(tmp_path, cfg_train):
"""Run 1 epoch, finish, and resume for another epoch."""
with open_dict(cfg_train):
cfg_train.trainer.max_epochs = 1
HydraConfig().set_config(cfg_train)
metric_dict_1, _ = train(cfg_train)
files = os.listdir(tmp_path / "checkpoints")
assert "last.ckpt" in files
assert "epoch_000.ckpt" in files
with open_dict(cfg_train):
cfg_train.ckpt_path = str(tmp_path / "checkpoints" / "last.ckpt")
cfg_train.trainer.max_epochs = 2
metric_dict_2, _ = train(cfg_train)
files = os.listdir(tmp_path / "checkpoints")
assert "epoch_001.ckpt" in files
assert "epoch_002.ckpt" not in files
assert metric_dict_1["train/acc"] < metric_dict_2["train/acc"]
assert metric_dict_1["val/acc"] < metric_dict_2["val/acc"]
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/test_sweeps.py | import pytest
from tests.helpers.run_if import RunIf
from tests.helpers.run_sh_command import run_sh_command
startfile = "src/train.py"
overrides = ["logger=[]"]
@RunIf(sh=True)
@pytest.mark.slow
def test_experiments(tmp_path):
"""Test running all available experiment configs with fast_dev_run=True."""
command = [
startfile,
"-m",
"experiment=glob(*)",
"hydra.sweep.dir=" + str(tmp_path),
"++trainer.fast_dev_run=true",
] + overrides
run_sh_command(command)
@RunIf(sh=True)
@pytest.mark.slow
def test_hydra_sweep(tmp_path):
"""Test default hydra sweep."""
command = [
startfile,
"-m",
"hydra.sweep.dir=" + str(tmp_path),
"model.optimizer.lr=0.005,0.01",
"++trainer.fast_dev_run=true",
] + overrides
run_sh_command(command)
@RunIf(sh=True)
@pytest.mark.slow
def test_hydra_sweep_ddp_sim(tmp_path):
"""Test default hydra sweep with ddp sim."""
command = [
startfile,
"-m",
"hydra.sweep.dir=" + str(tmp_path),
"trainer=ddp_sim",
"trainer.max_epochs=3",
"+trainer.limit_train_batches=0.01",
"+trainer.limit_val_batches=0.1",
"+trainer.limit_test_batches=0.1",
"model.optimizer.lr=0.005,0.01,0.02",
] + overrides
run_sh_command(command)
@RunIf(sh=True)
@pytest.mark.slow
def test_optuna_sweep(tmp_path):
"""Test optuna sweep."""
command = [
startfile,
"-m",
"hparams_search=mnist_optuna",
"hydra.sweep.dir=" + str(tmp_path),
"hydra.sweeper.n_trials=10",
"hydra.sweeper.sampler.n_startup_trials=5",
"++trainer.fast_dev_run=true",
] + overrides
run_sh_command(command)
@RunIf(wandb=True, sh=True)
@pytest.mark.slow
def test_optuna_sweep_ddp_sim_wandb(tmp_path):
"""Test optuna sweep with wandb and ddp sim."""
command = [
startfile,
"-m",
"hparams_search=mnist_optuna",
"hydra.sweep.dir=" + str(tmp_path),
"hydra.sweeper.n_trials=5",
"trainer=ddp_sim",
"trainer.max_epochs=3",
"+trainer.limit_train_batches=0.01",
"+trainer.limit_val_batches=0.1",
"+trainer.limit_test_batches=0.1",
"logger=wandb",
]
run_sh_command(command)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/test_mnist_datamodule.py | from pathlib import Path
import pytest
import torch
from src.datamodules.mnist_datamodule import MNISTDataModule
@pytest.mark.parametrize("batch_size", [32, 128])
def test_mnist_datamodule(batch_size):
data_dir = "data/"
dm = MNISTDataModule(data_dir=data_dir, batch_size=batch_size)
dm.prepare_data()
assert not dm.data_train and not dm.data_val and not dm.data_test
assert Path(data_dir, "MNIST").exists()
assert Path(data_dir, "MNIST", "raw").exists()
dm.setup()
assert dm.data_train and dm.data_val and dm.data_test
assert dm.train_dataloader() and dm.val_dataloader() and dm.test_dataloader()
num_datapoints = len(dm.data_train) + len(dm.data_val) + len(dm.data_test)
assert num_datapoints == 70_000
batch = next(iter(dm.train_dataloader()))
x, y = batch
assert len(x) == batch_size
assert len(y) == batch_size
assert x.dtype == torch.float32
assert y.dtype == torch.int64
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/tests/test_eval.py | import os
import pytest
from hydra.core.hydra_config import HydraConfig
from omegaconf import open_dict
from src.eval import evaluate
from src.train import train
@pytest.mark.slow
def test_train_eval(tmp_path, cfg_train, cfg_eval):
"""Train for 1 epoch with `train.py` and evaluate with `eval.py`"""
assert str(tmp_path) == cfg_train.paths.output_dir == cfg_eval.paths.output_dir
with open_dict(cfg_train):
cfg_train.trainer.max_epochs = 1
cfg_train.test = True
HydraConfig().set_config(cfg_train)
train_metric_dict, _ = train(cfg_train)
assert "last.ckpt" in os.listdir(tmp_path / "checkpoints")
with open_dict(cfg_eval):
cfg_eval.ckpt_path = str(tmp_path / "checkpoints" / "last.ckpt")
HydraConfig().set_config(cfg_eval)
test_metric_dict, _ = evaluate(cfg_eval)
assert test_metric_dict["test/acc"] > 0.0
assert abs(train_metric_dict["test/acc"].item() - test_metric_dict["test/acc"].item()) < 0.001
| 0 |
apollo_public_repos/apollo-model-yolo3d/tests | apollo_public_repos/apollo-model-yolo3d/tests/helpers/package_available.py | import platform
import pkg_resources
from pytorch_lightning.utilities.xla_device import XLADeviceUtils
def _package_available(package_name: str) -> bool:
"""Check if a package is available in your environment."""
try:
return pkg_resources.require(package_name) is not None
except pkg_resources.DistributionNotFound:
return False
_TPU_AVAILABLE = XLADeviceUtils.tpu_device_exists()
_IS_WINDOWS = platform.system() == "Windows"
_SH_AVAILABLE = not _IS_WINDOWS and _package_available("sh")
_DEEPSPEED_AVAILABLE = not _IS_WINDOWS and _package_available("deepspeed")
_FAIRSCALE_AVAILABLE = not _IS_WINDOWS and _package_available("fairscale")
_WANDB_AVAILABLE = _package_available("wandb")
_NEPTUNE_AVAILABLE = _package_available("neptune")
_COMET_AVAILABLE = _package_available("comet_ml")
_MLFLOW_AVAILABLE = _package_available("mlflow")
| 0 |
apollo_public_repos/apollo-model-yolo3d/tests | apollo_public_repos/apollo-model-yolo3d/tests/helpers/run_if.py | """Adapted from:
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/tests/helpers/runif.py
"""
import sys
from typing import Optional
import pytest
import torch
from packaging.version import Version
from pkg_resources import get_distribution
from tests.helpers.package_available import (
_COMET_AVAILABLE,
_DEEPSPEED_AVAILABLE,
_FAIRSCALE_AVAILABLE,
_IS_WINDOWS,
_MLFLOW_AVAILABLE,
_NEPTUNE_AVAILABLE,
_SH_AVAILABLE,
_TPU_AVAILABLE,
_WANDB_AVAILABLE,
)
class RunIf:
"""RunIf wrapper for conditional skipping of tests.
Fully compatible with `@pytest.mark`.
Example:
@RunIf(min_torch="1.8")
@pytest.mark.parametrize("arg1", [1.0, 2.0])
def test_wrapper(arg1):
assert arg1 > 0
"""
def __new__(
self,
min_gpus: int = 0,
min_torch: Optional[str] = None,
max_torch: Optional[str] = None,
min_python: Optional[str] = None,
skip_windows: bool = False,
sh: bool = False,
tpu: bool = False,
fairscale: bool = False,
deepspeed: bool = False,
wandb: bool = False,
neptune: bool = False,
comet: bool = False,
mlflow: bool = False,
**kwargs,
):
"""
Args:
min_gpus: min number of GPUs required to run test
min_torch: minimum pytorch version to run test
max_torch: maximum pytorch version to run test
min_python: minimum python version required to run test
skip_windows: skip test for Windows platform
tpu: if TPU is available
sh: if `sh` module is required to run the test
fairscale: if `fairscale` module is required to run the test
deepspeed: if `deepspeed` module is required to run the test
wandb: if `wandb` module is required to run the test
neptune: if `neptune` module is required to run the test
comet: if `comet` module is required to run the test
mlflow: if `mlflow` module is required to run the test
kwargs: native pytest.mark.skipif keyword arguments
"""
conditions = []
reasons = []
if min_gpus:
conditions.append(torch.cuda.device_count() < min_gpus)
reasons.append(f"GPUs>={min_gpus}")
if min_torch:
torch_version = get_distribution("torch").version
conditions.append(Version(torch_version) < Version(min_torch))
reasons.append(f"torch>={min_torch}")
if max_torch:
torch_version = get_distribution("torch").version
conditions.append(Version(torch_version) >= Version(max_torch))
reasons.append(f"torch<{max_torch}")
if min_python:
py_version = (
f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
)
conditions.append(Version(py_version) < Version(min_python))
reasons.append(f"python>={min_python}")
if skip_windows:
conditions.append(_IS_WINDOWS)
reasons.append("does not run on Windows")
if tpu:
conditions.append(not _TPU_AVAILABLE)
reasons.append("TPU")
if sh:
conditions.append(not _SH_AVAILABLE)
reasons.append("sh")
if fairscale:
conditions.append(not _FAIRSCALE_AVAILABLE)
reasons.append("fairscale")
if deepspeed:
conditions.append(not _DEEPSPEED_AVAILABLE)
reasons.append("deepspeed")
if wandb:
conditions.append(not _WANDB_AVAILABLE)
reasons.append("wandb")
if neptune:
conditions.append(not _NEPTUNE_AVAILABLE)
reasons.append("neptune")
if comet:
conditions.append(not _COMET_AVAILABLE)
reasons.append("comet")
if mlflow:
conditions.append(not _MLFLOW_AVAILABLE)
reasons.append("mlflow")
reasons = [rs for cond, rs in zip(conditions, reasons) if cond]
return pytest.mark.skipif(
condition=any(conditions),
reason=f"Requires: [{' + '.join(reasons)}]",
**kwargs,
)
| 0 |
apollo_public_repos/apollo-model-yolo3d/tests | apollo_public_repos/apollo-model-yolo3d/tests/helpers/run_sh_command.py | from typing import List
import pytest
from tests.helpers.package_available import _SH_AVAILABLE
if _SH_AVAILABLE:
import sh
def run_sh_command(command: List[str]):
"""Default method for executing shell commands with pytest and sh package."""
msg = None
try:
sh.python(command)
except sh.ErrorReturnCode as e:
msg = e.stderr.decode()
if msg:
pytest.fail(msg=msg)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/docs/command.md | # Quick Command
## Train Regressor Model
- Train with the default config
```bash
python src/train.py
```
- Train with an experiment config
```bash
python src/train.py \
experiment=sample
```
## Train Detector Model
### Yolov5
- Multi GPU Training (see the `torchrun` note below)
```bash
cd yolov5
python -m torch.distributed.launch \
--nproc_per_node 4 train.py \
--epochs 10 \
--batch 64 \
--data ../configs/detector/yolov5_kitti.yaml \
--weights yolov5s.pt \
--device 0,1,2,3
```
- Single GPU Training
```bash
cd yolov5
python train.py \
--data ../configs/detector/yolov5_kitti.yaml \
--weights yolov5s.pt \
--img 640
```
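**Note:** `torch.distributed.launch` used in the multi-GPU command above is deprecated on recent PyTorch releases in favour of `torchrun`. A rough equivalent (a sketch with the same arguments; whether it runs as-is depends on the YOLOv5 version bundled in `yolov5/`):
```bash
cd yolov5
torchrun --nproc_per_node 4 train.py \
    --epochs 10 \
    --batch 64 \
    --data ../configs/detector/yolov5_kitti.yaml \
    --weights yolov5s.pt \
    --device 0,1,2,3
```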
## Hyperparameter Tuning with Hydra
```bash
python src/train.py -m \
hparams_search=regressor_optuna \
experiment=sample_optuna
``` | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/docs/index.md | # YOLO3D: 3D Object Detection with YOLO
<div align="center">
<a href="https://www.python.org/"><img alt="Python" src="https://img.shields.io/badge/-Python 3.8+-blue?style=flat&logo=python&logoColor=white"></a>
<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/-PyTorch 1.8+-ee4c2c?style=flat&logo=pytorch&logoColor=white"></a>
<a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning 1.5+-792ee5?style=flat&logo=pytorchlightning&logoColor=white"></a>
<a href="https://hydra.cc/"><img alt="Config: hydra" src="https://img.shields.io/badge/config-hydra 1.1-89b8cd?style=flat&labelColor=gray"></a>
<a href="https://black.readthedocs.io/en/stable/"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-black.svg?style=flat&labelColor=gray"></a>
<a href="https://github.com/ashleve/lightning-hydra-template"><img alt="Template" src="https://img.shields.io/badge/-Lightning--Hydra--Template-017F2F?style=flat&logo=github&labelColor=gray"></a><br>
</div>
## ⚠️ Cautions
> This repository is currently under development
## 📼 Demo
<div align="center">
![demo](./assets/demo.gif)
</div>
## 📌 Introduction
Unofficial implementation of [Mousavian et al.](https://arxiv.org/abs/1612.00496), **3D Bounding Box Estimation Using Deep Learning and Geometry**. YOLO3D swaps in different components: the detector is **YOLOv5** (instead of the original Faster-RCNN) and the regressor is **ResNet18/VGG11** (instead of VGG19).
## 🚀 Quickstart
> We use [hydra](https://hydra.cc) as the config manager; if you are unfamiliar with hydra, you can visit the official website or see its tutorials.
### 🍿 Inference
You can use pretrained weights from the [Releases](https://github.com/ruhyadi/yolo3d-lightning/releases) page; download them with `script/get_weights.py`:
```bash
# download pretrained model
python script/get_weights.py \
--tag v0.1 \
--dir ./weights
```
Inference with `inference.py`:
```bash
python inference.py \
source_dir="./data/demo/images" \
detector.model_path="./weights/detector_yolov5s.pt" \
regressor_weights="./weights/regressor_resnet18.pt"
```
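Because the script is configured with hydra, any other option declared in `configs/inference.yaml` (for example `device`, `inference_type`, or `save_result`) can be overridden from the command line in the same way. A minimal sketch, assuming the option names in that config:
```bash
python inference.py \
    source_dir="./data/demo/images" \
    detector.model_path="./weights/detector_yolov5s.pt" \
    regressor_weights="./weights/regressor_resnet18.pt" \
    device=cpu \
    inference_type=pytorch \
    save_result=True
```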
### ⚔️ Training
Two models are trained here: a **detector** and a **regressor**. For now the only supported detector is **YOLOv5**, while the regressor can be any model supported by **Torchvision**.
#### 🧭 Training YOLOv5 Detector
The first step is to convert the KITTI `label_2` annotations to YOLO format. You can use `src/kitti_to_yolo.py`:
```bash
cd yolo3d-lightning/src
python kitti_to_yolo.py \
      --dataset_path ../data/KITTI/training/ \
      --classes '["car", "van", "truck", "pedestrian", "cyclist"]' \
      --img_width 1224 \
      --img_height 370
```
The next step is to follow the [wiki provided by ultralytics](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data). **Note:** *the readme will be updated in the future*.
#### 🪀 Training Regressor
Next, you can train the regressor model. The regressor can be any of the models available in `torchvision`, or you can build a custom one.
The first step is to create the train and validation sets. You can use `script/generate_sets.py` (point `--images_path` at `images` or `image_2`):
```bash
cd yolo3d-lightning/script
python generate_sets.py \
      --images_path ../data/KITTI/training/images \
      --dump_dir ../data/KITTI/training \
      --postfix _80 \
      --train_size 0.8
```
In the next step we will only use models available in `torchvision`. The easiest way is to edit the configuration in `configs/model/regressor.yaml`, as below:
```yaml
_target_: src.models.regressor.RegressorModel
net:
_target_: src.models.components.base.RegressorNet
backbone:
_target_: torchvision.models.resnet18 # edit this
pretrained: True # maybe this too
bins: 2
lr: 0.001
momentum: 0.9
w: 0.4
alpha: 0.6
```
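If you prefer not to edit the file, hydra also lets you override these nested keys from the command line. A hedged sketch (assuming you run from the repository root and the config tree shown above; `torchvision.models.vgg11` is just an example backbone):
```bash
python src/train.py \
    model.net.backbone._target_=torchvision.models.vgg11 \
    model.net.backbone.pretrained=True
```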
The next step is to create an experiment configuration in `configs/experiment/your_exp.yaml`. If in doubt, you can refer to [`configs/experiment/demo.yaml`](./configs/experiment/demo.yaml).
Once the experiment configuration has been created, you can simply run `train.py` as follows:
```bash
cd yolo3d-lightning
python train.py \
experiment=demo
```
## ❤️ Acknowledgement
- [YOLOv5 by Ultralytics](https://github.com/ultralytics/yolov5)
- [skhadem/3D-BoundingBox](https://github.com/skhadem/3D-BoundingBox)
- [Mousavian et al.](https://arxiv.org/abs/1612.00496)
```
@misc{mousavian20173d,
title={3D Bounding Box Estimation Using Deep Learning and Geometry},
author={Arsalan Mousavian and Dragomir Anguelov and John Flynn and Jana Kosecka},
year={2017},
eprint={1612.00496},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` | 0 |
apollo_public_repos/apollo-model-yolo3d/docs | apollo_public_repos/apollo-model-yolo3d/docs/javascripts/mathjax.js | window.MathJax = {
tex: {
inlineMath: [["\\(", "\\)"]],
displayMath: [["\\[", "\\]"]],
processEscapes: true,
processEnvironments: true
},
options: {
ignoreHtmlClass: ".*|",
processHtmlClass: "arithmatex"
}
};
document$.subscribe(() => {
MathJax.typesetPromise()
}) | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/configs/convert.yaml | # @package _global_
# specify here default training configuration
defaults:
- _self_
- model: regressor.yaml
# enable color logging
- override hydra/hydra_logging: colorlog
- override hydra/job_logging: colorlog
# pretty print config at the start of the run using Rich library
print_config: True
# disable python warnings if they annoy you
ignore_warnings: True
# root
root: ${hydra:runtime.cwd}
# TODO: change to your checkpoint file
checkpoint_dir: ${root}/weights/last.ckpt
# dump dir
dump_dir: ${root}/weights
# input sample shape
input_sample:
__target__: torch.randn
size: (1, 3, 224, 224)
# convert to
convert_to: "pytorch" # [pytorch, onnx, tensorrt]
# TODO: model name without extension
name: ${dump_dir}/pytorch-kitti
# convert_to: "onnx" # [pytorch, onnx, tensorrt]
# name: ${dump_dir}/onnx-3d-0817-5
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/configs/train.yaml | # @package _global_
# specify here default configuration
# order of defaults determines the order in which configs override each other
defaults:
- _self_
- datamodule: kitti_datamodule.yaml
- model: regressor.yaml
- callbacks: default.yaml
- logger: null # set logger here or use command line (e.g. `python train.py logger=tensorboard`)
- trainer: dgx.yaml
- paths: default.yaml
- extras: default.yaml
- hydra: default.yaml
# experiment configs allow for version control of specific hyperparameters
# e.g. best hyperparameters for given model and datamodule
- experiment: null
# config for hyperparameter optimization
- hparams_search: null
# optional local config for machine/user specific settings
# it's optional since it doesn't need to exist and is excluded from version control
- optional local: default.yaml
  # debugging config (enable through command line, e.g. `python train.py debug=default`)
- debug: null
# task name, determines output directory path
task_name: "train"
# tags to help you identify your experiments
# you can overwrite this in experiment configs
# overwrite from command line with `python train.py tags="[first_tag, second_tag]"`
# appending lists from command line is currently not supported :(
# https://github.com/facebookresearch/hydra/issues/1547
tags: ["dev"]
# set False to skip model training
train: True
# evaluate on test set, using best model weights achieved during training
# lightning chooses best weights based on the metric specified in checkpoint callback
test: False
# simply provide checkpoint path to resume training
# ckpt_path: weights/last.ckpt
ckpt_path: null
# seed for random number generators in pytorch, numpy and python.random
seed: null
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/configs/inference.yaml | # @package _global_
# specify here default training configuration
defaults:
- _self_
- detector: yolov5.yaml
- model: regressor.yaml
- augmentation: inference_preprocessing.yaml
  # debugging config (enable through command line, e.g. `python train.py debug=default`)
- debug: null
# enable color logging
- override hydra/hydra_logging: colorlog
- override hydra/job_logging: colorlog
# run name
name: inference
# directory
root: ${hydra:runtime.cwd}
output_dir: ${root}/${hydra:run.dir}/inference
# calib_file
calib_file: ${root}/assets/global_calib.txt
# save 2D bounding box
save_det2d: False
# show and save result
save_result: True
# save result in txt
# save_txt: True
# regressor weights
regressor_weights: ${root}/weights/regressor_resnet18.pt
# regressor_weights: ${root}/weights/mobilenetv3-best.pt
# inference type
inference_type: pytorch # [pytorch, onnx, openvino, tensorrt]
# source directory
# source_dir: ${root}/tmp/kitti/
source_dir: ${root}/tmp/video_001
# device to inference
device: 'cpu'
export_onnx: False
func: "label" # image/label
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/configs/evaluate.yaml | # @package _global_
# specify here default training configuration
defaults:
- _self_
- detector: yolov5.yaml
- model: regressor.yaml
- augmentation: inference_preprocessing.yaml
  # debugging config (enable through command line, e.g. `python train.py debug=default`)
- debug: null
# enable color logging
- override hydra/hydra_logging: colorlog
- override hydra/job_logging: colorlog
# run name
name: evaluate
# directory
root: ${hydra:runtime.cwd}
# predictions/output directory
# pred_dir: ${root}/${hydra:run.dir}/${name}
# calib_file
calib_file: ${root}/assets/global_calib.txt
# regressor weights
regressor_weights: ${root}/weights/regressor_resnet18.pt
# validation images directory
val_images_path: ${root}/data/KITTI/images_2
# validation sets directory
val_sets: ${root}/data/KITTI/ImageSets/val.txt
# class to evaluated
classes: 6
# class_to_name = {
# 0: 'Car',
# 1: 'Cyclist',
# 2: 'Truck',
# 3: 'Van',
# 4: 'Pedestrian',
# 5: 'Tram',
# }
# gt label path
gt_dir: ${root}/data/KITTI/label_2
# dt label path
pred_dir: ${root}/data/KITTI/result
# device to inference
device: 'cuda:0' | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/configs/eval.yaml | # @package _global_
defaults:
- _self_
- datamodule: mnist.yaml # choose datamodule with `test_dataloader()` for evaluation
- model: mnist.yaml
- logger: null
- trainer: default.yaml
- paths: default.yaml
- extras: default.yaml
- hydra: default.yaml
task_name: "eval"
tags: ["dev"]
# passing checkpoint path is necessary for evaluation
ckpt_path: ???
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/hparams_search/optuna.yaml | # @package _global_
# example hyperparameter optimization of some experiment with Optuna:
# python train.py -m hparams_search=mnist_optuna experiment=example
defaults:
- override /hydra/sweeper: optuna
# choose metric which will be optimized by Optuna
# make sure this is the correct name of some metric logged in lightning module!
optimized_metric: "val/loss"
# here we define Optuna hyperparameter search
# it optimizes for value returned from function with @hydra.main decorator
# docs: https://hydra.cc/docs/next/plugins/optuna_sweeper
hydra:
mode: "MULTIRUN"
sweeper:
_target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper
# storage URL to persist optimization results
# for example, you can use SQLite if you set 'sqlite:///example.db'
storage: null
# name of the study to persist optimization results
study_name: null
# number of parallel workers
n_jobs: 2
# 'minimize' or 'maximize' the objective
direction: 'minimize'
# total number of runs that will be executed
n_trials: 10
# choose Optuna hyperparameter sampler
# docs: https://optuna.readthedocs.io/en/stable/reference/samplers.html
sampler:
_target_: optuna.samplers.TPESampler
seed: 42069
n_startup_trials: 10 # number of random sampling runs before optimization starts
# define range of hyperparameters
params:
model.lr: interval(0.0001, 0.001)
datamodule.batch_size: choice(32, 64, 128)
model.optimizer: choice(adam, sgd) | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/hparams_search/mnist_optuna.yaml | # @package _global_
# example hyperparameter optimization of some experiment with Optuna:
# python train.py -m hparams_search=mnist_optuna experiment=example
defaults:
- override /hydra/sweeper: optuna
# choose metric which will be optimized by Optuna
# make sure this is the correct name of some metric logged in lightning module!
optimized_metric: "val/acc_best"
# here we define Optuna hyperparameter search
# it optimizes for value returned from function with @hydra.main decorator
# docs: https://hydra.cc/docs/next/plugins/optuna_sweeper
hydra:
mode: "MULTIRUN" # set hydra to multirun by default if this config is attached
sweeper:
_target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper
# storage URL to persist optimization results
# for example, you can use SQLite if you set 'sqlite:///example.db'
storage: null
# name of the study to persist optimization results
study_name: null
# number of parallel workers
n_jobs: 1
# 'minimize' or 'maximize' the objective
direction: maximize
# total number of runs that will be executed
n_trials: 20
# choose Optuna hyperparameter sampler
# you can choose bayesian sampler (tpe), random search (without optimization), grid sampler, and others
# docs: https://optuna.readthedocs.io/en/stable/reference/samplers.html
sampler:
_target_: optuna.samplers.TPESampler
seed: 1234
n_startup_trials: 10 # number of random sampling runs before optimization starts
# define hyperparameter search space
params:
model.optimizer.lr: interval(0.0001, 0.1)
datamodule.batch_size: choice(32, 64, 128, 256)
model.net.lin1_size: choice(64, 128, 256)
model.net.lin2_size: choice(64, 128, 256)
model.net.lin3_size: choice(32, 64, 128, 256)
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/datamodule/kitti_datamodule.yaml | _target_: src.datamodules.kitti_datamodule.KITTIDataModule
dataset_path: ${paths.data_dir} # data_dir is specified in configs/paths/default.yaml
train_sets: ${paths.data_dir}/train_80.txt
val_sets: ${paths.data_dir}/val_80.txt
test_sets: ${paths.data_dir}/test_80.txt
batch_size: 64
num_worker: 32 | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/augmentation/inference_preprocessing.yaml | to_tensor:
_target_: torchvision.transforms.ToTensor
normalize:
_target_: torchvision.transforms.Normalize
mean: [0.406, 0.456, 0.485]
std: [0.225, 0.224, 0.229] | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/comet.yaml | # https://www.comet.ml
comet:
_target_: pytorch_lightning.loggers.comet.CometLogger
api_key: ${oc.env:COMET_API_TOKEN} # api key is loaded from environment variable
save_dir: "${paths.output_dir}"
project_name: "lightning-hydra-template"
rest_api_key: null
# experiment_name: ""
experiment_key: null # set to resume experiment
offline: False
prefix: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/csv.yaml | # csv logger built in lightning
csv:
_target_: pytorch_lightning.loggers.csv_logs.CSVLogger
save_dir: "${paths.output_dir}"
name: "csv/"
prefix: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/tensorboard.yaml | # https://www.tensorflow.org/tensorboard/
tensorboard:
_target_: pytorch_lightning.loggers.tensorboard.TensorBoardLogger
save_dir: "${paths.output_dir}/tensorboard/"
name: null
log_graph: False
default_hp_metric: True
prefix: ""
# version: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/neptune.yaml | # https://neptune.ai
neptune:
_target_: pytorch_lightning.loggers.neptune.NeptuneLogger
api_key: ${oc.env:NEPTUNE_API_TOKEN} # api key is loaded from environment variable
project: username/lightning-hydra-template
# name: ""
log_model_checkpoints: True
prefix: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/wandb.yaml | # https://wandb.ai
wandb:
_target_: pytorch_lightning.loggers.wandb.WandbLogger
# name: "" # name of the run (normally generated by wandb)
save_dir: "${paths.output_dir}"
offline: False
id: null # pass correct id to resume experiment!
anonymous: null # enable anonymous logging
project: "yolo3d-regressor"
log_model: True # upload lightning ckpts
prefix: "" # a string to put at the beginning of metric keys
# entity: "" # set to name of your wandb team
group: ""
tags: []
job_type: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/many_loggers.yaml | # train with many loggers at once
defaults:
# - comet.yaml
- csv.yaml
# - mlflow.yaml
# - neptune.yaml
- tensorboard.yaml
- wandb.yaml
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/logger/mlflow.yaml | # https://mlflow.org
mlflow:
_target_: pytorch_lightning.loggers.mlflow.MLFlowLogger
# experiment_name: ""
# run_name: ""
tracking_uri: ${paths.log_dir}/mlflow/mlruns # run `mlflow ui` command inside the `logs/mlflow/` dir to open the UI
tags: null
# save_dir: "./mlruns"
prefix: ""
artifact_location: null
# run_id: ""
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/rich_progress_bar.yaml | # https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.RichProgressBar.html
# Create a progress bar with rich text formatting.
# Look at the above link for more detailed information.
rich_progress_bar:
_target_: pytorch_lightning.callbacks.RichProgressBar
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/wandb.yaml | defaults:
- default.yaml
watch_model:
_target_: src.callbacks.wandb_callbacks.WatchModel
log: "all"
log_freq: 100
upload_code_as_artifact:
_target_: src.callbacks.wandb_callbacks.UploadCodeAsArtifact
code_dir: ${original_work_dir}/src
upload_ckpts_as_artifact:
_target_: src.callbacks.wandb_callbacks.UploadCheckpointsAsArtifact
ckpt_dir: "checkpoints/"
upload_best_only: True
# log_f1_precision_recall_heatmap:
# _target_: src.callbacks.wandb_callbacks.LogF1PrecRecHeatmap
# log_confusion_matrix:
# _target_: src.callbacks.wandb_callbacks.LogConfusionMatrix
# log_image_predictions:
# _target_: src.callbacks.wandb_callbacks.LogImagePredictions
# num_samples: 8 | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/early_stopping.yaml | # https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.EarlyStopping.html
# Monitor a metric and stop training when it stops improving.
# Look at the above link for more detailed information.
early_stopping:
_target_: pytorch_lightning.callbacks.EarlyStopping
monitor: ??? # quantity to be monitored, must be specified !!!
min_delta: 0. # minimum change in the monitored quantity to qualify as an improvement
patience: 3 # number of checks with no improvement after which training will be stopped
verbose: False # verbosity mode
mode: "min" # "max" means higher metric value is better, can be also "min"
strict: True # whether to crash the training if monitor is not found in the validation metrics
check_finite: True # when set True, stops training when the monitor becomes NaN or infinite
stopping_threshold: null # stop training immediately once the monitored quantity reaches this threshold
divergence_threshold: null # stop training as soon as the monitored quantity becomes worse than this threshold
check_on_train_epoch_end: null # whether to run early stopping at the end of the training epoch
# log_rank_zero_only: False # this keyword argument isn't available in stable version
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/model_summary.yaml | # https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.RichModelSummary.html
# Generates a summary of all layers in a LightningModule with rich text formatting.
# Look at the above link for more detailed information.
model_summary:
_target_: pytorch_lightning.callbacks.RichModelSummary
max_depth: 1 # the maximum depth of layer nesting that the summary will include
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/model_checkpoint.yaml | # https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.ModelCheckpoint.html
# Save the model periodically by monitoring a quantity.
# Look at the above link for more detailed information.
model_checkpoint:
_target_: pytorch_lightning.callbacks.ModelCheckpoint
dirpath: null # directory to save the model file
filename: null # checkpoint filename
monitor: null # name of the logged metric which determines when model is improving
verbose: False # verbosity mode
save_last: null # additionally always save an exact copy of the last checkpoint to a file last.ckpt
save_top_k: 1 # save k best models (determined by above metric)
mode: "min" # "max" means higher metric value is better, can be also "min"
auto_insert_metric_name: True # when True, the checkpoints filenames will contain the metric name
save_weights_only: False # if True, then only the model’s weights will be saved
every_n_train_steps: null # number of training steps between checkpoints
train_time_interval: null # checkpoints are monitored at the specified time interval
every_n_epochs: null # number of epochs between checkpoints
save_on_train_epoch_end: null # whether to run checkpointing at the end of the training epoch or the end of validation
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/callbacks/default.yaml | defaults:
- model_checkpoint.yaml
- early_stopping.yaml
- model_summary.yaml
- rich_progress_bar.yaml
- _self_
# model save config
model_checkpoint:
dirpath: "weights"
filename: "epoch_{epoch:03d}"
monitor: "val/loss"
mode: "min"
save_last: True
save_top_k: 1
auto_insert_metric_name: False
early_stopping:
monitor: "val/loss"
patience: 100
mode: "min"
model_summary:
max_depth: -1
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/paths/default.yaml | # path to root directory
# this requires PROJECT_ROOT environment variable to exist
# PROJECT_ROOT is inferred and set by pyrootutils package in `train.py` and `eval.py`
root_dir: ${oc.env:PROJECT_ROOT}
# path to data directory
data_dir: ${paths.root_dir}/data/KITTI
# path to logging directory
log_dir: ${paths.root_dir}/logs/
# path to output directory, created dynamically by hydra
# path generation pattern is specified in `configs/hydra/default.yaml`
# use it to store all files generated during the run, like ckpts and metrics
output_dir: ${hydra:runtime.output_dir}
# path to working directory
work_dir: ${hydra:runtime.cwd}
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/detector/yolov5.yaml | _target_: inference.detector_yolov5
model_path: ${root}/weights/detector_yolov5s.pt
cfg_path: ${root}/yolov5/models/yolov5s.yaml
classes: 5
device: 'cpu' | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/detector/yolov5_kitti.yaml | # KITTI to YOLO
path: ../data/KITTI/ # dataset root dir
train: train_yolo.txt # train images (relative to 'path') 3712 images
val: val_yolo.txt # val images (relative to 'path') 3768 images
# Classes
nc: 5 # number of classes
names: ['car', 'van', 'truck', 'pedestrian', 'cyclist'] | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/model/regressor.yaml | _target_: src.models.regressor.RegressorModel
net:
_target_: src.models.components.base.RegressorNet
backbone:
_target_: torchvision.models.resnet18 # change model on this
pretrained: True
bins: 2
optimizer: adam
lr: 0.0001
momentum: 0.9
w: 0.8
alpha: 0.2 | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/experiment/sample.yaml | # @package _global_
# to execute this experiment run:
# python src/train.py experiment=sample
defaults:
- override /datamodule: kitti_datamodule.yaml
- override /model: regressor.yaml
- override /callbacks: default.yaml
- override /logger: wandb.yaml
- override /trainer: dgx.yaml
# all parameters below will be merged with parameters from default configurations set above
# this allows you to overwrite only specified parameters
seed: 42069
# name of the run determines folder name in logs
name: "new_network"
datamodule:
train_sets: ${paths.data_dir}/ImageSets/train.txt
val_sets: ${paths.data_dir}/ImageSets/val.txt
test_sets: ${paths.data_dir}/ImageSets/test.txt
trainer:
min_epochs: 1
max_epochs: 200
# limit_train_batches: 1.0
# limit_val_batches: 1.0
gpus: [0]
strategy: ddp | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/ddp.yaml | defaults:
- default.yaml
# use "ddp_spawn" instead of "ddp",
# it's slower but normal "ddp" currently doesn't work ideally with hydra
# https://github.com/facebookresearch/hydra/issues/2070
# https://pytorch-lightning.readthedocs.io/en/latest/accelerators/gpu_intermediate.html#distributed-data-parallel-spawn
strategy: ddp_spawn
accelerator: gpu
devices: 4
num_nodes: 1
sync_batchnorm: True
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/kaggle.yaml | _target_: pytorch_lightning.Trainer
gpus: 0
min_epochs: 1
max_epochs: 10
# number of validation steps to execute at the beginning of the training
# num_sanity_val_steps: 0
# ckpt path
resume_from_checkpoint: null
# disable progress_bar
enable_progress_bar: False | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/ddp_sim.yaml | defaults:
- default.yaml
# simulate DDP on CPU, useful for debugging
accelerator: cpu
devices: 2
strategy: ddp_spawn
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/cpu.yaml | defaults:
- default.yaml
accelerator: cpu
devices: 1
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/dgx.yaml | defaults:
- default.yaml
# strategy: ddp
accelerator: gpu
devices: [0]
num_nodes: 1
sync_batchnorm: True | 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/gpu.yaml | defaults:
- default.yaml
accelerator: gpu
devices: 1
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/mps.yaml | defaults:
- default.yaml
accelerator: mps
devices: 1
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/trainer/default.yaml | _target_: pytorch_lightning.Trainer
default_root_dir: ${paths.output_dir}
min_epochs: 1 # prevents early stopping
max_epochs: 25
accelerator: cpu
devices: 1
# mixed precision for extra speed-up
# precision: 16
# set True to to ensure deterministic results
# makes training slower but gives more reproducibility than just setting seeds
deterministic: False
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/hydra/default.yaml | # https://hydra.cc/docs/configure_hydra/intro/
# enable color logging
defaults:
- override hydra_logging: colorlog
- override job_logging: colorlog
# output directory, generated dynamically on each run
run:
dir: ${paths.log_dir}/${task_name}/runs/${now:%Y-%m-%d}_${now:%H-%M-%S}
sweep:
dir: ${paths.log_dir}/${task_name}/multiruns/${now:%Y-%m-%d}_${now:%H-%M-%S}
subdir: ${hydra.job.num}
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/debug/profiler.yaml | # @package _global_
# runs with execution time profiling
defaults:
- default.yaml
trainer:
max_epochs: 1
profiler: "simple"
# profiler: "advanced"
# profiler: "pytorch"
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/debug/overfit.yaml | # @package _global_
# overfits to 3 batches
defaults:
- default.yaml
trainer:
max_epochs: 20
overfit_batches: 3
# model ckpt and early stopping need to be disabled during overfitting
callbacks: null
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/debug/limit.yaml | # @package _global_
# uses only 1% of the training data and 5% of validation/test data
defaults:
- default.yaml
trainer:
max_epochs: 3
limit_train_batches: 0.01
limit_val_batches: 0.05
limit_test_batches: 0.05
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/debug/fdr.yaml | # @package _global_
# runs 1 train, 1 validation and 1 test step
defaults:
- default.yaml
trainer:
fast_dev_run: true
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/debug/default.yaml | # @package _global_
# default debugging setup, runs 1 full epoch
# other debugging configs can inherit from this one
# overwrite task name so debugging logs are stored in separate folder
task_name: "debug"
# disable callbacks and loggers during debugging
callbacks: null
logger: null
extras:
ignore_warnings: False
enforce_tags: False
# sets level of all command line loggers to 'DEBUG'
# https://hydra.cc/docs/tutorials/basic/running_your_app/logging/
hydra:
job_logging:
root:
level: DEBUG
# use this to also set hydra loggers to 'DEBUG'
# verbose: True
trainer:
max_epochs: 1
accelerator: cpu # debuggers don't like gpus
devices: 1 # debuggers don't like multiprocessing
detect_anomaly: true # raise exception if NaN or +/-inf is detected in any tensor
datamodule:
num_workers: 0 # debuggers don't like multiprocessing
pin_memory: False # disable gpu memory pin
| 0 |
apollo_public_repos/apollo-model-yolo3d/configs | apollo_public_repos/apollo-model-yolo3d/configs/extras/default.yaml | # disable python warnings if they annoy you
ignore_warnings: False
# ask user for tags if none are provided in the config
enforce_tags: True
# pretty print config tree at the start of the run using Rich library
print_config: True
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/video_to_gif.py | """Convert video to gif with moviepy"""
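# Example usage (paths are illustrative; flags match the argparser below):
#   python scripts/video_to_gif.py --video_path outputs/videos/004.mp4 --gif_path outputs/gif/002.gif --fps 5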
import argparse
import moviepy.editor as mpy
def generate(video_path, gif_path, fps):
"""Generate gif from video"""
clip = mpy.VideoFileClip(video_path)
clip.write_gif(gif_path, fps=fps)
clip.close()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Convert video to gif")
parser.add_argument("--video_path", type=str, default="outputs/videos/004.mp4", help="Path to video")
parser.add_argument("--gif_path", type=str, default="outputs/gif/002.gif", help="Path to gif")
parser.add_argument("--fps", type=int, default=5, help="GIF fps")
args = parser.parse_args()
# generate gif
generate(args.video_path, args.gif_path, args.fps) | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/schedule.sh | #!/bin/bash
# Schedule execution of many runs
# Run from root folder with: bash scripts/schedule.sh
python src/train.py trainer.max_epochs=5 logger=csv
python src/train.py trainer.max_epochs=10 logger=csv
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/frames_to_video.py | """
Generate frames to vid
Usage:
python scripts/frames_to_video.py \
--imgs_path /path/to/imgs \
--vid_path /path/to/vid \
--fps 24 \
--frame_size 1242 375 \
--resize
python scripts/frames_to_video.py \
--imgs_path outputs/2023-05-13/22-51-34/inference \
--vid_path tmp/output_videos/001.mp4 \
--fps 3 \
--frame_size 1550 387 \
--resize
"""
import argparse
import cv2
from glob import glob
import os
from tqdm import tqdm
def generate(imgs_path, vid_path, fps=30, frame_size=(1242, 375), resize=True):
"""Generate frames to vid"""
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
vid_writer = cv2.VideoWriter(vid_path, fourcc, fps, frame_size)
imgs_glob = sorted(glob(os.path.join(imgs_path, "*.png")))
if resize:
for img_path in tqdm(imgs_glob):
img = cv2.imread(img_path)
img = cv2.resize(img, frame_size)
vid_writer.write(img)
else:
for img_path in imgs_glob:
img = cv2.imread(img_path, cv2.IMREAD_COLOR)
vid_writer.write(img)
vid_writer.release()
print('[INFO] Video saved to {}'.format(vid_path))
if __name__ == "__main__":
# create argparser
parser = argparse.ArgumentParser(description="Generate frames to vid")
parser.add_argument("--imgs_path", type=str, default="outputs/2022-10-23/21-03-50/inference", help="path to imgs")
parser.add_argument("--vid_path", type=str, default="outputs/videos/004.mp4", help="path to vid")
parser.add_argument("--fps", type=int, default=24, help="fps")
parser.add_argument("--frame_size", type=int, nargs=2, default=(int(1242), int(375)), help="frame size")
parser.add_argument("--resize", action="store_true", help="resize")
args = parser.parse_args()
# generate vid
generate(args.imgs_path, args.vid_path, args.fps, args.frame_size) | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/get_weights.py | """Download pretrained weights from github release"""
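# Example usage (tag and directory are illustrative; flags match the argparser below):
#   python scripts/get_weights.py --tag v0.1 --dir ./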
from pprint import pprint
import requests
import os
import shutil
import argparse
from zipfile import ZipFile
def get_assets(tag):
"""Get release assets by tag name"""
url = 'https://api.github.com/repos/ruhyadi/yolo3d-lightning/releases/tags/' + tag
response = requests.get(url)
return response.json()['assets']
def download_assets(assets, dir):
"""Download assets to dir"""
for asset in assets:
url = asset['browser_download_url']
filename = asset['name']
print('[INFO] Downloading {}'.format(filename))
response = requests.get(url, stream=True)
with open(os.path.join(dir, filename), 'wb') as f:
shutil.copyfileobj(response.raw, f)
del response
with ZipFile(os.path.join(dir, filename), 'r') as zip_file:
zip_file.extractall(dir)
os.remove(os.path.join(dir, filename))
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Download pretrained weights')
parser.add_argument('--tag', type=str, default='v0.1', help='tag name')
parser.add_argument('--dir', type=str, default='./', help='directory to save weights')
args = parser.parse_args()
assets = get_assets(args.tag)
download_assets(assets, args.dir)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/post_weights.py | """Upload weights to github release"""
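# Requires a GITHUB_TOKEN environment variable (loaded below via dotenv/os.environ).
# Example usage (values are illustrative; flags match the argparser below):
#   python scripts/post_weights.py --tag v0.6 --name "Release v0.6" --description "v0.6"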
from pprint import pprint
import requests
import os
import dotenv
import argparse
from zipfile import ZipFile
dotenv.load_dotenv()
def create_release(tag, name, description, target="main"):
"""Create release"""
token = os.environ.get("GITHUB_TOKEN")
headers = {
"Accept": "application/vnd.github.v3+json",
"Authorization": f"token {token}",
"Content-Type": "application/zip"
}
url = "https://api.github.com/repos/ruhyadi/yolo3d-lightning/releases"
payload = {
"tag_name": tag,
"target_commitish": target,
"name": name,
"body": description,
"draft": True,
"prerelease": False,
"generate_release_notes": True,
}
print("[INFO] Creating release {}".format(tag))
response = requests.post(url, json=payload, headers=headers)
print("[INFO] Release created id: {}".format(response.json()["id"]))
return response.json()
def post_assets(assets, release_id):
"""Post assets to release"""
token = os.environ.get("GITHUB_TOKEN")
headers = {
"Accept": "application/vnd.github.v3+json",
"Authorization": f"token {token}",
"Content-Type": "application/zip"
}
for asset in assets:
asset_path = os.path.join(os.getcwd(), asset)
with ZipFile(f"{asset_path}.zip", "w") as zip_file:
zip_file.write(asset)
asset_path = f"{asset_path}.zip"
filename = asset_path.split("/")[-1]
url = (
"https://uploads.github.com/repos/ruhyadi/yolo3d-lightning/releases/"
+ str(release_id)
+ f"/assets?name={filename}"
)
print("[INFO] Uploading {}".format(filename))
response = requests.post(url, files={"name": open(asset_path, "rb")}, headers=headers)
pprint(response.json())
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Upload weights to github release")
parser.add_argument("--tag", type=str, default="v0.6", help="tag name")
parser.add_argument("--name", type=str, default="Release v0.6", help="release name")
parser.add_argument("--description", type=str, default="v0.6", help="release description")
parser.add_argument("--assets", type=tuple, default=["weights/mobilenetv3-best.pt", "weights/mobilenetv3-last.pt", "logs/train/runs/2022-09-28_10-36-08/checkpoints/epoch_007.ckpt", "logs/train/runs/2022-09-28_10-36-08/checkpoints/last.ckpt"], help="directory to save weights",)
args = parser.parse_args()
release_id = create_release(args.tag, args.name, args.description)["id"]
post_assets(args.assets, release_id)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/kitti_to_yolo.py | """
Convert KITTI format to YOLO format.
"""
import os
import numpy as np
from glob import glob
from tqdm import tqdm
import argparse
from typing import Tuple
class KITTI2YOLO:
def __init__(
self,
dataset_path: str = "../data/KITTI",
classes: Tuple = ["car", "van", "truck", "pedestrian", "cyclist"],
img_width: int = 1224,
img_height: int = 370,
):
self.dataset_path = dataset_path
self.img_width = img_width
self.img_height = img_height
self.classes = classes
self.ids = {self.classes[i]: i for i in range(len(self.classes))}
# create new directory
self.label_path = os.path.join(self.dataset_path, "labels")
if not os.path.isdir(self.label_path):
os.makedirs(self.label_path)
else:
print("[INFO] Directory already exist...")
def convert(self):
files = glob(os.path.join(self.dataset_path, "label_2", "*.txt"))
for file in tqdm(files):
with open(file, "r") as f:
filename = os.path.join(self.label_path, file.split("/")[-1])
dump_txt = open(filename, "w")
for line in f:
parse_line = self.parse_line(line)
if parse_line["name"].lower() not in self.classes:
continue
xmin, ymin, xmax, ymax = parse_line["bbox_camera"]
xcenter = ((xmax - xmin) / 2 + xmin) / self.img_width
if xcenter > 1.0:
xcenter = 1.0
ycenter = ((ymax - ymin) / 2 + ymin) / self.img_height
if ycenter > 1.0:
ycenter = 1.0
width = (xmax - xmin) / self.img_width
if width > 1.0:
width = 1.0
height = (ymax - ymin) / self.img_height
if height > 1.0:
height = 1.0
bbox_yolo = f"{self.ids[parse_line['name'].lower()]} {xcenter:.3f} {ycenter:.3f} {width:.3f} {height:.3f}"
dump_txt.write(bbox_yolo + "\n")
dump_txt.close()
def parse_line(self, line):
parts = line.split(" ")
output = {
"name": parts[0].strip(),
"xyz_camera": (float(parts[11]), float(parts[12]), float(parts[13])),
"wlh": (float(parts[9]), float(parts[10]), float(parts[8])),
"yaw_camera": float(parts[14]),
"bbox_camera": (
float(parts[4]),
float(parts[5]),
float(parts[6]),
float(parts[7]),
),
"truncation": float(parts[1]),
"occlusion": float(parts[2]),
"alpha": float(parts[3]),
}
# Add score if specified
if len(parts) > 15:
output["score"] = float(parts[15])
else:
output["score"] = np.nan
return output
if __name__ == "__main__":
# argparser
    parser = argparse.ArgumentParser(description="KITTI to YOLO Conversion")
parser.add_argument("--dataset_path", type=str, default="../data/KITTI")
    parser.add_argument(
        "--classes",
        type=str,
        nargs="+",
        default=["car", "van", "truck", "pedestrian", "cyclist"],
    )
parser.add_argument("--img_width", type=int, default=1224)
parser.add_argument("--img_height", type=int, default=370)
args = parser.parse_args()
    kitti2yolo = KITTI2YOLO(
dataset_path=args.dataset_path,
classes=args.classes,
img_width=args.img_width,
img_height=args.img_height,
)
    kitti2yolo.convert()
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/generate_sets.py | """Create training and validation sets"""
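# Example usage (paths are illustrative; flags match the argparser below):
#   python scripts/generate_sets.py --images_path ./data/KITTI/images --dump_dir ./data/KITTI --postfix _95 --train_size 0.95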
from glob import glob
import os
import argparse
def generate_sets(
images_path: str,
dump_dir: str,
postfix: str = "",
train_size: float = 0.8,
is_yolo: bool = False,
):
images = glob(os.path.join(images_path, "*.png"))
ids = [id_.split("/")[-1].split(".")[0] for id_ in images]
train_sets = sorted(ids[: int(len(ids) * train_size)])
val_sets = sorted(ids[int(len(ids) * train_size) :])
for name, sets in zip(["train", "val"], [train_sets, val_sets]):
name = os.path.join(dump_dir, f"{name}{postfix}.txt")
with open(name, "w") as f:
for id in sets:
if is_yolo:
f.write(f"./images/{id}.png\n")
else:
f.write(f"{id}\n")
print(f"[INFO] Training set: {len(train_sets)}")
print(f"[INFO] Validation set: {len(val_sets)}")
print(f"[INFO] Total: {len(train_sets) + len(val_sets)}")
print(f"[INFO] Success Generate Sets")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create training and validation sets")
parser.add_argument("--images_path", type=str, default="./data/KITTI/images")
parser.add_argument("--dump_dir", type=str, default="./data/KITTI")
parser.add_argument("--postfix", type=str, default="_95")
parser.add_argument("--train_size", type=float, default=0.95)
parser.add_argument("--is_yolo", action="store_true")
args = parser.parse_args()
generate_sets(
images_path=args.images_path,
dump_dir=args.dump_dir,
postfix=args.postfix,
train_size=args.train_size,
        is_yolo=args.is_yolo,
)
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/scripts/video_to_frame.py | """
Convert video to frame
Usage:
python video_to_frame.py \
--video_path /path/to/video \
--output_path /path/to/output/folder \
--fps 24
python scripts/video_to_frame.py \
--video_path tmp/video/20230513_100429.mp4 \
--output_path tmp/video_001 \
--fps 20
"""
import argparse
import os
import cv2
def video_to_frame(video_path: str, output_path: str, fps: int = 5):
"""
Convert video to frame
Args:
video_path: path to video
output_path: path to output folder
fps: how many frames per second to save
"""
if not os.path.exists(output_path):
os.makedirs(output_path)
cap = cv2.VideoCapture(video_path)
frame_count = 0
while cap.isOpened():
ret, frame = cap.read()
if not ret:
break
if frame_count % fps == 0:
cv2.imwrite(os.path.join(output_path, f"{frame_count:06d}.jpg"), frame)
frame_count += 1
cap.release()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--video_path", type=str, required=True)
parser.add_argument("--output_path", type=str, required=True)
parser.add_argument("--fps", type=int, default=30)
args = parser.parse_args()
video_to_frame(args.video_path, args.output_path, args.fps) | 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/data/datasplit.py | #!/usr/bin/env python
# Copyright (c) Baidu apollo, Inc.
# All Rights Reserved
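# Example usage (assumes the script is run from the data/ directory so the
# relative KITTI paths below resolve): python datasplit.py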
import os
import random
# TODO: change this to your own data path
pnglabelfilepath = r'./KITTI/label_2'
savePath = r"./KITTI/ImageSets/"
target_png = os.listdir(pnglabelfilepath)
total_png = []
for t in target_png:
if t.endswith(".txt"):
id = str(int(t.split('.')[0])).zfill(6)
total_png.append(id + '.png')
print("--- iter for image finished ---")
# TODO: change this ratio to your own
train_percent = 0.85
val_percent = 0.1
test_percent = 0.05
num = len(total_png)
# train = random.sample(num,0.9*num)
indices = list(range(num))
num_train = int(num * train_percent)
num_val = int(num * val_percent)
# sample train ids, then val ids from the remainder; what is left is the test set
train = random.sample(indices, num_train)
for i in train:
    indices.remove(i)
val = random.sample(indices, num_val)
for i in val:
    indices.remove(i)
def mkdir(path):
folder = os.path.exists(path)
if not folder:
os.makedirs(path)
print("--- creating new folder... ---")
print("--- finished ---")
else:
print("--- pass to create new folder ---")
mkdir(savePath)
ftrain = open(os.path.join(savePath, 'train.txt'), 'w')
fval = open(os.path.join(savePath, 'val.txt'), 'w')
ftest = open(os.path.join(savePath, 'test.txt'), 'w')
for i in train:
name = total_png[i][:-4]+ '\n'
ftrain.write(name)
for i in val:
name = total_png[i][:-4] + '\n'
fval.write(name)
for i in indices:
    name = total_png[i][:-4] + '\n'
    ftest.write(name)
ftrain.close()
fval.close()
ftest.close()
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/assets/global_calib.txt | # KITTI
P_rect_02: 7.188560e+02 0.000000e+00 6.071928e+02 4.538225e+01 0.000000e+00 7.188560e+02 1.852157e+02 -1.130887e-01 0.000000e+00 0.000000e+00 1.000000e+00 3.779761e-03
calib_time: 09-Jan-2012 14:00:15
corner_dist: 9.950000e-02
S_00: 1.392000e+03 5.120000e+02
K_00: 9.799200e+02 0.000000e+00 6.900000e+02 0.000000e+00 9.741183e+02 2.486443e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_00: -3.745594e-01 2.049385e-01 1.110145e-03 1.379375e-03 -7.084798e-02
R_00: 1.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00
T_00: -9.251859e-17 8.326673e-17 -7.401487e-17
S_rect_00: 1.241000e+03 3.760000e+02
R_rect_00: 9.999454e-01 7.259129e-03 -7.519551e-03 -7.292213e-03 9.999638e-01 -4.381729e-03 7.487471e-03 4.436324e-03 9.999621e-01
P_rect_00: 7.188560e+02 0.000000e+00 6.071928e+02 0.000000e+00 0.000000e+00 7.188560e+02 1.852157e+02 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00
S_01: 1.392000e+03 5.120000e+02
K_01: 9.903522e+02 0.000000e+00 7.020000e+02 0.000000e+00 9.855674e+02 2.607319e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_01: -3.712084e-01 1.978723e-01 -3.709831e-05 -3.440494e-04 -6.724045e-02
R_01: 9.993440e-01 1.814887e-02 -3.134011e-02 -1.842595e-02 9.997935e-01 -8.575221e-03 3.117801e-02 9.147067e-03 9.994720e-01
T_01: -5.370000e-01 5.964270e-03 -1.274584e-02
S_rect_01: 1.241000e+03 3.760000e+02
R_rect_01: 9.996568e-01 -1.110284e-02 2.372712e-02 1.099810e-02 9.999292e-01 4.539964e-03 -2.377585e-02 -4.277453e-03 9.997082e-01
P_rect_01: 7.188560e+02 0.000000e+00 6.071928e+02 -3.861448e+02 0.000000e+00 7.188560e+02 1.852157e+02 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00
S_02: 1.392000e+03 5.120000e+02
K_02: 9.601149e+02 0.000000e+00 6.947923e+02 0.000000e+00 9.548911e+02 2.403547e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_02: -3.685917e-01 1.928022e-01 4.069233e-04 7.247536e-04 -6.276909e-02
R_02: 9.999788e-01 -5.008404e-03 -4.151018e-03 4.990516e-03 9.999783e-01 -4.308488e-03 4.172506e-03 4.287682e-03 9.999821e-01
T_02: 5.954406e-02 -7.675338e-04 3.582565e-03
S_rect_02: 1.241000e+03 3.760000e+02
R_rect_02: 9.999191e-01 1.228161e-02 -3.316013e-03 -1.228209e-02 9.999246e-01 -1.245511e-04 3.314233e-03 1.652686e-04 9.999945e-01
S_03: 1.392000e+03 5.120000e+02
K_03: 9.049931e+02 0.000000e+00 6.957698e+02 0.000000e+00 9.004945e+02 2.389820e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_03: -3.735725e-01 2.066816e-01 -6.133284e-04 -1.193269e-04 -7.600861e-02
R_03: 9.995578e-01 1.656369e-02 -2.469315e-02 -1.663353e-02 9.998582e-01 -2.625576e-03 2.464616e-02 3.035149e-03 9.996916e-01
T_03: -4.738786e-01 5.991982e-03 -3.215069e-03
S_rect_03: 1.241000e+03 3.760000e+02
R_rect_03: 9.998092e-01 -9.354781e-03 1.714961e-02 9.382303e-03 9.999548e-01 -1.525064e-03 -1.713457e-02 1.685675e-03 9.998518e-01
P_rect_03: 7.188560e+02 0.000000e+00 6.071928e+02 -3.372877e+02 0.000000e+00 7.188560e+02 1.852157e+02 2.369057e+00 0.000000e+00 0.000000e+00 1.000000e+00 4.915215e-03
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/src/train.py | import pyrootutils
root = pyrootutils.setup_root(
search_from=__file__,
indicator=[".git", "pyproject.toml"],
pythonpath=True,
dotenv=True,
)
# ------------------------------------------------------------------------------------ #
# `pyrootutils.setup_root(...)` is recommended at the top of each start file
# to make the environment more robust and consistent
#
# the line above searches for ".git" or "pyproject.toml" in present and parent dirs
# to determine the project root dir
#
# adds root dir to the PYTHONPATH (if `pythonpath=True`)
# so this file can be run from any place without installing project as a package
#
# sets PROJECT_ROOT environment variable which is used in "configs/paths/default.yaml"
# this makes all paths relative to the project root
#
# additionally loads environment variables from ".env" file (if `dotenv=True`)
#
# you can get away without using `pyrootutils.setup_root(...)` if you:
# 1. move this file to the project root dir or install project as a package
# 2. modify paths in "configs/paths/default.yaml" to not use PROJECT_ROOT
# 3. always run this file from the project root dir
#
# https://github.com/ashleve/pyrootutils
# ------------------------------------------------------------------------------------ #
from typing import List, Optional, Tuple
import hydra
import pytorch_lightning as pl
from omegaconf import DictConfig
from pytorch_lightning import Callback, LightningDataModule, LightningModule, Trainer
from pytorch_lightning.loggers import LightningLoggerBase
from src import utils
log = utils.get_pylogger(__name__)
@utils.task_wrapper
def train(cfg: DictConfig) -> Tuple[dict, dict]:
"""Trains the model. Can additionally evaluate on a testset, using best weights obtained during
training.
This method is wrapped in optional @task_wrapper decorator which applies extra utilities
before and after the call.
Args:
cfg (DictConfig): Configuration composed by Hydra.
Returns:
Tuple[dict, dict]: Dict with metrics and dict with all instantiated objects.
"""
# set seed for random number generators in pytorch, numpy and python.random
if cfg.get("seed"):
pl.seed_everything(cfg.seed, workers=True)
log.info(f"Instantiating datamodule <{cfg.datamodule._target_}>")
datamodule: LightningDataModule = hydra.utils.instantiate(cfg.datamodule)
log.info(f"Instantiating model <{cfg.model._target_}>")
model: LightningModule = hydra.utils.instantiate(cfg.model)
log.info("Instantiating callbacks...")
callbacks: List[Callback] = utils.instantiate_callbacks(cfg.get("callbacks"))
log.info("Instantiating loggers...")
logger: List[LightningLoggerBase] = utils.instantiate_loggers(cfg.get("logger"))
log.info(f"Instantiating trainer <{cfg.trainer._target_}>")
trainer: Trainer = hydra.utils.instantiate(cfg.trainer, callbacks=callbacks, logger=logger)
object_dict = {
"cfg": cfg,
"datamodule": datamodule,
"model": model,
"callbacks": callbacks,
"logger": logger,
"trainer": trainer,
}
if logger:
log.info("Logging hyperparameters!")
utils.log_hyperparameters(object_dict)
# train
if cfg.get("train"):
log.info("Starting training!")
trainer.fit(model=model, datamodule=datamodule, ckpt_path=cfg.get("ckpt_path"))
train_metrics = trainer.callback_metrics
if cfg.get("test"):
log.info("Starting testing!")
ckpt_path = trainer.checkpoint_callback.best_model_path
if ckpt_path == "":
log.warning("Best ckpt not found! Using current weights for testing...")
ckpt_path = None
trainer.test(model=model, datamodule=datamodule, ckpt_path=ckpt_path)
log.info(f"Best ckpt path: {ckpt_path}")
test_metrics = trainer.callback_metrics
# merge train and test metrics
metric_dict = {**train_metrics, **test_metrics}
return metric_dict, object_dict
@hydra.main(version_base="1.2", config_path=root / "configs", config_name="train.yaml")
def main(cfg: DictConfig) -> Optional[float]:
# train the model
metric_dict, _ = train(cfg)
# safely retrieve metric value for hydra-based hyperparameter optimization
metric_value = utils.get_metric_value(
metric_dict=metric_dict, metric_name=cfg.get("optimized_metric")
)
# return optimized metric
return metric_value
if __name__ == "__main__":
main()
| 0 |
apollo_public_repos/apollo-model-yolo3d | apollo_public_repos/apollo-model-yolo3d/src/eval.py | import pyrootutils
root = pyrootutils.setup_root(
search_from=__file__,
indicator=[".git", "pyproject.toml"],
pythonpath=True,
dotenv=True,
)
# ------------------------------------------------------------------------------------ #
# `pyrootutils.setup_root(...)` is recommended at the top of each start file
# to make the environment more robust and consistent
#
# the line above searches for ".git" or "pyproject.toml" in present and parent dirs
# to determine the project root dir
#
# adds root dir to the PYTHONPATH (if `pythonpath=True`)
# so this file can be run from any place without installing project as a package
#
# sets PROJECT_ROOT environment variable which is used in "configs/paths/default.yaml"
# this makes all paths relative to the project root
#
# additionally loads environment variables from ".env" file (if `dotenv=True`)
#
# you can get away without using `pyrootutils.setup_root(...)` if you:
# 1. move this file to the project root dir or install project as a package
# 2. modify paths in "configs/paths/default.yaml" to not use PROJECT_ROOT
# 3. always run this file from the project root dir
#
# https://github.com/ashleve/pyrootutils
# ------------------------------------------------------------------------------------ #
from typing import List, Tuple
import hydra
from omegaconf import DictConfig
from pytorch_lightning import LightningDataModule, LightningModule, Trainer
from pytorch_lightning.loggers import LightningLoggerBase
from src import utils
log = utils.get_pylogger(__name__)
@utils.task_wrapper
def evaluate(cfg: DictConfig) -> Tuple[dict, dict]:
"""Evaluates given checkpoint on a datamodule testset.
This method is wrapped in optional @task_wrapper decorator which applies extra utilities
before and after the call.
Args:
cfg (DictConfig): Configuration composed by Hydra.
Returns:
Tuple[dict, dict]: Dict with metrics and dict with all instantiated objects.
"""
assert cfg.ckpt_path
log.info(f"Instantiating datamodule <{cfg.datamodule._target_}>")
datamodule: LightningDataModule = hydra.utils.instantiate(cfg.datamodule)
log.info(f"Instantiating model <{cfg.model._target_}>")
model: LightningModule = hydra.utils.instantiate(cfg.model)
log.info("Instantiating loggers...")
logger: List[LightningLoggerBase] = utils.instantiate_loggers(cfg.get("logger"))
log.info(f"Instantiating trainer <{cfg.trainer._target_}>")
trainer: Trainer = hydra.utils.instantiate(cfg.trainer, logger=logger)
object_dict = {
"cfg": cfg,
"datamodule": datamodule,
"model": model,
"logger": logger,
"trainer": trainer,
}
if logger:
log.info("Logging hyperparameters!")
utils.log_hyperparameters(object_dict)
log.info("Starting testing!")
trainer.test(model=model, datamodule=datamodule, ckpt_path=cfg.ckpt_path)
# for predictions use trainer.predict(...)
# predictions = trainer.predict(model=model, dataloaders=dataloaders, ckpt_path=cfg.ckpt_path)
metric_dict = trainer.callback_metrics
return metric_dict, object_dict
@hydra.main(version_base="1.2", config_path=root / "configs", config_name="eval.yaml")
def main(cfg: DictConfig) -> None:
evaluate(cfg)
if __name__ == "__main__":
main()
| 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/datamodules/kitti_datamodule.py | """
Dataset lightning class
"""
from pytorch_lightning import LightningDataModule
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
from src.datamodules.components.kitti_dataset import KITTIDataset, KITTIDataset2, KITTIDataset3
class KITTIDataModule(LightningDataModule):
def __init__(
self,
dataset_path: str = './data/KITTI',
train_sets: str = './data/KITTI/train.txt',
val_sets: str = './data/KITTI/val.txt',
test_sets: str = './data/KITTI/test.txt',
batch_size: int = 32,
num_worker: int = 4,
):
super().__init__()
# save hyperparameters
self.save_hyperparameters(logger=False)
# transforms
# TODO: using albumentations
self.dataset_transforms = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
def setup(self, stage=None):
""" Split dataset to training and validation """
self.KITTI_train = KITTIDataset(self.hparams.dataset_path, self.hparams.train_sets)
self.KITTI_val = KITTIDataset(self.hparams.dataset_path, self.hparams.val_sets)
# self.KITTI_test = KITTIDataset(self.hparams.dataset_path, self.hparams.test_sets)
        # TODO: add test dataset and test sets
def train_dataloader(self):
return DataLoader(
dataset=self.KITTI_train,
batch_size=self.hparams.batch_size,
num_workers=self.hparams.num_worker,
shuffle=True
)
def val_dataloader(self):
return DataLoader(
dataset=self.KITTI_val,
batch_size=self.hparams.batch_size,
num_workers=self.hparams.num_worker,
shuffle=False
)
# def test_dataloader(self):
# return DataLoader(
# dataset=self.KITTI_test,
# batch_size=self.hparams.batch_size,
# num_workers=self.hparams.num_worker,
# shuffle=False
# )
class KITTIDataModule2(LightningDataModule):
def __init__(
self,
dataset_path: str = './data/KITTI',
train_sets: str = './data/KITTI/train.txt',
val_sets: str = './data/KITTI/val.txt',
test_sets: str = './data/KITTI/test.txt',
batch_size: int = 32,
num_worker: int = 4,
):
super().__init__()
# save hyperparameters
self.save_hyperparameters(logger=False)
def setup(self, stage=None):
""" Split dataset to training and validation """
self.KITTI_train = KITTIDataset2(self.hparams.dataset_path, self.hparams.train_sets)
self.KITTI_val = KITTIDataset2(self.hparams.dataset_path, self.hparams.val_sets)
# self.KITTI_test = KITTIDataset(self.hparams.dataset_path, self.hparams.test_sets)
        # TODO: add test dataset and test sets
def train_dataloader(self):
return DataLoader(
dataset=self.KITTI_train,
batch_size=self.hparams.batch_size,
num_workers=self.hparams.num_worker,
shuffle=True
)
def val_dataloader(self):
return DataLoader(
dataset=self.KITTI_val,
batch_size=self.hparams.batch_size,
num_workers=self.hparams.num_worker,
shuffle=False
)
class KITTIDataModule3(LightningDataModule):
def __init__(
self,
dataset_path: str = './data/KITTI',
train_sets: str = './data/KITTI/train.txt',
val_sets: str = './data/KITTI/val.txt',
test_sets: str = './data/KITTI/test.txt',
batch_size: int = 32,
num_worker: int = 4,
):
super().__init__()
# save hyperparameters
self.save_hyperparameters(logger=False)
# transforms
# TODO: using albumentations
self.dataset_transforms = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
def setup(self, stage=None):
""" Split dataset to training and validation """
self.KITTI_train = KITTIDataset3(self.hparams.dataset_path, self.hparams.train_sets)
self.KITTI_val = KITTIDataset3(self.hparams.dataset_path, self.hparams.val_sets)
# self.KITTI_test = KITTIDataset(self.hparams.dataset_path, self.hparams.test_sets)
        # TODO: add test dataset and test sets
def train_dataloader(self):
return DataLoader(
dataset=self.KITTI_train,
batch_size=self.hparams.batch_size,
num_workers=self.hparams.num_worker,
shuffle=True
)
def val_dataloader(self):
return DataLoader(
dataset=self.KITTI_val,
batch_size=self.hparams.batch_size,
num_workers=self.hparams.num_worker,
shuffle=False
)
if __name__ == '__main__':
from time import time
start1 = time()
datamodule1 = KITTIDataModule(
dataset_path='./data/KITTI',
train_sets='./data/KITTI/train_95.txt',
val_sets='./data/KITTI/val_95.txt',
test_sets='./data/KITTI/test_95.txt',
batch_size=5,
)
datamodule1.setup()
trainloader = datamodule1.val_dataloader()
for img, label in trainloader:
print(label["Orientation"])
break
results1 = (time() - start1) * 1000
start2 = time()
datamodule2 = KITTIDataModule3(
dataset_path='./data/KITTI',
train_sets='./data/KITTI/train_95.txt',
val_sets='./data/KITTI/val_95.txt',
test_sets='./data/KITTI/test_95.txt',
batch_size=5,
)
datamodule2.setup()
trainloader = datamodule2.val_dataloader()
for img, label in trainloader:
print(label["orientation"])
break
results2 = (time() - start2) * 1000
print(f'Time taken for datamodule1: {results1} ms')
print(f'Time taken for datamodule2: {results2} ms')
| 0 |
apollo_public_repos/apollo-model-yolo3d/src/datamodules | apollo_public_repos/apollo-model-yolo3d/src/datamodules/components/kitti_dataset.py | """
Dataset modules to load the KITTI dataset and convert it to the YOLO3D format
"""
import csv
import os
from pathlib import Path
from typing import List
import cv2
import numpy as np
from src.utils import Calib as calib
from src.utils.averages import ClassAverages, DimensionAverages
from torch.utils.data import Dataset
from torchvision.transforms import transforms
class KITTIDataset3(Dataset):
"""KITTI dataset loader"""
def __init__(
self,
dataset_dir: str = "./data/KITTI",
dataset_sets: str = "./data/KITTI/train.txt", # or val.txt
bins: int = 2,
overlap: float = 0.1,
image_size: int = 224,
categories: List[str] = ["car", "pedestrian", "cyclist"],
):
super().__init__()
self.dataset_dir = Path(dataset_dir)
with open(dataset_sets, "r") as f:
self.ids = [id.split("\n")[0] for id in f.readlines()]
self.bins = bins
self.overlap = overlap
self.image_size = image_size
self.categories = categories
self.images_dir = self.dataset_dir / "images" # image_2
self.labels_dir = self.dataset_dir / "label_2"
self.P2 = calib.get_P(self.dataset_dir / "calib_kitti.txt") # calibration matrix P2
# get images and labels paths
self.images_path = [self.images_dir / (id + ".png") for id in self.ids]
self.labels_path = [self.labels_dir / (id + ".txt") for id in self.ids]
# get dimension average for every object in categories
self.dimensions_averages = DimensionAverages(self.categories)
self.dimensions_averages.add_items(self.labels_path)
# KITTI fieldnames
self.fieldnames = [
"type", "truncated", "occluded", "alpha",
"xmin", "ymin", "xmax", "ymax", "dh", "dw", "dl",
"lx", "ly", "lz", "ry"
]
# get images data
self.images_data = self.preprocess_labels(self.labels_path)
def __len__(self):
return len(self.images_data)
def __getitem__(self, idx):
"""Data loader looper"""
image_data = self.images_data[idx]
image = self.preprocess_image(image_data)
orientation = image_data["orientation"]
confidence = image_data["confidence"]
dimensions = image_data["dimensions"]
return image, {"orientation": orientation, "confidence": confidence, "dimensions": dimensions}
def preprocess_labels(self, labels_path: str):
"""
        Preprocess labels into the YOLO3D training format.
        The function takes a list of label file paths and returns a list of
        dictionaries, one per object, containing the crop box, orientation
        bins, confidence targets and dimension offsets.
        Args:
            labels_path: list of paths to the KITTI label files.
        Returns:
            A list of dictionaries, each describing one object in one image.
"""
IMAGES_DATA = []
        # generate angle bins; for 2 bins the centers are [pi/2, 3pi/2]
center_bins = self.generate_bins(self.bins)
# initialize orientation and confidence
for path in labels_path:
with open(path, "r") as f:
reader = csv.DictReader(f, delimiter=" ", fieldnames=self.fieldnames)
for line, row in enumerate(reader):
if row["type"].lower() in self.categories:
orientation = np.zeros((self.bins, 2))
confidence = np.zeros(self.bins)
# convert from [-pi, pi] to [0, 2pi]
angle = float(row["alpha"]) + np.pi # or new_alpha
bin_idxs = self.get_bin_idxs(angle)
# update orientation and confidence
for idx in bin_idxs:
angle_diff = angle - center_bins[idx]
orientation[idx, :] = np.array([np.cos(angle_diff), np.sin(angle_diff)])
confidence[idx] = 1
# averaging dimensions
dimensions = np.array([float(row["dh"]), float(row["dw"]), float(row["dl"])])
dimensions -= self.dimensions_averages.get_item(row["type"])
image_data = {
"name": row["type"],
"image_path": self.images_dir / (path.name.split(".")[0] + ".png"),
"xmin": int(float(row["xmin"])),
"ymin": int(float(row["ymin"])),
"xmax": int(float(row["xmax"])),
"ymax": int(float(row["ymax"])),
"alpha": float(row["alpha"]),
"orientation": orientation,
"confidence": confidence,
"dimensions": dimensions
}
IMAGES_DATA.append(image_data)
return IMAGES_DATA
def preprocess_image(self, image_data: dict):
"""
        Crops the image to the object's 2D bounding box, resizes the crop to
        the model input size and normalizes it.
        Args:
            image_data (dict): dictionary with "image_path" and the box keys
                "xmin", "ymin", "xmax", "ymax".
        Returns:
            The preprocessed crop as a tensor.
"""
image = cv2.imread(str(image_data["image_path"]))
# image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
crop = image[image_data["ymin"]:image_data["ymax"]+1, image_data["xmin"]:image_data["xmax"]+1]
crop = cv2.resize(crop, (self.image_size, self.image_size), interpolation=cv2.INTER_CUBIC)
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.406, 0.456, 0.485], std=[0.225, 0.224, 0.229])
])
return transform(crop)
def generate_bins(self, bins):
"""
        Takes the number of orientation bins and returns an array containing
        the center angle of each bin.
        Args:
            bins: number of orientation bins
        Returns:
            the angle_bins array of bin centers.
"""
angle_bins = np.zeros(bins)
interval = 2 * np.pi / bins
for i in range(1, bins):
angle_bins[i] = i * interval
angle_bins += interval / 2 # center of bins
return angle_bins
def get_bin_idxs(self, angle):
"""
It takes an angle and returns the indices of the bins that the angle falls into
Args:
            angle: the object orientation angle in [0, 2*pi]
        Returns:
            A list of indices of the bins the angle falls into.
"""
interval = 2 * np.pi / self.bins
# range of bins from [0, 2pi]
bin_ranges = []
for i in range(0, self.bins):
bin_ranges.append((
(i * (interval - self.overlap)) % (2 * np.pi),
((i * interval) + interval + self.overlap) % (2 * np.pi)
))
def is_between(min, max, angle):
max = (max - min) if (max - min) > 0 else (max - min) + 2 * np.pi
angle = (angle - min) if (angle - min) > 0 else (angle - min) + 2 * np.pi
return angle < max
bin_idxs = []
for bin_idx, bin_range in enumerate(bin_ranges):
if is_between(bin_range[0], bin_range[1], angle):
bin_idxs.append(bin_idx)
return bin_idxs
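    # Illustrative example with the defaults (bins=2, overlap=0.1):
    # generate_bins() gives centers [pi/2, 3*pi/2]; an object with alpha = 0.3 rad
    # maps to angle = 0.3 + pi ~ 3.44 rad, which falls only in the second bin, so
    # confidence = [0, 1] and orientation[1] = [cos(angle - 3*pi/2), sin(angle - 3*pi/2)].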
def get_average_dimension(self, labels_path: str):
"""
        For each line in each label file, if the object type is in the
        categories list, add its dimensions to the dimensions_averages object.
        Args:
            labels_path: list of paths to the KITTI label files.
"""
for path in labels_path:
with open(path, "r") as f:
reader = csv.DictReader(f, delimiter=" ", fieldnames=self.fieldnames)
for line, row in enumerate(reader):
if row["type"] in self.categories:
self.dimensions_averages.add_item(
row["type"],
row["dh"],
row["dw"],
row["dl"]
)
class KITTIDataset(Dataset):
def __init__(
self,
dataset_path: str = "./data/KITTI",
dataset_sets: str = "./data/KITTI/train.txt", # [train.txt, val.txt]
bins: int = 2,
overlap: float = 0.1,
):
super().__init__()
# dataset path
dataset_path = Path(dataset_path)
self.image_path = dataset_path / "image_2" # image_2
self.label_path = dataset_path / "label_2"
self.calib_path = dataset_path / "calib"
self.global_calib = dataset_path / "calib_kitti.txt"
self.dataset_sets = Path(dataset_sets)
# set projection matrix
self.proj_matrix = calib.get_P(self.global_calib)
# index from images_path
self.sets = open(self.dataset_sets, "r")
self.ids = [id.split("\n")[0] for id in self.sets.readlines()]
# self.ids = [x.split(".")[0] for x in sorted(os.listdir(self.image_path))]
self.num_images = len(self.ids)
# set ANGLE BINS
self.bins = bins
self.angle_bins = self.generate_bins(self.bins)
self.interval = 2 * np.pi / self.bins
self.overlap = overlap
# ranges for confidence
# [(min angle in bin, max angle in bin), ... ]
self.bin_ranges = []
for i in range(0, bins):
self.bin_ranges.append(
(
(i * self.interval - overlap) % (2 * np.pi),
(i * self.interval + self.interval + overlap) % (2 * np.pi),
)
)
        # AVERAGE dimensions per class in the dataset
# class_list same as in detector
self.class_list = ["Car", "Cyclist", "Truck", "Van", "Pedestrian", "Tram"] # KITTI
self.averages = ClassAverages(self.class_list)
# list of object [id (000001), line_num]
self.object_list = self.get_objects(self.ids)
# label: contain image label params {bbox, dimension, etc}
self.labels = {}
last_id = ""
for obj in self.object_list:
id = obj[0]
line_num = obj[1]
label = self.get_label(id, line_num)
if id != last_id:
self.labels[id] = {}
last_id = id
self.labels[id][str(line_num)] = label
# current id and image
self.curr_id = ""
self.curr_img = None
def __getitem__(self, index):
id = self.object_list[index][0]
line_num = self.object_list[index][1]
if id != self.curr_id:
self.curr_id = id
# read image (.png)
self.curr_img = cv2.imread(str(self.image_path / f"{id}.png"))
label = self.labels[id][str(line_num)]
obj = DetectedObject(
self.curr_img, label["Class"], label["Box_2D"], self.proj_matrix, label=label
)
return obj.img, label
def __len__(self):
return len(self.object_list)
# def generate_sets(self, sets_file):
# with open(self.dataset_sets) as file:
# for line_num, line in enumerate(file):
# ids = line
def generate_bins(self, bins):
angle_bins = np.zeros(bins)
interval = 2 * np.pi / bins
for i in range(1, bins):
angle_bins[i] = i * interval
angle_bins += interval / 2 # center of bins
return angle_bins
def get_objects(self, ids):
"""Get objects parameter from labels, like dimension and class name."""
objects = []
for id in ids:
with open(self.label_path / f"{id}.txt") as file:
for line_num, line in enumerate(file):
line = line[:-1].split(" ")
obj_class = line[0]
if obj_class not in self.class_list:
continue
dimension = np.array(
[float(line[8]), float(line[9]), float(line[10])], dtype=np.double
)
self.averages.add_item(obj_class, dimension)
objects.append((id, line_num))
self.averages.dump_to_file()
return objects
def get_label(self, id, line_num):
lines = open(self.label_path / f"{id}.txt").read().splitlines()
label = self.format_label(lines[line_num])
return label
def get_bin(self, angle):
bin_idxs = []
        def is_between(min_angle, max_angle, angle):
            max_angle = (max_angle - min_angle) if (max_angle - min_angle) > 0 else (max_angle - min_angle) + 2 * np.pi
            angle = (angle - min_angle) if (angle - min_angle) > 0 else (angle - min_angle) + 2 * np.pi
            return angle < max_angle
for bin_idx, bin_range in enumerate(self.bin_ranges):
if is_between(bin_range[0], bin_range[1], angle):
bin_idxs.append(bin_idx)
return bin_idxs
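    # Illustrative note (not part of the original implementation): with bins=2
    # and overlap=0.1 the ranges are roughly bin 0 = (2*pi - 0.1, pi + 0.1) and
    # bin 1 = (pi - 0.1, 0.1), each wrapping around 2*pi, so an angle within
    # ~0.1 rad of a bin boundary (0 or pi) is assigned to both bins.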
def format_label(self, line):
line = line[:-1].split(" ")
Class = line[0]
for i in range(1, len(line)):
line[i] = float(line[i])
        # Alpha is the orientation that the network will regress
# Alpha = [-pi, pi]
Alpha = line[3]
Ry = line[14]
top_left = (int(round(line[4])), int(round(line[5])))
bottom_right = (int(round(line[6])), int(round(line[7])))
Box_2D = [top_left, bottom_right]
# Dimension: height, width, length
Dimension = np.array([line[8], line[9], line[10]], dtype=np.double)
        # subtract the per-class average dimension
        Dimension -= self.averages.get_item(Class)
        # Location: x, y, z
Location = [line[11], line[12], line[13]]
# bring the KITTI center up to the middle of the object
Location[1] -= Dimension[0] / 2
Orientation = np.zeros((self.bins, 2))
Confidence = np.zeros(self.bins)
# angle on range [0, 2pi]
angle = Alpha + np.pi
bin_idxs = self.get_bin(angle)
for bin_idx in bin_idxs:
angle_diff = angle - self.angle_bins[bin_idx]
Orientation[bin_idx, :] = np.array([np.cos(angle_diff), np.sin(angle_diff)])
Confidence[bin_idx] = 1
label = {
"Class": Class,
"Box_2D": Box_2D,
"Dimensions": Dimension,
"Alpha": Alpha,
"Orientation": Orientation,
"Confidence": Confidence,
}
return label
    def get_averages(self):
        """Per-class average dimensions and counts accumulated in ``self.averages``."""
        dims_avg = {key: self.averages.get_item(key) for key in self.class_list}
        dims_count = {key: self.averages.dimension_map[key.lower()]["count"] for key in self.class_list}
        return dims_avg, dims_count
class DetectedObject:
"""Processing image for NN input."""
def __init__(self, img, detection_class, box_2d, proj_matrix, label=None):
# check if proj_matrix is path
if isinstance(proj_matrix, str):
proj_matrix = calib.get_P(proj_matrix)
self.proj_matrix = proj_matrix
self.theta_ray = self.calc_theta_ray(img, box_2d, proj_matrix)
self.img = self.format_img(img, box_2d)
self.label = label
self.detection_class = detection_class
def calc_theta_ray(self, img, box_2d, proj_matrix):
"""Calculate global angle of object, see paper."""
width = img.shape[1]
# Angle of View: fovx (rad) => 3.14
fovx = 2 * np.arctan(width / (2 * proj_matrix[0][0]))
center = (box_2d[1][0] + box_2d[0][0]) / 2
dx = center - (width / 2)
mult = 1
if dx < 0:
mult = -1
dx = abs(dx)
angle = np.arctan((2 * dx * np.tan(fovx / 2)) / width)
angle = angle * mult
return angle
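    # Quick sanity check (illustrative, not from the original source): a 2D box
    # centred horizontally on the image (center == width / 2) gives dx = 0 and
    # theta_ray = 0; boxes right of the image centre give a positive angle,
    # boxes to the left a negative one.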
def format_img(self, img, box_2d):
# transforms
normalize = transforms.Normalize(mean=[0.406, 0.456, 0.485], std=[0.225, 0.224, 0.229])
process = transforms.Compose([transforms.ToTensor(), normalize])
# crop image
pt1, pt2 = box_2d[0], box_2d[1]
point_list1 = [pt1[0], pt1[1]]
point_list2 = [pt2[0], pt2[1]]
if point_list1[0] < 0:
point_list1[0] = 0
if point_list1[1] < 0:
point_list1[1] = 0
if point_list2[0] < 0:
point_list2[0] = 0
if point_list2[1] < 0:
point_list2[1] = 0
if point_list1[0] >= img.shape[1]:
point_list1[0] = img.shape[1] - 1
if point_list2[0] >= img.shape[1]:
point_list2[0] = img.shape[1] - 1
if point_list1[1] >= img.shape[0]:
point_list1[1] = img.shape[0] - 1
if point_list2[1] >= img.shape[0]:
point_list2[1] = img.shape[0] - 1
crop = img[point_list1[1]:point_list2[1]+1, point_list1[0]:point_list2[0]+1]
try:
crop = cv2.resize(crop, (224, 224), interpolation=cv2.INTER_CUBIC)
except cv2.error:
print("pt1 is ", pt1, " pt2 is ", pt2)
print("image shape is ", img.shape)
print("box_2d is ", box_2d)
# apply transform for batch
batch = process(crop)
return batch
def generate_bins(bins):
angle_bins = np.zeros(bins)
interval = 2 * np.pi / bins
for i in range(1, bins):
angle_bins[i] = i * interval
angle_bins += interval / 2 # center of bins
return angle_bins
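# Illustrative example (not part of the original file): generate_bins(2) returns
# array([pi/2, 3*pi/2]), i.e. the two bin centres spaced by an interval of pi.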
"""
KITTI DataLoader
Source: https://github.com/lzccccc/3d-bounding-box-estimation-for-autonomous-driving
"""
class KITTIDataset2(Dataset):
"""KITTI Data Loader"""
def __init__(
self,
        dataset_dir: str = "./data/KITTI",
dataset_sets_path: str = "./data/KITTI/train.txt",
bin: int = 2,
overlap: float = 0.5,
image_size: int = 224,
categories: list = ["Car", "Pedestrian", "Cyclist", "Van", "Truck"],
) -> None:
"""Initialize dataset"""
super().__init__()
# arguments
self.dataset_dir = Path(dataset_dir)
self.images_dir = self.dataset_dir / "images" # image_2
self.labels_dir = self.dataset_dir / "label_2" # label_2
self.P2 = calib.get_P(self.dataset_dir / "calib_kitti.txt") # calibration matrix
with open(dataset_sets_path, "r") as f:
self.dataset_sets = [id.split("\n")[0] for id in f.readlines()] # sets training/validation/test sets
self.bin = bin # binning factor
self.overlap = overlap # overlap factor
self.image_size = image_size # image size
self.categories = categories # object categories
# get image and label paths
self.images_path = self.get_paths(self.images_dir)
self.labels_path = self.get_paths(self.labels_dir)
# get label annotations data
self.images_data = self.get_label_annos(self.labels_path)
# get dimension average
self.dims_avg, self.dims_cnt = self.get_average_dimension(self.images_data)
# get orientation, confidence, and augmented annotations
self.images_data = self.orientation_confidence_flip(self.images_data, self.dims_avg)
def __len__(self):
return len(self.images_data)
def __getitem__(self, idx):
"""Get item"""
# preprocessing and augmenting data
image, label = self.get_augmentation(self.images_data[idx])
return image, label
def get_paths(self, dir: Path) -> List[Path]:
"""Get image and label paths"""
return [
path
for path in dir.iterdir()
if path.name.split(".")[0] in self.dataset_sets
]
def get_label_annos(self, labels_path) -> list:
"""Get label annotations"""
IMAGES_DATA = []
fieldnames = [
"type", "truncated", "occluded", "alpha",
"xmin", "ymin", "xmax", "ymax", "dh", "dw", "dl",
"lx", "ly", "lz", "ry"
]
for path in labels_path:
with open(path, "r") as f:
reader = csv.DictReader(f, delimiter=" ", fieldnames=fieldnames)
for line, row in enumerate(reader):
if row["type"] in self.categories:
new_alpha = self.get_new_alpha(row["alpha"])
dimensions = np.array([float(row["dh"]), float(row["dw"]), float(row["dl"])])
image_data = {
"name": row["type"],
"image": self.images_dir / (path.name.split(".")[0] + ".png"),
"xmin": int(float(row["xmin"])),
"ymin": int(float(row["ymin"])),
"xmax": int(float(row["xmax"])),
"ymax": int(float(row["ymax"])),
"dims": dimensions,
"new_alpha": new_alpha,
}
IMAGES_DATA.append(image_data)
return IMAGES_DATA
def get_new_alpha(self, alpha: float):
"""
Change the range of orientation from [-pi, pi] to [0, 2pi]
:param alpha: original orientation in KITTI
:return: new alpha
"""
new_alpha = float(alpha) + np.pi / 2.0
if new_alpha < 0:
new_alpha = new_alpha + 2.0 * np.pi
# make sure angle lies in [0, 2pi]
new_alpha = new_alpha - int(new_alpha / (2.0 * np.pi)) * (2.0 * np.pi)
return new_alpha
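    # Illustrative example (not in the original code): alpha = -pi/2 maps to 0,
    # alpha = 0 maps to pi/2, and both pi and -pi map to 3*pi/2, so the result
    # always lies in [0, 2*pi).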
def get_average_dimension(self, images_data: list = None) -> tuple:
"""Get average dimension for every object in categories"""
dims_avg = {key: np.array([0.0, 0.0, 0.0]) for key in self.categories}
dims_cnt = {key: 0 for key in self.categories}
for i in range(len(images_data)):
current = images_data[i]
if current["name"] in self.categories:
dims_avg[current["name"]] = (
dims_cnt[current["name"]] * dims_avg[current["name"]]
+ current["dims"]
)
dims_cnt[current["name"]] += 1
dims_avg[current["name"]] /= dims_cnt[current["name"]]
        return dims_avg, dims_cnt
def compute_anchors(self, angle):
"""
compute angle offset and which bin the angle lies in
input: fixed local orientation [0, 2pi]
output: [bin number, angle offset]
For two bins:
if angle < pi, l = 0, r = 1
if angle < 1.65, return [0, angle]
elif pi - angle < 1.65, return [1, angle - pi]
if angle > pi, l = 1, r = 2
if angle - pi < 1.65, return [1, angle - pi]
elif 2pi - angle < 1.65, return [0, angle - 2pi]
"""
anchors = []
wedge = 2.0 * np.pi / self.bin # 2pi / bin = pi
l_index = int(angle / wedge) # angle/pi
r_index = l_index + 1
        # anchor in the left bin if its start is within the margin behind the angle
if (angle - l_index * wedge) < wedge / 2 * (1 + self.overlap / 2):
anchors.append([l_index, angle - l_index * wedge])
        # anchor in the right bin if its start is within the margin ahead of the angle
if (r_index * wedge - angle) < wedge / 2 * (1 + self.overlap / 2):
anchors.append([r_index % self.bin, angle - r_index * wedge])
return anchors
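    # Worked example (illustrative, assuming the defaults bin=2 and overlap=0.5,
    # so wedge = pi and the margin is pi/2 * 1.25 ~= 1.96 rad):
    #   compute_anchors(0.3) -> [[0, 0.3]]                (bin 0 only)
    #   compute_anchors(1.8) -> [[0, 1.8], [1, 1.8 - pi]] (near the boundary, both bins)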
def orientation_confidence_flip(self, images_data, dims_avg):
"""Generate orientation, confidence and augment with flip"""
for data in images_data:
# minus the average dimensions
data["dims"] = data["dims"] - dims_avg[data["name"]]
# fix orientation and confidence for no flip
orientation = np.zeros((self.bin, 2))
confidence = np.zeros(self.bin)
anchors = self.compute_anchors(data["new_alpha"])
for anchor in anchors:
# each angle is represented in sin and cos
orientation[anchor[0]] = np.array(
[np.cos(anchor[1]), np.sin(anchor[1])]
)
confidence[anchor[0]] = 1
confidence = confidence / np.sum(confidence)
data["orient"] = orientation
data["conf"] = confidence
# Fix orientation and confidence for random flip
orientation = np.zeros((self.bin, 2))
confidence = np.zeros(self.bin)
anchors = self.compute_anchors(
2.0 * np.pi - data["new_alpha"]
) # compute orientation and bin
# for flipped images
for anchor in anchors:
orientation[anchor[0]] = np.array(
[np.cos(anchor[1]), np.sin(anchor[1])]
)
confidence[anchor[0]] = 1
confidence = confidence / np.sum(confidence)
data["orient_flipped"] = orientation
data["conf_flipped"] = confidence
return images_data
def get_augmentation(self, data):
"""
Preprocess image and augmentation
input: image_data
output: image, bounding box
"""
normalizer = transforms.Normalize(mean=[0.406, 0.456, 0.485], std=[0.225, 0.224, 0.229])
preprocess = transforms.Compose([transforms.ToTensor(), normalizer])
xmin = data["xmin"]
ymin = data["ymin"]
xmax = data["xmax"]
ymax = data["ymax"]
# read and crop image
img = cv2.imread(str(data["image"]))
crop_img = img[ymin : ymax + 1, xmin : xmax + 1]
crop_img = cv2.resize(crop_img, (self.image_size, self.image_size))
# NOTE: Disable Augmentation
# augmented image with flip
# flip = np.random.random(1, 0.5)
# if flip > 0.5:
# crop_img = cv2.flip(crop_img, 1)
# transforms image
crop_img = preprocess(crop_img)
# if flip > 0.5:
# return (
# crop_img,
# {"orientation": data["orient_flipped"],
# "confidence": data["conf_flipped"],
# "dimensions": data["dims"],
# },
# )
# else:
return (
crop_img,
{"orientation": data["orient"],
"confidence": data["conf"],
"dimensions": data["dims"],
},
)
if __name__ == "__main__":
from torch.utils.data import DataLoader
from time import time
# start1 = time()
# dataset1 = KITTIDataset(
# dataset_path="./data/KITTI",
# dataset_sets="./data/KITTI/val_95.txt",
# )
# dataloader1 = DataLoader(
# dataset1, batch_size=5, shuffle=False, num_workers=0, pin_memory=True)
# for img, label in dataloader1:
# print(label["Dimensions"])
# break
# results1 = (time() - start1) * 1000
start2 = time()
dataset2 = KITTIDataset3(
dataset_dir="./data/KITTI",
dataset_sets="./data/KITTI/all.txt",
categories=["car", "pedestrian", "cyclist"],
)
dataloader2 = DataLoader(
dataset2, batch_size=5, shuffle=False, num_workers=0, pin_memory=True)
for img, label in dataloader2:
print(label["orientation"])
break
results2 = (time() - start2) * 1000
# print("KITTI Dataset: {} ms".format(results1))
print("KITTI Dataset3: {} ms".format(results2)) | 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/Plotting.py | import os
from matplotlib.path import Path
import cv2
from PIL import Image
import numpy as np
from enum import Enum
import itertools
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.gridspec import GridSpec
from src.utils.utils import detectionInfo, save_file
from src.utils.Calib import *
from src.utils.Math import *
# from .Calib import *
# from .Math import *
from src.utils import Calib
class cv_colors(Enum):
RED = (0, 0, 255)
GREEN = (0, 255, 0)
BLUE = (255, 0, 0)
PURPLE = (247, 44, 200)
ORANGE = (44, 162, 247)
MINT = (239, 255, 66)
YELLOW = (2, 255, 250)
def constraint_to_color(constraint_idx):
return {
0: cv_colors.PURPLE.value, # left
1: cv_colors.ORANGE.value, # top
2: cv_colors.MINT.value, # right
3: cv_colors.YELLOW.value, # bottom
}[constraint_idx]
# from the 2 corners, return the 4 corners of a box in CCW order
# coulda just used cv2.rectangle haha
def create_2d_box(box_2d):
corner1_2d = box_2d[0]
corner2_2d = box_2d[1]
pt1 = corner1_2d
pt2 = (corner1_2d[0], corner2_2d[1])
pt3 = corner2_2d
pt4 = (corner2_2d[0], corner1_2d[1])
return pt1, pt2, pt3, pt4
# takes in a 3d point and projects it into 2d
def project_3d_pt(pt, cam_to_img, calib_file=None):
if calib_file is not None:
cam_to_img = get_calibration_cam_to_image(calib_file)
R0_rect = get_R0(calib_file)
Tr_velo_to_cam = get_tr_to_velo(calib_file)
point = np.array(pt)
point = np.append(point, 1)
point = np.dot(cam_to_img, point)
# point = np.dot(np.dot(np.dot(cam_to_img, R0_rect), Tr_velo_to_cam), point)
point = point[:2] / point[2]
point = point.astype(np.int16)
return point
# take in 3d points and plot them on image as red circles
def plot_3d_pts(
img,
pts,
center,
calib_file=None,
cam_to_img=None,
relative=False,
constraint_idx=None,
):
if calib_file is not None:
cam_to_img = get_calibration_cam_to_image(calib_file)
for pt in pts:
if relative:
pt = [i + center[j] for j, i in enumerate(pt)] # more pythonic
point = project_3d_pt(pt, cam_to_img)
color = cv_colors.RED.value
if constraint_idx is not None:
color = constraint_to_color(constraint_idx)
cv2.circle(img, (point[0], point[1]), 3, color, thickness=-1)
def plot_3d_box(img, cam_to_img, ry, dimension, center):
# plot_3d_pts(img, [center], center, calib_file=calib_file, cam_to_img=cam_to_img)
R = rotation_matrix(ry)
corners = create_corners(dimension, location=center, R=R)
# to see the corners on image as red circles
# plot_3d_pts(img, corners, center,cam_to_img=cam_to_img, relative=False)
box_3d = []
for corner in corners:
point = project_3d_pt(corner, cam_to_img)
box_3d.append(point)
# LINE
cv2.line(
img,
(box_3d[0][0], box_3d[0][1]),
(box_3d[2][0], box_3d[2][1]),
cv_colors.GREEN.value,
2,
)
cv2.line(
img,
(box_3d[4][0], box_3d[4][1]),
(box_3d[6][0], box_3d[6][1]),
cv_colors.GREEN.value,
2,
)
cv2.line(
img,
(box_3d[0][0], box_3d[0][1]),
(box_3d[4][0], box_3d[4][1]),
cv_colors.GREEN.value,
2,
)
cv2.line(
img,
(box_3d[2][0], box_3d[2][1]),
(box_3d[6][0], box_3d[6][1]),
cv_colors.GREEN.value,
2,
)
cv2.line(
img,
(box_3d[1][0], box_3d[1][1]),
(box_3d[3][0], box_3d[3][1]),
cv_colors.GREEN.value,
2,
)
cv2.line(
img,
(box_3d[1][0], box_3d[1][1]),
(box_3d[5][0], box_3d[5][1]),
cv_colors.GREEN.value,
2,
)
cv2.line(
img,
(box_3d[7][0], box_3d[7][1]),
(box_3d[3][0], box_3d[3][1]),
cv_colors.GREEN.value,
2,
)
cv2.line(
img,
(box_3d[7][0], box_3d[7][1]),
(box_3d[5][0], box_3d[5][1]),
cv_colors.GREEN.value,
2,
)
for i in range(0, 7, 2):
cv2.line(
img,
(box_3d[i][0], box_3d[i][1]),
(box_3d[i + 1][0], box_3d[i + 1][1]),
cv_colors.GREEN.value,
2,
)
# frame to drawing polygon
frame = np.zeros_like(img, np.uint8)
# front side
cv2.fillPoly(
frame,
np.array(
[[[box_3d[0]], [box_3d[1]], [box_3d[3]], [box_3d[2]]]], dtype=np.int32
),
cv_colors.BLUE.value,
)
alpha = 0.5
mask = frame.astype(bool)
img[mask] = cv2.addWeighted(img, alpha, frame, 1 - alpha, 0)[mask]
def plot_2d_box(img, box_2d):
# create a square from the corners
pt1, pt2, pt3, pt4 = create_2d_box(box_2d)
# plot the 2d box
cv2.line(img, pt1, pt2, cv_colors.BLUE.value, 2)
cv2.line(img, pt2, pt3, cv_colors.BLUE.value, 2)
cv2.line(img, pt3, pt4, cv_colors.BLUE.value, 2)
cv2.line(img, pt4, pt1, cv_colors.BLUE.value, 2)
def calc_theta_ray(img_width, box_2d, proj_matrix):
"""Calculate global angle of object, see paper."""
# check if proj_matrix is path
if isinstance(proj_matrix, str):
proj_matrix = Calib.get_P(proj_matrix)
# Angle of View: fovx (rad) => 3.14
fovx = 2 * np.arctan(img_width / (2 * proj_matrix[0][0]))
# center_x = (box_2d[1][0] + box_2d[0][0]) / 2
center_x = ((box_2d[2] - box_2d[0]) / 2) + box_2d[0]
dx = center_x - (img_width / 2)
mult = 1
if dx < 0:
mult = -1
dx = abs(dx)
angle = np.arctan((2 * dx * np.tan(fovx / 2)) / img_width)
angle = angle * mult
return angle
def calc_alpha(orient, conf, bins=2):
angle_bins = generate_bins(bins=bins)
argmax = np.argmax(conf)
orient = orient[argmax, :]
cos = orient[0]
sin = orient[1]
alpha = np.arctan2(sin, cos)
alpha += angle_bins[argmax]
alpha -= np.pi
return alpha
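# Illustrative example (not part of the original file): with bins=2 the bin
# centres are [pi/2, 3*pi/2]; for conf = [0.9, 0.1] and orient[0] = [cos(d), sin(d)]
# the function returns d + pi/2 - pi = d - pi/2, i.e. the regressed offset
# re-centred on bin 0 and shifted back towards the [-pi, pi] alpha convention
# used by KITTI.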
def generate_bins(bins):
angle_bins = np.zeros(bins)
interval = 2 * np.pi / bins
for i in range(1, bins):
angle_bins[i] = i * interval
angle_bins += interval / 2 # center of bins
return angle_bins
class Plot3DBox:
"""
Plotting 3DBox
source: https://github.com/lzccccc/3d-bounding-box-estimation-for-autonomous-driving
"""
def __init__(
self,
image_path: str = None,
pred_path: str = None,
label_path: str = None,
calib_path: str = None,
vehicle_list: list = ["car", "truck", "bus", "motorcycle", "bicycle", "pedestrian"],
mode: str = "training",
save_path: str = None,
) -> None:
self.image_path = image_path
self.pred_path = pred_path
self.label_path = label_path if label_path is not None else pred_path
self.calib_path = calib_path
self.vehicle_list = vehicle_list
self.mode = mode
self.save_path = save_path
self.dataset = [name.split('.')[0] for name in sorted(os.listdir(self.image_path))]
self.start_frame = 0
self.end_frame = len(self.dataset)
def compute_birdviewbox(self, line, shape, scale):
npline = [np.float64(line[i]) for i in range(1, len(line))]
h = npline[7] * scale
w = npline[8] * scale
l = npline[9] * scale
x = npline[10] * scale
y = npline[11] * scale
z = npline[12] * scale
rot_y = npline[13]
R = np.array([[-np.cos(rot_y), np.sin(rot_y)], [np.sin(rot_y), np.cos(rot_y)]])
t = np.array([x, z]).reshape(1, 2).T
x_corners = [0, l, l, 0] # -l/2
z_corners = [w, w, 0, 0] # -w/2
x_corners += -w / 2
z_corners += -l / 2
# bounding box in object coordinate
corners_2D = np.array([x_corners, z_corners])
# rotate
corners_2D = R.dot(corners_2D)
# translation
corners_2D = t - corners_2D
# in camera coordinate
corners_2D[0] += int(shape / 2)
corners_2D = (corners_2D).astype(np.int16)
corners_2D = corners_2D.T
return np.vstack((corners_2D, corners_2D[0, :]))
def draw_birdeyes(self, ax2, line_gt, line_p, shape):
# shape = 900
scale = 15
pred_corners_2d = self.compute_birdviewbox(line_p, shape, scale)
gt_corners_2d = self.compute_birdviewbox(line_gt, shape, scale)
codes = [Path.LINETO] * gt_corners_2d.shape[0]
codes[0] = Path.MOVETO
codes[-1] = Path.CLOSEPOLY
pth = Path(gt_corners_2d, codes)
p = patches.PathPatch(pth, fill=False, color="orange", label="ground truth")
ax2.add_patch(p)
codes = [Path.LINETO] * pred_corners_2d.shape[0]
codes[0] = Path.MOVETO
codes[-1] = Path.CLOSEPOLY
pth = Path(pred_corners_2d, codes)
p = patches.PathPatch(pth, fill=False, color="green", label="prediction")
ax2.add_patch(p)
def compute_3Dbox(self, P2, line):
obj = detectionInfo(line)
# Draw 2D Bounding Box
xmin = int(obj.xmin)
xmax = int(obj.xmax)
ymin = int(obj.ymin)
ymax = int(obj.ymax)
# width = xmax - xmin
# height = ymax - ymin
# box_2d = patches.Rectangle((xmin, ymin), width, height, fill=False, color='red', linewidth='3')
# ax.add_patch(box_2d)
# Draw 3D Bounding Box
R = np.array(
[
[np.cos(obj.rot_global), 0, np.sin(obj.rot_global)],
[0, 1, 0],
[-np.sin(obj.rot_global), 0, np.cos(obj.rot_global)],
]
)
x_corners = [0, obj.l, obj.l, obj.l, obj.l, 0, 0, 0] # -l/2
y_corners = [0, 0, obj.h, obj.h, 0, 0, obj.h, obj.h] # -h
z_corners = [0, 0, 0, obj.w, obj.w, obj.w, obj.w, 0] # -w/2
x_corners = [i - obj.l / 2 for i in x_corners]
y_corners = [i - obj.h for i in y_corners]
z_corners = [i - obj.w / 2 for i in z_corners]
corners_3D = np.array([x_corners, y_corners, z_corners])
corners_3D = R.dot(corners_3D)
corners_3D += np.array([obj.tx, obj.ty, obj.tz]).reshape((3, 1))
corners_3D_1 = np.vstack((corners_3D, np.ones((corners_3D.shape[-1]))))
corners_2D = P2.dot(corners_3D_1)
corners_2D = corners_2D / corners_2D[2]
corners_2D = corners_2D[:2]
return corners_2D
def draw_3Dbox(self, ax, P2, line, color, gt=False):
        corners_2D = self.compute_3Dbox(P2, line)
# draw all lines through path
# https://matplotlib.org/users/path_tutorial.html
bb3d_lines_verts_idx = [0, 1, 2, 3, 4, 5, 6, 7, 0, 5, 4, 1, 2, 7, 6, 3]
bb3d_on_2d_lines_verts = corners_2D[:, bb3d_lines_verts_idx]
verts = bb3d_on_2d_lines_verts.T
codes = [Path.LINETO] * verts.shape[0]
codes[0] = Path.MOVETO
        # codes[-1] = Path.CLOSEPOLY
pth = Path(verts, codes)
p = patches.PathPatch(pth, fill=False, color=color, linewidth=2)
width = corners_2D[:, 3][0] - corners_2D[:, 1][0]
height = corners_2D[:, 2][1] - corners_2D[:, 1][1]
# put a mask on the front
front_fill = patches.Rectangle(
(corners_2D[:, 1]), width, height, fill=True, color=color, alpha=0.4
)
ax.add_patch(p)
ax.add_patch(front_fill)
def visualization(self):
for index in range(self.start_frame, self.end_frame):
image_file = os.path.join(self.image_path, self.dataset[index] + ".png")
label_file = os.path.join(self.label_path, self.dataset[index] + ".txt")
prediction_file = os.path.join(self.pred_path, self.dataset[index] + ".txt")
if self.calib_path.endswith(".txt"):
calibration_file = self.calib_path
else:
calibration_file = os.path.join(self.calib_path, self.dataset[index] + ".txt")
for line in open(calibration_file):
if "P2" in line:
P2 = line.split(" ")
P2 = np.asarray([float(i) for i in P2[1:]])
P2 = np.reshape(P2, (3, 4))
fig = plt.figure(figsize=(20.00, 5.12), dpi=100)
# fig.tight_layout()
gs = GridSpec(1, 4)
gs.update(wspace=0) # set the spacing between axes.
ax = fig.add_subplot(gs[0, :3])
ax2 = fig.add_subplot(gs[0, 3:])
# with writer.saving(fig, "kitti_30_20fps.mp4", dpi=100):
image = Image.open(image_file).convert("RGB")
shape = 900
birdimage = np.zeros((shape, shape, 3), np.uint8)
with open(label_file) as f1, open(prediction_file) as f2:
for line_gt, line_p in zip(f1, f2):
line_gt = line_gt.strip().split(" ")
line_p = line_p.strip().split(" ")
truncated = np.abs(float(line_p[1]))
occluded = np.abs(float(line_p[2]))
trunc_level = 1 if self.mode == "training" else 255
# truncated object in dataset is not observable
if line_p[0].lower() in self.vehicle_list and truncated < trunc_level:
color = "green"
if line_p[0] == "Cyclist":
color = "yellow"
elif line_p[0] == "Pedestrian":
color = "cyan"
self.draw_3Dbox(ax, P2, line_p, color)
self.draw_birdeyes(ax2, line_gt, line_p, shape)
# visualize 3D bounding box
ax.imshow(image)
ax.set_xticks([]) # remove axis value
ax.set_yticks([])
# plot camera view range
x1 = np.linspace(0, shape / 2)
x2 = np.linspace(shape / 2, shape)
ax2.plot(x1, shape / 2 - x1, ls="--", color="grey", linewidth=1, alpha=0.5)
ax2.plot(x2, x2 - shape / 2, ls="--", color="grey", linewidth=1, alpha=0.5)
ax2.plot(shape / 2, 0, marker="+", markersize=16, markeredgecolor="red")
# visualize bird eye view
ax2.imshow(birdimage, origin="lower")
ax2.set_xticks([])
ax2.set_yticks([])
# add legend
handles, labels = ax2.get_legend_handles_labels()
legend = ax2.legend(
[handles[0], handles[1]],
[labels[0], labels[1]],
loc="lower right",
fontsize="x-small",
framealpha=0.2,
)
for text in legend.get_texts():
plt.setp(text, color="w")
if self.save_path is None:
plt.show()
else:
fig.savefig(
os.path.join(self.save_path, self.dataset[index]),
dpi=fig.dpi,
bbox_inches="tight",
pad_inches=0,
)
# video_writer.write(np.uint8(fig))
class Plot3DBoxBev:
"""Plot 3D bounding box and bird eye view"""
def __init__(
self,
proj_matrix = None, # projection matrix P2
object_list = ["car", "pedestrian", "truck", "cyclist", "van", "bus", 'trafficcone'],
) -> None:
self.proj_matrix = proj_matrix
self.object_list = object_list
self.fig = plt.figure(figsize=(20.00, 5.12), dpi=100)
gs = GridSpec(1, 4)
gs.update(wspace=0)
self.ax = self.fig.add_subplot(gs[0, :3])
self.ax2 = self.fig.add_subplot(gs[0, 3:])
self.shape = 900
self.scale = 15
self.COLOR = {
"car": "blue",
"pedestrian": "green",
"truck": "yellow",
"cyclist": "red",
"trafficcone": "red",
"van": "cyan",
"bus": "magenta",
}
def compute_bev(self, dim, loc, rot_y, gt=False):
"""compute bev"""
# convert dimension, location and rotation
h = dim[0] * self.scale
w = dim[1] * self.scale
l = dim[2] * self.scale
x = loc[0] * self.scale
y = loc[1] * self.scale
z = loc[2] * self.scale
rot_y = np.float64(rot_y)
R = np.array([[-np.cos(rot_y), np.sin(rot_y)], [np.sin(rot_y), np.cos(rot_y)]])
t = np.array([x, z]).reshape(1, 2).T
if not gt:
x_corners = [0, l, l, 0] # -l/2
z_corners = [w, w, 0, 0] # -w/2
x_corners += -w / 2
z_corners += -l / 2
else:
x_corners = [-w/2, l-w/2, l-w/2, -w/2] # -l/2
z_corners = [w-l/2, w-l/2, -l/2, -l/2] # -w/2
# bounding box in object coordinate
corners_2D = np.array([x_corners, z_corners])
# rotate
corners_2D = R.dot(corners_2D)
# translation
corners_2D = t - corners_2D
# in camera coordinate
corners_2D[0] += int(self.shape / 2)
corners_2D = (corners_2D).astype(np.int16)
corners_2D = corners_2D.T
return np.vstack((corners_2D, corners_2D[0, :]))
def draw_bev(self, dim, loc, rot_y, gt=False):
"""draw bev"""
pred_corners_2d = self.compute_bev(dim, loc, rot_y, gt)
codes = [Path.LINETO] * pred_corners_2d.shape[0]
codes[0] = Path.MOVETO
codes[-1] = Path.CLOSEPOLY
pth = Path(pred_corners_2d, codes)
if not gt:
patch = patches.PathPatch(pth, fill=False, color="red", label="prediction")
else:
patch = patches.PathPatch(pth, fill=False, color="green", label="groundtruth")
self.ax2.add_patch(patch)
def compute_3dbox(self, bbox, dim, loc, rot_y, gt=False):
"""compute 3d box"""
# 2d bounding box
xmin, ymin = int(bbox[0]), int(bbox[1])
xmax, ymax = int(bbox[2]), int(bbox[3])
# convert dimension, location
h, w, l = dim[0], dim[1], dim[2]
x, y, z = loc[0], loc[1], loc[2]
R = np.array([[np.cos(rot_y), 0, np.sin(rot_y)], [0, 1, 0], [-np.sin(rot_y), 0, np.cos(rot_y)]])
if not gt:
x_corners = [0, l, l, l, l, 0, 0, 0] # -l/2
y_corners = [0, 0, h, h, 0, 0, h, h] # -h
z_corners = [0, 0, 0, w, w, w, w, 0] # -w/2
x_corners += -l / 2
y_corners += -h / 2
z_corners += -w / 2
else:
x_corners = [-l/2, l/2, l/2, l/2, l/2, -l/2, -l/2, -l/2] # -l/2
y_corners = [-h/2, -h/2, 0, 0, -h/2, -h/2, 0, 0] # -h/2
z_corners = [-w/2, -w/2, -w/2, w/2, w/2, w/2, w/2, -w/2] # -w/2
corners_3D = np.array([x_corners, y_corners, z_corners])
corners_3D = R.dot(corners_3D)
corners_3D += np.array([x, y, z]).reshape(3, 1)
corners_3D_1 = np.vstack((corners_3D, np.ones((corners_3D.shape[-1]))))
corners_2D = self.proj_matrix.dot(corners_3D_1)
corners_2D = corners_2D / corners_2D[2]
corners_2D = corners_2D[:2]
return corners_2D
def draw_3dbox(self, class_object, bbox, dim, loc, rot_y, gt=False):
"""draw 3d box"""
color = self.COLOR[class_object]
corners_2D = self.compute_3dbox(bbox, dim, loc, rot_y, gt)
# draw all lines through path
# https://matplotlib.org/users/path_tutorial.html
bb3d_lines_verts_idx = [0, 1, 2, 3, 4, 5, 6, 7, 0, 5, 4, 1, 2, 7, 6, 3]
bb3d_on_2d_lines_verts = corners_2D[:, bb3d_lines_verts_idx]
verts = bb3d_on_2d_lines_verts.T
codes = [Path.LINETO] * verts.shape[0]
codes[0] = Path.MOVETO
pth = Path(verts, codes)
patch = patches.PathPatch(pth, fill=False, color=color, linewidth=2)
width = corners_2D[:, 3][0] - corners_2D[:, 1][0]
height = corners_2D[:, 2][1] - corners_2D[:, 1][1]
# put a mask on the front
front_fill = patches.Rectangle((corners_2D[:, 1]), width, height, fill=True, color=color, alpha=0.4)
self.ax.add_patch(patch)
self.ax.add_patch(front_fill)
def plot(
self,
img = None,
class_object: str = None,
bbox = None, # bbox 2d [xmin, ymin, xmax, ymax]
dim = None, # dimension of the box (l, w, h)
loc = None, # location of the box (x, y, z)
rot_y = None, # rotation of the box around y-axis
gt = False
):
"""plot 3d bbox and bev"""
# initialize bev image
bev_img = np.zeros((self.shape, self.shape, 3), np.uint8)
# loop through all detections
if class_object in self.object_list:
if not gt:
self.draw_3dbox(class_object, bbox, dim, loc, rot_y, gt)
self.draw_bev(dim, loc, rot_y, gt)
# visualize 3D bounding box
self.ax.imshow(img)
self.ax.set_xticks([])
self.ax.set_yticks([])
# plot camera view range
x1 = np.linspace(0, self.shape / 2)
x2 = np.linspace(self.shape / 2, self.shape)
self.ax2.plot(x1, self.shape / 2 - x1, ls="--", color="grey", linewidth=1, alpha=0.5)
self.ax2.plot(x2, x2 - self.shape / 2, ls="--", color="grey", linewidth=1, alpha=0.5)
self.ax2.plot(self.shape / 2, 0, marker="+", markersize=16, markeredgecolor="red")
# visualize bird eye view (bev)
self.ax2.imshow(bev_img, origin="lower")
self.ax2.set_xticks([])
self.ax2.set_yticks([])
# add legend
# handles, labels = ax2.get_legend_handles_labels()
# legend = ax2.legend(
# [handles[0], handles[1]],
# [labels[0], labels[1]],
# loc="lower right",
# fontsize="x-small",
# framealpha=0.2,
# )
# for text in legend.get_texts():
# plt.setp(text, color="w")
def save_plot(self, path, name):
self.fig.savefig(
os.path.join(path, f"{name}.png"),
dpi=self.fig.dpi,
bbox_inches="tight",
pad_inches=0.0,
)
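# Minimal usage sketch for Plot3DBoxBev (illustrative only; file names and
# variable values below are placeholders, not part of this repository):
#   plotter = Plot3DBoxBev(proj_matrix=Calib.get_P("./data/KITTI/calib_kitti.txt"))
#   plotter.plot(img=img, class_object="car", bbox=[xmin, ymin, xmax, ymax],
#                dim=np.array([h, w, l]), loc=[x, y, z], rot_y=ry)
#   plotter.save_plot("./outputs", "000001")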
if __name__ == "__main__":
plot = Plot3DBox(
image_path="./data/demo/videos/2011_09_26/image_02/data",
label_path="./outputs/2022-09-01/22-12-09/inference",
calib_path="./data/calib_kitti_images.txt",
pred_path="./outputs/2022-09-01/22-12-09/inference",
save_path="./data/results",
mode="training",
)
plot.visualization() | 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/class_averages-kitti6.txt | {"car": {"count": 14385, "total": [21898.43999999967, 23568.289999999495, 55754.239999999765]}, "cyclist": {"count": 893, "total": [1561.5099999999982, 552.850000000001, 1569.5100000000007]}, "truck": {"count": 606, "total": [1916.6199999999872, 1554.710000000011, 6567.400000000018]}, "van": {"count": 1617, "total": [3593.439999999989, 3061.370000000014, 8122.769999999951]}, "pedestrian": {"count": 2280, "total": [3998.8900000000003, 1576.6400000000049, 1974.090000000009]}, "tram": {"count": 287, "total": [1012.7000000000005, 771.13, 4739.249999999991]}} | 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/averages.py | """Average dimension class"""
from typing import List
import numpy as np
import os
import json
class DimensionAverages:
"""
Class to calculate the average dimensions of the objects in the dataset.
"""
def __init__(
self,
categories: List[str] = ['car', 'pedestrian', 'cyclist'],
save_file: str = 'dimension_averages.txt'
):
self.dimension_map = {}
self.filename = os.path.abspath(os.path.dirname(__file__)) + '/' + save_file
self.categories = categories
if len(self.categories) == 0:
self.load_items_from_file()
for det in self.categories:
cat_ = det.lower()
if cat_ in self.dimension_map.keys():
continue
self.dimension_map[cat_] = {}
self.dimension_map[cat_]['count'] = 0
self.dimension_map[cat_]['total'] = np.zeros(3, dtype=np.float32)
def add_items(self, items_path):
for path in items_path:
with open(path, "r") as f:
for line in f:
line = line.split(" ")
if line[0].lower() in self.categories:
self.add_item(
line[0],
np.array([float(line[8]), float(line[9]), float(line[10])])
)
def add_item(self, cat, dim):
cat = cat.lower()
self.dimension_map[cat]['count'] += 1
self.dimension_map[cat]['total'] += dim
def get_item(self, cat):
cat = cat.lower()
return self.dimension_map[cat]['total'] / self.dimension_map[cat]['count']
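    # Illustrative usage (not part of the original file): the stored value is a
    # running sum, so get_item returns total / count, e.g.
    #   avg = DimensionAverages(categories=["car"], save_file="tmp_averages.txt")
    #   avg.add_item("Car", np.array([1.5, 1.6, 3.9]))
    #   avg.get_item("car")  # -> array([1.5, 1.6, 3.9])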
def load_items_from_file(self):
f = open(self.filename, 'r')
dimension_map = json.load(f)
for cat in dimension_map:
dimension_map[cat]['total'] = np.asarray(dimension_map[cat]['total'])
self.dimension_map = dimension_map
def dump_to_file(self):
f = open(self.filename, "w")
f.write(json.dumps(self.dimension_map, cls=NumpyEncoder))
f.close()
def recognized_class(self, cat):
return cat.lower() in self.dimension_map
class ClassAverages:
def __init__(self, classes=[]):
self.dimension_map = {}
self.filename = os.path.abspath(os.path.dirname(__file__)) + '/class_averages.txt'
if len(classes) == 0: # eval mode
self.load_items_from_file()
for detection_class in classes:
class_ = detection_class.lower()
if class_ in self.dimension_map.keys():
continue
self.dimension_map[class_] = {}
self.dimension_map[class_]['count'] = 0
self.dimension_map[class_]['total'] = np.zeros(3, dtype=np.double)
def add_item(self, class_, dimension):
class_ = class_.lower()
self.dimension_map[class_]['count'] += 1
self.dimension_map[class_]['total'] += dimension
# self.dimension_map[class_]['total'] /= self.dimension_map[class_]['count']
def get_item(self, class_):
class_ = class_.lower()
return self.dimension_map[class_]['total'] / self.dimension_map[class_]['count']
def dump_to_file(self):
f = open(self.filename, "w")
f.write(json.dumps(self.dimension_map, cls=NumpyEncoder))
f.close()
def load_items_from_file(self):
f = open(self.filename, 'r')
dimension_map = json.load(f)
for class_ in dimension_map:
dimension_map[class_]['total'] = np.asarray(dimension_map[class_]['total'])
self.dimension_map = dimension_map
def recognized_class(self, class_):
return class_.lower() in self.dimension_map
class NumpyEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.ndarray):
return obj.tolist()
return json.JSONEncoder.default(self,obj) | 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/__init__.py | from src.utils.pylogger import get_pylogger
from src.utils.rich_utils import enforce_tags, print_config_tree
from src.utils.utils import (
close_loggers,
extras,
get_metric_value,
instantiate_callbacks,
instantiate_loggers,
log_hyperparameters,
save_file,
task_wrapper,
)
| 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/Calib.py | """
Script for handling calibration file
"""
import numpy as np
def get_P(calib_file):
"""
Get matrix P_rect_02 (camera 2 RGB)
and transform to 3 x 4 matrix
"""
    for line in open(calib_file, 'r'):
        if 'P_rect_02' in line:
            cam_P = line.strip().split(' ')
            cam_P = np.asarray([float(p) for p in cam_P[1:]])
            matrix = cam_P.reshape((3, 4))
            return matrix
    file_not_found(calib_file)
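# Note (illustrative, not part of the original file): get_P expects a
# KITTI-style calibration file containing a line such as
#   P_rect_02: 7.215377e+02 0.0 6.095593e+02 4.485728e+01 0.0 7.215377e+02 1.728540e+02 2.163791e-01 0.0 0.0 1.0 2.745884e-03
# i.e. the key followed by 12 floats, reshaped here into the 3 x 4 camera-2
# projection matrix.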
# TODO: understand this
def get_calibration_cam_to_image(cab_f):
for line in open(cab_f):
if 'P2:' in line:
cam_to_img = line.strip().split(' ')
cam_to_img = np.asarray([float(number) for number in cam_to_img[1:]])
cam_to_img = np.reshape(cam_to_img, (3, 4))
return cam_to_img
file_not_found(cab_f)
def get_R0(cab_f):
for line in open(cab_f):
if 'R0_rect:' in line:
R0 = line.strip().split(' ')
R0 = np.asarray([float(number) for number in R0[1:]])
R0 = np.reshape(R0, (3, 3))
R0_rect = np.zeros([4,4])
R0_rect[3,3] = 1
R0_rect[:3,:3] = R0
return R0_rect
def get_tr_to_velo(cab_f):
for line in open(cab_f):
if 'Tr_velo_to_cam:' in line:
Tr = line.strip().split(' ')
Tr = np.asarray([float(number) for number in Tr[1:]])
Tr = np.reshape(Tr, (3, 4))
Tr_to_velo = np.zeros([4,4])
Tr_to_velo[3,3] = 1
Tr_to_velo[:3,:4] = Tr
return Tr_to_velo
def file_not_found(filename):
print("\nError! Can't read calibration file, does %s exist?"%filename)
exit()
| 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/rich_utils.py | from pathlib import Path
from typing import Sequence
import rich
import rich.syntax
import rich.tree
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig, OmegaConf, open_dict
from pytorch_lightning.utilities import rank_zero_only
from rich.prompt import Prompt
from src.utils import pylogger
log = pylogger.get_pylogger(__name__)
@rank_zero_only
def print_config_tree(
cfg: DictConfig,
print_order: Sequence[str] = (
"datamodule",
"model",
"callbacks",
"logger",
"trainer",
"paths",
"extras",
),
resolve: bool = False,
save_to_file: bool = False,
) -> None:
"""Prints content of DictConfig using Rich library and its tree structure.
Args:
cfg (DictConfig): Configuration composed by Hydra.
print_order (Sequence[str], optional): Determines in what order config components are printed.
resolve (bool, optional): Whether to resolve reference fields of DictConfig.
save_to_file (bool, optional): Whether to export config to the hydra output folder.
"""
style = "dim"
tree = rich.tree.Tree("CONFIG", style=style, guide_style=style)
queue = []
# add fields from `print_order` to queue
for field in print_order:
queue.append(field) if field in cfg else log.warning(
f"Field '{field}' not found in config. Skipping '{field}' config printing..."
)
# add all the other fields to queue (not specified in `print_order`)
for field in cfg:
if field not in queue:
queue.append(field)
# generate config tree from queue
for field in queue:
branch = tree.add(field, style=style, guide_style=style)
config_group = cfg[field]
if isinstance(config_group, DictConfig):
branch_content = OmegaConf.to_yaml(config_group, resolve=resolve)
else:
branch_content = str(config_group)
branch.add(rich.syntax.Syntax(branch_content, "yaml"))
# print config tree
rich.print(tree)
# save config tree to file
if save_to_file:
with open(Path(cfg.paths.output_dir, "config_tree.log"), "w") as file:
rich.print(tree, file=file)
@rank_zero_only
def enforce_tags(cfg: DictConfig, save_to_file: bool = False) -> None:
"""Prompts user to input tags from command line if no tags are provided in config."""
if not cfg.get("tags"):
if "id" in HydraConfig().cfg.hydra.job:
raise ValueError("Specify tags before launching a multirun!")
log.warning("No tags provided in config. Prompting user to input tags...")
tags = Prompt.ask("Enter a list of comma separated tags", default="dev")
tags = [t.strip() for t in tags.split(",") if t != ""]
with open_dict(cfg):
cfg.tags = tags
log.info(f"Tags: {cfg.tags}")
if save_to_file:
with open(Path(cfg.paths.output_dir, "tags.log"), "w") as file:
rich.print(cfg.tags, file=file)
if __name__ == "__main__":
from hydra import compose, initialize
with initialize(version_base="1.2", config_path="../../configs"):
cfg = compose(config_name="train.yaml", return_hydra_config=False, overrides=[])
print_config_tree(cfg, resolve=False, save_to_file=False)
| 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/class_averages.txt | {"car": {"count": 14385, "total": [21898.43999999967, 23568.289999999495, 55754.239999999765]}, "cyclist": {"count": 893, "total": [1561.5099999999982, 552.850000000001, 1569.5100000000007]}, "truck": {"count": 606, "total": [1916.6199999999872, 1554.710000000011, 6567.400000000018]}, "van": {"count": 1617, "total": [3593.439999999989, 3061.370000000014, 8122.769999999951]}, "pedestrian": {"count": 2280, "total": [3998.8900000000003, 1576.6400000000049, 1974.090000000009]}, "tram": {"count": 287, "total": [1012.7000000000005, 771.13, 4739.249999999991]}} | 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/kitti_common.py | import concurrent.futures as futures
import os
import pathlib
import re
from collections import OrderedDict
import numpy as np
from skimage import io
def get_image_index_str(img_idx):
return "{:06d}".format(img_idx)
def get_kitti_info_path(idx,
prefix,
info_type='image_2',
file_tail='.png',
training=True,
relative_path=True):
img_idx_str = get_image_index_str(idx)
img_idx_str += file_tail
prefix = pathlib.Path(prefix)
if training:
file_path = pathlib.Path('training') / info_type / img_idx_str
else:
file_path = pathlib.Path('testing') / info_type / img_idx_str
if not (prefix / file_path).exists():
raise ValueError("file not exist: {}".format(file_path))
if relative_path:
return str(file_path)
else:
return str(prefix / file_path)
def get_image_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'image_2', '.png', training,
relative_path)
def get_label_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'label_2', '.txt', training,
relative_path)
def get_velodyne_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'velodyne', '.bin', training,
relative_path)
def get_calib_path(idx, prefix, training=True, relative_path=True):
return get_kitti_info_path(idx, prefix, 'calib', '.txt', training,
relative_path)
def _extend_matrix(mat):
mat = np.concatenate([mat, np.array([[0., 0., 0., 1.]])], axis=0)
return mat
def get_kitti_image_info(path,
training=True,
label_info=True,
velodyne=False,
calib=False,
image_ids=7481,
extend_matrix=True,
num_worker=8,
relative_path=True,
with_imageshape=True):
# image_infos = []
root_path = pathlib.Path(path)
if not isinstance(image_ids, list):
image_ids = list(range(image_ids))
def map_func(idx):
image_info = {'image_idx': idx}
annotations = None
if velodyne:
image_info['velodyne_path'] = get_velodyne_path(
idx, path, training, relative_path)
image_info['img_path'] = get_image_path(idx, path, training,
relative_path)
if with_imageshape:
img_path = image_info['img_path']
if relative_path:
img_path = str(root_path / img_path)
image_info['img_shape'] = np.array(
io.imread(img_path).shape[:2], dtype=np.int32)
if label_info:
label_path = get_label_path(idx, path, training, relative_path)
if relative_path:
label_path = str(root_path / label_path)
annotations = get_label_anno(label_path)
if calib:
calib_path = get_calib_path(
idx, path, training, relative_path=False)
with open(calib_path, 'r') as f:
lines = f.readlines()
P0 = np.array(
[float(info) for info in lines[0].split(' ')[1:13]]).reshape(
[3, 4])
P1 = np.array(
[float(info) for info in lines[1].split(' ')[1:13]]).reshape(
[3, 4])
P2 = np.array(
[float(info) for info in lines[2].split(' ')[1:13]]).reshape(
[3, 4])
P3 = np.array(
[float(info) for info in lines[3].split(' ')[1:13]]).reshape(
[3, 4])
if extend_matrix:
P0 = _extend_matrix(P0)
P1 = _extend_matrix(P1)
P2 = _extend_matrix(P2)
P3 = _extend_matrix(P3)
image_info['calib/P0'] = P0
image_info['calib/P1'] = P1
image_info['calib/P2'] = P2
image_info['calib/P3'] = P3
R0_rect = np.array([
float(info) for info in lines[4].split(' ')[1:10]
]).reshape([3, 3])
if extend_matrix:
rect_4x4 = np.zeros([4, 4], dtype=R0_rect.dtype)
rect_4x4[3, 3] = 1.
rect_4x4[:3, :3] = R0_rect
else:
rect_4x4 = R0_rect
image_info['calib/R0_rect'] = rect_4x4
Tr_velo_to_cam = np.array([
float(info) for info in lines[5].split(' ')[1:13]
]).reshape([3, 4])
Tr_imu_to_velo = np.array([
float(info) for info in lines[6].split(' ')[1:13]
]).reshape([3, 4])
if extend_matrix:
Tr_velo_to_cam = _extend_matrix(Tr_velo_to_cam)
Tr_imu_to_velo = _extend_matrix(Tr_imu_to_velo)
image_info['calib/Tr_velo_to_cam'] = Tr_velo_to_cam
image_info['calib/Tr_imu_to_velo'] = Tr_imu_to_velo
if annotations is not None:
image_info['annos'] = annotations
add_difficulty_to_annos(image_info)
return image_info
with futures.ThreadPoolExecutor(num_worker) as executor:
image_infos = executor.map(map_func, image_ids)
return list(image_infos)
def filter_kitti_anno(image_anno,
used_classes,
used_difficulty=None,
dontcare_iou=None):
if not isinstance(used_classes, (list, tuple)):
used_classes = [used_classes]
img_filtered_annotations = {}
relevant_annotation_indices = [
i for i, x in enumerate(image_anno['name']) if x in used_classes
]
for key in image_anno.keys():
img_filtered_annotations[key] = (
image_anno[key][relevant_annotation_indices])
if used_difficulty is not None:
relevant_annotation_indices = [
i for i, x in enumerate(img_filtered_annotations['difficulty'])
if x in used_difficulty
]
for key in image_anno.keys():
img_filtered_annotations[key] = (
img_filtered_annotations[key][relevant_annotation_indices])
if 'DontCare' in used_classes and dontcare_iou is not None:
dont_care_indices = [
i for i, x in enumerate(img_filtered_annotations['name'])
if x == 'DontCare'
]
# bounding box format [y_min, x_min, y_max, x_max]
all_boxes = img_filtered_annotations['bbox']
ious = iou(all_boxes, all_boxes[dont_care_indices])
# Remove all bounding boxes that overlap with a dontcare region.
if ious.size > 0:
boxes_to_remove = np.amax(ious, axis=1) > dontcare_iou
for key in image_anno.keys():
img_filtered_annotations[key] = (img_filtered_annotations[key][
np.logical_not(boxes_to_remove)])
return img_filtered_annotations
def filter_annos_low_score(image_annos, thresh):
new_image_annos = []
for anno in image_annos:
img_filtered_annotations = {}
relevant_annotation_indices = [
i for i, s in enumerate(anno['score']) if s >= thresh
]
for key in anno.keys():
img_filtered_annotations[key] = (
anno[key][relevant_annotation_indices])
new_image_annos.append(img_filtered_annotations)
return new_image_annos
def kitti_result_line(result_dict, precision=4):
prec_float = "{" + ":.{}f".format(precision) + "}"
res_line = []
all_field_default = OrderedDict([
('name', None),
('truncated', -1),
('occluded', -1),
('alpha', -10),
('bbox', None),
('dimensions', [-1, -1, -1]),
('location', [-1000, -1000, -1000]),
('rotation_y', -10),
('score', None),
])
res_dict = [(key, None) for key, val in all_field_default.items()]
res_dict = OrderedDict(res_dict)
for key, val in result_dict.items():
if all_field_default[key] is None and val is None:
raise ValueError("you must specify a value for {}".format(key))
res_dict[key] = val
for key, val in res_dict.items():
if key == 'name':
res_line.append(val)
elif key in ['truncated', 'alpha', 'rotation_y', 'score']:
if val is None:
res_line.append(str(all_field_default[key]))
else:
res_line.append(prec_float.format(val))
elif key == 'occluded':
if val is None:
res_line.append(str(all_field_default[key]))
else:
res_line.append('{}'.format(val))
elif key in ['bbox', 'dimensions', 'location']:
if val is None:
res_line += [str(v) for v in all_field_default[key]]
else:
res_line += [prec_float.format(v) for v in val]
else:
raise ValueError("unknown key. supported key:{}".format(
res_dict.keys()))
return ' '.join(res_line)
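# Illustrative example (not part of the original file):
#   kitti_result_line({'name': 'Car', 'bbox': [0.0, 0.0, 100.0, 200.0], 'score': 0.9})
# yields a KITTI-format detection line; omitted fields fall back to the
# defaults above (truncated -1, occluded -1, alpha -10, dimensions -1 -1 -1,
# location -1000 -1000 -1000, rotation_y -10).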
def add_difficulty_to_annos(info):
min_height = [40, 25,
25] # minimum height for evaluated groundtruth/detections
max_occlusion = [
0, 1, 2
] # maximum occlusion level of the groundtruth used for evaluation
max_trunc = [
0.15, 0.3, 0.5
] # maximum truncation level of the groundtruth used for evaluation
annos = info['annos']
dims = annos['dimensions'] # lhw format
bbox = annos['bbox']
height = bbox[:, 3] - bbox[:, 1]
occlusion = annos['occluded']
truncation = annos['truncated']
diff = []
    easy_mask = np.ones((len(dims), ), dtype=bool)
    moderate_mask = np.ones((len(dims), ), dtype=bool)
    hard_mask = np.ones((len(dims), ), dtype=bool)
i = 0
for h, o, t in zip(height, occlusion, truncation):
if o > max_occlusion[0] or h <= min_height[0] or t > max_trunc[0]:
easy_mask[i] = False
if o > max_occlusion[1] or h <= min_height[1] or t > max_trunc[1]:
moderate_mask[i] = False
if o > max_occlusion[2] or h <= min_height[2] or t > max_trunc[2]:
hard_mask[i] = False
i += 1
is_easy = easy_mask
is_moderate = np.logical_xor(easy_mask, moderate_mask)
is_hard = np.logical_xor(hard_mask, moderate_mask)
for i in range(len(dims)):
if is_easy[i]:
diff.append(0)
elif is_moderate[i]:
diff.append(1)
elif is_hard[i]:
diff.append(2)
else:
diff.append(-1)
annos["difficulty"] = np.array(diff, np.int32)
return diff
def get_label_anno(label_path):
annotations = {}
annotations.update({
'name': [],
'truncated': [],
'occluded': [],
'alpha': [],
'bbox': [],
'dimensions': [],
'location': [],
'rotation_y': []
})
with open(label_path, 'r') as f:
lines = f.readlines()
# if len(lines) == 0 or len(lines[0]) < 15:
# content = []
# else:
content = [line.strip().split(' ') for line in lines]
annotations['name'] = np.array([x[0] for x in content])
annotations['truncated'] = np.array([float(x[1]) for x in content])
annotations['occluded'] = np.array([int(x[2]) for x in content])
annotations['alpha'] = np.array([float(x[3]) for x in content])
annotations['bbox'] = np.array(
[[float(info) for info in x[4:8]] for x in content]).reshape(-1, 4)
# dimensions will convert hwl format to standard lhw(camera) format.
annotations['dimensions'] = np.array(
[[float(info) for info in x[8:11]] for x in content]).reshape(
-1, 3)[:, [2, 0, 1]]
annotations['location'] = np.array(
[[float(info) for info in x[11:14]] for x in content]).reshape(-1, 3)
annotations['rotation_y'] = np.array(
[float(x[14]) for x in content]).reshape(-1)
if len(content) != 0 and len(content[0]) == 16: # have score
annotations['score'] = np.array([float(x[15]) for x in content])
else:
annotations['score'] = np.zeros([len(annotations['bbox'])])
return annotations
def get_label_annos(label_folder, image_ids=None):
if image_ids is None:
filepaths = pathlib.Path(label_folder).glob('*.txt')
prog = re.compile(r'^\d{6}.txt$')
filepaths = filter(lambda f: prog.match(f.name), filepaths)
image_ids = [int(p.stem) for p in filepaths]
image_ids = sorted(image_ids)
if not isinstance(image_ids, list):
image_ids = list(range(image_ids))
annos = []
label_folder = pathlib.Path(label_folder)
for idx in image_ids:
image_idx = get_image_index_str(idx)
label_filename = label_folder / (image_idx + '.txt')
annos.append(get_label_anno(label_filename))
return annos
def area(boxes, add1=False):
"""Computes area of boxes.
Args:
boxes: Numpy array with shape [N, 4] holding N boxes
Returns:
a numpy array with shape [N*1] representing box areas
"""
if add1:
return (boxes[:, 2] - boxes[:, 0] + 1.0) * (
boxes[:, 3] - boxes[:, 1] + 1.0)
else:
return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
def intersection(boxes1, boxes2, add1=False):
"""Compute pairwise intersection areas between boxes.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes
boxes2: a numpy array with shape [M, 4] holding M boxes
Returns:
a numpy array with shape [N*M] representing pairwise intersection area
"""
[y_min1, x_min1, y_max1, x_max1] = np.split(boxes1, 4, axis=1)
[y_min2, x_min2, y_max2, x_max2] = np.split(boxes2, 4, axis=1)
all_pairs_min_ymax = np.minimum(y_max1, np.transpose(y_max2))
all_pairs_max_ymin = np.maximum(y_min1, np.transpose(y_min2))
if add1:
all_pairs_min_ymax += 1.0
intersect_heights = np.maximum(
np.zeros(all_pairs_max_ymin.shape),
all_pairs_min_ymax - all_pairs_max_ymin)
all_pairs_min_xmax = np.minimum(x_max1, np.transpose(x_max2))
all_pairs_max_xmin = np.maximum(x_min1, np.transpose(x_min2))
if add1:
all_pairs_min_xmax += 1.0
intersect_widths = np.maximum(
np.zeros(all_pairs_max_xmin.shape),
all_pairs_min_xmax - all_pairs_max_xmin)
return intersect_heights * intersect_widths
def iou(boxes1, boxes2, add1=False):
"""Computes pairwise intersection-over-union between box collections.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes.
boxes2: a numpy array with shape [M, 4] holding N boxes.
Returns:
a numpy array with shape [N, M] representing pairwise iou scores.
"""
intersect = intersection(boxes1, boxes2, add1)
area1 = area(boxes1, add1)
area2 = area(boxes2, add1)
union = np.expand_dims(
area1, axis=1) + np.expand_dims(
area2, axis=0) - intersect
return intersect / union
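# Illustrative example (not part of the original file): boxes are
# [y_min, x_min, y_max, x_max]; identical boxes give an IoU of 1, disjoint
# boxes give 0, e.g.
#   iou(np.array([[0., 0., 10., 10.]]), np.array([[0., 0., 10., 10.]]))    # -> [[1.]]
#   iou(np.array([[0., 0., 10., 10.]]), np.array([[20., 20., 30., 30.]]))  # -> [[0.]]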
| 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/utils.py | import time
import warnings
from importlib.util import find_spec
from pathlib import Path
from typing import Any, Callable, Dict, List
import numpy as np
import hydra
from omegaconf import DictConfig
from pytorch_lightning import Callback
from pytorch_lightning.loggers import LightningLoggerBase
from pytorch_lightning.utilities import rank_zero_only
from src.utils import pylogger, rich_utils
log = pylogger.get_pylogger(__name__)
def task_wrapper(task_func: Callable) -> Callable:
"""Optional decorator that wraps the task function in extra utilities.
Makes multirun more resistant to failure.
Utilities:
- Calling the `utils.extras()` before the task is started
- Calling the `utils.close_loggers()` after the task is finished
- Logging the exception if occurs
- Logging the task total execution time
- Logging the output dir
"""
def wrap(cfg: DictConfig):
# apply extra utilities
extras(cfg)
# execute the task
try:
start_time = time.time()
metric_dict, object_dict = task_func(cfg=cfg)
except Exception as ex:
log.exception("") # save exception to `.log` file
raise ex
finally:
path = Path(cfg.paths.output_dir, "exec_time.log")
content = f"'{cfg.task_name}' execution time: {time.time() - start_time} (s)"
save_file(path, content) # save task execution time (even if exception occurs)
close_loggers() # close loggers (even if exception occurs so multirun won't fail)
log.info(f"Output dir: {cfg.paths.output_dir}")
return metric_dict, object_dict
return wrap
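# Illustrative usage sketch (assumption: a Hydra entry point like the project's
# train script; the function name `train` here is only an example):
#   @task_wrapper
#   def train(cfg: DictConfig) -> Tuple[dict, dict]:
#       ...
#       return metric_dict, object_dict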
def extras(cfg: DictConfig) -> None:
"""Applies optional utilities before the task is started.
Utilities:
- Ignoring python warnings
- Setting tags from command line
- Rich config printing
"""
# return if no `extras` config
if not cfg.get("extras"):
log.warning("Extras config not found! <cfg.extras=null>")
return
# disable python warnings
if cfg.extras.get("ignore_warnings"):
log.info("Disabling python warnings! <cfg.extras.ignore_warnings=True>")
warnings.filterwarnings("ignore")
# prompt user to input tags from command line if none are provided in the config
if cfg.extras.get("enforce_tags"):
log.info("Enforcing tags! <cfg.extras.enforce_tags=True>")
rich_utils.enforce_tags(cfg, save_to_file=True)
# pretty print config tree using Rich library
if cfg.extras.get("print_config"):
log.info("Printing config tree with Rich! <cfg.extras.print_config=True>")
rich_utils.print_config_tree(cfg, resolve=True, save_to_file=True)
@rank_zero_only
def save_file(path: str, content: str) -> None:
"""Save file in rank zero mode (only on one process in multi-GPU setup)."""
with open(path, "w+") as file:
file.write(content)
def instantiate_callbacks(callbacks_cfg: DictConfig) -> List[Callback]:
"""Instantiates callbacks from config."""
callbacks: List[Callback] = []
if not callbacks_cfg:
log.warning("Callbacks config is empty.")
return callbacks
if not isinstance(callbacks_cfg, DictConfig):
raise TypeError("Callbacks config must be a DictConfig!")
for _, cb_conf in callbacks_cfg.items():
if isinstance(cb_conf, DictConfig) and "_target_" in cb_conf:
log.info(f"Instantiating callback <{cb_conf._target_}>")
callbacks.append(hydra.utils.instantiate(cb_conf))
return callbacks
def instantiate_loggers(logger_cfg: DictConfig) -> List[LightningLoggerBase]:
"""Instantiates loggers from config."""
logger: List[LightningLoggerBase] = []
if not logger_cfg:
log.warning("Logger config is empty.")
return logger
if not isinstance(logger_cfg, DictConfig):
raise TypeError("Logger config must be a DictConfig!")
for _, lg_conf in logger_cfg.items():
if isinstance(lg_conf, DictConfig) and "_target_" in lg_conf:
log.info(f"Instantiating logger <{lg_conf._target_}>")
logger.append(hydra.utils.instantiate(lg_conf))
return logger
@rank_zero_only
def log_hyperparameters(object_dict: dict) -> None:
"""Controls which config parts are saved by lightning loggers.
Additionally saves:
- Number of model parameters
"""
hparams = {}
cfg = object_dict["cfg"]
model = object_dict["model"]
trainer = object_dict["trainer"]
if not trainer.logger:
log.warning("Logger not found! Skipping hyperparameter logging...")
return
hparams["model"] = cfg["model"]
# save number of model parameters
hparams["model/params/total"] = sum(p.numel() for p in model.parameters())
hparams["model/params/trainable"] = sum(
p.numel() for p in model.parameters() if p.requires_grad
)
hparams["model/params/non_trainable"] = sum(
p.numel() for p in model.parameters() if not p.requires_grad
)
hparams["datamodule"] = cfg["datamodule"]
hparams["trainer"] = cfg["trainer"]
hparams["callbacks"] = cfg.get("callbacks")
hparams["extras"] = cfg.get("extras")
hparams["task_name"] = cfg.get("task_name")
hparams["tags"] = cfg.get("tags")
hparams["ckpt_path"] = cfg.get("ckpt_path")
hparams["seed"] = cfg.get("seed")
# send hparams to all loggers
trainer.logger.log_hyperparams(hparams)
def get_metric_value(metric_dict: dict, metric_name: str) -> float:
"""Safely retrieves value of the metric logged in LightningModule."""
if not metric_name:
log.info("Metric name is None! Skipping metric value retrieval...")
return None
if metric_name not in metric_dict:
raise Exception(
f"Metric value not found! <metric_name={metric_name}>\n"
"Make sure metric name logged in LightningModule is correct!\n"
"Make sure `optimized_metric` name in `hparams_search` config is correct!"
)
metric_value = metric_dict[metric_name].item()
log.info(f"Retrieved metric value! <{metric_name}={metric_value}>")
return metric_value
def close_loggers() -> None:
"""Makes sure all loggers closed properly (prevents logging failure during multirun)."""
log.info("Closing loggers...")
if find_spec("wandb"): # if wandb is installed
import wandb
if wandb.run:
log.info("Closing wandb!")
wandb.finish()
class detectionInfo(object):
    """
    Utility class for YOLO3D.
    detectionInfo holds a single KITTI-format label/detection row: class name,
    truncation, occlusion, alpha, 2D box, dimensions, location and global yaw.
    """
def __init__(self, line):
self.name = line[0]
self.truncation = float(line[1])
self.occlusion = int(line[2])
# local orientation = alpha + pi/2
self.alpha = float(line[3])
# in pixel coordinate
self.xmin = float(line[4])
self.ymin = float(line[5])
self.xmax = float(line[6])
self.ymax = float(line[7])
        # height, width, length in object coordinate, meter
self.h = float(line[8])
self.w = float(line[9])
self.l = float(line[10])
# x, y, z in camera coordinate, meter
self.tx = float(line[11])
self.ty = float(line[12])
self.tz = float(line[13])
# global orientation [-pi, pi]
self.rot_global = float(line[14])
def member_to_list(self):
output_line = []
for name, value in vars(self).items():
output_line.append(value)
return output_line
def box3d_candidate(self, rot_local, soft_range):
x_corners = [self.l, self.l, self.l, self.l, 0, 0, 0, 0]
y_corners = [self.h, 0, self.h, 0, self.h, 0, self.h, 0]
z_corners = [0, 0, self.w, self.w, self.w, self.w, 0, 0]
x_corners = [i - self.l / 2 for i in x_corners]
y_corners = [i - self.h / 2 for i in y_corners]
z_corners = [i - self.w / 2 for i in z_corners]
corners_3d = np.transpose(np.array([x_corners, y_corners, z_corners]))
point1 = corners_3d[0, :]
point2 = corners_3d[1, :]
point3 = corners_3d[2, :]
point4 = corners_3d[3, :]
point5 = corners_3d[6, :]
point6 = corners_3d[7, :]
point7 = corners_3d[4, :]
point8 = corners_3d[5, :]
# set up projection relation based on local orientation
xmin_candi = xmax_candi = ymin_candi = ymax_candi = 0
if 0 < rot_local < np.pi / 2:
xmin_candi = point8
xmax_candi = point2
ymin_candi = point2
ymax_candi = point5
if np.pi / 2 <= rot_local <= np.pi:
xmin_candi = point6
xmax_candi = point4
ymin_candi = point4
ymax_candi = point1
if np.pi < rot_local <= 3 / 2 * np.pi:
xmin_candi = point2
xmax_candi = point8
ymin_candi = point8
ymax_candi = point1
if 3 * np.pi / 2 <= rot_local <= 2 * np.pi:
xmin_candi = point4
xmax_candi = point6
ymin_candi = point6
ymax_candi = point5
# soft constraint
div = soft_range * np.pi / 180
if 0 < rot_local < div or 2*np.pi-div < rot_local < 2*np.pi:
xmin_candi = point8
xmax_candi = point6
ymin_candi = point6
ymax_candi = point5
if np.pi - div < rot_local < np.pi + div:
xmin_candi = point2
xmax_candi = point4
ymin_candi = point8
ymax_candi = point1
return xmin_candi, xmax_candi, ymin_candi, ymax_candi
class KITTIObject():
    """
    Utility class for YOLO3D.
    KITTIObject holds a single KITTI-format detection row, i.e. the same fields as
    detectionInfo plus a trailing detection score.
    """
def __init__(self, line = np.zeros(16)):
self.name = line[0]
self.truncation = float(line[1])
self.occlusion = int(line[2])
# local orientation = alpha + pi/2
self.alpha = float(line[3])
# in pixel coordinate
self.xmin = float(line[4])
self.ymin = float(line[5])
self.xmax = float(line[6])
self.ymax = float(line[7])
        # height, width, length in object coordinate, meter
self.h = float(line[8])
self.w = float(line[9])
self.l = float(line[10])
# x, y, z in camera coordinate, meter
self.tx = float(line[11])
self.ty = float(line[12])
self.tz = float(line[13])
# global orientation [-pi, pi]
self.rot_global = float(line[14])
# score
self.score = float(line[15])
def member_to_list(self):
output_line = []
for name, value in vars(self).items():
output_line.append(value)
return output_line
def box3d_candidate(self, rot_local, soft_range):
x_corners = [self.l, self.l, self.l, self.l, 0, 0, 0, 0]
y_corners = [self.h, 0, self.h, 0, self.h, 0, self.h, 0]
z_corners = [0, 0, self.w, self.w, self.w, self.w, 0, 0]
x_corners = [i - self.l / 2 for i in x_corners]
y_corners = [i - self.h / 2 for i in y_corners]
z_corners = [i - self.w / 2 for i in z_corners]
corners_3d = np.transpose(np.array([x_corners, y_corners, z_corners]))
point1 = corners_3d[0, :]
point2 = corners_3d[1, :]
point3 = corners_3d[2, :]
point4 = corners_3d[3, :]
point5 = corners_3d[6, :]
point6 = corners_3d[7, :]
point7 = corners_3d[4, :]
point8 = corners_3d[5, :]
# set up projection relation based on local orientation
xmin_candi = xmax_candi = ymin_candi = ymax_candi = 0
if 0 < rot_local < np.pi / 2:
xmin_candi = point8
xmax_candi = point2
ymin_candi = point2
ymax_candi = point5
if np.pi / 2 <= rot_local <= np.pi:
xmin_candi = point6
xmax_candi = point4
ymin_candi = point4
ymax_candi = point1
if np.pi < rot_local <= 3 / 2 * np.pi:
xmin_candi = point2
xmax_candi = point8
ymin_candi = point8
ymax_candi = point1
if 3 * np.pi / 2 <= rot_local <= 2 * np.pi:
xmin_candi = point4
xmax_candi = point6
ymin_candi = point6
ymax_candi = point5
# soft constraint
div = soft_range * np.pi / 180
if 0 < rot_local < div or 2*np.pi-div < rot_local < 2*np.pi:
xmin_candi = point8
xmax_candi = point6
ymin_candi = point6
ymax_candi = point5
if np.pi - div < rot_local < np.pi + div:
xmin_candi = point2
xmax_candi = point4
ymin_candi = point8
ymax_candi = point1
return xmin_candi, xmax_candi, ymin_candi, ymax_candi
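# Minimal usage sketch for KITTIObject (illustrative values only): the fields follow the
# KITTI label layout plus a trailing detection score, i.e.
# [type, truncated, occluded, alpha, xmin, ymin, xmax, ymax, h, w, l, x, y, z, ry, score].
def _demo_kitti_object():
    line = ["Car", 0.0, 0, -1.57, 614.0, 181.0, 727.0, 284.0,
            1.57, 1.73, 4.15, 1.0, 1.75, 13.22, -1.62, 0.95]
    obj = KITTIObject(line)
    return obj.member_to_list()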
if __name__ == "__main__":
pass | 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/Math.py | import numpy as np
# using this math: https://en.wikipedia.org/wiki/Rotation_matrix
def rotation_matrix(yaw, pitch=0, roll=0):
tx = roll
ty = yaw
tz = pitch
Rx = np.array([[1,0,0], [0, np.cos(tx), -np.sin(tx)], [0, np.sin(tx), np.cos(tx)]])
Ry = np.array([[np.cos(ty), 0, np.sin(ty)], [0, 1, 0], [-np.sin(ty), 0, np.cos(ty)]])
Rz = np.array([[np.cos(tz), -np.sin(tz), 0], [np.sin(tz), np.cos(tz), 0], [0,0,1]])
    return Ry.reshape([3, 3])  # only the yaw rotation is used downstream; Rx/Rz are kept for reference
# return np.dot(np.dot(Rz,Ry), Rx)
# option to rotate and shift (for label info)
def create_corners(dimension, location=None, R=None):
dx = dimension[2] / 2
dy = dimension[0] / 2
dz = dimension[1] / 2
x_corners = []
y_corners = []
z_corners = []
for i in [1, -1]:
for j in [1,-1]:
for k in [1,-1]:
x_corners.append(dx*i)
y_corners.append(dy*j)
z_corners.append(dz*k)
corners = [x_corners, y_corners, z_corners]
# rotate if R is passed in
if R is not None:
corners = np.dot(R, corners)
# shift if location is passed in
if location is not None:
for i,loc in enumerate(location):
corners[i,:] = corners[i,:] + loc
final_corners = []
for i in range(8):
final_corners.append([corners[0][i], corners[1][i], corners[2][i]])
return final_corners
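# Minimal usage sketch for create_corners (illustrative numbers): dimension is [h, w, l]
# as in the KITTI labels, location is the box center in camera coordinates, and R is a
# yaw-only rotation as produced by rotation_matrix above.
def _demo_create_corners():
    dims = [1.5, 1.7, 4.0]              # h, w, l in meters
    R = rotation_matrix(np.pi / 4)
    corners = create_corners(dims, location=[0.0, 1.6, 10.0], R=R)
    return corners                      # list of 8 [x, y, z] corner points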
# this is based on the paper. Math!
# calib is a 3x4 matrix, box_2d is [(xmin, ymin), (xmax, ymax)]
# Math help: http://ywpkwon.github.io/pdf/bbox3d-study.pdf
def calc_location(dimension, proj_matrix, box_2d, alpha, theta_ray):
#global orientation
orient = alpha + theta_ray
R = rotation_matrix(orient)
# format 2d corners
try:
xmin = box_2d[0][0]
ymin = box_2d[0][1]
xmax = box_2d[1][0]
ymax = box_2d[1][1]
    except (TypeError, IndexError):  # box_2d passed as a flat [xmin, ymin, xmax, ymax]
xmin = box_2d[0]
ymin = box_2d[1]
xmax = box_2d[2]
ymax = box_2d[3]
# left top right bottom
box_corners = [xmin, ymin, xmax, ymax]
# get the point constraints
constraints = []
left_constraints = []
right_constraints = []
top_constraints = []
bottom_constraints = []
# using a different coord system
dx = dimension[2] / 2
dy = dimension[0] / 2
dz = dimension[1] / 2
# below is very much based on trial and error
# based on the relative angle, a different configuration occurs
# negative is back of car, positive is front
left_mult = 1
right_mult = -1
# about straight on but opposite way
if alpha < np.deg2rad(92) and alpha > np.deg2rad(88):
left_mult = 1
right_mult = 1
# about straight on and same way
elif alpha < np.deg2rad(-88) and alpha > np.deg2rad(-92):
left_mult = -1
right_mult = -1
    # this works but doesn't make much sense
elif alpha < np.deg2rad(90) and alpha > -np.deg2rad(90):
left_mult = -1
right_mult = 1
    # if the car is facing the opposite way, switch left and right
switch_mult = -1
if alpha > 0:
switch_mult = 1
    # left and right could either be the front of the car or the back of the car
    # be careful to use left and right based on the image, not the actual car's left and right
for i in (-1,1):
left_constraints.append([left_mult * dx, i*dy, -switch_mult * dz])
for i in (-1,1):
right_constraints.append([right_mult * dx, i*dy, switch_mult * dz])
# top and bottom are easy, just the top and bottom of car
for i in (-1,1):
for j in (-1,1):
top_constraints.append([i*dx, -dy, j*dz])
for i in (-1,1):
for j in (-1,1):
bottom_constraints.append([i*dx, dy, j*dz])
# now, 64 combinations
for left in left_constraints:
for top in top_constraints:
for right in right_constraints:
for bottom in bottom_constraints:
constraints.append([left, top, right, bottom])
# filter out the ones with repeats
constraints = filter(lambda x: len(x) == len(set(tuple(i) for i in x)), constraints)
# create pre M (the term with I and the R*X)
pre_M = np.zeros([4,4])
# 1's down diagonal
for i in range(0,4):
pre_M[i][i] = 1
best_loc = None
best_error = [1e09]
best_X = None
# loop through each possible constraint, hold on to the best guess
# constraint will be 64 sets of 4 corners
count = 0
for constraint in constraints:
# each corner
Xa = constraint[0]
Xb = constraint[1]
Xc = constraint[2]
Xd = constraint[3]
X_array = [Xa, Xb, Xc, Xd]
# M: all 1's down diagonal, and upper 3x1 is Rotation_matrix * [x, y, z]
Ma = np.copy(pre_M)
Mb = np.copy(pre_M)
Mc = np.copy(pre_M)
Md = np.copy(pre_M)
M_array = [Ma, Mb, Mc, Md]
# create A, b
        A = np.zeros([4, 3], dtype=np.float64)  # np.float is deprecated/removed in recent NumPy
b = np.zeros([4,1])
        indices = [0, 1, 0, 1]
        for row, index in enumerate(indices):
X = X_array[row]
M = M_array[row]
# create M for corner Xx
RX = np.dot(R, X)
M[:3,3] = RX.reshape(3)
M = np.dot(proj_matrix, M)
A[row, :] = M[index,:3] - box_corners[row] * M[2,:3]
b[row] = box_corners[row] * M[2,3] - M[index,3]
        # solve with least squares; the system is overdetermined, so expect some residual error
loc, error, rank, s = np.linalg.lstsq(A, b, rcond=None)
# found a better estimation
if error < best_error:
count += 1 # for debugging
best_loc = loc
best_error = error
best_X = X_array
# return best_loc, [left_constraints, right_constraints] # for debugging
best_loc = [best_loc[0][0], best_loc[1][0], best_loc[2][0]]
return best_loc, best_X
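# Minimal usage sketch for calc_location (all numbers illustrative, roughly KITTI-like):
# given predicted dimensions, the 3x4 camera projection matrix, the 2D box, the local
# angle alpha and the viewing-ray angle theta_ray, it solves for the camera-frame location.
def _demo_calc_location():
    proj_matrix = np.array([[721.5, 0.0, 609.5, 44.9],
                            [0.0, 721.5, 172.9, 0.2],
                            [0.0, 0.0, 1.0, 0.003]])
    dims = [1.5, 1.7, 4.0]                      # h, w, l
    box_2d = [(614, 181), (727, 284)]           # (xmin, ymin), (xmax, ymax)
    loc, _ = calc_location(dims, proj_matrix, box_2d, alpha=-0.2, theta_ray=0.1)
    return loc                                   # [x, y, z] in camera coordinates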
"""
Code for generating new plot with bev and 3dbbox
source: https://github.com/lzccccc/3d-bounding-box-estimation-for-autonomous-driving
"""
def get_new_alpha(alpha):
"""
change the range of orientation from [-pi, pi] to [0, 2pi]
:param alpha: original orientation in KITTI
:return: new alpha
"""
new_alpha = float(alpha) + np.pi / 2.
if new_alpha < 0:
new_alpha = new_alpha + 2. * np.pi
# make sure angle lies in [0, 2pi]
new_alpha = new_alpha - int(new_alpha / (2. * np.pi)) * (2. * np.pi)
return new_alpha
def recover_angle(bin_anchor, bin_confidence, bin_num):
# select anchor from bins
max_anc = np.argmax(bin_confidence)
anchors = bin_anchor[max_anc]
# compute the angle offset
if anchors[1] > 0:
angle_offset = np.arccos(anchors[0])
else:
angle_offset = -np.arccos(anchors[0])
# add the angle offset to the center ray of each bin to obtain the local orientation
wedge = 2 * np.pi / bin_num
angle = angle_offset + max_anc * wedge
    # wrap the angle into [0, 2*pi)
    angle_l = angle % (2 * np.pi)
    # shift back to the ray-relative range [-pi, pi]
    angle = angle_l + wedge / 2 - np.pi
if angle > np.pi:
angle -= 2 * np.pi
angle = round(angle, 2)
return angle
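# Minimal decoding sketch for recover_angle (illustrative values): with 2 bins the network
# outputs a (cos, sin) pair per bin plus a per-bin confidence; the most confident bin is
# decoded back to a local orientation in [-pi, pi].
def _demo_recover_angle():
    bin_anchor = np.array([[np.cos(0.3), np.sin(0.3)],
                           [np.cos(-0.2), np.sin(-0.2)]])
    bin_confidence = np.array([0.8, 0.2])
    return recover_angle(bin_anchor, bin_confidence, bin_num=2)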
def compute_orientaion(P2, obj):
x = (obj.xmax + obj.xmin) / 2
# compute camera orientation
u_distance = x - P2[0, 2]
focal_length = P2[0, 0]
rot_ray = np.arctan(u_distance / focal_length)
# global = alpha + ray
rot_global = obj.alpha + rot_ray
# local orientation, [0, 2 * pi]
# rot_local = obj.alpha + np.pi / 2
rot_local = get_new_alpha(obj.alpha)
rot_global = round(rot_global, 2)
return rot_global, rot_local
def translation_constraints(P2, obj, rot_local):
bbox = [obj.xmin, obj.ymin, obj.xmax, obj.ymax]
# rotation matrix
R = np.array([[ np.cos(obj.rot_global), 0, np.sin(obj.rot_global)],
[ 0, 1, 0 ],
[-np.sin(obj.rot_global), 0, np.cos(obj.rot_global)]])
A = np.zeros((4, 3))
b = np.zeros((4, 1))
I = np.identity(3)
    # object coordinate T, simply split into x, y, z
    # bug1: h div 2
xmin_candi, xmax_candi, ymin_candi, ymax_candi = obj.box3d_candidate(rot_local, soft_range=8)
X = np.bmat([xmin_candi, xmax_candi,
ymin_candi, ymax_candi])
# X: [x, y, z] in object coordinate
X = X.reshape(4,3).T
# construct equation (3, 4)
# object four point in bev
for i in range(4):
        # X[:, i] is the same as Ti
        # matrice = [R T] * Xo
matrice = np.bmat([[I, np.matmul(R, X[:,i])], [np.zeros((1,3)), np.ones((1,1))]])
# M = K * [R T] * Xo
M = np.matmul(P2, matrice)
if i % 2 == 0:
A[i, :] = M[0, 0:3] - bbox[i] * M[2, 0:3]
b[i, :] = M[2, 3] * bbox[i] - M[0, 3]
else:
A[i, :] = M[1, 0:3] - bbox[i] * M[2, 0:3]
b[i, :] = M[2, 3] * bbox[i] - M[1, 3]
# solve x, y, z, using method of least square
Tran = np.matmul(np.linalg.pinv(A), b)
tx, ty, tz = [float(np.around(tran, 2)) for tran in Tran]
return tx, ty, tz
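# Minimal sketch tying compute_orientaion and translation_constraints together
# (illustrative only): obj is a detectionInfo/KITTIObject-style instance carrying a 2D box,
# dimensions and predicted alpha, and P2 is the 3x4 camera projection matrix.
def _demo_recover_translation(obj, P2):
    rot_global, rot_local = compute_orientaion(P2, obj)
    obj.rot_global = rot_global
    tx, ty, tz = translation_constraints(P2, obj, rot_local)
    return tx, ty, tz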
class detectionInfo(object):
def __init__(self, line):
self.name = line[0]
self.truncation = float(line[1])
self.occlusion = int(line[2])
# local orientation = alpha + pi/2
self.alpha = float(line[3])
# in pixel coordinate
self.xmin = float(line[4])
self.ymin = float(line[5])
self.xmax = float(line[6])
self.ymax = float(line[7])
        # height, width, length in object coordinate, meter
self.h = float(line[8])
self.w = float(line[9])
self.l = float(line[10])
# x, y, z in camera coordinate, meter
self.tx = float(line[11])
self.ty = float(line[12])
self.tz = float(line[13])
# global orientation [-pi, pi]
self.rot_global = float(line[14])
def member_to_list(self):
output_line = []
for name, value in vars(self).items():
output_line.append(value)
return output_line
def box3d_candidate(self, rot_local, soft_range):
x_corners = [self.l, self.l, self.l, self.l, 0, 0, 0, 0]
y_corners = [self.h, 0, self.h, 0, self.h, 0, self.h, 0]
z_corners = [0, 0, self.w, self.w, self.w, self.w, 0, 0]
x_corners = [i - self.l / 2 for i in x_corners]
y_corners = [i - self.h / 2 for i in y_corners]
z_corners = [i - self.w / 2 for i in z_corners]
corners_3d = np.transpose(np.array([x_corners, y_corners, z_corners]))
point1 = corners_3d[0, :]
point2 = corners_3d[1, :]
point3 = corners_3d[2, :]
point4 = corners_3d[3, :]
point5 = corners_3d[6, :]
point6 = corners_3d[7, :]
point7 = corners_3d[4, :]
point8 = corners_3d[5, :]
# set up projection relation based on local orientation
xmin_candi = xmax_candi = ymin_candi = ymax_candi = 0
if 0 < rot_local < np.pi / 2:
xmin_candi = point8
xmax_candi = point2
ymin_candi = point2
ymax_candi = point5
if np.pi / 2 <= rot_local <= np.pi:
xmin_candi = point6
xmax_candi = point4
ymin_candi = point4
ymax_candi = point1
if np.pi < rot_local <= 3 / 2 * np.pi:
xmin_candi = point2
xmax_candi = point8
ymin_candi = point8
ymax_candi = point1
if 3 * np.pi / 2 <= rot_local <= 2 * np.pi:
xmin_candi = point4
xmax_candi = point6
ymin_candi = point6
ymax_candi = point5
# soft constraint
div = soft_range * np.pi / 180
if 0 < rot_local < div or 2*np.pi-div < rot_local < 2*np.pi:
xmin_candi = point8
xmax_candi = point6
ymin_candi = point6
ymax_candi = point5
if np.pi - div < rot_local < np.pi + div:
xmin_candi = point2
xmax_candi = point4
ymin_candi = point8
ymax_candi = point1
return xmin_candi, xmax_candi, ymin_candi, ymax_candi | 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/eval.py | import io as sysio
import time
# import numba
import numpy as np
from scipy.interpolate import interp1d
from tqdm import tqdm
from src.utils.rotate_iou import rotate_iou_gpu_eval
def get_mAP(prec):
sums = 0
for i in range(0, len(prec), 4):
sums += prec[i]
return sums / 11 * 100
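# Worked sketch for get_mAP (illustrative): with the usual 41 recall sample points,
# stepping by 4 picks the 11 precisions at recall 0.0, 0.1, ..., 1.0, so the sum is
# divided by 11 and scaled to a percentage.
def _demo_get_mAP():
    prec = np.linspace(1.0, 0.5, 41)    # made-up, monotonically decreasing precision curve
    return get_mAP(prec)                # 75.0 for this example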
# @numba.jit
def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41):
scores.sort()
scores = scores[::-1]
current_recall = 0
thresholds = []
for i, score in enumerate(scores):
l_recall = (i + 1) / num_gt
if i < (len(scores) - 1):
r_recall = (i + 2) / num_gt
else:
r_recall = l_recall
if (((r_recall - current_recall) < (current_recall - l_recall))
and (i < (len(scores) - 1))):
continue
# recall = l_recall
thresholds.append(score)
current_recall += 1 / (num_sample_pts - 1.0)
# print(len(thresholds), len(scores), num_gt)
return thresholds
def clean_data(gt_anno, dt_anno, current_class, difficulty):
CLASS_NAMES = [
"car", "truck", "van", "bus", "pedestrian", "cyclist", "trafficcone", "unknown"
]
MIN_HEIGHT = [40, 25, 25]
MAX_OCCLUSION = [0, 1, 2]
MAX_TRUNCATION = [0.15, 0.3, 0.5]
dc_bboxes, ignored_gt, ignored_dt = [], [], []
current_cls_name = CLASS_NAMES[current_class].lower()
num_gt = len(gt_anno["name"])
num_dt = len(dt_anno["name"])
num_valid_gt = 0
for i in range(num_gt):
bbox = gt_anno["bbox"][i]
gt_name = gt_anno["name"][i].lower()
height = bbox[3] - bbox[1]
valid_class = -1
if (gt_name == current_cls_name):
valid_class = 1
elif (current_cls_name == "Pedestrian".lower()
and "Person_sitting".lower() == gt_name):
valid_class = 0
elif (current_cls_name == "Car".lower() and "Van".lower() == gt_name):
valid_class = 0
else:
valid_class = -1
ignore = False
if ((gt_anno["occluded"][i] > MAX_OCCLUSION[difficulty])
or (gt_anno["truncated"][i] > MAX_TRUNCATION[difficulty])
or (height <= MIN_HEIGHT[difficulty])):
# if gt_anno["difficulty"][i] > difficulty or gt_anno["difficulty"][i] == -1:
ignore = True
if valid_class == 1 and not ignore:
ignored_gt.append(0)
num_valid_gt += 1
elif (valid_class == 0 or (ignore and (valid_class == 1))):
ignored_gt.append(1)
else:
ignored_gt.append(-1)
# for i in range(num_gt):
if gt_anno["name"][i] == "DontCare":
dc_bboxes.append(gt_anno["bbox"][i])
for i in range(num_dt):
if (dt_anno["name"][i].lower() == current_cls_name):
valid_class = 1
else:
valid_class = -1
height = abs(dt_anno["bbox"][i, 3] - dt_anno["bbox"][i, 1])
if height < MIN_HEIGHT[difficulty]:
ignored_dt.append(1)
elif valid_class == 1:
ignored_dt.append(0)
else:
ignored_dt.append(-1)
return num_valid_gt, ignored_gt, ignored_dt, dc_bboxes
# @numba.jit(nopython=True)
def image_box_overlap(boxes, query_boxes, criterion=-1):
N = boxes.shape[0]
K = query_boxes.shape[0]
overlaps = np.zeros((N, K), dtype=boxes.dtype)
for k in range(K):
qbox_area = ((query_boxes[k, 2] - query_boxes[k, 0]) *
(query_boxes[k, 3] - query_boxes[k, 1]))
for n in range(N):
iw = (min(boxes[n, 2], query_boxes[k, 2]) - max(
boxes[n, 0], query_boxes[k, 0]))
if iw > 0:
ih = (min(boxes[n, 3], query_boxes[k, 3]) - max(
boxes[n, 1], query_boxes[k, 1]))
if ih > 0:
if criterion == -1:
ua = (
(boxes[n, 2] - boxes[n, 0]) *
(boxes[n, 3] - boxes[n, 1]) + qbox_area - iw * ih)
elif criterion == 0:
ua = ((boxes[n, 2] - boxes[n, 0]) *
(boxes[n, 3] - boxes[n, 1]))
elif criterion == 1:
ua = qbox_area
else:
ua = 1.0
overlaps[n, k] = iw * ih / ua
return overlaps
def bev_box_overlap(boxes, qboxes, criterion=-1):
riou = rotate_iou_gpu_eval(boxes, qboxes, criterion)
return riou
# @numba.jit(nopython=True, parallel=True)
def d3_box_overlap_kernel(boxes,
qboxes,
rinc,
criterion=-1,
z_axis=1,
z_center=1.0):
"""
z_axis: the z (height) axis.
z_center: unified z (height) center of box.
"""
N, K = boxes.shape[0], qboxes.shape[0]
for i in range(N):
for j in range(K):
if rinc[i, j] > 0:
min_z = min(
boxes[i, z_axis] + boxes[i, z_axis + 3] * (1 - z_center),
qboxes[j, z_axis] + qboxes[j, z_axis + 3] * (1 - z_center))
max_z = max(
boxes[i, z_axis] - boxes[i, z_axis + 3] * z_center,
qboxes[j, z_axis] - qboxes[j, z_axis + 3] * z_center)
iw = min_z - max_z
if iw > 0:
area1 = boxes[i, 3] * boxes[i, 4] * boxes[i, 5]
area2 = qboxes[j, 3] * qboxes[j, 4] * qboxes[j, 5]
inc = iw * rinc[i, j]
if criterion == -1:
ua = (area1 + area2 - inc)
elif criterion == 0:
ua = area1
elif criterion == 1:
ua = area2
else:
ua = 1.0
rinc[i, j] = inc / ua
else:
rinc[i, j] = 0.0
def d3_box_overlap(boxes, qboxes, criterion=-1, z_axis=1, z_center=1.0):
"""kitti camera format z_axis=1.
"""
bev_axes = list(range(7))
bev_axes.pop(z_axis + 3)
bev_axes.pop(z_axis)
rinc = rotate_iou_gpu_eval(boxes[:, bev_axes], qboxes[:, bev_axes], 2)
d3_box_overlap_kernel(boxes, qboxes, rinc, criterion, z_axis, z_center)
return rinc
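# Minimal usage sketch for d3_box_overlap (illustrative; needs a CUDA device because the
# rotated-IoU part runs through numba.cuda): each row holds the 3D location, the three box
# dimensions and rotation_y, in the same column order that calculate_iou_partly assembles below.
def _demo_d3_box_overlap():
    box = np.array([[1.0, 1.75, 13.2, 1.5, 1.7, 4.0, -1.6]], dtype=np.float64)
    return d3_box_overlap(box, box.copy())   # IoU of a box with itself -> ~1.0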
# @numba.jit(nopython=True)
def compute_statistics_jit(overlaps,
gt_datas,
dt_datas,
ignored_gt,
ignored_det,
dc_bboxes,
metric,
min_overlap,
thresh=0,
compute_fp=False,
compute_aos=False):
det_size = dt_datas.shape[0]
gt_size = gt_datas.shape[0]
dt_scores = dt_datas[:, -1]
dt_alphas = dt_datas[:, 4]
gt_alphas = gt_datas[:, 4]
dt_bboxes = dt_datas[:, :4]
# gt_bboxes = gt_datas[:, :4]
assigned_detection = [False] * det_size
ignored_threshold = [False] * det_size
if compute_fp:
for i in range(det_size):
if (dt_scores[i] < thresh):
ignored_threshold[i] = True
NO_DETECTION = -10000000
tp, fp, fn, similarity = 0, 0, 0, 0
# thresholds = [0.0]
# delta = [0.0]
thresholds = np.zeros((gt_size, ))
thresh_idx = 0
delta = np.zeros((gt_size, ))
delta_idx = 0
for i in range(gt_size):
if ignored_gt[i] == -1:
continue
det_idx = -1
valid_detection = NO_DETECTION
max_overlap = 0
assigned_ignored_det = False
for j in range(det_size):
if (ignored_det[j] == -1):
continue
if (assigned_detection[j]):
continue
if (ignored_threshold[j]):
continue
overlap = overlaps[j, i]
dt_score = dt_scores[j]
if (not compute_fp and (overlap > min_overlap)
and dt_score > valid_detection):
det_idx = j
valid_detection = dt_score
elif (compute_fp and (overlap > min_overlap)
and (overlap > max_overlap or assigned_ignored_det)
and ignored_det[j] == 0):
max_overlap = overlap
det_idx = j
valid_detection = 1
assigned_ignored_det = False
elif (compute_fp and (overlap > min_overlap)
and (valid_detection == NO_DETECTION)
and ignored_det[j] == 1):
det_idx = j
valid_detection = 1
assigned_ignored_det = True
if (valid_detection == NO_DETECTION) and ignored_gt[i] == 0:
fn += 1
elif ((valid_detection != NO_DETECTION)
and (ignored_gt[i] == 1 or ignored_det[det_idx] == 1)):
assigned_detection[det_idx] = True
elif valid_detection != NO_DETECTION:
            # only a tp adds a threshold.
tp += 1
# thresholds.append(dt_scores[det_idx])
thresholds[thresh_idx] = dt_scores[det_idx]
thresh_idx += 1
if compute_aos:
# delta.append(gt_alphas[i] - dt_alphas[det_idx])
delta[delta_idx] = gt_alphas[i] - dt_alphas[det_idx]
delta_idx += 1
assigned_detection[det_idx] = True
if compute_fp:
for i in range(det_size):
if (not (assigned_detection[i] or ignored_det[i] == -1
or ignored_det[i] == 1 or ignored_threshold[i])):
fp += 1
nstuff = 0
if metric == 0:
overlaps_dt_dc = image_box_overlap(dt_bboxes, dc_bboxes, 0)
for i in range(dc_bboxes.shape[0]):
for j in range(det_size):
if (assigned_detection[j]):
continue
if (ignored_det[j] == -1 or ignored_det[j] == 1):
continue
if (ignored_threshold[j]):
continue
if overlaps_dt_dc[j, i] > min_overlap:
assigned_detection[j] = True
nstuff += 1
fp -= nstuff
if compute_aos:
tmp = np.zeros((fp + delta_idx, ))
# tmp = [0] * fp
for i in range(delta_idx):
tmp[i + fp] = (1.0 + np.cos(delta[i])) / 2.0
# tmp.append((1.0 + np.cos(delta[i])) / 2.0)
# assert len(tmp) == fp + tp
# assert len(delta) == tp
if tp > 0 or fp > 0:
similarity = np.sum(tmp)
else:
similarity = -1
return tp, fp, fn, similarity, thresholds[:thresh_idx]
def get_split_parts(num, num_part):
same_part = num // num_part
remain_num = num % num_part
if remain_num == 0:
return [same_part] * num_part
else:
return [same_part] * num_part + [remain_num]
# @numba.jit(nopython=True)
def fused_compute_statistics(overlaps,
pr,
gt_nums,
dt_nums,
dc_nums,
gt_datas,
dt_datas,
dontcares,
ignored_gts,
ignored_dets,
metric,
min_overlap,
thresholds,
compute_aos=False):
gt_num = 0
dt_num = 0
dc_num = 0
for i in range(gt_nums.shape[0]):
for t, thresh in enumerate(thresholds):
overlap = overlaps[dt_num:dt_num + dt_nums[i], gt_num:gt_num +
gt_nums[i]]
gt_data = gt_datas[gt_num:gt_num + gt_nums[i]]
dt_data = dt_datas[dt_num:dt_num + dt_nums[i]]
ignored_gt = ignored_gts[gt_num:gt_num + gt_nums[i]]
ignored_det = ignored_dets[dt_num:dt_num + dt_nums[i]]
dontcare = dontcares[dc_num:dc_num + dc_nums[i]]
tp, fp, fn, similarity, _ = compute_statistics_jit(
overlap,
gt_data,
dt_data,
ignored_gt,
ignored_det,
dontcare,
metric,
min_overlap=min_overlap,
thresh=thresh,
compute_fp=True,
compute_aos=compute_aos)
pr[t, 0] += tp
pr[t, 1] += fp
pr[t, 2] += fn
if similarity != -1:
pr[t, 3] += similarity
gt_num += gt_nums[i]
dt_num += dt_nums[i]
dc_num += dc_nums[i]
def calculate_iou_partly(gt_annos,
dt_annos,
metric,
num_parts=1,
z_axis=1,
z_center=1.0):
"""fast iou algorithm. this function can be used independently to
do result analysis.
Args:
        gt_annos: list of dicts, must come from get_label_annos() in kitti_common.py
        dt_annos: list of dicts, must come from get_label_annos() in kitti_common.py
        metric: eval type. 0: bbox, 1: bev, 2: 3d
        num_parts: int. a parameter for the fast calculation algorithm
        z_axis: height axis. kitti camera uses 1, lidar uses 2.
"""
assert len(gt_annos) == len(dt_annos)
total_dt_num = np.stack([len(a["name"]) for a in dt_annos], 0)
total_gt_num = np.stack([len(a["name"]) for a in gt_annos], 0)
num_examples = len(gt_annos)
split_parts = get_split_parts(num_examples, num_parts)
parted_overlaps = []
example_idx = 0
bev_axes = list(range(3))
bev_axes.pop(z_axis)
for num_part in split_parts:
gt_annos_part = gt_annos[example_idx:example_idx + num_part]
dt_annos_part = dt_annos[example_idx:example_idx + num_part]
if metric == 0:
gt_boxes = np.concatenate([a["bbox"] for a in gt_annos_part], 0)
dt_boxes = np.concatenate([a["bbox"] for a in dt_annos_part], 0)
overlap_part = image_box_overlap(gt_boxes, dt_boxes)
elif metric == 1:
loc = np.concatenate(
[a["location"][:, bev_axes] for a in gt_annos_part], 0)
dims = np.concatenate(
[a["dimensions"][:, bev_axes] for a in gt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in gt_annos_part], 0)
gt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
loc = np.concatenate(
[a["location"][:, bev_axes] for a in dt_annos_part], 0)
dims = np.concatenate(
[a["dimensions"][:, bev_axes] for a in dt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in dt_annos_part], 0)
dt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
overlap_part = bev_box_overlap(gt_boxes,
dt_boxes).astype(np.float64)
elif metric == 2:
loc = np.concatenate([a["location"] for a in gt_annos_part], 0)
dims = np.concatenate([a["dimensions"] for a in gt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in gt_annos_part], 0)
gt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
loc = np.concatenate([a["location"] for a in dt_annos_part], 0)
dims = np.concatenate([a["dimensions"] for a in dt_annos_part], 0)
rots = np.concatenate([a["rotation_y"] for a in dt_annos_part], 0)
dt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]],
axis=1)
overlap_part = d3_box_overlap(
gt_boxes, dt_boxes, z_axis=z_axis,
z_center=z_center).astype(np.float64)
else:
raise ValueError("unknown metric")
parted_overlaps.append(overlap_part)
example_idx += num_part
overlaps = []
example_idx = 0
for j, num_part in enumerate(split_parts):
gt_annos_part = gt_annos[example_idx:example_idx + num_part]
dt_annos_part = dt_annos[example_idx:example_idx + num_part]
gt_num_idx, dt_num_idx = 0, 0
for i in range(num_part):
gt_box_num = total_gt_num[example_idx + i]
dt_box_num = total_dt_num[example_idx + i]
overlaps.append(
parted_overlaps[j][gt_num_idx:gt_num_idx +
gt_box_num, dt_num_idx:dt_num_idx +
dt_box_num])
gt_num_idx += gt_box_num
dt_num_idx += dt_box_num
example_idx += num_part
return overlaps, parted_overlaps, total_gt_num, total_dt_num
def _prepare_data(gt_annos, dt_annos, current_class, difficulty):
gt_datas_list = []
dt_datas_list = []
total_dc_num = []
ignored_gts, ignored_dets, dontcares = [], [], []
total_num_valid_gt = 0
for i in range(len(gt_annos)):
rets = clean_data(gt_annos[i], dt_annos[i], current_class, difficulty)
num_valid_gt, ignored_gt, ignored_det, dc_bboxes = rets
ignored_gts.append(np.array(ignored_gt, dtype=np.int64))
ignored_dets.append(np.array(ignored_det, dtype=np.int64))
if len(dc_bboxes) == 0:
dc_bboxes = np.zeros((0, 4)).astype(np.float64)
else:
dc_bboxes = np.stack(dc_bboxes, 0).astype(np.float64)
total_dc_num.append(dc_bboxes.shape[0])
dontcares.append(dc_bboxes)
total_num_valid_gt += num_valid_gt
gt_datas = np.concatenate(
[gt_annos[i]["bbox"], gt_annos[i]["alpha"][..., np.newaxis]], 1)
dt_datas = np.concatenate([
dt_annos[i]["bbox"], dt_annos[i]["alpha"][..., np.newaxis],
dt_annos[i]["score"][..., np.newaxis]
], 1)
gt_datas_list.append(gt_datas)
dt_datas_list.append(dt_datas)
total_dc_num = np.stack(total_dc_num, axis=0)
return (gt_datas_list, dt_datas_list, ignored_gts, ignored_dets, dontcares,
total_dc_num, total_num_valid_gt)
def eval_class(gt_annos,
dt_annos,
current_classes,
difficultys,
metric,
min_overlaps,
compute_aos=False,
z_axis=1,
z_center=1.0,
num_parts=1):
"""Kitti eval. support 2d/bev/3d/aos eval. support 0.5:0.05:0.95 coco AP.
Args:
        gt_annos: list of dicts, must come from get_label_annos() in kitti_common.py
        dt_annos: list of dicts, must come from get_label_annos() in kitti_common.py
        current_classes: list of int class ids to evaluate
        difficultys: list of int difficulty levels. 0: easy, 1: normal, 2: hard
        metric: eval type. 0: bbox, 1: bev, 2: 3d
        min_overlaps: float array of shape [num_minoverlap, metric, class]
            with the minimum required overlaps, e.g. the official
            [[0.7, 0.5, 0.5], [0.7, 0.5, 0.5], [0.7, 0.5, 0.5]]
        num_parts: int. a parameter for the fast calculation algorithm
    Returns:
        dict with precision, orientation (aos), thresholds and min_overlaps
"""
assert len(gt_annos) == len(dt_annos)
num_examples = len(gt_annos)
split_parts = get_split_parts(num_examples, num_parts)
rets = calculate_iou_partly(
dt_annos,
gt_annos,
metric,
num_parts,
z_axis=z_axis,
z_center=z_center)
overlaps, parted_overlaps, total_dt_num, total_gt_num = rets
N_SAMPLE_PTS = 50
num_minoverlap = len(min_overlaps)
num_class = len(current_classes)
num_difficulty = len(difficultys)
precision = np.zeros(
[num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
recall = np.zeros(
[num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
aos = np.zeros([num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
all_thresholds = np.zeros([num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS])
for m, current_class in enumerate(current_classes):
for l, difficulty in enumerate(difficultys):
rets = _prepare_data(gt_annos, dt_annos, current_class, difficulty)
(gt_datas_list, dt_datas_list, ignored_gts, ignored_dets,
dontcares, total_dc_num, total_num_valid_gt) = rets
for k, min_overlap in enumerate(min_overlaps[:, metric, m]):
thresholdss = []
for i in range(len(gt_annos)):
rets = compute_statistics_jit(
overlaps[i],
gt_datas_list[i],
dt_datas_list[i],
ignored_gts[i],
ignored_dets[i],
dontcares[i],
metric,
min_overlap=min_overlap,
thresh=0.0,
compute_fp=False)
tp, fp, fn, similarity, thresholds = rets
thresholdss += thresholds.tolist()
thresholdss = np.array(thresholdss)
thresholds = get_thresholds(thresholdss, total_num_valid_gt)
thresholds = np.array(thresholds)
all_thresholds[m, l, k, :len(thresholds)] = thresholds
pr = np.zeros([len(thresholds), 4])
idx = 0
for j, num_part in enumerate(split_parts):
gt_datas_part = np.concatenate(
gt_datas_list[idx:idx + num_part], 0)
dt_datas_part = np.concatenate(
dt_datas_list[idx:idx + num_part], 0)
dc_datas_part = np.concatenate(
dontcares[idx:idx + num_part], 0)
ignored_dets_part = np.concatenate(
ignored_dets[idx:idx + num_part], 0)
ignored_gts_part = np.concatenate(
ignored_gts[idx:idx + num_part], 0)
fused_compute_statistics(
parted_overlaps[j],
pr,
total_gt_num[idx:idx + num_part],
total_dt_num[idx:idx + num_part],
total_dc_num[idx:idx + num_part],
gt_datas_part,
dt_datas_part,
dc_datas_part,
ignored_gts_part,
ignored_dets_part,
metric,
min_overlap=min_overlap,
thresholds=thresholds,
compute_aos=compute_aos)
idx += num_part
for i in range(len(thresholds)):
precision[m, l, k, i] = pr[i, 0] / (pr[i, 0] + pr[i, 1])
if compute_aos:
aos[m, l, k, i] = pr[i, 3] / (pr[i, 0] + pr[i, 1])
for i in range(len(thresholds)):
precision[m, l, k, i] = np.max(
precision[m, l, k, i:], axis=-1)
if compute_aos:
aos[m, l, k, i] = np.max(aos[m, l, k, i:], axis=-1)
ret_dict = {
# "recall": recall, # [num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS]
"precision": precision,
"orientation": aos,
"thresholds": all_thresholds,
"min_overlaps": min_overlaps,
}
return ret_dict
def get_mAP_v2(prec):
sums = 0
for i in range(0, prec.shape[-1], 4):
sums = sums + prec[..., i]
return sums / 11 * 100
def do_eval_v2(gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos=False,
difficultys=(0, 1, 2),
z_axis=1,
z_center=1.0):
# min_overlaps: [num_minoverlap, metric, num_class]
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
0,
min_overlaps,
compute_aos,
z_axis=z_axis,
z_center=z_center)
# ret: [num_class, num_diff, num_minoverlap, num_sample_points]
mAP_bbox = get_mAP_v2(ret["precision"])
mAP_aos = None
if compute_aos:
mAP_aos = get_mAP_v2(ret["orientation"])
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
1,
min_overlaps,
z_axis=z_axis,
z_center=z_center)
mAP_bev = get_mAP_v2(ret["precision"])
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
2,
min_overlaps,
z_axis=z_axis,
z_center=z_center)
mAP_3d = get_mAP_v2(ret["precision"])
return mAP_bbox, mAP_bev, mAP_3d, mAP_aos
def do_eval_v3(gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos=False,
difficultys=(0, 1, 2),
z_axis=1,
z_center=1.0):
# min_overlaps: [num_minoverlap, metric, num_class]
types = ["bbox", "bev", "3d"]
metrics = {}
for i in range(3):
ret = eval_class(
gt_annos,
dt_annos,
current_classes,
difficultys,
i,
min_overlaps,
compute_aos,
z_axis=z_axis,
z_center=z_center)
metrics[types[i]] = ret
return metrics
def do_coco_style_eval(gt_annos,
dt_annos,
current_classes,
overlap_ranges,
compute_aos,
z_axis=1,
z_center=1.0):
# overlap_ranges: [range, metric, num_class]
min_overlaps = np.zeros([10, *overlap_ranges.shape[1:]])
for i in range(overlap_ranges.shape[1]):
for j in range(overlap_ranges.shape[2]):
min_overlaps[:, i, j] = np.linspace(*overlap_ranges[:, i, j])
mAP_bbox, mAP_bev, mAP_3d, mAP_aos = do_eval_v2(
gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos,
z_axis=z_axis,
z_center=z_center)
# ret: [num_class, num_diff, num_minoverlap]
mAP_bbox = mAP_bbox.mean(-1)
mAP_bev = mAP_bev.mean(-1)
mAP_3d = mAP_3d.mean(-1)
if mAP_aos is not None:
mAP_aos = mAP_aos.mean(-1)
return mAP_bbox, mAP_bev, mAP_3d, mAP_aos
def print_str(value, *arg, sstream=None):
if sstream is None:
sstream = sysio.StringIO()
sstream.truncate(0)
sstream.seek(0)
print(value, *arg, file=sstream)
return sstream.getvalue()
def get_official_eval_result(gt_annos,
dt_annos,
current_classes,
difficultys=[0, 1, 2],
z_axis=1,
z_center=1.0):
"""
gt_annos and dt_annos must contains following keys:
[bbox, location, dimensions, rotation_y, score]
"""
overlap_mod = np.array([[0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7],
[0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7],
[0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7]])
overlap_easy = np.array([[0.7, 0.5, 0.5, 0.7, 0.5, 0.5, 0.5, 0.5],
[0.5, 0.25, 0.25, 0.5, 0.25, 0.5, 0.5, 0.5],
[0.5, 0.25, 0.25, 0.5, 0.25, 0.5, 0.5, 0.5]])
    min_overlaps = np.stack([overlap_mod, overlap_easy], axis=0)  # [2, 3, 8]
class_to_name = {
0: 'Car',
1: 'Cyclist',
2: 'Truck',
3: 'Van',
4: 'Pedestrian',
5: 'Tram',
}
name_to_class = {v: n for n, v in class_to_name.items()}
if not isinstance(current_classes, (list, tuple)):
current_classes = [current_classes]
current_classes_int = []
for curcls in current_classes:
if isinstance(curcls, str):
current_classes_int.append(name_to_class[curcls])
else:
current_classes_int.append(curcls)
current_classes = current_classes_int
min_overlaps = min_overlaps[:, :, current_classes]
result = ''
# check whether alpha is valid
compute_aos = False
for anno in dt_annos:
if anno['alpha'].shape[0] != 0:
if anno['alpha'][0] != -10:
compute_aos = True
break
metrics = do_eval_v3(
gt_annos,
dt_annos,
current_classes,
min_overlaps,
compute_aos,
difficultys,
z_axis=z_axis,
z_center=z_center)
for j, curcls in enumerate(current_classes):
# mAP threshold array: [num_minoverlap, metric, class]
# mAP result: [num_class, num_diff, num_minoverlap]
for i in tqdm(range(min_overlaps.shape[0])):
mAPbbox = get_mAP_v2(metrics["bbox"]["precision"][j, :, i])
mAPbbox = ", ".join(f"{v:.2f}" for v in mAPbbox)
mAPbev = get_mAP_v2(metrics["bev"]["precision"][j, :, i])
mAPbev = ", ".join(f"{v:.2f}" for v in mAPbev)
mAP3d = get_mAP_v2(metrics["3d"]["precision"][j, :, i])
mAP3d = ", ".join(f"{v:.2f}" for v in mAP3d)
result += print_str(
(f"{class_to_name[curcls]} "
"AP(Average Precision)@{:.2f}, {:.2f}, {:.2f}:".format(*min_overlaps[i, :, j])))
result += print_str(f"bbox AP:{mAPbbox}")
result += print_str(f"bev AP:{mAPbev}")
result += print_str(f"3d AP:{mAP3d}")
if compute_aos:
mAPaos = get_mAP_v2(metrics["bbox"]["orientation"][j, :, i])
mAPaos = ", ".join(f"{v:.2f}" for v in mAPaos)
result += print_str(f"aos AP:{mAPaos}")
return result
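# Minimal usage sketch for get_official_eval_result (illustrative values; needs a CUDA
# device because the bev/3d IoU kernels run through numba.cuda): each element of
# gt_annos / dt_annos is one frame's annotation dict of numpy arrays.
def _demo_official_eval():
    frame = {
        "name": np.array(["Car"]),
        "truncated": np.array([0.0]),
        "occluded": np.array([0]),
        "alpha": np.array([-1.57]),
        "bbox": np.array([[614.0, 181.0, 727.0, 284.0]]),
        "dimensions": np.array([[4.15, 1.57, 1.73]]),
        "location": np.array([[1.0, 1.75, 13.22]]),
        "rotation_y": np.array([-1.62]),
        "score": np.array([1.0]),
    }
    # pretend the detector returned exactly the ground-truth box
    return get_official_eval_result([frame], [dict(frame)], current_classes=["Car"])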
def get_coco_eval_result(gt_annos,
dt_annos,
current_classes,
z_axis=1,
z_center=1.0):
class_to_name = {
0: 'Car',
1: 'Cyclist',
2: 'Truck',
3: 'Van',
4: 'Pedestrian',
5: 'Tram',
}
    # NOTE: this [start, stop, step] mapping is immediately overridden by the
    # [start, stop, num_points] version defined right below it.
    class_to_range = {
0: [0.5, 1.0, 0.05],
1: [0.25, 0.75, 0.05],
2: [0.25, 0.75, 0.05],
3: [0.5, 1.0, 0.05],
4: [0.25, 0.75, 0.05],
5: [0.5, 1.0, 0.05],
}
class_to_range = {
0: [0.5, 0.95, 10],
1: [0.25, 0.7, 10],
2: [0.25, 0.7, 10],
3: [0.5, 0.95, 10],
4: [0.25, 0.7, 10],
5: [0.5, 0.95, 10],
}
name_to_class = {v: n for n, v in class_to_name.items()}
if not isinstance(current_classes, (list, tuple)):
current_classes = [current_classes]
current_classes_int = []
for curcls in current_classes:
if isinstance(curcls, str):
current_classes_int.append(name_to_class[curcls])
else:
current_classes_int.append(curcls)
current_classes = current_classes_int
overlap_ranges = np.zeros([3, 3, len(current_classes)])
for i, curcls in enumerate(current_classes):
overlap_ranges[:, :, i] = np.array(
class_to_range[curcls])[:, np.newaxis]
result = ''
# check whether alpha is valid
compute_aos = False
for anno in dt_annos:
if anno['alpha'].shape[0] != 0:
if anno['alpha'][0] != -10:
compute_aos = True
break
mAPbbox, mAPbev, mAP3d, mAPaos = do_coco_style_eval(
gt_annos,
dt_annos,
current_classes,
overlap_ranges,
compute_aos,
z_axis=z_axis,
z_center=z_center)
for j, curcls in enumerate(current_classes):
# mAP threshold array: [num_minoverlap, metric, class]
# mAP result: [num_class, num_diff, num_minoverlap]
o_range = np.array(class_to_range[curcls])[[0, 2, 1]]
o_range[1] = (o_range[2] - o_range[0]) / (o_range[1] - 1)
result += print_str((f"{class_to_name[curcls]} "
"coco AP@{:.2f}:{:.2f}:{:.2f}:".format(*o_range)))
result += print_str((f"bbox AP:{mAPbbox[j, 0]:.2f}, "
f"{mAPbbox[j, 1]:.2f}, "
f"{mAPbbox[j, 2]:.2f}"))
result += print_str((f"bev AP:{mAPbev[j, 0]:.2f}, "
f"{mAPbev[j, 1]:.2f}, "
f"{mAPbev[j, 2]:.2f}"))
result += print_str((f"3d AP:{mAP3d[j, 0]:.2f}, "
f"{mAP3d[j, 1]:.2f}, "
f"{mAP3d[j, 2]:.2f}"))
if compute_aos:
result += print_str((f"aos AP:{mAPaos[j, 0]:.2f}, "
f"{mAPaos[j, 1]:.2f}, "
f"{mAPaos[j, 2]:.2f}"))
return result | 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/class_averages-L4.txt | {"car": {"count": 15939, "total": [25041.166112000075, 27306.989660000134, 68537.86737500015]}, "pedestrian": {"count": 1793, "total": [3018.555634000002, 974.4651910000001, 847.5994529999997]}, "cyclist": {"count": 116, "total": [169.2123550000001, 54.55659699999999, 187.05904800000002]}, "truck": {"count": 741, "total": [2219.4890700000014, 1835.661420999999, 6906.846059999999]}, "van": {"count": 632, "total": [1366.3955200000005, 1232.9152299999998, 3155.905800000001]}, "trafficcone": {"count": 0, "total": [0.0, 0.0, 0.0]}, "unknown": {"count": 4294, "total": [4924.141288999996, 3907.031903999998, 6185.184788000007]}} | 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/rotate_iou.py | #####################
# Based on https://github.com/hongzhenwang/RRPN-revise
# Licensed under The MIT License
# Author: yanyan, scrin@foxmail.com
#####################
import math
import numba
import numpy as np
from numba import cuda
@numba.jit(nopython=True)
def div_up(m, n):
return m // n + (m % n > 0)
@cuda.jit('(float32[:], float32[:], float32[:])', device=True, inline=True)
def trangle_area(a, b, c):
return ((a[0] - c[0]) * (b[1] - c[1]) - (a[1] - c[1]) *
(b[0] - c[0])) / 2.0
@cuda.jit('(float32[:], int32)', device=True, inline=True)
def area(int_pts, num_of_inter):
area_val = 0.0
for i in range(num_of_inter - 2):
area_val += abs(
trangle_area(int_pts[:2], int_pts[2 * i + 2:2 * i + 4],
int_pts[2 * i + 4:2 * i + 6]))
return area_val
@cuda.jit('(float32[:], int32)', device=True, inline=True)
def sort_vertex_in_convex_polygon(int_pts, num_of_inter):
if num_of_inter > 0:
center = cuda.local.array((2, ), dtype=numba.float32)
center[:] = 0.0
for i in range(num_of_inter):
center[0] += int_pts[2 * i]
center[1] += int_pts[2 * i + 1]
center[0] /= num_of_inter
center[1] /= num_of_inter
v = cuda.local.array((2, ), dtype=numba.float32)
vs = cuda.local.array((16, ), dtype=numba.float32)
for i in range(num_of_inter):
v[0] = int_pts[2 * i] - center[0]
v[1] = int_pts[2 * i + 1] - center[1]
d = math.sqrt(v[0] * v[0] + v[1] * v[1])
v[0] = v[0] / d
v[1] = v[1] / d
if v[1] < 0:
v[0] = -2 - v[0]
vs[i] = v[0]
j = 0
temp = 0
for i in range(1, num_of_inter):
if vs[i - 1] > vs[i]:
temp = vs[i]
tx = int_pts[2 * i]
ty = int_pts[2 * i + 1]
j = i
while j > 0 and vs[j - 1] > temp:
vs[j] = vs[j - 1]
int_pts[j * 2] = int_pts[j * 2 - 2]
int_pts[j * 2 + 1] = int_pts[j * 2 - 1]
j -= 1
vs[j] = temp
int_pts[j * 2] = tx
int_pts[j * 2 + 1] = ty
@cuda.jit(
'(float32[:], float32[:], int32, int32, float32[:])',
device=True,
inline=True)
def line_segment_intersection(pts1, pts2, i, j, temp_pts):
A = cuda.local.array((2, ), dtype=numba.float32)
B = cuda.local.array((2, ), dtype=numba.float32)
C = cuda.local.array((2, ), dtype=numba.float32)
D = cuda.local.array((2, ), dtype=numba.float32)
A[0] = pts1[2 * i]
A[1] = pts1[2 * i + 1]
B[0] = pts1[2 * ((i + 1) % 4)]
B[1] = pts1[2 * ((i + 1) % 4) + 1]
C[0] = pts2[2 * j]
C[1] = pts2[2 * j + 1]
D[0] = pts2[2 * ((j + 1) % 4)]
D[1] = pts2[2 * ((j + 1) % 4) + 1]
BA0 = B[0] - A[0]
BA1 = B[1] - A[1]
DA0 = D[0] - A[0]
CA0 = C[0] - A[0]
DA1 = D[1] - A[1]
CA1 = C[1] - A[1]
acd = DA1 * CA0 > CA1 * DA0
bcd = (D[1] - B[1]) * (C[0] - B[0]) > (C[1] - B[1]) * (D[0] - B[0])
if acd != bcd:
abc = CA1 * BA0 > BA1 * CA0
abd = DA1 * BA0 > BA1 * DA0
if abc != abd:
DC0 = D[0] - C[0]
DC1 = D[1] - C[1]
ABBA = A[0] * B[1] - B[0] * A[1]
CDDC = C[0] * D[1] - D[0] * C[1]
DH = BA1 * DC0 - BA0 * DC1
Dx = ABBA * DC0 - BA0 * CDDC
Dy = ABBA * DC1 - BA1 * CDDC
temp_pts[0] = Dx / DH
temp_pts[1] = Dy / DH
return True
return False
@cuda.jit(
'(float32[:], float32[:], int32, int32, float32[:])',
device=True,
inline=True)
def line_segment_intersection_v1(pts1, pts2, i, j, temp_pts):
a = cuda.local.array((2, ), dtype=numba.float32)
b = cuda.local.array((2, ), dtype=numba.float32)
c = cuda.local.array((2, ), dtype=numba.float32)
d = cuda.local.array((2, ), dtype=numba.float32)
a[0] = pts1[2 * i]
a[1] = pts1[2 * i + 1]
b[0] = pts1[2 * ((i + 1) % 4)]
b[1] = pts1[2 * ((i + 1) % 4) + 1]
c[0] = pts2[2 * j]
c[1] = pts2[2 * j + 1]
d[0] = pts2[2 * ((j + 1) % 4)]
d[1] = pts2[2 * ((j + 1) % 4) + 1]
area_abc = trangle_area(a, b, c)
area_abd = trangle_area(a, b, d)
if area_abc * area_abd >= 0:
return False
area_cda = trangle_area(c, d, a)
area_cdb = area_cda + area_abc - area_abd
if area_cda * area_cdb >= 0:
return False
t = area_cda / (area_abd - area_abc)
dx = t * (b[0] - a[0])
dy = t * (b[1] - a[1])
temp_pts[0] = a[0] + dx
temp_pts[1] = a[1] + dy
return True
@cuda.jit('(float32, float32, float32[:])', device=True, inline=True)
def point_in_quadrilateral(pt_x, pt_y, corners):
ab0 = corners[2] - corners[0]
ab1 = corners[3] - corners[1]
ad0 = corners[6] - corners[0]
ad1 = corners[7] - corners[1]
ap0 = pt_x - corners[0]
ap1 = pt_y - corners[1]
abab = ab0 * ab0 + ab1 * ab1
abap = ab0 * ap0 + ab1 * ap1
adad = ad0 * ad0 + ad1 * ad1
adap = ad0 * ap0 + ad1 * ap1
return abab >= abap and abap >= 0 and adad >= adap and adap >= 0
@cuda.jit('(float32[:], float32[:], float32[:])', device=True, inline=True)
def quadrilateral_intersection(pts1, pts2, int_pts):
num_of_inter = 0
for i in range(4):
if point_in_quadrilateral(pts1[2 * i], pts1[2 * i + 1], pts2):
int_pts[num_of_inter * 2] = pts1[2 * i]
int_pts[num_of_inter * 2 + 1] = pts1[2 * i + 1]
num_of_inter += 1
if point_in_quadrilateral(pts2[2 * i], pts2[2 * i + 1], pts1):
int_pts[num_of_inter * 2] = pts2[2 * i]
int_pts[num_of_inter * 2 + 1] = pts2[2 * i + 1]
num_of_inter += 1
temp_pts = cuda.local.array((2, ), dtype=numba.float32)
for i in range(4):
for j in range(4):
has_pts = line_segment_intersection(pts1, pts2, i, j, temp_pts)
if has_pts:
int_pts[num_of_inter * 2] = temp_pts[0]
int_pts[num_of_inter * 2 + 1] = temp_pts[1]
num_of_inter += 1
return num_of_inter
@cuda.jit('(float32[:], float32[:])', device=True, inline=True)
def rbbox_to_corners(corners, rbbox):
# generate clockwise corners and rotate it clockwise
angle = rbbox[4]
a_cos = math.cos(angle)
a_sin = math.sin(angle)
center_x = rbbox[0]
center_y = rbbox[1]
x_d = rbbox[2]
y_d = rbbox[3]
corners_x = cuda.local.array((4, ), dtype=numba.float32)
corners_y = cuda.local.array((4, ), dtype=numba.float32)
corners_x[0] = -x_d / 2
corners_x[1] = -x_d / 2
corners_x[2] = x_d / 2
corners_x[3] = x_d / 2
corners_y[0] = -y_d / 2
corners_y[1] = y_d / 2
corners_y[2] = y_d / 2
corners_y[3] = -y_d / 2
for i in range(4):
corners[2 *
i] = a_cos * corners_x[i] + a_sin * corners_y[i] + center_x
corners[2 * i
+ 1] = -a_sin * corners_x[i] + a_cos * corners_y[i] + center_y
@cuda.jit('(float32[:], float32[:])', device=True, inline=True)
def inter(rbbox1, rbbox2):
corners1 = cuda.local.array((8, ), dtype=numba.float32)
corners2 = cuda.local.array((8, ), dtype=numba.float32)
intersection_corners = cuda.local.array((16, ), dtype=numba.float32)
rbbox_to_corners(corners1, rbbox1)
rbbox_to_corners(corners2, rbbox2)
num_intersection = quadrilateral_intersection(corners1, corners2,
intersection_corners)
sort_vertex_in_convex_polygon(intersection_corners, num_intersection)
# print(intersection_corners.reshape([-1, 2])[:num_intersection])
return area(intersection_corners, num_intersection)
@cuda.jit('(float32[:], float32[:], int32)', device=True, inline=True)
def devRotateIoUEval(rbox1, rbox2, criterion=-1):
area1 = rbox1[2] * rbox1[3]
area2 = rbox2[2] * rbox2[3]
area_inter = inter(rbox1, rbox2)
if criterion == -1:
return area_inter / (area1 + area2 - area_inter)
elif criterion == 0:
return area_inter / area1
elif criterion == 1:
return area_inter / area2
else:
return area_inter
@cuda.jit('(int64, int64, float32[:], float32[:], float32[:], int32)', fastmath=False)
def rotate_iou_kernel_eval(N, K, dev_boxes, dev_query_boxes, dev_iou, criterion=-1):
threadsPerBlock = 8 * 8
row_start = cuda.blockIdx.x
col_start = cuda.blockIdx.y
tx = cuda.threadIdx.x
row_size = min(N - row_start * threadsPerBlock, threadsPerBlock)
col_size = min(K - col_start * threadsPerBlock, threadsPerBlock)
block_boxes = cuda.shared.array(shape=(64 * 5, ), dtype=numba.float32)
block_qboxes = cuda.shared.array(shape=(64 * 5, ), dtype=numba.float32)
dev_query_box_idx = threadsPerBlock * col_start + tx
dev_box_idx = threadsPerBlock * row_start + tx
if (tx < col_size):
block_qboxes[tx * 5 + 0] = dev_query_boxes[dev_query_box_idx * 5 + 0]
block_qboxes[tx * 5 + 1] = dev_query_boxes[dev_query_box_idx * 5 + 1]
block_qboxes[tx * 5 + 2] = dev_query_boxes[dev_query_box_idx * 5 + 2]
block_qboxes[tx * 5 + 3] = dev_query_boxes[dev_query_box_idx * 5 + 3]
block_qboxes[tx * 5 + 4] = dev_query_boxes[dev_query_box_idx * 5 + 4]
if (tx < row_size):
block_boxes[tx * 5 + 0] = dev_boxes[dev_box_idx * 5 + 0]
block_boxes[tx * 5 + 1] = dev_boxes[dev_box_idx * 5 + 1]
block_boxes[tx * 5 + 2] = dev_boxes[dev_box_idx * 5 + 2]
block_boxes[tx * 5 + 3] = dev_boxes[dev_box_idx * 5 + 3]
block_boxes[tx * 5 + 4] = dev_boxes[dev_box_idx * 5 + 4]
cuda.syncthreads()
if tx < row_size:
for i in range(col_size):
offset = row_start * threadsPerBlock * K + col_start * threadsPerBlock + tx * K + i
dev_iou[offset] = devRotateIoUEval(block_qboxes[i * 5:i * 5 + 5],
block_boxes[tx * 5:tx * 5 + 5], criterion)
def rotate_iou_gpu_eval(boxes, query_boxes, criterion=-1, device_id=0):
"""rotated box iou running in gpu. 500x faster than cpu version
(take 5ms in one example with numba.cuda code).
convert from [this project](
https://github.com/hongzhenwang/RRPN-revise/tree/master/lib/rotation).
Args:
boxes (float tensor: [N, 5]): rbboxes. format: centers, dims,
angles(clockwise when positive)
query_boxes (float tensor: [K, 5]): [description]
device_id (int, optional): Defaults to 0. [description]
Returns:
[type]: [description]
"""
box_dtype = boxes.dtype
boxes = boxes.astype(np.float32)
query_boxes = query_boxes.astype(np.float32)
N = boxes.shape[0]
K = query_boxes.shape[0]
iou = np.zeros((N, K), dtype=np.float32)
if N == 0 or K == 0:
return iou
threadsPerBlock = 8 * 8
cuda.select_device(device_id)
blockspergrid = (div_up(N, threadsPerBlock), div_up(K, threadsPerBlock))
stream = cuda.stream()
with stream.auto_synchronize():
boxes_dev = cuda.to_device(boxes.reshape([-1]), stream)
query_boxes_dev = cuda.to_device(query_boxes.reshape([-1]), stream)
iou_dev = cuda.to_device(iou.reshape([-1]), stream)
rotate_iou_kernel_eval[blockspergrid, threadsPerBlock, stream](
N, K, boxes_dev, query_boxes_dev, iou_dev, criterion)
iou_dev.copy_to_host(iou.reshape([-1]), stream=stream)
    return iou.astype(box_dtype)  # cast back to the caller's original dtype
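# Minimal usage sketch (illustrative; requires a CUDA-capable GPU since the kernel is a
# numba.cuda kernel): a rotated box compared with itself yields an IoU of ~1.
def _demo_rotate_iou():
    boxes = np.array([[0.0, 0.0, 2.0, 4.0, 0.3]], dtype=np.float32)   # cx, cy, dims, angle
    return rotate_iou_gpu_eval(boxes, boxes.copy())                   # -> [[~1.0]]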
| 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/utils/pylogger.py | import logging
from pytorch_lightning.utilities import rank_zero_only
def get_pylogger(name=__name__) -> logging.Logger:
"""Initializes multi-GPU-friendly python command line logger."""
logger = logging.getLogger(name)
# this ensures all logging levels get marked with the rank zero decorator
# otherwise logs would get multiplied for each GPU process in multi-GPU setup
logging_levels = ("debug", "info", "warning", "error", "exception", "fatal", "critical")
for level in logging_levels:
setattr(logger, level, rank_zero_only(getattr(logger, level)))
return logger
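# Minimal usage sketch: build a rank-zero-safe logger for this module and emit a message
# (in a multi-GPU run only the rank-0 process actually writes it).
if __name__ == "__main__":
    log = get_pylogger(__name__)
    log.info("pylogger ready")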
| 0 |
apollo_public_repos/apollo-model-yolo3d/src | apollo_public_repos/apollo-model-yolo3d/src/models/regressor.py | """
KITTI Regressor Model
"""
import torch
from torch import nn
from pytorch_lightning import LightningModule
from src.models.components.base import OrientationLoss, orientation_loss2
class RegressorModel(LightningModule):
def __init__(
self,
net: nn.Module,
optimizer: str = "adam",
lr: float = 0.0001,
momentum: float = 0.9,
w: float = 0.4,
alpha: float = 0.6,
):
super().__init__()
        # save hyperparameters
self.save_hyperparameters(logger=False)
# init model
self.net = net
# loss functions
self.conf_loss_func = nn.CrossEntropyLoss()
self.dim_loss_func = nn.MSELoss()
self.orient_loss_func = OrientationLoss
    # TODO: use this forward when exporting the model
def forward(self, x):
output = self.net(x)
orient = output[0]
conf = output[1]
dim = output[2]
return [orient, conf, dim]
# def forward(self, x):
# return self.net(x)
def on_train_start(self):
# by default lightning executes validation step sanity checks before training starts,
# so we need to make sure val_acc_best doesn't store accuracy from these checks
# self.val_acc_best.reset()
pass
def step(self, batch):
x, y = batch
# convert to float
x = x.float()
truth_orient = y["Orientation"].float()
truth_conf = y["Confidence"].float() # front or back
truth_dim = y["Dimensions"].float()
# predict y_hat
preds = self(x)
[orient, conf, dim] = preds
# compute loss
orient_loss = self.orient_loss_func(orient, truth_orient, truth_conf)
dim_loss = self.dim_loss_func(dim, truth_dim)
# truth_conf = torch.max(truth_conf, dim=1)[1]
conf_loss = self.conf_loss_func(conf, truth_conf)
loss_theta = conf_loss + 1.5 * self.hparams.w * orient_loss
loss = self.hparams.alpha * dim_loss + loss_theta
return [loss, loss_theta, orient_loss, dim_loss, conf_loss], preds, y
def training_step(self, batch, batch_idx):
loss, preds, targets = self.step(batch)
# logging
self.log_dict(
{
"train/loss": loss[0],
"train/theta_loss": loss[1],
"train/orient_loss": loss[2],
"train/dim_loss": loss[3],
"train/conf_loss": loss[4],
},
on_step=False,
on_epoch=True,
prog_bar=False,
)
return {"loss": loss[0], "preds": preds, "targets": targets}
def training_epoch_end(self, outputs):
# `outputs` is a list of dicts returned from `training_step()`
pass
def validation_step(self, batch, batch_idx):
loss, preds, targets = self.step(batch)
# logging
self.log_dict(
{
"val/loss": loss[0],
"val/theta_loss": loss[1],
"val/orient_loss": loss[2],
"val/dim_loss": loss[3],
"val/conf_loss": loss[4],
},
on_step=False,
on_epoch=True,
prog_bar=False,
)
return {"loss": loss[0], "preds": preds, "targets": targets}
def validation_epoch_end(self, outputs):
avg_val_loss = torch.tensor([x["loss"] for x in outputs]).mean()
# log to tensorboard
self.log("val/avg_loss", avg_val_loss)
return {"loss": avg_val_loss}
def on_epoch_end(self):
# reset metrics at the end of every epoch
pass
def configure_optimizers(self):
if self.hparams.optimizer.lower() == "adam":
optimizer = torch.optim.Adam(params=self.parameters(), lr=self.hparams.lr)
elif self.hparams.optimizer.lower() == "sgd":
optimizer = torch.optim.SGD(self.parameters(), lr=self.hparams.lr,
momentum=self.hparams.momentum
)
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', patience=5)
return optimizer
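# Minimal training sketch for RegressorModel (illustrative; assumes a datamodule whose
# batches are (image_crop, {"Orientation", "Confidence", "Dimensions"}) pairs, as step()
# above expects, and uses a plain Trainer instead of the project's Hydra configs).
def _demo_train_regressor(datamodule):
    from pytorch_lightning import Trainer
    from torchvision.models import resnet18
    from src.models.components.base import RegressorNet
    model = RegressorModel(net=RegressorNet(backbone=resnet18(pretrained=False), bins=2))
    trainer = Trainer(max_epochs=1)
    trainer.fit(model, datamodule=datamodule)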
class RegressorModel2(LightningModule):
def __init__(
self,
net: nn.Module,
lr: float = 0.0001,
momentum: float = 0.9,
w: float = 0.4,
alpha: float = 0.6,
):
super().__init__()
        # save hyperparameters
self.save_hyperparameters(logger=False)
# init model
self.net = net
# loss functions
self.conf_loss_func = nn.CrossEntropyLoss()
self.dim_loss_func = nn.MSELoss()
self.orient_loss_func = orientation_loss2
def forward(self, x):
return self.net(x)
def on_train_start(self):
# by default lightning executes validation step sanity checks before training starts,
# so we need to make sure val_acc_best doesn't store accuracy from these checks
# self.val_acc_best.reset()
pass
def step(self, batch):
x, y = batch
# convert to float
x = x.float()
gt_orient = y["orientation"].float()
gt_conf = y["confidence"].float()
gt_dims = y["dimensions"].float()
        # run the forward pass to get predictions
predictions = self(x)
[pred_orient, pred_conf, pred_dims] = predictions
# compute loss
loss_orient = self.orient_loss_func(pred_orient, gt_orient)
loss_dims = self.dim_loss_func(pred_dims, gt_dims)
gt_conf = torch.max(gt_conf, dim=1)[1]
loss_conf = self.conf_loss_func(pred_conf, gt_conf)
# weighting loss => see paper
loss_theta = loss_conf + (self.hparams.w * loss_orient)
loss = (self.hparams.alpha * loss_dims) + loss_theta
return [loss, loss_theta, loss_orient, loss_conf, loss_dims], predictions, y
def training_step(self, batch, batch_idx):
loss, preds, targets = self.step(batch)
# logging
self.log_dict(
{
"train/loss": loss[0],
"train/theta_loss": loss[1],
"train/orient_loss": loss[2],
"train/conf_loss": loss[3],
"train/dim_loss": loss[4],
},
on_step=False,
on_epoch=True,
prog_bar=False,
)
return {"loss": loss[0], "preds": preds, "targets": targets}
def training_epoch_end(self, outputs):
# `outputs` is a list of dicts returned from `training_step()`
pass
def validation_step(self, batch, batch_idx):
loss, preds, targets = self.step(batch)
# logging
self.log_dict(
{
"val/loss": loss[0],
"val/theta_loss": loss[1],
"val/orient_loss": loss[2],
"val/conf_loss": loss[3],
"val/dim_loss": loss[4],
},
on_step=False,
on_epoch=True,
prog_bar=False,
)
return {"loss": loss[0], "preds": preds, "targets": targets}
def validation_epoch_end(self, outputs):
avg_val_loss = torch.tensor([x["loss"] for x in outputs]).mean()
# log to tensorboard
self.log("val/avg_loss", avg_val_loss)
return {"loss": avg_val_loss}
def on_epoch_end(self):
# reset metrics at the end of every epoch
pass
def configure_optimizers(self):
# optimizer = torch.optim.Adam(params=self.parameters(), lr=self.hparams.lr)
optimizer = torch.optim.SGD(self.parameters(), lr=self.hparams.lr,
momentum=self.hparams.momentum
)
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', patience=2)
return optimizer
class RegressorModel3(LightningModule):
def __init__(
self,
net: nn.Module,
optimizer: str = "adam",
lr: float = 0.0001,
momentum: float = 0.9,
w: float = 0.4,
alpha: float = 0.6,
):
super().__init__()
        # save hyperparameters
self.save_hyperparameters(logger=False)
# init model
self.net = net
# loss functions
self.conf_loss_func = nn.CrossEntropyLoss()
self.dim_loss_func = nn.MSELoss()
self.orient_loss_func = OrientationLoss
def forward(self, x):
return self.net(x)
def on_train_start(self):
# by default lightning executes validation step sanity checks before training starts,
# so we need to make sure val_acc_best doesn't store accuracy from these checks
# self.val_acc_best.reset()
pass
def step(self, batch):
x, y = batch
# convert to float
x = x.float()
gt_orient = y["orientation"].float()
gt_conf = y["confidence"].float()
gt_dims = y["dimensions"].float()
        # run the forward pass to get predictions
predictions = self(x)
[pred_orient, pred_conf, pred_dims] = predictions
# compute loss
loss_orient = self.orient_loss_func(pred_orient, gt_orient, gt_conf)
loss_dims = self.dim_loss_func(pred_dims, gt_dims)
gt_conf = torch.max(gt_conf, dim=1)[1]
loss_conf = self.conf_loss_func(pred_conf, gt_conf)
# weighting loss => see paper
loss_theta = loss_conf + (self.hparams.w * loss_orient)
loss = (self.hparams.alpha * loss_dims) + loss_theta
return [loss, loss_theta, loss_orient, loss_conf, loss_dims], predictions, y
def training_step(self, batch, batch_idx):
loss, preds, targets = self.step(batch)
# logging
self.log_dict(
{
"train/loss": loss[0],
"train/theta_loss": loss[1],
"train/orient_loss": loss[2],
"train/conf_loss": loss[3],
"train/dim_loss": loss[4],
},
on_step=False,
on_epoch=True,
prog_bar=False,
)
return {"loss": loss[0], "preds": preds, "targets": targets}
def training_epoch_end(self, outputs):
# `outputs` is a list of dicts returned from `training_step()`
pass
def validation_step(self, batch, batch_idx):
loss, preds, targets = self.step(batch)
# logging
self.log_dict(
{
"val/loss": loss[0],
"val/theta_loss": loss[1],
"val/orient_loss": loss[2],
"val/conf_loss": loss[3],
"val/dim_loss": loss[4],
},
on_step=False,
on_epoch=True,
prog_bar=False,
)
return {"loss": loss[0], "preds": preds, "targets": targets}
def validation_epoch_end(self, outputs):
avg_val_loss = torch.tensor([x["loss"] for x in outputs]).mean()
# log to tensorboard
self.log("val/avg_loss", avg_val_loss)
return {"loss": avg_val_loss}
def on_epoch_end(self):
# reset metrics at the end of every epoch
pass
def configure_optimizers(self):
if self.hparams.optimizer.lower() == "adam":
optimizer = torch.optim.Adam(params=self.parameters(), lr=self.hparams.lr)
elif self.hparams.optimizer.lower() == "sgd":
optimizer = torch.optim.SGD(self.parameters(), lr=self.hparams.lr,
momentum=self.hparams.momentum
)
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', patience=5)
return optimizer
if __name__ == "__main__":
from src.models.components.base import RegressorNet
from torchvision.models import resnet18
model1 = RegressorModel(
net=RegressorNet(backbone=resnet18(pretrained=False), bins=2),
)
print(model1)
model2 = RegressorModel3(
net=RegressorNet(backbone=resnet18(pretrained=False), bins=2),
)
print(model2)
| 0 |