# Customization and Extension
## Table of Contents
- [Custom Datasets](#custom-datasets)
- [Custom Models](#custom-models)
- [Custom Dialogue Templates](#custom-dialogue-templates)
## Custom Datasets
We support three methods for **customizing datasets**.
1. \[Recommended\] Pass the datasets directly on the command line with `--dataset xxx.json yyy.jsonl zzz.csv`. This is the most convenient way to use custom datasets: it supports five data formats (handled by `SmartPreprocessor`; the supported formats are listed below) as well as `dataset_id` and `dataset_path`, and requires no modification of the `dataset_info.json` file.
2. Adding datasets to `dataset_info.json` is more flexible but more cumbersome than the first method. It supports two additional preprocessors and their parameters, `RenameColumnsPreprocessor` and `ConversationsPreprocessor` (the default is `SmartPreprocessor`). You can modify the built-in `dataset_info.json` in Swift directly, or pass in an external json file with `--custom_dataset_info xxx.json` (for users who prefer pip install over git clone for extending datasets).
3. Registering datasets: more flexible but more cumbersome than the first two methods, it supports using functions to preprocess datasets. Methods 1 and 2 are implemented on top of method 3. You can modify the source code directly for extension, or pass in a custom registration path with `--custom_register_path xxx.py`; the script will parse the py file (for pip install users).
### 📌 \[Recommended\] Using Command Line Arguments Directly
Supports directly passing in a custom `dataset_id` (compatible with MS and HF) or `dataset_path`, and passing in multiple custom datasets together with their respective sample sizes; the script will automatically preprocess and concatenate the datasets. If a `dataset_id` is passed in, it will default to the 'default' subset of that dataset_id and set the split to 'train'. If the dataset_id has already been registered, it will use the subsets, split, and preprocessing function that were specified during registration. If a `dataset_path` is passed in, it can be a relative or an absolute path, where a relative path is relative to the current working directory.
```bash
--dataset {dataset_id} {dataset_path}
# Dataset mixing: sample 20,000 records from a registered dataset, 20,000 records from subset1 and subset2 of a dataset_id, and 10,000 records from a local dataset file
--dataset {dataset_name}#20000 {dataset_id}:{subset1}/{subset2}#20000 {dataset_path}#10000
```
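For instance (the dataset name and file path below are placeholders chosen for illustration; `alpaca-zh` is only an example of a registered dataset name):
```bash
# mix 5,000 sampled records from a registered dataset with all records of a local jsonl file
--dataset alpaca-zh#5000 ./data/my_data.jsonl
```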
The supported file formats for the script include `csv`, `json`, and `jsonl`. You need to ensure that the incoming files conform to one of the following dataset formats (only a partial list is shown). All of these formats support the `system` field. Note that if the `system` field is specified in the csv format, it cannot be set to `None` and can only be specified as an empty string; there is no such restriction for the json and jsonl formats. Files in the `json` and `jsonl` formats support multi-turn dialogue (`csv` does not).
**Format 1:**
Pre-Training
```csv
response
11111
aaaaa
AAAAA
```
```jsonl
{"response": "11111"}
{"response": "aaaaa"}
{"response": "AAAAA"}
```
Single-Round Dialogue
```csv
system,query,response
00000,11111,22222
00001,aaaaa,bbbbb
00002,AAAAA,BBBBB
```
```jsonl
{"system": "00000", "query": "11111", "response": "22222"}
{"query": "aaaaa", "response": "bbbbb"}
{"query": "AAAAA", "response": "BBBBB"}
```
Multi-Round Dialogue
```jsonl
{"system": "00000", "query": "55555", "response": "66666"}
{"query": "eeeee", "response": "fffff", "history": []}
{"query": "EEEEE", "response": "FFFFF", "history": [["AAAAA", "BBBBB"], ["CCCCC", "DDDDD"]]}
```
```json
[{"system": "00000", "query": "55555", "response": "66666"},
{"query": "eeeee", "response": "fffff", "history": []},
{"query": "EEEEE", "response": "FFFFF", "history": [["AAAAA", "BBBBB"], ["CCCCC", "DDDDD"]]}]
```
**Format 2:**
```jsonl
{"conversations": [{"from": "system", "value": "00000"}, {"from": "user", "value": "11111"}, {"from": "assistant", "value": "22222"}]}
{"conversations": [{"from": "user", "value": "aaaaa"}, {"from": "assistant", "value": "bbbbb"}, {"from": "user", "value": "ccccc"}, {"from": "assistant", "value": "ddddd"}]}
{"conversations": [{"from": "user", "value": "AAAAA"}, {"from": "assistant", "value": "BBBBB"}, {"from": "user", "value": "CCCCC"}, {"from": "assistant", "value": "DDDDD"}]}
```
**Format 3:**
```jsonl
{"messages": [{"role": "system", "content": "00000"}, {"role": "user", "content": "11111"}, {"role": "assistant", "content": "22222"}]}
{"messages": [{"role": "user", "content": "aaaaa"}, {"role": "assistant", "content": "bbbbb"}, {"role": "user", "content": "ccccc"}, {"role": "assistant", "content": "ddddd"}]}
{"messages": [{"role": "user", "content": "AAAAA"}, {"role": "assistant", "content": "BBBBB"}, {"role": "user", "content": "CCCCC"}, {"role": "assistant", "content": "DDDDD"}]}
```
**Format 4:**
```jsonl
{"system": "00000", "conversation": [{"human": "11111", "assistant": "22222"}]}
{"conversation": [{"human": "aaaaa", "assistant": "bbbbb"}]}
{"conversation": [{"human": "AAAAA", "assistant": "BBBBB"}, {"human": "CCCCC", "assistant": "DDDDD"}, {"human": "EEEEE", "assistant": "FFFFF"}]}
```
**Format 5:**
```csv
system,instruction,input,output
00000,11111,22222,33333
00001,aaaaa,bbbbb,ccccc
00002,AAAAA,BBBBB,CCCCC
```
**Human preference alignment (DPO/ORPO/SimPO/CPO)**
```jsonl
{"query": "11111", "response": "22222", "rejected_response": "33333", "history": [["AAAAA", "BBBBB"], ["CCCCC", "DDDDD"]]}
{"query": "aaaaa", "response": "bbbbb", "rejected_response": "ccccc", "history": [["AAAAA", "BBBBB"], ["CCCCC", "DDDDD"]]}
{"query": "AAAAA", "response": "BBBBB", "rejected_response": "CCCCC", "history": [["AAAAA", "BBBBB"], ["CCCCC", "DDDDD"]]}
```
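A file in this format can be passed in the same way as the other custom datasets, for example via the alignment training command (the file name below is a placeholder and the minimal command is only a sketch; see the human preference alignment documentation for the full argument list):
```bash
swift dpo \
    --model_type yi-6b-chat \
    --dataset my_preference_data.jsonl
```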
**Tool-Calling Agent**
Format 1
```jsonl
{"tools":"{API_LIST}","conversations": [{"from": "system", "value": "00000"}, {"from": "user", "value": "11111"}, {"from": "assistant", "value": "22222"}]}
{"tools":"{API_LIST}","conversations": [{"from": "user", "value": "aaaaa"}, {"from": "assistant", "value": "bbbbb"}, {"from": "tool", "value": "ccccc"}, {"from": "assistant", "value": "ddddd"}]}
{"tools":"{API_LIST}","conversations": [{"from": "user", "value": "AAAAA"}, {"from": "assistant", "value": "BBBBB"}, {"from": "tool", "value": "CCCCC"}, {"from": "assistant", "value": "DDDDD"}]}
```
Format 2
```jsonl
{"tools":"{API_LIST}","messages": [{"role": "system", "content": "00000"}, {"role": "user", "content": "11111"}, {"role": "assistant", "content": "22222"}]}
{"tools":"{API_LIST}","messages": [{"role": "user", "content": "aaaaa"}, {"role": "assistant", "content": "bbbbb"}, {"role": "tool", "content": "ccccc"}, {"role": "assistant", "content": "ddddd"}]}
{"tools":"{API_LIST}","messages": [{"role": "user", "content": "AAAAA"}, {"role": "assistant", "content": "BBBBB"}, {"role": "tool", "content": "CCCCC"}, {"role": "assistant", "content": "DDDDD"}]}
```
For the tools format, please refer to the [Agent Deployment Document](./Agent-deployment-best-practice.md). You can choose the corresponding prompt by setting `--tools_prompt`.
The `tool` field represents the result returned by the tool call.
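As a purely hypothetical illustration of what the `{API_LIST}` placeholder might expand to (the field names inside the `tools` string follow the common OpenAI-style function schema and are an assumption here; the authoritative format is defined in the Agent deployment document linked above):
```jsonl
{"tools": "[{\"name\": \"get_current_weather\", \"description\": \"Get the current weather in a given location\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\"}}, \"required\": [\"location\"]}}]", "messages": [{"role": "user", "content": "11111"}, {"role": "assistant", "content": "22222"}, {"role": "tool", "content": "33333"}, {"role": "assistant", "content": "44444"}]}
```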
### Adding dataset_info.json
You can refer to the [builtin dataset_info.json in Swift](https://github.com/modelscope/swift/blob/main/swift/llm/data/dataset_info.json) to expand datasets. You can add entries directly to the built-in dataset_info.json, or pass in the path to an external dataset_info.json, a JSON string, or a dictionary using `--custom_dataset_info 1.json`.
Adding dataset_id:
```python
# MS
# Usage: `--dataset <dataset_name>`
"<dataset_name>": {
    "dataset_id": "xxx/xxx"
}

# HF
# Usage: `--dataset HF::<dataset_name>` or directly use the `USE_HF` environment variable.
"<dataset_name>": {
    "hf_dataset_id": "xxx/xxx"
}
```
Adding dataset_path:
```python
# You can specify relative and absolute paths. Relative paths are relative to the directory where dataset_info.json is located.
# Usage: `--dataset <dataset_name>`
"<dataset_name>": {
    "dataset_path": "xxx"
}
```
Supported parameters include (a combined example entry is shown after this list):
- dataset_id: The corresponding ModelScope dataset_id, default is `None`. The simplest setup requires specifying one of `dataset_id`, `hf_dataset_id`, or `dataset_path`.
- subsets: A list of names of the subsets, default is `[]`, which means using the 'default' subset.
- split: Default is ['train'], usually not necessary to set.
- hf_dataset_id: The corresponding HuggingFace dataset_id, default is `None`.
- dataset_path: Used to specify the local path of the dataset, e.g. 1.jsonl, default is `None`. It can take relative or absolute paths. If using a relative path, it is relative to the directory where the dataset_info.json is located. If dataset_path is set, then dataset_id, subsets, and hf_dataset_id parameters are ignored.
- columns: The default preprocessor used is `SmartPreprocessor`. Specifying this parameter sets it to `RenameColumnsPreprocessor`. You need to rename the columns in the dataset and convert them to the style of **format 1** mentioned above.
- conversations: Specifying this parameter sets the preprocessor to `ConversationsPreprocessor` ('columns' takes priority over 'conversations').
- remove_useless_columns: Specifies whether to remove unnecessary columns (including: 'query', 'response', 'rejected_response', 'system', 'history', 'images'), default is `True`, usually not necessary to set.
- tags: Used to annotate the dataset, default is `[]`, usually not necessary to set.
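For reference, a hedged sketch of a fuller entry combining several of these parameters (the dataset name, ids, and column names are made up for illustration, and the `columns` mapping is assumed to go from the original column name to the standard name):
```python
"my-dataset": {
    "dataset_id": "my-group/my-dataset",
    "hf_dataset_id": "my-group/my-dataset",
    "subsets": ["default"],
    "split": ["train"],
    "columns": {
        "instruction": "query",
        "output": "response"
    },
    "tags": ["chat"]
}
```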
If the parameters in `dataset_info.json` are not sufficient for your needs, for example when you need custom prompts, advanced dataset cleaning, or complex dataset retrieval and preprocessing, you can register the dataset with a function that handles retrieval and preprocessing, as described in the next section.
### Registering Datasets
The following is an example of **registering datasets**. The complete py file can be viewed at [custom.py](https://github.com/modelscope/swift/blob/main/examples/pytorch/llm/custom.py), and the sh script can be viewed at [custom](https://github.com/modelscope/swift/tree/main/examples/pytorch/llm/scripts/custom). You can parse the registered content by specifying `--custom_register_path xxx.py`.
```python
from typing import Optional, Tuple

from datasets import Dataset as HfDataset
from modelscope import MsDataset

from swift.llm import get_dataset, register_dataset, get_dataset_from_repo
from swift.utils import get_logger

logger = get_logger()


class CustomDatasetName:
    stsb_en = 'stsb-en'


def _preprocess_stsb(dataset: HfDataset) -> HfDataset:
    prompt = """Task: Based on the given two sentences, provide a similarity score between 0.0 and 5.0.
Sentence 1: {text1}
Sentence 2: {text2}
Similarity score: """
    query = []
    response = []
    for d in dataset:
        query.append(prompt.format(text1=d['text1'], text2=d['text2']))
        response.append(f"{d['label']:.1f}")
    return HfDataset.from_dict({'query': query, 'response': response})


register_dataset(CustomDatasetName.stsb_en, 'huangjintao/stsb', None, _preprocess_stsb, get_dataset_from_repo)

if __name__ == '__main__':
    # test dataset
    train_dataset, val_dataset = get_dataset([CustomDatasetName.stsb_en],
                                             check_dataset_strategy='warning')
    print(f'train_dataset: {train_dataset}')
    print(f'val_dataset: {val_dataset}')
```
The `register_dataset` function will register the dataset in the `DATASET_MAPPING`. The parameters of this function are as follows:
- `dataset_name`: Required, representing the name of the dataset, which is also the unique ID of the dataset.
- `dataset_id_or_path`: Required, representing the `dataset_id` on the ModelScope Hub or the local `dataset_dir`.
- `subsets`: List of subsets of the dataset, default is `[]`.
- `split`: Default is ['train'].
- `preprocess_func`: Preprocessing function.
- `get_function`: Default value is `None`. The function to get the dataset. If passed `None`, the decorator approach will be used to register the dataset. If passed a function, the normal approach will be used to register.
> `get_function` should return an `HfDataset` or a `Tuple[HfDataset, Optional[HfDataset]]`. If only one dataset is returned, it will be used as the train_dataset. If two datasets are returned, they will be used as the train_dataset and val_dataset, respectively. The `get_dataset` function supports obtaining multiple datasets, for example `get_dataset(['dataset1', 'dataset2'])`: the training and validation parts of each dataset are concatenated and the merged train_dataset and val_dataset are returned (see the sketch after this list).
> The `HfDataset` returned by the function needs to follow certain specifications. For **pre-training**, only the `response` field is required; see the `'tigerbot-law-zh'` dataset for details. For **instruction tuning (single-round dialogue)**, the `query` and `response` fields are required, representing the user's query and the AI assistant's answer respectively; see the `'alpaca-zh'` dataset for details. For **multi-round dialogue**, an additional `history` field is required, representing the dialogue history; see the `'damo-agent-mini-zh'` dataset for details. If each dataset sample has a different `system`, an additional `system` field is also required; you can likewise refer to the `'damo-agent-mini-zh'` dataset for details.
- `**kwargs`: Other parameters used to annotate the dataset. This parameter generally does not need to be set.
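As noted above, `get_dataset` supports obtaining multiple datasets. A minimal sketch mixing the custom dataset registered above with a built-in one (`alpaca-zh` is used purely as an example of a built-in dataset name):
```python
from swift.llm import get_dataset

# the training/validation parts of both datasets are concatenated into one train_dataset and one val_dataset
train_dataset, val_dataset = get_dataset([CustomDatasetName.stsb_en, 'alpaca-zh'])
```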
## Custom Models
The following is an example of **custom models**. The complete py file can be viewed at [custom.py](https://github.com/modelscope/swift/blob/main/examples/pytorch/llm/custom.py), and the sh script can be viewed at [custom](https://github.com/modelscope/swift/tree/main/examples/pytorch/llm/scripts/custom). You can parse the registered content by specifying `--custom_register_path xxx.py`.
```python
from typing import Any, Dict

from modelscope import AutoConfig, AutoModelForCausalLM, AutoTokenizer
from torch import dtype as Dtype
from transformers.utils.versions import require_version

from swift.llm import LoRATM, TemplateType, get_model_tokenizer, register_model
from swift.utils import get_logger

logger = get_logger()


class CustomModelType:
    tigerbot_7b = 'tigerbot-7b'
    tigerbot_13b = 'tigerbot-13b'
    tigerbot_13b_chat = 'tigerbot-13b-chat'


class CustomTemplateType:
    tigerbot = 'tigerbot'


@register_model(CustomModelType.tigerbot_7b,
                'TigerResearch/tigerbot-7b-base-v3', LoRATM.llama,
                TemplateType.default_generation)
@register_model(CustomModelType.tigerbot_13b,
                'TigerResearch/tigerbot-13b-base-v2', LoRATM.llama,
                TemplateType.default_generation)
@register_model(CustomModelType.tigerbot_13b_chat,
                'TigerResearch/tigerbot-13b-chat-v4', LoRATM.llama,
                CustomTemplateType.tigerbot)
def get_tigerbot_model_tokenizer(model_dir: str,
                                 torch_dtype: Dtype,
                                 model_kwargs: Dict[str, Any],
                                 load_model: bool = True,
                                 **kwargs):
    use_flash_attn = kwargs.pop('use_flash_attn', False)
    if use_flash_attn:
        require_version('transformers>=4.34')
        logger.info('Setting use_flash_attention_2: True')
        model_kwargs['use_flash_attention_2'] = True
    model_config = AutoConfig.from_pretrained(
        model_dir, trust_remote_code=True)
    model_config.pretraining_tp = 1
    model_config.torch_dtype = torch_dtype
    logger.info(f'model_config: {model_config}')
    tokenizer = AutoTokenizer.from_pretrained(
        model_dir, trust_remote_code=True)
    model = None
    if load_model:
        model = AutoModelForCausalLM.from_pretrained(
            model_dir,
            config=model_config,
            torch_dtype=torch_dtype,
            trust_remote_code=True,
            **model_kwargs)
    return model, tokenizer


if __name__ == '__main__':
    # test model base
    model, tokenizer = get_model_tokenizer(
        CustomModelType.tigerbot_7b, use_flash_attn=False)
    print(model.__class__.__name__)
    # test model chat
    model, tokenizer = get_model_tokenizer(
        CustomModelType.tigerbot_13b_chat, use_flash_attn=False)
    print(model.__class__.__name__)
```
`register_model` will register the model in `MODEL_MAPPING`. The meanings of the parameters of this function are as follows:
- `model_type`: Required field. Represents the name of the model, and is also the unique id.
- `model_id_or_path`: Required field. Represents the `model_id` of the model in ModelScope Hub, or the local model directory `model_dir`.
- `lora_target_modules`: Default is `None`. Represents the default lora_target_modules to use when `--lora_target_modules DEFAULT` or `--lora_target_modules AUTO` is specified in the sh script, or when `--lora_target_modules` is not specified.
- `template`: Default is `TemplateType.default`. Represents the default dialogue template to use when `--template_type AUTO` is specified in the sh script, or when `--template_type` is not specified.
- `get_function`: Default value is `None`. The function to get the model and tokenizer. If `None` is passed, the decorator approach will be used to register the model. If a function is passed, the normal approach will be used to register (a sketch follows this list).
- `requires`: Default is `[]`. Represents the dependencies required by the model that differ from other models. This parameter generally does not need to be set.
- `torch_dtype`: Default is `None`. Represents the recommended torch_dtype for the model to use. This parameter generally does not need to be set.
- `revision`: Default is `None`. Used to specify the version number of the model. If `model_id_or_path` is a local model directory, this parameter is not effective. This parameter generally does not need to be set.
- `ignore_file_pattern`: Default is `None`. Represents the regular expression pattern of file names to ignore when downloading; this parameter is passed to `snapshot_download`. For example, `r'.+\.bin$'`, `r'.+\.safetensors$'`, etc. This parameter generally does not need to be set.
- `**kwargs`: Other parameters used to annotate model capabilities. This parameter generally does not need to be set.
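Following the description of `get_function` above, a hedged sketch of the normal approach, i.e. registering by passing the function instead of stacking another decorator (the model_type and model_id below are hypothetical, and passing the function via the `get_function` keyword is an assumption based on the parameter list above):
```python
# reuse the loading function defined in the example above for another (hypothetical) checkpoint
register_model(
    'my-tigerbot-70b-chat',                # hypothetical model_type
    'TigerResearch/tigerbot-70b-chat-v2',  # hypothetical model_id_or_path
    LoRATM.llama,
    CustomTemplateType.tigerbot,
    get_function=get_tigerbot_model_tokenizer)
```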
## Custom Dialogue Templates
The following is an example of **custom dialogue templates**. The complete py file can be viewed at [custom.py](https://github.com/modelscope/swift/blob/main/examples/pytorch/llm/custom.py), and the sh script can be viewed at [custom](https://github.com/modelscope/swift/tree/main/examples/pytorch/llm/scripts/custom).
```python
from swift.llm import (Template, ModelType, dataset_map,
                       get_model_tokenizer, get_template, get_dataset,
                       print_example, register_template, DatasetName)
from swift.utils import get_logger

logger = get_logger()


class CustomTemplateType:
    tigerbot = 'tigerbot'


# Ref: https://github.com/TigerResearch/TigerBot/blob/main/infer.py
register_template(
    CustomTemplateType.tigerbot,
    Template(['{{SYSTEM}}'], ['\n\n### Instruction:\n{{QUERY}}\n\n### Response:\n'], [],
             [['eos_token_id']]))

if __name__ == '__main__':
    # test template
    train_dataset, _ = get_dataset(DatasetName.blossom_math_zh)
    _, tokenizer = get_model_tokenizer(ModelType.qwen_7b_chat, load_model=False)
    template = get_template(CustomTemplateType.tigerbot, tokenizer)
    train_dataset = dataset_map(train_dataset, template.encode)
    print_example(train_dataset[0], tokenizer)
```
`register_template` will register the dialogue template in `TEMPLATE_MAPPING`. The meanings of the parameters of this function are as follows:
- `template_type`: Required field, represents the name of the dialogue template, and is also the unique id of the template.
- `template`: Required field, needs to pass in a `Template`. To initialize the `Template`, the following parameters need to be passed in: `prefix`, `prompt`, `chat_sep`, `suffix`, `default_system`.
The template initialization function will build the complete chat template from these five components. Their meanings are as follows (an illustrative registration follows the list).
- `prefix`: Represents the prefix part of the dialogue template, generally including system part, prefix tokens, bos tokens, etc. We use `{{SYSTEM}}` as the placeholder for the system. If `{{SYSTEM}}` does not exist in the prefix, then this Template does not support system, e.g. `damo-agent-mini-zh` dataset.
- `prompt`: Represents a round of dialogue in the dialogue template. We use `{{QUERY}}` as the placeholder for the human query part in each round of dialogue, `{{ROUND0}}` represents the placeholder for which round of dialogue this is, starting from 0, and `{{ROUND1}}` starts from 1. The AI assistant's reply part will be concatenated after `prompt`, so we have not designed a placeholder for it. We will only calculate the loss for the AI assistant's reply part.
- `chat_sep`: If multi-round dialogue is needed, `chat_sep` will be used as the separator between each round of dialogue, such as: newline, etc. If set to None, then this Template does not support multi-round dialogue.
- `suffix`: Used as the suffix part of the dialogue template, generally eos token. Will be concatenated after the last round of dialogue.
- `default_system`: The default system.
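Putting these components together, here is a hedged sketch of registering a ChatML-style template (the template name `my-chatml` is hypothetical, and the special tokens follow the common ChatML convention purely to illustrate `prefix`, `prompt`, `chat_sep`, `suffix`, and `default_system`):
```python
from swift.llm import Template, register_template

register_template(
    'my-chatml',  # hypothetical template name
    Template(
        ['<|im_start|>system\n{{SYSTEM}}<|im_end|>\n'],                      # prefix: contains {{SYSTEM}}, so system is supported
        ['<|im_start|>user\n{{QUERY}}<|im_end|>\n<|im_start|>assistant\n'],  # prompt: one round of dialogue
        ['<|im_end|>\n'],                                                    # chat_sep: not None, so multi-round dialogue is supported
        ['<|im_end|>'],                                                      # suffix: appended after the last round
        'You are a helpful assistant.'))                                     # default_system
```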
| swift/docs/source_en/LLM/Customization.md/0 | {
"file_path": "swift/docs/source_en/LLM/Customization.md",
"repo_id": "swift",
"token_count": 7078
} | 188 |
## LLM Documentation
### 📚Tutorials!
1. [LLM Inference](LLM-inference.md)
2. [LLM Finetuning](LLM-fine-tuning.md)
3. [DPO Training](DPO.md)
4. [Web-ui Training and Inference](../GetStarted/Web-ui.md)
5. [LLM Evaluation](LLM-eval.md)
6. [LLM Quantization](LLM-quantization.md)
7. [VLLM Inference and Deployment](VLLM-inference-acceleration-and-deployment.md)
8. [LLM Experimental](LLM-exp.md)
9. [ORPO Training](ORPO.md)
10. [SimPO Training](SimPO.md)
11. [Human Preference Alignment Training Documentation](Human-Preference-Alignment-Training-Documentation.md)
### ⭐️Best Practices!
1. [Self Cognition Best Practice](Self-cognition-best-practice.md)
2. [Agent Training and Inference Best Practice](Agent-fine-tuning-best-practice.md)
3. [Agent deployment best practice](Agent-deployment-best-practice.md)
4. [Qwen1.5 Best Practice](Qwen1.5-best-practice.md)
5. [NPU Best Practice](NPU-best-practice.md)
6. [Grok-1 Training and Inference Best Practice](Grok-1-best-practice.md)
### 🐔References!
1. [Customization for models and datasets](Customization.md)
2. [Command Line Parameters](Command-line-parameters.md)
3. [Supported models and datasets](Supported-models-datasets.md)
4. [Benchmark](Benchmark.md)
5. [Compatible with the HuggingFace ecosystem](Compat-HF.md)
### 🍀Multi-Modal Best Practices!
Please check: [Multi-Modal Best Practices](../Multi-Modal/index.md)
| swift/docs/source_en/LLM/index.md/0 | {
"file_path": "swift/docs/source_en/LLM/index.md",
"repo_id": "swift",
"token_count": 496
} | 189 |
# Experimental environment: 3090
CUDA_VISIBLE_DEVICES=0 \
swift infer \
--ckpt_dir "output/atom-7b-chat/vx-xxx/checkpoint-xxx" \
--load_dataset_config true \
--max_new_tokens 2048 \
--temperature 0.1 \
--top_p 0.7 \
--repetition_penalty 1. \
--do_sample true \
--merge_lora false \
| swift/examples/pytorch/llm/scripts/atom_7b_chat/lora/infer.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/atom_7b_chat/lora/infer.sh",
"repo_id": "swift",
"token_count": 141
} | 190 |
# Experimental environment: 3090
CUDA_VISIBLE_DEVICES=0 \
swift infer \
--ckpt_dir "output/chatglm3-6b/vx-xxx/checkpoint-xxx" \
--load_dataset_config true \
--max_new_tokens 2048 \
--temperature 0.1 \
--top_p 0.7 \
--repetition_penalty 1. \
--do_sample true \
--merge_lora false \
| swift/examples/pytorch/llm/scripts/chatglm3_6b/lora_ddp/infer.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/chatglm3_6b/lora_ddp/infer.sh",
"repo_id": "swift",
"token_count": 142
} | 191 |
# Experiment env: A10, RTX3090/4090, A100
CUDA_VISIBLE_DEVICES=0 \
swift sft \
--model_type codeqwen1half-7b-chat-awq \
--dataset leetcode-python-en \
--batch_size 4 \
--max_length 2048 \
--gradient_accumulation_steps 2 \
--learning_rate 5e-5 \
--use_flash_attn true \
--eval_steps 2000 \
--save_steps 2000 \
--num_train_epochs 3 \
--check_dataset_strategy none \
--gradient_checkpointing true \
--weight_decay 0.1 \
--max_grad_norm 1.0 \
--warmup_ratio 0.03 \
--save_total_limit 2 \
--logging_steps 10 \
--sft_type lora \
--lora_target_modules ALL \
--lora_rank 8 \
--lora_alpha 32
| swift/examples/pytorch/llm/scripts/codeqwen1half_7b_chat_awq/lora/sft.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/codeqwen1half_7b_chat_awq/lora/sft.sh",
"repo_id": "swift",
"token_count": 262
} | 192 |
# Experimental environment: 4*A100
# Memory usage: 4 * 20G
nproc_per_node=2
CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=$nproc_per_node \
MASTER_PORT=29500 \
swift dpo \
--model_type yi-6b-chat \
--ref_model_type yi-6b-chat \
--model_revision master \
--sft_type lora \
--tuner_backend swift \
--dtype AUTO \
--output_dir output \
--dataset hh-rlhf-cn:harmless_base_cn \
--num_train_epochs 3 \
--max_length 1024 \
--max_prompt_length 512 \
--check_dataset_strategy none \
--lora_rank 8 \
--lora_alpha 32 \
--lora_dropout_p 0.05 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--weight_decay 0.1 \
--learning_rate 5e-5 \
--gradient_accumulation_steps $(expr 16 / $nproc_per_node) \
--max_grad_norm 1.0 \
--warmup_ratio 0.03 \
--eval_steps 2000 \
--save_steps 2000 \
--save_total_limit 2 \
--logging_steps 10 \
| swift/examples/pytorch/llm/scripts/dpo/lora_ddp_mp/dpo.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/dpo/lora_ddp_mp/dpo.sh",
"repo_id": "swift",
"token_count": 485
} | 193 |
# Experiment env: A10, RTX3090/4090, A100
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0,1 \
python llm_infer.py \
--ckpt_dir "output/llama2-7b-aqlm-2bit-1x16/vx-xxx/checkpoint-xxx" \
--load_dataset_config true \
--use_flash_attn true \
--max_new_tokens 2048 \
--temperature 0.5 \
--top_p 0.7 \
--repetition_penalty 1. \
--do_sample true \
--stream false \
--merge_lora false \
| swift/examples/pytorch/llm/scripts/llama2_7b_aqlm_2bit_1x16/lora/infer.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/llama2_7b_aqlm_2bit_1x16/lora/infer.sh",
"repo_id": "swift",
"token_count": 201
} | 194 |
# Experimental environment: A10
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_infer.py \
--ckpt_dir "output/openbuddy-mistral-7b-chat/vx-xxx/checkpoint-xxx" \
--load_dataset_config true \
--max_new_tokens 2048 \
--temperature 0.1 \
--top_p 0.7 \
--repetition_penalty 1. \
--do_sample true \
--merge_lora false \
| swift/examples/pytorch/llm/scripts/openbuddy_mistral_7b_chat/lora_ddp_ds/infer.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/openbuddy_mistral_7b_chat/lora_ddp_ds/infer.sh",
"repo_id": "swift",
"token_count": 162
} | 195 |
# Experimental environment: 4 * A100
# 4 * 55GB GPU memory
NPROC_PER_NODE=2 \
CUDA_VISIBLE_DEVICES=0,1,2,3 \
swift sft \
--model_type qwen-vl-chat \
--sft_type full \
--train_dataset_sample -1 \
--eval_steps 100 \
--output_dir output \
--num_train_epochs 1 \
--max_length 2048 \
--learning_rate 1e-5 \
--use_flash_attn true \
--save_only_model true \
--dataset coco-en-mini \
| swift/examples/pytorch/llm/scripts/qwen_vl_chat/full_mp_ddp/sft.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/qwen_vl_chat/full_mp_ddp/sft.sh",
"repo_id": "swift",
"token_count": 190
} | 196 |
# Experimental environment: A10, 3090
# 16GB GPU memory
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_sft.py \
--model_id_or_path skywork/Skywork-13B-base \
--model_revision master \
--sft_type lora \
--tuner_backend peft \
--template_type default-generation \
--dtype AUTO \
--output_dir output \
--dataset advertise-gen-zh \
--train_dataset_sample 20000 \
--num_train_epochs 1 \
--max_length 2048 \
--check_dataset_strategy warning \
--quantization_bit 4 \
--bnb_4bit_comp_dtype AUTO \
--lora_rank 8 \
--lora_alpha 32 \
--lora_dropout_p 0.05 \
--lora_target_modules DEFAULT \
--gradient_checkpointing true \
--batch_size 1 \
--weight_decay 0.1 \
--learning_rate 1e-4 \
--gradient_accumulation_steps 16 \
--max_grad_norm 0.5 \
--warmup_ratio 0.03 \
--eval_steps 100 \
--save_steps 100 \
--save_total_limit 2 \
--logging_steps 10 \
| swift/examples/pytorch/llm/scripts/skywork_13b/qlora/sft.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/skywork_13b/qlora/sft.sh",
"repo_id": "swift",
"token_count": 421
} | 197 |
# Experimental environment: 3090
# 12GB GPU memory
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_sft.py \
--model_id_or_path xverse/XVERSE-13B \
--model_revision master \
--sft_type lora \
--tuner_backend peft \
--template_type default-generation \
--dtype AUTO \
--output_dir output \
--dataset advertise-gen-zh \
--train_dataset_sample 20000 \
--num_train_epochs 1 \
--max_length 2048 \
--check_dataset_strategy warning \
--quantization_bit 4 \
--bnb_4bit_comp_dtype AUTO \
--lora_rank 8 \
--lora_alpha 32 \
--lora_dropout_p 0.05 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--weight_decay 0.1 \
--learning_rate 1e-4 \
--gradient_accumulation_steps 16 \
--max_grad_norm 0.5 \
--warmup_ratio 0.03 \
--eval_steps 100 \
--save_steps 100 \
--save_total_limit 2 \
--logging_steps 10 \
| swift/examples/pytorch/llm/scripts/xverse_13b/qlora/sft.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/xverse_13b/qlora/sft.sh",
"repo_id": "swift",
"token_count": 416
} | 198 |
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python infer_controlnet.py \
--base_model_path "AI-ModelScope/stable-diffusion-v1-5" \
--controlnet_path "train_controlnet" \
--prompt "pale golden rod circle with old lace background" \
--control_image_path "conditioning_image_1.png" \
--image_save_path "output.png" \
--torch_dtype "fp16" \
--seed 0 \
| swift/examples/pytorch/sdxl/scripts/run_infer_controlnet.sh/0 | {
"file_path": "swift/examples/pytorch/sdxl/scripts/run_infer_controlnet.sh",
"repo_id": "swift",
"token_count": 154
} | 199 |
PYTHONPATH=../../../ \
accelerate launch train_text_to_image_lora_sdxl.py \
--pretrained_model_name_or_path="AI-ModelScope/stable-diffusion-xl-base-1.0" \
--pretrained_vae_model_name_or_path="AI-ModelScope/sdxl-vae-fp16-fix" \
--dataset_name="AI-ModelScope/pokemon-blip-captions" \
--caption_column="text" \
--resolution=1024 \
--random_flip \
--train_batch_size=1 \
--num_train_epochs=2 \
--checkpointing_steps=500 \
--learning_rate=1e-04 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--mixed_precision="fp16" \
--seed=42 \
--output_dir="train_text_to_image_lora_sdxl" \
--validation_prompt="cute dragon creature" \
--report_to="tensorboard" \
| swift/examples/pytorch/sdxl/scripts/run_train_text_to_image_lora_sdxl.sh/0 | {
"file_path": "swift/examples/pytorch/sdxl/scripts/run_train_text_to_image_lora_sdxl.sh",
"repo_id": "swift",
"token_count": 292
} | 200 |
PYTHONPATH=. torchrun examples/pytorch/stable_diffusion/finetune_stable_diffusion.py \
--model 'AI-ModelScope/stable-diffusion-xl-base-1.0' \
--model_revision 'v1.0.2' \
--prompt "a dog" \
--work_dir './tmp/lora_diffusion_xl' \
--train_dataset_name 'buptwq/lora-stable-diffusion-finetune' \
--max_epochs 100 \
--lora_rank 16 \
--lora_alpha 32 \
--save_ckpt_strategy 'by_epoch' \
--logging_interval 1 \
--train.dataloader.workers_per_gpu 0 \
--evaluation.dataloader.workers_per_gpu 0 \
--train.optimizer.lr 1e-4 \
--sample_nums 10 \
--num_inference_steps 30 \
--use_model_config true
| swift/examples/pytorch/stable_diffusion/run_train_lora_xl.sh/0 | {
"file_path": "swift/examples/pytorch/stable_diffusion/run_train_lora_xl.sh",
"repo_id": "swift",
"token_count": 297
} | 201 |
{
"framework": "pytorch",
"task": "chat",
"allow_remote": true,
"adapter_cfg": {
"model_id_or_path": "qwen/Qwen-7B-Chat",
"model_revision": "master",
"sft_type": "lora",
"tuner_backend": "peft",
"template_type": "qwen",
"dtype": "bf16",
"system": "You are a helpful assistant."
}
} | swift/output/qwen-7b-chat/v1-20240626-092716/checkpoint-12/configuration.json/0 | {
"file_path": "swift/output/qwen-7b-chat/v1-20240626-092716/checkpoint-12/configuration.json",
"repo_id": "swift",
"token_count": 186
} | 202 |
import os
import subprocess
from swift.llm import ModelType
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
if __name__ == '__main__':
model_name_list = ModelType.get_model_name_list()
success_model_list = []
fpath = os.path.join(os.path.dirname(__file__), 'utils.py')
for model_name in model_name_list:
code = subprocess.run(['python', fpath, '--model_type', model_name])
if code.returncode == 0:
success_model_list.append(model_name)
else:
print(f'model_name: {model_name} not support vllm.')
print(success_model_list)
| swift/scripts/tests/test_vllm.py/main.py/0 | {
"file_path": "swift/scripts/tests/test_vllm.py/main.py",
"repo_id": "swift",
"token_count": 251
} | 203 |
# Copyright (c) Alibaba, Inc. and its affiliates.
import argparse
import os
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image
from modelscope import snapshot_download
def parse_args():
parser = argparse.ArgumentParser(description='Simple example of a ControlNet inference.')
parser.add_argument(
'--base_model_path',
type=str,
default='AI-ModelScope/stable-diffusion-v1-5',
required=True,
help='Path to pretrained model or model identifier from modelscope.cn/models.',
)
parser.add_argument(
'--revision',
type=str,
default=None,
required=False,
help='Revision of pretrained model identifier from modelscope.cn/models.',
)
parser.add_argument(
'--controlnet_path',
type=str,
default=None,
required=False,
help='The path to trained controlnet model.',
)
parser.add_argument(
'--prompt',
type=str,
default=None,
required=True,
help='The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`',
)
parser.add_argument(
'--control_image_path',
type=str,
default=None,
required=True,
help='The path to conditioning image.',
)
parser.add_argument(
'--image_save_path',
type=str,
default=None,
required=True,
help='The path to save generated image',
)
parser.add_argument(
'--torch_dtype',
type=str,
default=None,
choices=['no', 'fp16', 'bf16'],
help=('Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >='
' 1.10.and an Nvidia Ampere GPU. Default to the value of the'
' mixed_precision passed with the `accelerate.launch` command in training script.'),
)
parser.add_argument('--seed', type=int, default=None, help='A seed for inference.')
parser.add_argument(
'--num_inference_steps',
type=int,
default=20,
help=('The number of denoising steps. More denoising steps usually lead to a higher quality image at the \
expense of slower inference.'),
)
parser.add_argument(
'--guidance_scale',
type=float,
default=7.5,
help=('A higher guidance scale value encourages the model to generate images closely linked to the text \
`prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.'),
)
args = parser.parse_args()
return args
def main():
args = parse_args()
if os.path.exists(args.base_model_path):
base_model_path = args.base_model_path
else:
base_model_path = snapshot_download(args.base_model_path, revision=args.revision)
if args.torch_dtype == 'fp16':
torch_dtype = torch.float16
elif args.torch_dtype == 'bf16':
torch_dtype = torch.bfloat16
else:
torch_dtype = torch.float32
controlnet = ControlNetModel.from_pretrained(args.controlnet_path, torch_dtype=torch_dtype)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
base_model_path, controlnet=controlnet, torch_dtype=torch_dtype)
# speed up diffusion process with faster scheduler and memory optimization
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# memory optimization.
pipe.enable_model_cpu_offload()
control_image = load_image(args.control_image_path)
# generate image
generator = torch.manual_seed(args.seed)
image = pipe(
args.prompt, num_inference_steps=args.num_inference_steps, generator=generator, image=control_image).images[0]
image.save(args.image_save_path)
| swift/swift/aigc/diffusers/infer_controlnet.py/0 | {
"file_path": "swift/swift/aigc/diffusers/infer_controlnet.py",
"repo_id": "swift",
"token_count": 1576
} | 204 |
# Copyright (c) Alibaba, Inc. and its affiliates.
from swift.llm import sft_main
if __name__ == '__main__':
sft_main()
| swift/swift/cli/sft.py/0 | {
"file_path": "swift/swift/cli/sft.py",
"repo_id": "swift",
"token_count": 46
} | 205 |
# Copyright (c) Alibaba, Inc. and its affiliates.
from http import HTTPStatus
import requests
from requests.exceptions import HTTPError
from swift.utils.logger import get_logger
logger = get_logger()
class NotSupportError(Exception):
pass
class NoValidRevisionError(Exception):
pass
class NotExistError(Exception):
pass
class RequestError(Exception):
pass
class GitError(Exception):
pass
class InvalidParameter(Exception):
pass
class NotLoginException(Exception):
pass
class FileIntegrityError(Exception):
pass
class FileDownloadError(Exception):
pass
def is_ok(rsp):
""" Check the request is ok
Args:
rsp (Response): The request response body
Returns:
bool: `True` if success otherwise `False`.
"""
return rsp['Code'] == HTTPStatus.OK and rsp['Success']
def _decode_response_error(response: requests.Response):
if 'application/json' in response.headers.get('content-type', ''):
message = response.json()
else:
message = response.content.decode('utf-8')
return message
def handle_http_post_error(response, url, request_body):
try:
response.raise_for_status()
except HTTPError as error:
message = _decode_response_error(response)
raise HTTPError('Request %s with body: %s exception, '
'Response details: %s' % (url, request_body, message)) from error
def handle_http_response(response, logger, cookies, model_id):
try:
response.raise_for_status()
except HTTPError as error:
if cookies is None: # code in [403] and
pass
message = _decode_response_error(response)
raise HTTPError('Response details: %s' % message) from error
def raise_on_error(rsp):
"""If response error, raise exception
Args:
rsp (_type_): The server response
Raises:
RequestError: the response error message.
Returns:
bool: True if request is OK, otherwise raise `RequestError` exception.
"""
if rsp['Code'] == HTTPStatus.OK:
return True
else:
raise RequestError(rsp['Message'])
def datahub_raise_on_error(url, rsp):
"""If response error, raise exception
Args:
url (str): The request url
rsp (HTTPResponse): The server response.
Raises:
RequestError: the http request error.
Returns:
bool: `True` if request is OK, otherwise raise `RequestError` exception.
"""
if rsp.get('Code') == HTTPStatus.OK:
return True
else:
raise RequestError(
f"Url = {url}, Message = {rsp.get('Message')}, Please specify correct dataset_name and namespace.")
def raise_for_http_status(rsp):
"""Attempt to decode utf-8 first since some servers
localize reason strings, for invalid utf-8, fall back
to decoding with iso-8859-1.
Args:
rsp: The http response.
Raises:
HTTPError: The http error info.
"""
http_error_msg = ''
if isinstance(rsp.reason, bytes):
try:
reason = rsp.reason.decode('utf-8')
except UnicodeDecodeError:
reason = rsp.reason.decode('iso-8859-1')
else:
reason = rsp.reason
if 400 <= rsp.status_code < 500:
http_error_msg = u'%s Client Error: %s for url: %s' % (rsp.status_code, reason, rsp.url)
elif 500 <= rsp.status_code < 600:
http_error_msg = u'%s Server Error: %s for url: %s' % (rsp.status_code, reason, rsp.url)
if http_error_msg:
req = rsp.request
if req.method == 'POST':
http_error_msg = u'%s, body: %s' % (http_error_msg, req.body)
raise HTTPError(http_error_msg, response=rsp)
| swift/swift/hub/errors.py/0 | {
"file_path": "swift/swift/hub/errors.py",
"repo_id": "swift",
"token_count": 1467
} | 206 |
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"total_num_steps": "auto",
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "none",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
| swift/swift/llm/ds_config/zero2.json/0 | {
"file_path": "swift/swift/llm/ds_config/zero2.json",
"repo_id": "swift",
"token_count": 702
} | 207 |
# Copyright (c) Alibaba, Inc. and its affiliates.
import os
import sys
import types
from collections import OrderedDict
from typing import List, Optional, Tuple
import safetensors
import torch
import torch.nn.functional as F
import transformers
from packaging import version
from peft import PeftModel
from torch.utils.data import DataLoader
from transformers import PreTrainedModel, trainer
from transformers.modeling_utils import unwrap_model
from swift.utils import get_logger, torchacc_trim_graph, use_torchacc
logger = get_logger()
# DataLoader
def get_bucket_sizes(max_length: int) -> List[int]:
return [max_length // 4 * (i + 1) for i in range(4)]
def _get_closet_bucket(bucket_sizes, data_length):
"""Select the one from bucket_sizes that is closest in distance to
data_length. This is required for TorchAcc.
"""
cloest_length = sys.maxsize
for b in bucket_sizes:
if b == data_length or ((b < cloest_length) and (b > data_length)):
cloest_length = b
if cloest_length == sys.maxsize:
bucket_sizes.append(data_length)
cloest_length = data_length
return cloest_length
def pad_and_split_batch(padding_to, input_ids, attention_mask, labels, loss_scale, max_length, tokenizer, rank,
world_size):
if padding_to is None:
longest_len = input_ids.shape[-1]
bucket_sizes = get_bucket_sizes(max_length)
bucket_data_length = _get_closet_bucket(bucket_sizes, longest_len)
padding_length = bucket_data_length - input_ids.shape[1]
input_ids = F.pad(input_ids, (0, padding_length), 'constant', tokenizer.pad_token_id)
attention_mask = F.pad(attention_mask, (0, padding_length), 'constant', 0)
if loss_scale:
loss_scale = F.pad(loss_scale, (0, padding_length), 'constant', 0.)
labels = F.pad(labels, (0, padding_length), 'constant', -100)
# manully split the batch to different DP rank.
batch_size = input_ids.shape[0] // world_size
if batch_size > 0:
start = rank * batch_size
end = (rank + 1) * batch_size
input_ids = input_ids[start:end, :]
attention_mask = attention_mask[start:end, :]
labels = labels[start:end, :]
if loss_scale:
loss_scale = loss_scale[start:end, :]
return input_ids, attention_mask, labels, loss_scale
def ta_train_dataloader(train_dataset, data_collator, sampler, args, batch_size):
# patch skip_first_batches for customized dataloader.
def acc_skip_first_batches(dataloader, num_batches=0):
from accelerate.data_loader import SkipBatchSampler
batch_sampler = SkipBatchSampler(dataloader._loader.batch_sampler, skip_batches=num_batches)
try:
dataset = dataloader.dataset
except AttributeError:
dataset = dataloader._loader.dataset
dataloader_params = {
'collate_fn': data_collator,
'num_workers': args.dataloader_num_workers,
'pin_memory': args.dataloader_pin_memory,
'persistent_workers': args.dataloader_persistent_workers,
}
if not isinstance(train_dataset, torch.utils.data.IterableDataset):
dataloader_params['batch_sampler'] = batch_sampler
dataloader_params['worker_init_fn'] = trainer.seed_worker
return ta.AsyncLoader(DataLoader(dataset, **dataloader_params), args.device)
trainer.skip_first_batches = acc_skip_first_batches
# dataloader for TorchAcc.
import torchacc as ta
dataloader_params = {
'batch_size': batch_size,
'collate_fn': data_collator,
'num_workers': args.dataloader_num_workers,
'pin_memory': args.dataloader_pin_memory,
'persistent_workers': args.dataloader_persistent_workers,
}
if not isinstance(train_dataset, torch.utils.data.IterableDataset):
dataloader_params['sampler'] = sampler
dataloader_params['drop_last'] = args.dataloader_drop_last
dataloader_params['worker_init_fn'] = trainer.seed_worker
return ta.AsyncLoader(DataLoader(train_dataset, **dataloader_params), args.device)
def ta_eval_dataloader(eval_dataset, data_collator, sampler, args):
import torchacc as ta
dataloader_params = {
'batch_size': args.eval_batch_size,
'collate_fn': data_collator,
'num_workers': args.dataloader_num_workers,
'pin_memory': args.dataloader_pin_memory,
'persistent_workers': args.dataloader_persistent_workers,
}
if not isinstance(eval_dataset, torch.utils.data.IterableDataset):
dataloader_params['sampler'] = sampler
dataloader_params['drop_last'] = args.dataloader_drop_last
return ta.AsyncLoader(DataLoader(eval_dataset, **dataloader_params), args.device)
def ta_test_dataloader(test_dataset, data_collator, sampler, args):
import torchacc as ta
dataloader_params = {
'batch_size': args.eval_batch_size,
'collate_fn': data_collator,
'num_workers': args.dataloader_num_workers,
'pin_memory': args.dataloader_pin_memory,
'persistent_workers': args.dataloader_persistent_workers,
}
if not isinstance(test_dataset, torch.utils.data.IterableDataset):
dataloader_params['sampler'] = sampler
dataloader_params['drop_last'] = args.dataloader_drop_last
# We use the same batch_size as for eval.
return ta.AsyncLoader(DataLoader(test_dataset, **dataloader_params), args.device)
# Save/load checkpoint
def consolidate_checkpoint(resume_from_checkpoint, model_name='adapter_model'):
""" Consolidate the sharded TorchAcc checkpoints into a single model checkpoint.
"""
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import consolidate_sharded_state_dicts
if model_name not in ('adapter_model', 'model'):
logger.error('Only support PeftModel and PreTrainedModel.')
return
model_dir = os.path.join(resume_from_checkpoint, '0')
is_pretrained_model = False
if os.path.exists(os.path.join(model_dir, f'{model_name}.safetensors')):
use_safetensors = True
elif os.path.exists(os.path.join(model_dir, f'{model_name}.bin')):
use_safetensors = False
elif os.path.exists(os.path.join(model_dir, 'pytorch_model.bin')):
# PreTrainedModel use 'pytorch_model.bin' and 'model.safetensors'
use_safetensors = False
is_pretrained_model = True
else:
logger.error('Cannot find checkpoint.')
state_dict_list = []
if xm.is_master_ordinal(local=False) and use_safetensors:
from safetensors.torch import load_file
for rank in range(xm.xrt_world_size()):
shard_dir = os.path.join(resume_from_checkpoint, f'{rank}')
filename = os.path.join(shard_dir, f'{model_name}.safetensors')
state_dict = load_file(filename, device='cpu')
state_dict = OrderedDict(('_fsdp_wrapped_module.' + k, v) for k, v in state_dict.items())
state_dict_list.append(state_dict)
shard_metadata = torch.load(os.path.join(model_dir, 'shard_meta.pth'), map_location='cpu')
elif xm.is_master_ordinal(local=False):
for rank in range(xm.xrt_world_size()):
shard_dir = os.path.join(resume_from_checkpoint, f'{rank}')
if not is_pretrained_model:
filename = os.path.join(shard_dir, f'{model_name}.bin')
else:
filename = os.path.join(shard_dir, 'pytorch_model.bin')
state_dict = torch.load(filename, map_location='cpu')
state_dict = OrderedDict(('_fsdp_wrapped_module.' + k, v) for k, v in state_dict.items())
state_dict_list.append(state_dict)
shard_metadata = torch.load(os.path.join(model_dir, 'shard_meta.pth'), map_location='cpu')
if xm.is_master_ordinal(local=False):
full_state_dict = consolidate_sharded_state_dicts(state_dict_list, shard_metadata)
# peft will prepend "default." prefix automatically, so we remove the
# "default." prefix to prevent the duplication of the prefix.
full_state_dict = OrderedDict((k.replace('default.', ''), v) for k, v in full_state_dict.items())
torch.save(full_state_dict, os.path.join(resume_from_checkpoint, f'{model_name}.bin'))
if model_name == 'adapter_model':
config_path = os.path.join(resume_from_checkpoint, 'adapter_config.json')
old_config_path = os.path.join(model_dir, 'adapter_config.json')
os.system(f'cp {old_config_path} {config_path}')
xm.rendezvous('ckpt_consolidation')
def ta_save_optimizer_and_scheduler(optimizer, lr_scheduler, output_dir):
import torch_xla.core.xla_model as xm
xm.rendezvous('saving_optimizer_states')
torch.save(optimizer.state_dict(), os.path.join(output_dir, f'optimizer_{xm.get_ordinal()}.pt'))
torch.save(lr_scheduler.state_dict(), os.path.join(output_dir, f'scheduler_{xm.get_ordinal()}.pt'))
xm.rendezvous('saving_optimizer_states_done')
def ta_load_optimizer_and_scheduler(optimizer, lr_scheduler, checkpoint, device):
import torch_xla.core.xla_model as xm
optimizer_state = torch.load(os.path.join(checkpoint, f'optimizer_{xm.get_ordinal()}.pt'), map_location='cpu')
lr_scheduler_state = torch.load(os.path.join(checkpoint, f'scheduler_{xm.get_ordinal()}.pt'), map_location='cpu')
xm.send_cpu_data_to_device(optimizer_state, device)
xm.send_cpu_data_to_device(lr_scheduler_state, device)
optimizer.load_state_dict(optimizer_state)
lr_scheduler.load_state_dict(lr_scheduler_state)
return optimizer, lr_scheduler
def save_ta_ddp_checkpoint(self_model, tokenizer, args, output_dir: Optional[str] = None):
output_dir = output_dir if output_dir is not None else args.output_dir
import torch_xla.core.xla_model as xm
model = self_model
if xm.is_master_ordinal():
os.makedirs(output_dir, exist_ok=True)
torch.save(args, os.path.join(output_dir, 'training_args.bin'))
xm.mark_step()
# Save a trained model and configuration using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
supported_classes = (PreTrainedModel, PeftModel)
if not isinstance(model, supported_classes):
if isinstance(unwrap_model(model), supported_classes):
unwrap_model(model).save_pretrained(
output_dir,
is_main_process=args.should_save,
state_dict=xm._maybe_convert_to_cpu(model.state_dict()),
save_function=xm.save,
safe_serialization=args.save_safetensors,
)
else:
logger.info('Trainer.model is not a `PreTrainedModel`, only saving its state dict.')
state_dict = xm._maybe_convert_to_cpu(model.state_dict())
if args.save_safetensors:
safetensors.torch.save_file(state_dict, os.path.join(output_dir, 'model.safetensors'))
else:
torch.save(state_dict, os.path.join(output_dir, 'pytorch_model.bin'))
else:
model.save_pretrained(
output_dir,
is_main_process=args.should_save,
save_function=xm.save,
safe_serialization=args.save_safetensors,
state_dict=xm._maybe_convert_to_cpu(model.state_dict()))
if tokenizer is not None and args.should_save:
tokenizer.save_pretrained(output_dir)
def save_ta_fsdp_checkpoint(self_model, tokenizer, args, output_dir):
import torch_xla.core.xla_model as xm
xm.mark_step()
if xm.is_master_ordinal(local=False):
os.makedirs(output_dir, exist_ok=True)
torch.save(args, os.path.join(output_dir, 'training_args.bin'))
model = self_model._get_underlay_model().module.module
supported_classes = (PreTrainedModel, PeftModel)
save_safetensors = args.save_safetensors
# Save a trained model and configuration using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
xm.rendezvous('saving_checkpoint')
out_dir = os.path.join(output_dir, f'{xm.get_ordinal()}')
if not isinstance(model, supported_classes):
if isinstance(unwrap_model(model), supported_classes):
unwrap_model(model).save_pretrained(
out_dir,
state_dict=xm._maybe_convert_to_cpu(model.state_dict()),
save_function=xm.save,
safe_serialization=args.save_safetensors,
)
else:
logger.info('Trainer.model is not a `PreTrainedModel`, only saving its state dict.')
state_dict = xm._maybe_convert_to_cpu(model.state_dict())
if save_safetensors:
safetensors.torch.save_file(state_dict, os.path.join(out_dir, 'model.safetensors'))
else:
torch.save(state_dict, os.path.join(out_dir, 'pytorch_model.bin'))
else:
model.save_pretrained(
out_dir,
save_function=xm.save,
safe_serialization=args.save_safetensors,
state_dict=xm._maybe_convert_to_cpu(model.state_dict()))
# save shard_metadata for consolidation.
shard_meta = self_model._get_underlay_model().get_shard_metadata()
xm.save(shard_meta, os.path.join(out_dir, 'shard_meta.pth'))
xm.rendezvous('saving_checkpoint_done')
if tokenizer is not None and args.should_save:
tokenizer.save_pretrained(output_dir, is_main_process=xm.is_master_ordinal(local=False), save_function=xm.save)
def ta_trim_graph():
if use_torchacc() and torchacc_trim_graph():
import torchacc as ta
ta.mark_step()
# Model patch
def rotate_half(x):
"""Rotates half the hidden dims of the input."""
x1 = x[..., :x.shape[-1] // 2]
x2 = x[..., x.shape[-1] // 2:]
return torch.cat((-x2, x1), dim=-1)
def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
"""Applies Rotary Position Embedding to the query and key tensors.
Args:
q (`torch.Tensor`): The query tensor.
k (`torch.Tensor`): The key tensor.
cos (`torch.Tensor`): The cosine part of the rotary embedding.
sin (`torch.Tensor`): The sine part of the rotary embedding.
position_ids (`torch.Tensor`):
The position indices of the tokens corresponding to the query and key tensors. For example, this can be
used to pass offsetted position ids when working with a KV-cache.
unsqueeze_dim (`int`, *optional*, defaults to 1):
The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
Returns:
`tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
"""
if position_ids is not None:
cos = cos[position_ids].unsqueeze(unsqueeze_dim)
sin = sin[position_ids].unsqueeze(unsqueeze_dim)
else:
cos = cos.unsqueeze(unsqueeze_dim)
sin = sin.unsqueeze(unsqueeze_dim)
q_embed = (q * cos) + (rotate_half(q) * sin)
k_embed = (k * cos) + (rotate_half(k) * sin)
return q_embed, k_embed
def patch_acc_model(model, args):
if not args.use_flash_attn:
logger.warn('Currently use flash attn for torchacc.')
if args.model_type.startswith('qwen1half'):
model = patch_qwen2_model(model)
elif args.model_type.startswith('qwen'):
import torchacc as ta
model = ta.patch_qwen_model(model)
elif args.model_type.startswith('baichuan'):
model = patch_baichuan_model(model)
elif args.model_type.startswith('llama') or args.model_type.startswith('yi'):
model = patch_llama_model(model)
elif args.model_type.startswith('chatglm'):
model = patah_chatglm_model(model)
return model
def patch_llama_model(model):
def update_causal_mask(
self,
attention_mask: torch.Tensor,
input_tensor: torch.Tensor,
cache_position: torch.Tensor,
past_seen_tokens: int,
):
# attention_mask is not supported in TorchAcc.
return None
def llama_attn_forward(self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: bool = False,
use_cache: bool = False,
cache_position: Optional[torch.LongTensor] = None,
**kwargs) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
from torchacc.ops import flash_attn_varlen_xla
import einops
bsz, q_len, _ = hidden_states.size()
query_states = (self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2))
key_states = (
self.k_proj(hidden_states).view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2))
value_states = (
self.v_proj(hidden_states).view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2))
kv_seq_len = key_states.shape[-2]
assert past_key_value is None, 'past_key_value is not supported'
if version.parse(transformers.__version__) >= version.parse('4.36'):
cos, sin = self.rotary_emb(value_states, position_ids)
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
else:
cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
assert not output_attentions, 'output_attentions is not supported'
if past_key_value is not None:
key_states = torch.cat([past_key_value[0], key_states], dim=2)
value_states = torch.cat([past_key_value[1], value_states], dim=2)
past_key_value = (key_states, value_states) if use_cache else None
# See https://github.com/HazyResearch/flash-attention/blob/main/flash_attn/flash_attention.py
# if attention_mask is not None:
# value_states = value_states * attention_mask.unsqueeze(1).unsqueeze(-1)
q = einops.rearrange(query_states, 'b h s ... -> (b s) h ...')
k = einops.rearrange(key_states, 'b h s ... -> (b s) h ...')
v = einops.rearrange(value_states, 'b h s ... -> (b s) h ...')
max_s = q_len
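# cu_q_lens holds the cumulative sequence lengths expected by the varlen flash attention
# kernel: [0, q_len, 2*q_len, ..., bsz*q_len], i.e. the start offset of every sequence in
# the packed (b*s) layout built above.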
cu_q_lens = torch.arange(0, (bsz + 1) * q_len, step=q_len, dtype=torch.int32, device=q.device)
output = flash_attn_varlen_xla(
q, k, v, cu_q_lens, cu_q_lens, max_s, max_s, 0.0, softmax_scale=None, causal=True)
output = einops.rearrange(output, '(b s) ... -> b s ...', b=bsz)
return self.o_proj(einops.rearrange(output, 'b s h d -> b s (h d)')), None, past_key_value
for layer in model.model.layers:
layer.self_attn.forward = types.MethodType(llama_attn_forward, layer.self_attn)
if version.parse(transformers.__version__) >= version.parse('4.40'):
model.model._update_causal_mask = types.MethodType(update_causal_mask, model.model)
return model
def patch_chatglm_model(model):
def chatglm_apply_rotary_pos_emb(x: torch.Tensor, rope_cache: torch.Tensor) -> torch.Tensor:
# x: [sq, b, np, hn]
sq, _, np, _ = x.size(0), x.size(1), x.size(2), x.size(3)
rot_dim = rope_cache.shape[-2] * 2
x, x_pass = x[..., :rot_dim], x[..., rot_dim:]
# truncate to support variable sizes
rope_cache = rope_cache[:sq]
xshaped = x.reshape(sq, -1, np, rot_dim // 2, 2)
rope_cache = rope_cache.view(sq, -1, 1, xshaped.size(3), 2)
x_out2 = torch.stack(
[
xshaped[..., 0] * rope_cache[..., 0] - xshaped[..., 1] * rope_cache[..., 1],
xshaped[..., 1] * rope_cache[..., 0] + xshaped[..., 0] * rope_cache[..., 1],
],
-1,
)
x_out2 = x_out2.flatten(3)
return torch.cat((x_out2, x_pass), dim=-1)
def chatglm_attn_forward(self,
hidden_states,
attention_mask,
rotary_pos_emb,
kv_cache=None,
use_cache=True,
**kwargs):
# hidden_states: [sq, b, h]
# =================================================
# Pre-allocate memory for key-values for inference.
# =================================================
# =====================
# Query, Key, and Value
# =====================
# Attention heads [sq, b, h] --> [sq, b, (np * 3 * hn)]
mixed_x_layer = self.query_key_value(hidden_states)
if self.multi_query_attention:
(query_layer, key_layer, value_layer) = mixed_x_layer.split(
[
self.num_attention_heads_per_partition * self.hidden_size_per_attention_head,
self.num_multi_query_groups_per_partition * self.hidden_size_per_attention_head,
self.num_multi_query_groups_per_partition * self.hidden_size_per_attention_head,
],
dim=-1,
)
query_layer = query_layer.view(query_layer.size()[:-1] + (self.num_attention_heads_per_partition,
self.hidden_size_per_attention_head))
key_layer = key_layer.view(key_layer.size()[:-1] + (self.num_multi_query_groups_per_partition,
self.hidden_size_per_attention_head))
value_layer = value_layer.view(value_layer.size()[:-1] + (self.num_multi_query_groups_per_partition,
self.hidden_size_per_attention_head))
else:
new_tensor_shape = mixed_x_layer.size()[:-1] + (self.num_attention_heads_per_partition,
3 * self.hidden_size_per_attention_head)
mixed_x_layer = mixed_x_layer.view(*new_tensor_shape)
# [sq, b, np, 3 * hn] --> 3 [sq, b, np, hn]
(query_layer, key_layer, value_layer) = split_tensor_along_last_dim(mixed_x_layer, 3)
# apply relative positional encoding (rotary embedding)
if rotary_pos_emb is not None:
query_layer = chatglm_apply_rotary_pos_emb(query_layer, rotary_pos_emb)
key_layer = chatglm_apply_rotary_pos_emb(key_layer, rotary_pos_emb)
# adjust key and value for inference
if kv_cache is not None:
cache_k, cache_v = kv_cache
key_layer = torch.cat((cache_k, key_layer), dim=0)
value_layer = torch.cat((cache_v, value_layer), dim=0)
if use_cache:
kv_cache = (key_layer, value_layer)
else:
kv_cache = None
if self.multi_query_attention:
key_layer = key_layer.unsqueeze(-2)
key_layer = key_layer.expand(
-1, -1, -1, self.num_attention_heads_per_partition // self.num_multi_query_groups_per_partition, -1)
key_layer = key_layer.contiguous().view(key_layer.size()[:2] + (self.num_attention_heads_per_partition,
self.hidden_size_per_attention_head))
value_layer = value_layer.unsqueeze(-2)
value_layer = value_layer.expand(
-1, -1, -1, self.num_attention_heads_per_partition // self.num_multi_query_groups_per_partition, -1)
value_layer = value_layer.contiguous().view(value_layer.size()[:2]
+ (self.num_attention_heads_per_partition,
self.hidden_size_per_attention_head))
# ==================================
# core attention computation
# ==================================
from torchacc.ops import flash_attn_varlen_qkvpacked_xla
import einops
query_layer, key_layer, value_layer = [k.permute(1, 2, 0, 3) for k in [query_layer, key_layer, value_layer]]
bsz, _, q_len, _ = query_layer.size()
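# Pack q/k/v into one tensor and flatten the batch and sequence dims so the result matches
# the [total_tokens, 3, heads, head_dim] layout consumed by flash_attn_varlen_qkvpacked_xla.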
qkv = torch.stack([query_layer, key_layer, value_layer], dim=2)
qkv = qkv.transpose(1, 3)
qkv = einops.rearrange(qkv, 'b s ... -> (b s) ...')
cu_q_lens = torch.arange(0, (bsz + 1) * q_len, step=q_len, dtype=torch.int32, device=qkv.device)
context_layer = flash_attn_varlen_qkvpacked_xla(
qkv, cu_q_lens, q_len, dropout_p=0.0, softmax_scale=None, causal=True)
context_layer = einops.rearrange(context_layer, '(b s) ... -> b s ...', b=bsz)
context_layer = context_layer.permute(1, 0, 2, 3)
new_context_layer_shape = context_layer.size()[:-2] + (self.core_attention.hidden_size_per_partition, )
context_layer = context_layer.reshape(*new_context_layer_shape)
# =================
# Output. [sq, b, h]
# =================
output = self.dense(context_layer)
return output, kv_cache
def torchacc_swiglu(x):
x = torch.chunk(x, 2, dim=-1)
return F.silu(x[0]).to(x[0].dtype) * x[1]
# patch attention
for layer in model.transformer.encoder.layers:
layer.self_attention.forward = types.MethodType(chatglm_attn_forward, layer.self_attention)
layer.mlp.activation_func = torchacc_swiglu
return model
def patch_baichuan_model(model):
def baichuan_attn_forward(self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: bool = False,
use_cache: bool = False,
**kwargs) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
import einops
bsz, q_len, _ = hidden_states.size()
proj = self.W_pack(hidden_states)
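# W_pack projects hidden_states to the concatenated q/k/v; unflatten splits the last dim
# into (3, hidden_size) and the transpose/squeeze below yields one projection per index,
# so proj[0]/proj[1]/proj[2] are the query/key/value projections.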
proj = (proj.unflatten(-1, (3, self.hidden_size)).unsqueeze(0).transpose(0, -2).squeeze(-2))
query_states = (proj[0].view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2))
key_states = (proj[1].view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2))
value_states = (proj[2].view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2))
kv_seq_len = key_states.shape[-2]
if past_key_value is not None:
kv_seq_len += past_key_value[0].shape[-2]
if past_key_value is not None:
# reuse k, v, self_attention
key_states = torch.cat([past_key_value[0], key_states], dim=2)
value_states = torch.cat([past_key_value[1], value_states], dim=2)
past_key_value = (key_states, value_states) if use_cache else None
from torchacc.ops import flash_attn_varlen_xla
query_states = query_states.transpose(1, 2)
key_states = key_states.transpose(1, 2)
value_states = value_states.transpose(1, 2)
q, k, v = [einops.rearrange(x, 'b s ... -> (b s) ...') for x in [query_states, key_states, value_states]]
cu_q_lens = torch.arange(0, (bsz + 1) * q_len, step=q_len, dtype=torch.int32, device=q.device)
output = flash_attn_varlen_xla(
q, k, v, cu_q_lens, cu_q_lens, q_len, q_len, 0.0, softmax_scale=None, causal=True)
output = einops.rearrange(output, '(b s) ... -> b s ...', b=bsz)
output = self.o_proj(einops.rearrange(output, 'b s h d -> b s (h d)'))
return output, None, past_key_value
for layer in model.base_model.layers:
layer.self_attn.forward = types.MethodType(baichuan_attn_forward, layer.self_attn)
return model
def patch_qwen2_model(model):
def qwen2_attn_forward(
self,
hidden_states,
attention_mask=None,
position_ids=None,
past_key_value=None,
output_attentions: bool = False,
use_cache: bool = False,
**kwargs,
):
bsz, q_len, _ = hidden_states.size()
query_states = self.q_proj(hidden_states)
key_states = self.k_proj(hidden_states)
value_states = self.v_proj(hidden_states)
query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
kv_seq_len = key_states.shape[-2]
if past_key_value is not None:
if self.layer_idx is None:
raise ValueError(
f'The cache structure has changed since version v4.36. If you are using {self.__class__.__name__} '
'for auto-regressive decoding with k/v caching, please make sure to initialize the attention class '
'with a layer index.')
kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
# Because the input can be padded, the absolute sequence length depends on the max position id.
# rotary_seq_len = max(kv_seq_len, position_ids[:, -1].max().item()) + 1
rotary_seq_len = kv_seq_len + 1
cos, sin = self.rotary_emb(value_states, seq_len=rotary_seq_len)
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
dropout_rate = 0.0 if not self.training else self.attention_dropout
# In PEFT, the layer norms are usually cast to float32 for training stability, so the
# input hidden states may have been silently upcast to float32. Cast them back to the
# compute dtype so flash attention works as expected.
input_dtype = query_states.dtype
if input_dtype == torch.float32:
if torch.is_autocast_enabled():
target_dtype = torch.get_autocast_gpu_dtype()
# Handle the case where the model is quantized
elif hasattr(self.config, '_pre_quantization_dtype'):
target_dtype = self.config._pre_quantization_dtype
else:
target_dtype = self.q_proj.weight.dtype
query_states = query_states.to(target_dtype)
key_states = key_states.to(target_dtype)
value_states = value_states.to(target_dtype)
# Reshape to the expected shape for Flash Attention
query_states = query_states.transpose(1, 2)
key_states = key_states.transpose(1, 2)
value_states = value_states.transpose(1, 2)
from torchacc.ops import flash_attn_varlen_xla
import einops
q, k, v = [einops.rearrange(x, 'b s ... -> (b s) ...') for x in [query_states, key_states, value_states]]
cu_q_lens = torch.arange(0, (bsz + 1) * q_len, step=q_len, dtype=torch.int32, device=q.device)
attn_output = flash_attn_varlen_xla(
q, k, v, cu_q_lens, cu_q_lens, q_len, q_len, dropout_rate, softmax_scale=None, causal=True)
attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
attn_output = self.o_proj(attn_output)
if not output_attentions:
attn_weights = None
return attn_output, attn_weights, past_key_value
def qwen2_forward(self,
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
**kwargs):
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states)
use_cache = use_cache if use_cache is not None else self.config.use_cache
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
# retrieve input_ids and inputs_embeds
if input_ids is not None and inputs_embeds is not None:
raise ValueError('You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time')
elif input_ids is not None:
batch_size, seq_length = input_ids.shape
elif inputs_embeds is not None:
batch_size, seq_length, _ = inputs_embeds.shape
else:
raise ValueError('You have to specify either decoder_input_ids or decoder_inputs_embeds')
if self.gradient_checkpointing and self.training:
if use_cache:
use_cache = False
past_key_values_length = 0
if use_cache:
use_legacy_cache = not isinstance(past_key_values, Cache)
if use_legacy_cache:
past_key_values = DynamicCache.from_legacy_cache(past_key_values)
past_key_values_length = past_key_values.get_usable_length(seq_length)
if position_ids is None:
device = input_ids.device if input_ids is not None else inputs_embeds.device
position_ids = torch.arange(
past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device)
position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
else:
position_ids = position_ids.view(-1, seq_length).long()
if inputs_embeds is None:
inputs_embeds = self.embed_tokens(input_ids)
hidden_states = inputs_embeds
# decoder layers
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
next_decoder_cache = None
for decoder_layer in self.layers:
if output_hidden_states:
all_hidden_states += (hidden_states, )
if self.gradient_checkpointing and self.training:
layer_outputs = self._gradient_checkpointing_func(
decoder_layer.__call__,
hidden_states,
attention_mask,
position_ids,
past_key_values,
output_attentions,
use_cache,
)
else:
layer_outputs = decoder_layer(
hidden_states,
attention_mask=attention_mask,
position_ids=position_ids,
past_key_value=past_key_values,
output_attentions=output_attentions,
use_cache=use_cache,
)
hidden_states = layer_outputs[0]
if use_cache:
next_decoder_cache = layer_outputs[2 if output_attentions else 1]
if output_attentions:
all_self_attns += (layer_outputs[1], )
hidden_states = self.norm(hidden_states)
# add hidden states from the last decoder layer
if output_hidden_states:
all_hidden_states += (hidden_states, )
next_cache = None
if use_cache:
next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
if not return_dict:
return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
from transformers.modeling_outputs import BaseModelOutputWithPast
return BaseModelOutputWithPast(
last_hidden_state=hidden_states,
past_key_values=next_cache,
hidden_states=all_hidden_states,
attentions=all_self_attns,
)
for layer in model.model.layers:
layer.self_attn.forward = types.MethodType(qwen2_attn_forward, layer.self_attn)
model.model.forward = types.MethodType(qwen2_forward, model.model)
return model
| swift/swift/torchacc_utils.py/0 | {"file_path": "swift/swift/torchacc_utils.py", "repo_id": "swift", "token_count": 17393} | 208 |
# Copyright (c) Alibaba, Inc. and its affiliates.
from typing import TYPE_CHECKING
from swift.utils.import_utils import _LazyModule
if TYPE_CHECKING:
from .utils import create_optimizer_and_scheduler, GaLoreConfig
from .adafactor import GaLoreAdafactor
from .adamw8bit import GaLoreAdamW8bit
from .adamw import GaLoreAdamW
else:
_import_structure = {
'utils': ['GaLoreConfig', 'create_optimizer_and_scheduler'],
'adafactor': ['GaLoreAdafactor'],
'adamw8bit': ['GaLoreAdamW8bit'],
'adamw': ['GaLoreAdamW'],
}
import sys
sys.modules[__name__] = _LazyModule(
__name__,
globals()['__file__'],
_import_structure,
module_spec=__spec__,
extra_objects={},
)
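# Usage sketch (not part of the original file): thanks to the _LazyModule registration above,
# the optimizer implementations are only imported on first attribute access, e.g.:
#   from swift.trainers.optimizers.galore import GaLoreConfig, GaLoreAdamW
#   cfg = GaLoreConfig(...)  # constructor arguments omitted; see utils.py for the real fields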
| swift/swift/trainers/optimizers/galore/__init__.py/0 | {"file_path": "swift/swift/trainers/optimizers/galore/__init__.py", "repo_id": "swift", "token_count": 337} | 209 |
# Copyright (c) Alibaba, Inc. and its affiliates.
# Copyright 2023-present the HuggingFace Inc. team.
import os
import re
import shutil
from copy import copy
from functools import partial
from inspect import Parameter, Signature, signature
from types import MethodType
from typing import Dict, List, Literal, Optional, Union
import json
import torch
from peft.utils import CONFIG_NAME
from peft.utils.other import SAFETENSORS_WEIGHTS_NAME, WEIGHTS_NAME
from torch import nn
from transformers import Trainer
from swift import SwiftTuners
from swift.hub.snapshot_download import snapshot_download
from swift.utils.constants import DEFAULT_ADAPTER, SWIFT_TYPE_KEY
from swift.utils.logger import get_logger
from .. import PeftConfig, PeftModel, get_peft_model
from .utils import SwiftConfig, SwiftOutput
logger = get_logger()
class SwiftModel(nn.Module):
"""The Swift wrapper model.
Args:
model (`Union[nn.Module, 'SwiftModel']`) A module to be tuned by Swift.
config (`Union[SwiftConfig, Dict[str, SwiftConfig]]`) A config or a dict of {adapter_name: SwiftConfig}.
If it's a config class, the adapter_name will be `default`
extra_state_keys (`List[str]`, `optional`) A list of regex to match the extra state keys to be saved.
inference_mode (bool, `optional`): Load model at inference mode, default False.
"""
EXTRA_STATE_DIR = 'extra_states'
def __init__(self,
model: Union[nn.Module, 'SwiftModel'],
config: Union[SwiftConfig, Dict[str, SwiftConfig]],
extra_state_keys: List[str] = None,
inference_mode: bool = False,
**kwargs):
super().__init__()
self.adapters = {}
self.active_adapters = set()
if isinstance(model, SwiftModel):
self.adapters = model.adapters
extra_state_keys = extra_state_keys or []
extra_state_keys.extend(model.extra_state_keys)
self.active_adapters = model.active_adapters
model = model.base_model
new_adapters = []
if isinstance(config, SwiftConfig):
if DEFAULT_ADAPTER not in self.adapters:
self.adapters[DEFAULT_ADAPTER] = self._prepare_model(model, config, DEFAULT_ADAPTER)
new_adapters.append(DEFAULT_ADAPTER)
else:
logger.warn(f'Adapter {DEFAULT_ADAPTER} has been patched, skip.')
elif isinstance(config, dict):
assert (all(isinstance(c, SwiftConfig) for c in config.values()))
for adapter_name, _config in config.items():
if adapter_name not in self.adapters:
self.adapters[adapter_name] = self._prepare_model(model, _config, adapter_name)
new_adapters.append(adapter_name)
else:
logger.warn(f'Adapter {adapter_name} has been patched, skip.')
self.base_model = model
self.extra_state_keys = extra_state_keys or []
self.has_additional_modules = any([c.config.has_additional_modules for c in self.adapters.values()])
def forward(self, *args, **kwargs):
return self.base_model(*args, **kwargs)
_parameters = [Parameter('self', Parameter.POSITIONAL_ONLY)]
_parameters += list(signature(self.base_model.forward).parameters.values())
forward.__signature__ = Signature(_parameters)
self.forward = MethodType(forward, self)
for adapter_name in new_adapters:
self.activate_adapter(adapter_name)
if inference_mode:
self.eval()
else:
for key, output in self.adapters.items():
if key in new_adapters:
output.mark_trainable_callback(model)
if self.extra_state_keys:
for n, p in model.named_parameters():
if any(re.fullmatch(extra_key, n) for extra_key in self.extra_state_keys):
p.requires_grad = True
@property
def model(self):
return self.base_model
def load_state_dict(self, state_dict, strict=True, adapter_name: str = None):
if adapter_name is not None:
output = self.adapters[adapter_name]
if getattr(output.config, 'modules_to_save', None):
for key, value in copy(state_dict).items():
for module_name in output.config.modules_to_save:
if module_name in key:
state_dict.pop(key)
key = key.replace(module_name, f'{module_name}.modules_to_save.{adapter_name}')
break
state_dict[key] = value
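# The loop below converts a peft-style state dict into this wrapper's naming: the
# 'base_model.model.' prefix is stripped and the adapter name is injected into
# lora_A/lora_B (and their embedding variants) so the keys match the parameters
# created for `adapter_name`.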
for key, value in copy(state_dict).items():
if key.startswith('base_model.model.'):
state_dict.pop(key, None)
key = key[len('base_model.model.'):]
if f'lora_A.{adapter_name}.' not in key and 'lora_A' in key:
state_dict.pop(key, None)
key = key.replace('lora_A.', f'lora_A.{adapter_name}.')
if f'lora_B.{adapter_name}.' not in key and 'lora_B' in key:
state_dict.pop(key, None)
key = key.replace('lora_B.', f'lora_B.{adapter_name}.')
if f'lora_embedding_A.{adapter_name}.' not in key and 'lora_embedding_A' in key:
state_dict.pop(key, None)
key = key.replace('lora_embedding_A.', f'lora_embedding_A.{adapter_name}.')
if f'lora_embedding_B.{adapter_name}.' not in key and 'lora_embedding_B' in key:
state_dict.pop(key, None)
key = key.replace('lora_embedding_B.', f'lora_embedding_B.{adapter_name}.')
state_dict[key] = value
incompatible_keys = self.base_model.load_state_dict(state_dict, False)
if incompatible_keys and len(incompatible_keys[1]) > 0:
logger.error(f'Load state dict with unexpected keys: {incompatible_keys[1]}')
def state_dict(self,
*args,
destination=None,
prefix='',
keep_vars=False,
adapter_name: str = None,
peft_format: bool = False,
**kwargs):
"""
Args:
destination (`dict`, `optional`): If provided, the state of module will
be updated into the dict and the same object is returned.
Otherwise, an ``OrderedDict`` will be created and returned.
Default: ``None``.
prefix (`str`, `optional`): a prefix added to parameter and buffer
names to compose the keys in state_dict. Default: ``''``.
keep_vars (`bool`, `optional`): by default the :class:`~torch.Tensor` s
returned in the state dict are detached from autograd. If it's
set to ``True``, detaching will not be performed.
Default: ``False``.
adapter_name (`str`, `optional`): The name of the adapter's parameters to be saved,
`None` input will save all adapters.
peft_format (`bool`, `optional`): Save with peft format (extra `base_model.model.` prefix)
**kwargs:
save_adapter(`bool`): Save adapters or not, default True
save_extra_states(`bool`): Save extra states or not, default True
Returns:
The state dict to be saved.
"""
state_dict = kwargs.get('state_dict')
if state_dict is None:
state_dict = self.base_model.state_dict(destination=destination, prefix=prefix, keep_vars=keep_vars)
state_dict = {
key[len('base_model.'):] if key.startswith('base_model.') else key: value
for key, value in state_dict.items()
}
if not self.has_additional_modules:
return state_dict
state_dicts = {}
if kwargs.get('save_adapter', True):
for name, output in self.adapters.items():
if (adapter_name == name or adapter_name is None) and output.config.has_additional_modules: # noqa
state_dicts.update(output.state_dict_callback(state_dict, name))
modules_to_save_names = [
sub_name for sub_name, _ in self.base_model.named_parameters()
if f'modules_to_save.{name}' in sub_name
]
for module_name in modules_to_save_names:
if f'modules_to_save.{name}' in module_name:
state_dicts[module_name.replace(f'modules_to_save.{name}.', '')] = state_dict[module_name]
if kwargs.get('save_extra_states', True):
state_dicts.update({
k: v
for k, v in state_dict.items() if any(
re.fullmatch(extra_key, k) for extra_key in self.extra_state_keys)
})
if peft_format:
new_state_dict = {}
for key, value in state_dicts.items():
if not key.startswith('base_model.model.'):
key = 'base_model.model.' + key
key = key.replace(f'lora_A.{adapter_name}.', 'lora_A.')
key = key.replace(f'lora_B.{adapter_name}.', 'lora_B.')
key = key.replace(f'lora_embedding_A.{adapter_name}.', 'lora_embedding_A.')
key = key.replace(f'lora_embedding_B.{adapter_name}.', 'lora_embedding_B.')
new_state_dict[key] = value
state_dicts = new_state_dict
return state_dicts
def __getattr__(self, name: str):
"""Forward missing attributes to the wrapped module."""
try:
return super().__getattr__(name) # defer to nn.Module's logic
except AttributeError:
return getattr(self.base_model, name)
@staticmethod
def load_state_file(path, device: Optional[str] = None):
"""Load a state dict file by the input path.
Args:
path: The local dir to load the state file.
Returns:
The state dict.
"""
if device is None:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
if os.path.exists(os.path.join(path, SAFETENSORS_WEIGHTS_NAME)):
filename = os.path.join(path, SAFETENSORS_WEIGHTS_NAME)
from safetensors.torch import load_file as safe_load_file
return safe_load_file(filename, device=device)
elif os.path.exists(os.path.join(path, WEIGHTS_NAME)):
filename = os.path.join(path, WEIGHTS_NAME)
return torch.load(filename, map_location=device)
return None
def create_optimizer_param_groups(self, **defaults):
all_param_names = set()
param_groups = []
for output in self.adapters.values():
if output.optimizer_group_callback:
param_names, param_group = output.optimizer_group_callback(self.model, **defaults)
if param_names and all_param_names & param_names:
raise ValueError('Cannot set one parameter to different param groups')
if param_names and param_group:
all_param_names.update(param_names)
param_groups.extend(param_group)
decay_parameters = Trainer.get_decay_parameter_names(None, self.model)
param_groups.extend([
{
'params': [
p for n, p in self.model.named_parameters()
if (n in decay_parameters and n not in all_param_names and p.requires_grad)
],
'weight_decay':
defaults['weight_decay'],
},
{
'params': [
p for n, p in self.model.named_parameters()
if (n not in decay_parameters and n not in all_param_names and p.requires_grad)
],
'weight_decay':
0.0,
},
])
return param_groups
@classmethod
def from_pretrained(cls,
model: Union[nn.Module, 'SwiftModel'],
model_id: str = None,
adapter_name: Union[str, List[str], Dict[str, str]] = None,
inference_mode: bool = False,
revision: str = None,
**kwargs):
"""Load a set of tuners and corresponding weights by a model_id.
Args:
model (`Union[torch.nn.Module, 'SwiftModel']`): The model to be tuned,
if the model is already a `SwiftModel`, it will be unwrapped and re-wrapped.
model_id (`str`): The model_id or a local model dir of tuners to use to tune the model.
adapter_name (`Union[str, List[str], Dict[str, str]]`): The adapter_names saved in the model repo to load.
Defaults to `None`, which loads all tuners saved in the model_id.
inference_mode (`bool`): Use in the inference mode or not.
revision (`str`): The model revision to use.
**kwargs:
extra_state_keys (`List[str]`, `optional`) A list of regex to match the extra state keys to be saved.
Other parameters will be passed to the device_map.
Returns:
The `SwiftModel` instance.
"""
adapters = {}
model_dir = model_id
if not os.path.exists(model_dir):
model_dir = snapshot_download(model_dir, revision=revision)
if os.path.isfile(model_dir):
raise ValueError(f'Please pass in a local dir or a model id, not a local file: {model_dir}')
extra_state_keys = kwargs.pop('extra_state_keys', None)
if extra_state_keys is None and os.path.isfile(os.path.join(model_dir, cls.EXTRA_STATE_DIR, CONFIG_NAME)):
with open(os.path.join(model_dir, cls.EXTRA_STATE_DIR, CONFIG_NAME), 'r') as file:
_json = json.load(file)
extra_state_keys = _json.get('extra_state_keys')
if adapter_name is None:
adapter_name = [
sub_dir for sub_dir in os.listdir(model_dir)
if os.path.isfile(os.path.join(model_dir, sub_dir, CONFIG_NAME)) and sub_dir != cls.EXTRA_STATE_DIR
]
for _name in adapter_name if isinstance(adapter_name,
list) else [adapter_name] \
if isinstance(adapter_name, str) else adapter_name.keys():
sub_folder = os.path.join(model_dir, _name)
config_file = os.path.join(sub_folder, CONFIG_NAME)
if not os.path.isfile(config_file):
logger.warning(f'{_name} is not a valid tuner')
continue
with open(config_file, 'r') as file:
json_object = json.load(file)
if SWIFT_TYPE_KEY not in json_object:
raise ValueError('Mixed using with peft is not allowed now.')
else:
key = _name if not isinstance(adapter_name, dict) else adapter_name[_name]
adapters[key] = SwiftConfig.from_pretrained(sub_folder)
self = SwiftModel(model, adapters, extra_state_keys, inference_mode, **kwargs)
for _name in adapter_name if isinstance(adapter_name,
list) else [adapter_name] \
if isinstance(adapter_name, str) else adapter_name.keys():
sub_folder = os.path.join(model_dir, _name)
state_dict = cls.load_state_file(sub_folder)
_adapter = _name if not isinstance(adapter_name, dict) else adapter_name[_name]
if state_dict is not None:
model_is_qlora = len([
k for k in self.state_dict().keys()
if k.endswith(f'.lora_A.{_adapter}.weight') or k.endswith(f'.lora_B.{_adapter}.weight')
])
if not model_is_qlora:
# model is lora, state_dict: qlora->lora
state_dict = {
k[:-len(f'.{_name}.weight') if k.endswith(f'.lora_A.{_name}.weight') or k.
endswith(f'.lora_B.{_name}.weight') else None]: v
for k, v in state_dict.items()
}
if any(['loramodule' in key for key in state_dict]):
# Compatible with old checkpoints before ms-swift:1.5.0
state_dict = {
key.replace(f'loramodule_{_name}.lora_A', 'lora_A') if f'loramodule_{_name}.lora_A.{_name}'
in key else key.replace(f'loramodule_{_name}.lora_A', f'lora_A.{_name}.weight'): value
for key, value in state_dict.items()
}
state_dict = {
key.replace(f'loramodule_{_name}.lora_B', 'lora_B') if f'loramodule_{_name}.lora_B.{_name}'
in key else key.replace(f'loramodule_{_name}.lora_B', f'lora_B.{_name}.weight'): value
for key, value in state_dict.items()
}
if isinstance(adapter_name, dict):
# TODO: this logic is fragile! Replacing `_name` may unintentionally rewrite other parts of the key.
state_dict = {key.replace(_name, adapter_name[_name]): value for key, value in state_dict.items()}
self.load_state_dict(state_dict, adapter_name=_adapter)
state_dict = cls.load_state_file(os.path.join(model_dir, self.EXTRA_STATE_DIR))
if state_dict is not None:
self.load_state_dict(state_dict)
return self
@classmethod
def _prepare_model(
cls,
model: nn.Module,
config: SwiftConfig,
adapter_name: str,
):
assert (hasattr(config, SWIFT_TYPE_KEY))
from .mapping import SWIFT_MAPPING
adapter_cls = SWIFT_MAPPING[config.swift_type][1]
if adapter_cls.has_additional_modules() and not getattr(model, 'model_frozen', False):
for _, p in model.named_parameters():
p.requires_grad = False
model.model_frozen = True
config.has_additional_modules = adapter_cls.has_additional_modules()
return adapter_cls.prepare_model(model, config, adapter_name)
def create_or_update_model_card(self, output_dir: str):
"""
Updates or create the model card.
"""
if not os.path.exists(os.path.join(output_dir, 'README.md')):
lines = []
else:
with open(os.path.join(output_dir, 'README.md'), 'r') as f:
lines = f.readlines()
quantization_config = None
if hasattr(self.base_model, 'config') and hasattr(self.base_model.config, 'quantization_config'):
if hasattr(self.base_model.config.quantization_config, 'to_dict'):
quantization_config = self.base_model.config.quantization_config.to_dict()
training_config_text = ''
# Adds quantization information if it was used
if quantization_config is not None:
training_config_text += '\nThe following `bitsandbytes` quantization config was used during training:\n'
training_config_text += '\n'.join([f'- {name}: {value}' for name, value in quantization_config.items()])
training_config_text += '\n'
training_procedure_heading = '## Training procedure\n'
if training_procedure_heading in lines:
lines.insert(lines.index(training_procedure_heading) + 2, training_config_text)
else:
lines.append(f'{training_procedure_heading}\n{training_config_text}')
framework_block_heading = '### Framework versions\n'
from swift.version import __version__
if framework_block_heading in lines:
lines.insert(lines.index(framework_block_heading) + 2, f'- SWIFT {__version__}\n')
else:
lines.append(f'{framework_block_heading}\n\n- SWIFT {__version__}\n')
base_model_heading = '### Base model information\n'
lines.append(f'{base_model_heading}\n\n- BaseModel Class {self.base_model.__class__.__name__}\n')
# write the lines back to README.md
with open(os.path.join(output_dir, 'README.md'), 'w') as f:
f.writelines(lines)
def add_weighted_adapter(
self,
adapters,
weights,
adapter_name,
combination_type='svd',
svd_rank=None,
svd_clamp=None,
svd_full_matrices=True,
svd_driver=None,
density=None,
majority_sign_method: Literal['total', 'frequency'] = 'total',
):
"""
This method adds a new adapter by merging the given adapters with the given weights.
When using the `cat` combination_type, be aware that the rank of the resulting adapter is equal to
the sum of all adapters' ranks, so the mixed adapter may become too big and result in OOM
errors.
Args:
adapters (`list`):
List of adapter names to be merged.
weights (`list`):
List of weights for each adapter.
adapter_name (`str`):
Name of the new adapter.
combination_type (`str`):
The merging type can be one of [`svd`, `linear`, `cat`, `ties`, `ties_svd`, `dare_ties`, `dare_linear`,
`dare_ties_svd`, `dare_linear_svd`, `magnitude_prune`, `magnitude_prune_svd`]. When using the `cat`
combination_type, the rank of the resulting adapter is equal to the sum of all adapters ranks (the
mixed adapter may be too big and result in OOM errors).
svd_rank (`int`, *optional*):
Rank of output adapter for svd. If None provided, will use max rank of merging adapters.
svd_clamp (`float`, *optional*):
A quantile threshold for clamping SVD decomposition output. If None is provided, do not perform
clamping. Defaults to None.
svd_full_matrices (`bool`, *optional*):
Controls whether to compute the full or reduced SVD, and consequently, the shape of the returned
tensors U and Vh. Defaults to True.
svd_driver (`str`, *optional*):
Name of the cuSOLVER method to be used. This keyword argument only works when merging on CUDA. Can be
one of [None, `gesvd`, `gesvdj`, `gesvda`]. For more info please refer to `torch.linalg.svd`
documentation. Defaults to None.
density (`float`, *optional*):
Value between 0 and 1. 0 means all values are pruned and 1 means no values are pruned. Should be used
with [`ties`, `ties_svd`, `dare_ties`, `dare_linear`, `dare_ties_svd`, `dare_linear_svd`,
`magnitude_prune`, `magnitude_prune_svd`]
majority_sign_method (`str`):
The method, should be one of ["total", "frequency"], to use to get the magnitude of the sign values.
Should be used with [`ties`, `ties_svd`, `dare_ties`, `dare_ties_svd`]
"""
from swift.tuners.lora import LoraModel
lora_model = LoraModel(self.model, None, '')
lora_model.peft_config = {key: value.config for key, value in self.adapters.items()}
from peft.tuners.lora import LoraLayer
lora_model.targeted_module_names = [
key for key, value in self.model.named_modules() if isinstance(value, LoraLayer)
]
lora_model.active_adapter = self.active_adapters
lora_model.add_weighted_adapter(
adapters=adapters,
weights=weights,
adapter_name=adapter_name,
combination_type=combination_type,
svd_rank=svd_rank,
svd_clamp=svd_clamp,
svd_full_matrices=svd_full_matrices,
svd_driver=svd_driver,
density=density,
majority_sign_method=majority_sign_method,
)
def state_dict_callback(state_dict, adapter_name, cfg):
from swift.tuners.lora_layers import lora_state_dict
return lora_state_dict(state_dict, adapter_name, cfg.bias)
def mark_trainable_callback(model, cfg):
from swift.tuners.lora_layers import mark_lora_as_trainable
mark_lora_as_trainable(model, adapter_name, cfg.bias)
cfg = lora_model.peft_config[adapter_name]
cfg.has_additional_modules = True
self.adapters[adapter_name] = SwiftOutput(
config=cfg,
state_dict_callback=partial(state_dict_callback, cfg=cfg),
mark_trainable_callback=partial(mark_trainable_callback, cfg=cfg),
optimizer_group_callback=None,
)
self.set_active_adapters(adapter_name)
def save_pretrained(self,
save_directory: str,
safe_serialization: bool = False,
adapter_name: Union[str, List[str]] = None,
**kwargs):
"""Save the adapters to a local directory.
Args:
save_directory (`str`): The directory to use.
safe_serialization (`bool`): Use safe tensors to save the weights, default False.
adapter_name(`Union[str, List[str]]`): The adapters to be saved, default is `None` to save all.
"""
peft_format = kwargs.pop('peft_format', False)
if os.path.isfile(save_directory):
raise ValueError(f'Provided path ({save_directory}) should be a directory, not a file')
os.makedirs(save_directory, exist_ok=True)
if not self.has_additional_modules:
if hasattr(self.base_model, 'save_pretrained'):
self.base_model.save_pretrained(save_directory, safe_serialization=safe_serialization)
else:
self._save_state_dict(self.base_model.state_dict(), save_directory, safe_serialization)
self.create_or_update_model_card(save_directory)
else:
self.create_or_update_model_card(save_directory)
adapter_names = adapter_name if isinstance(adapter_name, list) or adapter_name is None else [adapter_name]
state_dict_kwargs = {}
state_dict = kwargs.get('state_dict')
if state_dict is not None:
state_dict_kwargs['state_dict'] = kwargs['state_dict']
for adapter_name, output in self.adapters.items():
if adapter_names is not None and adapter_name not in adapter_names:
continue
save_to_peft = peft_format and output.config.swift_type == SwiftTuners.LORA
save_to_peft = save_to_peft and output.config.can_be_saved_to_peft()
if peft_format and not save_to_peft:
logger.error('You are using additional lora parameters, which are not compatible with peft '
'and cannot be saved in peft format.')
# save only the trainable weights
output_state_dict = self.state_dict(
adapter_name=adapter_name, save_extra_states=False, peft_format=save_to_peft, **state_dict_kwargs)
output_dir = os.path.join(save_directory,
adapter_name) if adapter_name != 'default' or not save_to_peft else save_directory
os.makedirs(output_dir, exist_ok=True)
if output_state_dict and output.config.has_additional_modules:
self._save_state_dict(output_state_dict, output_dir, safe_serialization)
if save_to_peft:
config = output.config.to_peft_config()
config.save_pretrained(output_dir)
else:
output.config.save_pretrained(output_dir)
output_state_dict = self.state_dict(save_extra_states=True, save_adapter=False, **state_dict_kwargs)
if len(output_state_dict) > 0:
if self.has_additional_modules:
os.makedirs(os.path.join(save_directory, self.EXTRA_STATE_DIR), exist_ok=True)
self._save_state_dict(output_state_dict, os.path.join(save_directory, self.EXTRA_STATE_DIR),
safe_serialization)
with open(os.path.join(save_directory, self.EXTRA_STATE_DIR, CONFIG_NAME), 'w') as file:
json.dump({'extra_state_keys': self.extra_state_keys}, file)
else:
logger.error('Full parameter training, save_extra_states will be ignored')
if not os.path.exists(os.path.join(save_directory, 'configuration.json')):
with open(os.path.join(save_directory, 'configuration.json'), 'w') as f:
f.write('{}')
@staticmethod
def _save_state_dict(output_state_dict, save_directory, safe_serialization):
if safe_serialization:
from safetensors.torch import save_file as safe_save_file
safe_save_file(
output_state_dict, os.path.join(save_directory, SAFETENSORS_WEIGHTS_NAME), metadata={'format': 'pt'})
else:
torch.save(output_state_dict, os.path.join(save_directory, WEIGHTS_NAME))
def set_active_adapters(self, adapter_names: Union[List[str], str], offload: str = None):
"""Set activated adapters
Args:
adapter_names(`Union[List[str], str]`): The adapters needed to be activated
offload(`str`): Whether to offload the deactivated ones to `cpu` or `meta` device
"""
if not adapter_names:
adapter_names = []
if isinstance(adapter_names, str):
adapter_names = [adapter_names]
adapter_names = set(adapter_names)
for adapter_name in (adapter_names & set(self.adapters.keys())):
self.activate_adapter(adapter_name)
for adapter_name in (set(self.adapters.keys()) - adapter_names):
self.deactivate_adapter(adapter_name, offload)
self.active_adapters = (adapter_names & set(self.adapters.keys()))
def activate_adapter(self, adapter_name: str):
"""Activate one adapter
Args:
adapter_name(`str`): The adapter needed to be activated
"""
if adapter_name not in self.adapters:
logger.warning(f'{adapter_name} not in adapters: {self.adapters.keys()}')
return
from .mapping import SWIFT_MAPPING
SWIFT_MAPPING[self.adapters[adapter_name].config.swift_type][1]\
.activate_adapter(self.base_model, adapter_name, True)
self.active_adapters = self.active_adapters | {adapter_name}
def deactivate_adapter(self, adapter_name: str, offload: str = None):
"""Deactivate one adapter
Args:
adapter_name(`str`): The adapter needed to be activated
offload(`str`): Whether to offload to `cpu` or `meta` device
"""
if adapter_name not in self.adapters:
logger.warning(f'{adapter_name} not in adapters: {self.adapters.keys()}')
return
from .mapping import SWIFT_MAPPING
SWIFT_MAPPING[self.adapters[adapter_name].config.swift_type][1]\
.activate_adapter(self.base_model, adapter_name, False, offload=offload)
self.active_adapters = self.active_adapters - {adapter_name}
def get_trainable_parameters(self):
"""
Get the content of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in self.base_model.named_parameters():
num_params = param.numel()
# if using DS Zero 3 and the weights are initialized empty
if num_params == 0 and hasattr(param, 'ds_numel'):
num_params = param.ds_numel
all_param += num_params
if param.requires_grad:
trainable_params += num_params
return f'trainable params: {trainable_params:,d} || all params: {all_param:,d} ' \
f'|| trainable%: {100 * trainable_params / all_param:.4f} ' \
'|| cuda memory: ' \
f'{sum([torch.cuda.memory_allocated(i) for i in range(torch.cuda.device_count())])/1024/1024/1024:.2f} ' \
'GiB.'
class Swift:
"""The Wrapper to use both Peft and Swift tuners."""
@staticmethod
def prepare_model(model: Union[nn.Module, SwiftModel], config: Union[SwiftConfig, PeftConfig,
Dict[str, SwiftConfig]], **kwargs):
"""Prepare a model by the input config.
Args:
model(`Union[nn.Module, 'SwiftModel']`): The model to be tuned.
config(`Union[SwiftConfig, PeftConfig, Dict[str, SwiftConfig]]`): The config or config dict, can be either
SwiftConfigs or PeftConfigs
**kwargs:
Extra kwargs needed by SwiftModel or PeftModel.
Returns:
The model wrapped by SwiftModel or PeftModel.
"""
if isinstance(config, (SwiftConfig, dict)):
return SwiftModel(model, config, **kwargs)
else:
return get_peft_model(model, config, **kwargs)
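# Usage sketch (not part of the original file); the LoRAConfig arguments are illustrative:
#   from swift import LoRAConfig, Swift
#   lora_config = LoRAConfig(r=8, target_modules=['q_proj', 'v_proj'])
#   model = Swift.prepare_model(base_model, lora_config)                 # -> SwiftModel
#   model = Swift.prepare_model(base_model, {'my_lora': lora_config})    # named adapter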
@staticmethod
def merge_and_unload(model: Union[PeftModel, SwiftModel], **kwargs):
"""Merge tuners into the base model and unload them.
Args:
model(`Union[PeftModel, SwiftModel]`): The model instance with tuners
kwargs:
adapter_name(`Union[str, List[str]]`): The adapter_name to unload, only supported in swift tuners.
"""
from peft import PeftModel as _PeftModel
if isinstance(model, _PeftModel):
model.merge_and_unload()
elif isinstance(model, SwiftModel):
from swift import LoRAConfig
from swift.tuners import LoRA
adapter_name = kwargs.get('adapter_name', None)
if isinstance(adapter_name, str):
adapter_name = [adapter_name]
for adapter, output in model.adapters.items():
if isinstance(output.config, LoRAConfig) and (adapter_name is None or adapter in adapter_name):
LoRA.unpatch_lora(model, output.config, adapter)
@staticmethod
def merge(model: Union[PeftModel, SwiftModel], **kwargs):
"""Merge tuners into the base model, will not unload them.
Args:
model(`Union[PeftModel, SwiftModel]`): The model instance with tuners
"""
from .lora_layers import LoraLayer, LoRALayer
for sub_module in model.modules():
if isinstance(sub_module, (LoraLayer, LoRALayer)):
sub_module.merge(**kwargs)
@staticmethod
def unmerge(model: Union[PeftModel, SwiftModel], **kwargs):
"""Unmerge tuners from the base model
Args:
model(`Union[PeftModel, SwiftModel]`): The model instance with tuners
"""
from .lora_layers import LoraLayer, LoRALayer
for sub_module in model.modules():
if isinstance(sub_module, (LoraLayer, LoRALayer)):
sub_module.unmerge(**kwargs)
@staticmethod
def save_to_peft_format(ckpt_dir: str, output_dir: str) -> None:
"""Save swift format to peft format
Args:
ckpt_dir(`str`): Original swift output dir
output_dir(`str`): Converted peft format dir
"""
assert ckpt_dir and output_dir, 'Please pass in valid ckpt_dir and output_dir.'
assert os.path.exists(ckpt_dir), f'ckpt_dir: {ckpt_dir} must exists in local disk.'
if os.path.exists(os.path.join(ckpt_dir, SwiftModel.EXTRA_STATE_DIR)):
raise AssertionError('Cannot transfer to peft format, because you are using additional state dicts.')
adapter_names = [
sub_dir for sub_dir in os.listdir(ckpt_dir) if os.path.isfile(os.path.join(ckpt_dir, sub_dir, CONFIG_NAME))
]
def has_custom_content(_json):
if _json.get('swift_type', _json.get('peft_type')) != SwiftTuners.LORA:
logger.warn('Only LoRA can be converted to peft format')
return True
from swift import LoRAConfig
return not LoRAConfig(**_json).can_be_saved_to_peft()
for adapter in adapter_names:
with open(os.path.join(ckpt_dir, adapter, CONFIG_NAME)) as f:
_json = json.load(f)
if has_custom_content(_json):
raise AssertionError('Cannot transfer to peft format, '
'because you have special parameters or adapter types.')
os.makedirs(output_dir, exist_ok=True)
if ckpt_dir != output_dir:
shutil.copytree(ckpt_dir, output_dir, dirs_exist_ok=True)
for adapter in adapter_names:
safe_serialization = os.path.isfile(os.path.join(output_dir, adapter, SAFETENSORS_WEIGHTS_NAME))
state_dict = SwiftModel.load_state_file(os.path.join(output_dir, adapter))
new_state_dict = {}
for key, value in state_dict.items():
if not key.startswith('base_model.model.'):
key = 'base_model.model.' + key
key = key.replace(f'lora_A.{adapter}.', 'lora_A.')
key = key.replace(f'lora_B.{adapter}.', 'lora_B.')
key = key.replace(f'lora_embedding_A.{adapter}.', 'lora_embedding_A.')
key = key.replace(f'lora_embedding_B.{adapter}.', 'lora_embedding_B.')
key = key.replace(f'lora_magnitude_vector.{adapter}', 'lora_magnitude_vector')
new_state_dict[key] = value
state_dict = new_state_dict
SwiftModel._save_state_dict(state_dict, os.path.join(output_dir, adapter), safe_serialization)
from swift import LoRAConfig
with open(os.path.join(output_dir, adapter, CONFIG_NAME)) as f:
_json = json.load(f)
peft_config = LoRAConfig(**_json).to_peft_config()
peft_config.save_pretrained(os.path.join(output_dir, adapter))
if 'default' in adapter_names:
shutil.move(os.path.join(output_dir, 'default', CONFIG_NAME), os.path.join(output_dir, CONFIG_NAME))
state_dict = SwiftModel.load_state_file(os.path.join(output_dir, 'default'))
safe_serialization = os.path.isfile(os.path.join(output_dir, 'default', SAFETENSORS_WEIGHTS_NAME))
SwiftModel._save_state_dict(state_dict, output_dir, safe_serialization)
shutil.rmtree(os.path.join(output_dir, 'default'))
@staticmethod
def from_pretrained(model: Union[nn.Module, SwiftModel, PeftModel],
model_id: str = None,
adapter_name: Union[str, List[str], Dict[str, str]] = None,
revision: str = None,
**kwargs):
"""Prepare a model by a model_id in the ModelScope hub or a local dir.
Args:
model(`Union[nn.Module, 'SwiftModel']`): The model to be tuned.
model_id(`str`): The model id of the modelhub or a local dir containing the configs/weights.
adapter_name(`str`, `optional`): The adapter_name to use.
revision(`str`, `optional`): The model revision if the model_id is a model id of the modelhub.
**kwargs:
Extra kwargs needed by ``SwiftModel.from_pretrained`` or ``PeftModel.from_pretrained``.
Returns:
The model wrapped by SwiftModel or PeftModel.
"""
if not os.path.exists(model_id):
model_id = snapshot_download(model_id, revision=revision)
is_peft_model = False
if os.path.exists(os.path.join(model_id, CONFIG_NAME)):
with open(os.path.join(model_id, CONFIG_NAME), 'r') as f:
_json = json.load(f)
is_peft_model = SWIFT_TYPE_KEY not in _json
_name = adapter_name if isinstance(
adapter_name, str) or adapter_name is None else adapter_name[0] \
if isinstance(adapter_name, list) else list(adapter_name.keys())[0]
_name = _name or ''
if os.path.exists(os.path.join(model_id, _name, CONFIG_NAME)):
with open(os.path.join(model_id, _name, CONFIG_NAME), 'r') as f:
_json = json.load(f)
is_peft_model = SWIFT_TYPE_KEY not in _json and 'extra_state_keys' not in _json
if is_peft_model:
def load_peft_model(_model, _adapter_name, _new_name=None):
if not _new_name:
_new_name = _adapter_name
import peft
if not isinstance(_model, peft.PeftModel):
return PeftModel.from_pretrained(
_model,
os.path.join(model_id, _adapter_name) if _adapter_name != 'default'
and os.path.exists(os.path.join(model_id, _adapter_name)) else model_id,
revision=revision,
adapter_name=_new_name,
**kwargs)
else:
_model.load_adapter(
os.path.join(model_id, _adapter_name) if _adapter_name != 'default'
and os.path.exists(os.path.join(model_id, _adapter_name)) else model_id, _new_name)
return _model
if not adapter_name:
peft_model = load_peft_model(model, 'default')
for _dir in os.listdir(model_id):
if os.path.isdir(os.path.join(model_id, _dir)) and \
os.path.exists(os.path.join(model_id, _dir, CONFIG_NAME)):
peft_model = load_peft_model(peft_model, _dir)
elif isinstance(adapter_name, str):
return load_peft_model(model, adapter_name)
elif isinstance(adapter_name, list):
peft_model = model
for name in adapter_name:
peft_model = load_peft_model(peft_model, name)
else:
peft_model = model
for key, value in adapter_name.items():
peft_model = load_peft_model(peft_model, key, value)
return peft_model
else:
return SwiftModel.from_pretrained(model, model_id, revision=revision, adapter_name=adapter_name, **kwargs)
| swift/swift/tuners/base.py/0 | {"file_path": "swift/swift/tuners/base.py", "repo_id": "swift", "token_count": 20461} | 210 |
# Copyright (c) Alibaba, Inc. and its affiliates.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from einops import rearrange
from swift.utils.logger import get_logger
logger = get_logger()
class ResTuner(nn.Module):
def __init__(self, dim=None, layer_num=-1, depth=-1, zero_init_last=False, stage='', tuner_cfg={}, **kwargs):
super().__init__()
self.dim = dim
self.layer_num = layer_num
self.depth = depth
self.stage = stage
self.tuner_cfg = tuner_cfg
if (isinstance(tuner_cfg, str) and tuner_cfg == 'res_adapter') or \
(isinstance(tuner_cfg, dict) and 'res_adapter' in tuner_cfg):
tuner_cfg = tuner_cfg['res_adapter'] if isinstance(tuner_cfg, dict) else tuner_cfg
self.tuner = ResAdapter(
dim=dim,
layer_num=layer_num,
depth=depth,
zero_init_last=zero_init_last,
stage=stage,
tuner_cfg=tuner_cfg,
**kwargs)
elif (isinstance(tuner_cfg, str) and tuner_cfg == 'res_group_adapter') or \
(isinstance(tuner_cfg, dict) and 'res_group_adapter' in tuner_cfg):
tuner_cfg = tuner_cfg['res_group_adapter'] if isinstance(tuner_cfg, dict) else tuner_cfg
self.tuner = ResGroupAdapter(
dim=dim,
layer_num=layer_num,
depth=depth,
zero_init_last=zero_init_last,
stage=stage,
tuner_cfg=tuner_cfg,
**kwargs)
elif (isinstance(tuner_cfg, str) and tuner_cfg == 'upsample') or \
(isinstance(tuner_cfg, dict) and 'upsample' in tuner_cfg):
tuner_cfg = tuner_cfg['upsample'] if isinstance(tuner_cfg, dict) else tuner_cfg
if 'upsample_out_channels' in kwargs:
out_channels = kwargs['upsample_out_channels']
use_conv = True if out_channels else False
else:
out_channels = dim
use_conv = False
self.tuner = Upsample(
channels=dim, use_conv=use_conv, out_channels=out_channels, tuner_cfg=tuner_cfg, **kwargs)
else:
self.tuner = Identity()
def forward(self, x, *args, **kwargs):
if self.tuner_cfg == 'zero' or 'zero' in self.tuner_cfg:
x_out = 0.0
else:
x_out = self.tuner(x, *args, **kwargs)
return x_out
class ResAdapter(nn.Module):
def __init__(self,
dim,
layer_num=-1,
depth=-1,
zero_init_last=False,
stage='',
tuner_cfg=None,
act_layer=nn.GELU,
**kwargs):
super(ResAdapter, self).__init__()
self.dim = dim
self.layer_num = layer_num
self.depth = depth
self.adapter_length = tuner_cfg['adapter_length'] if 'adapter_length' in tuner_cfg else 32
self.adapter_type = tuner_cfg['adapter_type'] if 'adapter_type' in tuner_cfg else None
self.adapter_weight = tuner_cfg['adapter_weight'] if 'adapter_weight' in tuner_cfg else None
self.adapter_length = self.adapter_length[self.layer_num] if isinstance(self.adapter_length,
list) else self.adapter_length
assert isinstance(self.adapter_length, int) or (isinstance(self.adapter_length, tuple)
and len(self.adapter_length) == 3)
if isinstance(self.adapter_length, int):
self.ln1 = nn.Linear(dim, self.adapter_length)
else:
self.ln1 = nn.Linear(self.adapter_length[0], self.adapter_length[1])
self.activate = act_layer()
if isinstance(self.adapter_length, int):
self.ln2 = nn.Linear(self.adapter_length, dim)
else:
self.ln2 = nn.Linear(self.adapter_length[1], self.adapter_length[2])
dim = self.adapter_length[2]
self._xavier_init_weights(self.ln1)
if zero_init_last and layer_num == depth - 1:
self._zero_init_weights(self.ln2)
else:
self._xavier_init_weights(self.ln2)
self.scaling = init_weight_type(dim, self.adapter_weight)
self._prepared = False
def _zero_init_weights(self, m):
if isinstance(m, nn.Linear):
nn.init.zeros_(m.weight)
nn.init.zeros_(m.bias)
def _kaiming_init_weights(self, m):
if isinstance(m, nn.Linear):
nn.init.kaiming_uniform_(m.weight, a=math.sqrt(5))
nn.init.normal_(m.bias)
def _xavier_init_weights(self, m):
if isinstance(m, nn.Linear):
nn.init.xavier_uniform_(m.weight)
nn.init.normal_(m.bias, std=1e-6)
def forward(self, x):
if not self._prepared:
self.ln1.to(x.device)
self.activate.to(x.device)
self.ln2.to(x.device)
self._prepared = True
x_dtype = x.dtype
x = x.to(self.ln1.weight.dtype)
x_shortcut = x
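# 4-D conv features [B, C, H, W] are flattened to a token layout [B, H*W, C] so the
# adapter's Linear layers can run over the channel dim; the original layout is restored
# after the bottleneck below.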
if len(x_shortcut.size()) == 4:
B, C, N1, N2 = x.size()
x = x.view(x_shortcut.size()[0], x_shortcut.size()[1], -1).permute(0, 2, 1)
x_adapter = self.ln2(self.activate(self.ln1(x)))
if self.adapter_weight:
x_adapter = apply_data_weight(x_adapter, self.scaling, self.adapter_weight)
if len(x_shortcut.size()) == 4:
x_adapter = x_adapter.permute(0, 2, 1).view(x_shortcut.size()[0],
x_adapter.size()[-1],
x_shortcut.size()[2],
x_shortcut.size()[3])
x_out = x_shortcut + x_adapter
return x_out.to(x_dtype)
class ResGroupAdapter(nn.Module):
def __init__(self,
dim,
layer_num=-1,
depth=-1,
zero_init_last=False,
stage='',
tuner_cfg=None,
act_layer=nn.GELU,
**kwargs):
super(ResGroupAdapter, self).__init__()
self.dim = dim
self.layer_num = layer_num
self.depth = depth
self.adapter_type = tuner_cfg['adapter_type'] if 'adapter_type' in tuner_cfg else None
self.adapter_weight = tuner_cfg['adapter_weight'] if 'adapter_weight' in tuner_cfg else None
self.adapter_dim = tuner_cfg['dim'] if 'dim' in tuner_cfg else dim
self.adapter_head = tuner_cfg['head'] if 'head' in tuner_cfg else 4
self.adapter_scale_factor = tuner_cfg['scale_factor'] if 'scale_factor' in tuner_cfg else 2
assert self.adapter_dim % self.adapter_head == 0, 'adapter dim should be divisible by adapter head'
self.dim_mlp = self.adapter_dim // self.adapter_head
self.ln1 = nn.Linear(self.dim_mlp, self.dim_mlp * self.adapter_scale_factor)
self.ln2 = nn.Linear(self.dim_mlp * self.adapter_scale_factor, self.dim_mlp)
self.activate = act_layer()
self._kaiming_init_weights(self.ln1)
if zero_init_last and layer_num == depth - 1:
self._zero_init_weights(self.ln2)
else:
self._kaiming_init_weights(self.ln2)
self.scaling = init_weight_type(dim, self.adapter_weight)
self._prepared = False
def _zero_init_weights(self, m):
if isinstance(m, nn.Linear):
nn.init.zeros_(m.weight)
nn.init.zeros_(m.bias)
def _kaiming_init_weights(self, m):
if isinstance(m, nn.Linear):
nn.init.kaiming_uniform_(m.weight, a=math.sqrt(5))
nn.init.normal_(m.bias)
def _xavier_init_weights(self, m):
if isinstance(m, nn.Linear):
nn.init.xavier_uniform_(m.weight)
nn.init.normal_(m.bias, std=1e-6)
def forward(self, x):
if not self._prepared:
self.ln1.to(x.device)
self.activate.to(x.device)
self.ln2.to(x.device)
self._prepared = True
x_dtype = x.dtype
x = x.to(self.ln1.weight.dtype)
x_shortcut = x
batch, inner_dim, height, width = x.shape
x_adapter = x.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim)
x_adapter = rearrange(x_adapter, 'b n (c h) -> (b h) n c', h=self.adapter_head)
x_adapter = self.ln2(self.activate(self.ln1(x_adapter)))
x_adapter = rearrange(x_adapter, '(b h) n c -> b n (c h)', h=self.adapter_head)
if self.adapter_weight:
x_adapter = apply_data_weight(x_adapter, self.scaling, self.adapter_weight)
x_adapter = x_adapter.reshape(batch, height, width, -1).permute(0, 3, 1, 2).contiguous()
x_out = x_shortcut + x_adapter
return x_out.to(x_dtype)
class Identity(nn.Module):
def __init__(self):
super().__init__()
def forward(self, inputs, *args, **kwargs):
return inputs
class Upsample(nn.Module):
"""
An upsampling layer with an optional convolution.
:param channels: channels in the inputs and outputs.
:param use_conv: a bool determining if a convolution is applied.
:param out_channels: if specified, the number of output channels (defaults to ``channels``).
:param padding: padding used by the optional 3x3 convolution.
"""
def __init__(self, channels, use_conv=False, out_channels=None, padding=1, **kwargs):
super().__init__()
self.channels = channels
self.out_channels = out_channels or channels
self.use_conv = use_conv
if use_conv:
self.conv = nn.Conv2d(self.channels, self.out_channels, 3, padding=padding)
self.init_weights()
def init_weights(self):
def _init_weights(m):
if isinstance(m, nn.Conv2d):
nn.init.zeros_(m.weight)
nn.init.zeros_(m.bias)
self.apply(_init_weights)
def forward(self, x, target_size=None, *args, **kwargs):
assert x.shape[1] == self.channels
if target_size is None:
x = F.interpolate(x.float(), scale_factor=2, mode='nearest').type_as(x)
else:
x = F.interpolate(x.float(), target_size, mode='nearest').type_as(x)
if self.use_conv:
x = self.conv(x)
return x
def init_weight_type(dim, weight_type):
if weight_type is None:
scaling = None
elif weight_type == 'gate':
scaling = nn.Linear(dim, 1)
elif weight_type == 'scale':
scaling = nn.Parameter(torch.Tensor(1))
scaling.data.fill_(1)
elif weight_type == 'scale_kv':
scaling_k = nn.Parameter(torch.Tensor(1))
scaling_k.data.fill_(1)
scaling_v = nn.Parameter(torch.Tensor(1))
scaling_v.data.fill_(1)
scaling = (scaling_k, scaling_v)
elif weight_type == 'scale_channel':
scaling = nn.Parameter(torch.Tensor(dim))
scaling.data.fill_(1)
elif weight_type == 'scale_kv_channel':
scaling_k = nn.Parameter(torch.Tensor(dim))
scaling_k.data.fill_(1)
scaling_v = nn.Parameter(torch.Tensor(dim))
scaling_v.data.fill_(1)
scaling = (scaling_k, scaling_v)
elif weight_type and weight_type.startswith('scalar'):
scaling = float(weight_type.split('_')[-1])
else:
scaling = None
return scaling
def apply_data_weight(data, scaling, weight_type):
if weight_type in ['gate']:
scaling = torch.mean(torch.sigmoid(scaling(data)), dim=1).view(-1, 1, 1)
elif weight_type in ['scale', 'scale_channel'] or weight_type.startswith('scalar'):
scaling = scaling
else:
scaling = None
if scaling is not None:
data = data * scaling
return data
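def _example_apply_data_weight():
    """Illustrative sketch (not used by the library): how ``init_weight_type`` and
    ``apply_data_weight`` fit together for the 'scale' weighting variant. Shapes and
    sizes here are arbitrary."""
    scaling = init_weight_type(dim=64, weight_type='scale')  # learnable scalar, initialised to 1
    data = torch.randn(2, 16, 64)  # (batch, tokens, channels), as produced inside the adapter forward()
    return apply_data_weight(data, scaling, 'scale')  # element-wise multiply by the learnable scalar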
def detach_tensors(feats):
if type(feats) in [list, tuple]:
feats = [detach_tensors(feat) if feat is not None else None for feat in feats]
elif isinstance(feats, dict):
feats = {key: detach_tensors(val) for key, val in feats.items()}
elif isinstance(feats, torch.Tensor):
feats = feats.detach()
    else:
        # non-tensor leaves (e.g. int, str, None) have nothing to detach
        pass
return feats
def probe_tensors(module, feats, name):
feats = detach_tensors(feats)
setattr(module, name, feats)
def probe_input_pre_hook(self, args):
input = args[0]
probe_tensors(self, input, 'probe_input_data')
return args
def probe_output_hook(self, args, result):
output = result
probe_tensors(self, output, 'probe_output_data')
return output
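def _example_register_probe_hooks(module: nn.Module):
    """Illustrative sketch (not part of the original file): attach the probe hooks
    above so a module caches detached copies of its inputs and outputs."""
    module.register_forward_pre_hook(probe_input_pre_hook)
    module.register_forward_hook(probe_output_hook)
    # After a forward pass the captured tensors are available as
    # ``module.probe_input_data`` and ``module.probe_output_data``.
    return module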
| swift/swift/tuners/restuning_components.py/0 | {
"file_path": "swift/swift/tuners/restuning_components.py",
"repo_id": "swift",
"token_count": 6399
} | 211 |
# Copyright (c) Alibaba, Inc. and its affiliates.
# Part of the implementation is borrowed from kmeng01/rome.
"""
Contains utilities for extracting token representations and indices
from string templates. Used in computing the left and right vectors for ROME.
"""
from typing import Any, Callable, List, Tuple, Union
import torch
from modelscope import AutoTokenizer
from .nethook import Trace
def get_reprs_at_word_tokens(
model: torch.nn.Module,
tokenizer: Any,
context_templates: List[str],
words: List[str],
layer: int,
module_template: str,
subtoken: str,
track: str = 'in',
batch_first: bool = True,
) -> torch.Tensor:
"""
Retrieves the last token representation of `word` in `context_template`
    when `word` is substituted into `context_template`. See `get_words_idxs_in_templates`
    for more details.
"""
idxs = get_words_idxs_in_templates(tokenizer, context_templates, words, subtoken)
return get_reprs_at_idxs(
model,
tokenizer,
[context_templates[i].format(words[i]) for i in range(len(words))],
idxs,
layer,
module_template,
track,
batch_first,
)
def get_words_idxs_in_templates(tokenizer: AutoTokenizer, context_templates: List[str], words: List[str],
subtoken: str) -> List:
"""
Given list of template strings, each with *one* format specifier
(e.g. "{} plays basketball"), and words to be substituted into the
template, computes the post-tokenization index of their last tokens.
"""
assert all(tmp.count('{}') == 1
for tmp in context_templates), 'We currently do not support multiple fill-ins for context'
# Compute prefixes and suffixes of the tokenized context
fill_idxs = [tmp.index('{}') for tmp in context_templates]
prefixes, suffixes = [tmp[:fill_idxs[i]] for i, tmp in enumerate(context_templates)
], [tmp[fill_idxs[i] + 2:] for i, tmp in enumerate(context_templates)]
lens = []
for prefix, word, suffix in zip(prefixes, words, suffixes):
prefix_token = tokenizer.encode(prefix)
prefix_word_token = tokenizer.encode(prefix + word)
prefix_word_suffix_token = tokenizer.encode(prefix + word + suffix)
suffix_len = len(prefix_word_suffix_token) - len(prefix_word_token)
# Compute indices of last tokens
if subtoken == 'last' or subtoken == 'first_after_last':
lens.append([
len(prefix_word_token) -
(1 if subtoken == 'last' or suffix_len == 0 else 0) - len(prefix_word_suffix_token)
])
elif subtoken == 'first':
lens.append([len(prefix_token) - len(prefix_word_suffix_token)])
else:
raise ValueError(f'Unknown subtoken type: {subtoken}')
return lens
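# Worked example (illustrative): for the template '{} plays basketball', the word
# 'LeBron James' and subtoken='last', the single returned index identifies the last
# sub-token of 'LeBron James', written as a negative offset from the end of the fully
# formatted, tokenized string 'LeBron James plays basketball'.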
def get_reprs_at_idxs(
model: torch.nn.Module,
tokenizer: Callable,
contexts: List[str],
idxs: List[List[int]],
layer: int,
module_template: str,
track: str = 'in',
batch_first: bool = True,
) -> Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]:
"""
Runs input through model and returns averaged representations of the tokens
at each index in `idxs`.
"""
def _batch(n):
for i in range(0, len(contexts), n):
yield contexts[i:i + n], idxs[i:i + n]
assert track in {'in', 'out', 'both'}
both = track == 'both'
tin, tout = (
(track == 'in' or both),
(track == 'out' or both),
)
module_name = module_template.format(layer)
to_return = {'in': [], 'out': []}
def _process(cur_repr, batch_idxs, key):
nonlocal to_return
cur_repr = cur_repr[0] if isinstance(cur_repr, tuple) else cur_repr
if not batch_first:
cur_repr = cur_repr.transpose(0, 1)
for i, idx_list in enumerate(batch_idxs):
to_return[key].append(cur_repr[i][idx_list].mean(0))
for batch_contexts, batch_idxs in _batch(n=512):
contexts_tok = tokenizer(
batch_contexts, padding=True, return_token_type_ids=False,
return_tensors='pt').to(next(model.parameters()).device)
with torch.no_grad():
with Trace(
module=model,
layer=module_name,
retain_input=tin,
retain_output=tout,
) as tr:
model(**contexts_tok)
if tin:
_process(tr.input, batch_idxs, 'in')
if tout:
_process(tr.output, batch_idxs, 'out')
to_return = {k: torch.stack(v, 0) for k, v in to_return.items() if len(v) > 0}
if len(to_return) == 1:
return to_return['in'] if tin else to_return['out']
else:
return to_return['in'], to_return['out']
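def _example_get_word_repr(model, tokenizer):
    """Illustrative sketch (assumption): fetch the hidden representation of the last
    token of 'Paris' at layer 5. The module path 'transformer.h.{}' is a GPT-2-style
    layout and is only an assumption about the target model."""
    return get_reprs_at_word_tokens(
        model,
        tokenizer,
        context_templates=['{} is the capital of France'],
        words=['Paris'],
        layer=5,
        module_template='transformer.h.{}',
        subtoken='last',
        track='in',
    )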
| swift/swift/tuners/rome/repr_tools.py/0 | {
"file_path": "swift/swift/tuners/rome/repr_tools.py",
"repo_id": "swift",
"token_count": 2109
} | 212 |
import os
import typing
from dataclasses import fields
from functools import partial, wraps
from typing import Any, Dict, List, OrderedDict, Type
from gradio import Accordion, Button, Checkbox, Dropdown, Slider, Tab, TabItem, Textbox
from swift.llm.utils.model import MODEL_MAPPING, ModelType
all_langs = ['zh', 'en']
builder: Type['BaseUI'] = None
base_builder: Type['BaseUI'] = None
lang = os.environ.get('SWIFT_UI_LANG', all_langs[0])
def update_data(fn):
@wraps(fn)
def wrapper(*args, **kwargs):
elem_id = kwargs.get('elem_id', None)
self = args[0]
if builder is not None:
choices = base_builder.choice(elem_id)
if choices:
kwargs['choices'] = choices
if not isinstance(self, (Tab, TabItem, Accordion)) and 'interactive' not in kwargs: # noqa
kwargs['interactive'] = True
if 'is_list' in kwargs:
self.is_list = kwargs.pop('is_list')
if base_builder and base_builder.default(elem_id) is not None:
if os.environ.get('MODELSCOPE_ENVIRONMENT') == 'studio' and kwargs.get('value') is not None:
pass
else:
kwargs['value'] = base_builder.default(elem_id)
if builder is not None:
if elem_id in builder.locales(lang):
values = builder.locale(elem_id, lang)
if 'info' in values:
kwargs['info'] = values['info']
if 'value' in values:
kwargs['value'] = values['value']
if 'label' in values:
kwargs['label'] = values['label']
argument = base_builder.argument(elem_id)
if argument and 'label' in kwargs:
kwargs['label'] = kwargs['label'] + f'({argument})'
kwargs['elem_classes'] = 'align'
ret = fn(self, **kwargs)
self.constructor_args.update(kwargs)
if builder is not None:
builder.element_dict[elem_id] = self
return ret
return wrapper
Textbox.__init__ = update_data(Textbox.__init__)
Dropdown.__init__ = update_data(Dropdown.__init__)
Checkbox.__init__ = update_data(Checkbox.__init__)
Slider.__init__ = update_data(Slider.__init__)
TabItem.__init__ = update_data(TabItem.__init__)
Accordion.__init__ = update_data(Accordion.__init__)
Button.__init__ = update_data(Button.__init__)
class BaseUI:
choice_dict: Dict[str, List] = {}
default_dict: Dict[str, Any] = {}
locale_dict: Dict[str, Dict] = {}
element_dict: Dict[str, Dict] = {}
arguments: Dict[str, str] = {}
sub_ui: List[Type['BaseUI']] = []
group: str = None
lang: str = all_langs[0]
int_regex = r'^[-+]?[0-9]+$'
float_regex = r'[-+]?(?:\d*\.*\d+)'
bool_regex = r'^(T|t)rue$|^(F|f)alse$'
@classmethod
def build_ui(cls, base_tab: Type['BaseUI']):
"""Build UI"""
global builder, base_builder
cls.element_dict = {}
old_builder = builder
old_base_builder = base_builder
builder = cls
base_builder = base_tab
cls.do_build_ui(base_tab)
builder = old_builder
base_builder = old_base_builder
@classmethod
def do_build_ui(cls, base_tab: Type['BaseUI']):
"""Build UI"""
pass
@classmethod
def choice(cls, elem_id):
"""Get choice by elem_id"""
for sub_ui in BaseUI.sub_ui:
_choice = sub_ui.choice(elem_id)
if _choice:
return _choice
return cls.choice_dict.get(elem_id, [])
@classmethod
def default(cls, elem_id):
"""Get choice by elem_id"""
for sub_ui in BaseUI.sub_ui:
_choice = sub_ui.default(elem_id)
if _choice:
return _choice
return cls.default_dict.get(elem_id, None)
@classmethod
def locale(cls, elem_id, lang):
"""Get locale by elem_id"""
return cls.locales(lang)[elem_id]
@classmethod
def locales(cls, lang):
"""Get locale by lang"""
locales = OrderedDict()
for sub_ui in cls.sub_ui:
_locales = sub_ui.locales(lang)
locales.update(_locales)
for key, value in cls.locale_dict.items():
locales[key] = {k: v[lang] for k, v in value.items()}
return locales
@classmethod
def elements(cls):
"""Get all elements"""
elements = OrderedDict()
elements.update(cls.element_dict)
for sub_ui in cls.sub_ui:
_elements = sub_ui.elements()
elements.update(_elements)
return elements
@classmethod
def element(cls, elem_id):
"""Get element by elem_id"""
elements = cls.elements()
return elements[elem_id]
@classmethod
def argument(cls, elem_id):
"""Get argument by elem_id"""
return cls.arguments.get(elem_id)
@classmethod
def set_lang(cls, lang):
cls.lang = lang
for sub_ui in cls.sub_ui:
sub_ui.lang = lang
@staticmethod
def get_choices_from_dataclass(dataclass):
choice_dict = {}
for f in fields(dataclass):
if 'choices' in f.metadata:
choice_dict[f.name] = f.metadata['choices']
if 'Literal' in str(f.type) and typing.get_args(f.type):
choice_dict[f.name] = typing.get_args(f.type)
return choice_dict
@staticmethod
def get_default_value_from_dataclass(dataclass):
default_dict = {}
for f in fields(dataclass):
if hasattr(dataclass, f.name):
default_dict[f.name] = getattr(dataclass, f.name)
else:
default_dict[f.name] = None
return default_dict
@staticmethod
def get_argument_names(dataclass):
arguments = {}
for f in fields(dataclass):
arguments[f.name] = f'--{f.name}'
return arguments
@staticmethod
def get_custom_name_list():
return list(set(MODEL_MAPPING.keys()) - set(ModelType.get_model_name_list()))
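def _example_dataclass_introspection():
    """Illustrative sketch (not part of the original file): how the static helpers
    above read choices, defaults and argument names from a dataclass definition."""
    from dataclasses import dataclass, field
    from typing import Literal

    @dataclass
    class _Args:
        sft_type: Literal['lora', 'full'] = 'lora'
        optim: str = field(default='adamw_torch', metadata={'choices': ['adamw_torch', 'sgd']})

    choices = BaseUI.get_choices_from_dataclass(_Args)  # {'sft_type': ('lora', 'full'), 'optim': ['adamw_torch', 'sgd']}
    defaults = BaseUI.get_default_value_from_dataclass(_Args)  # {'sft_type': 'lora', 'optim': 'adamw_torch'}
    arguments = BaseUI.get_argument_names(_Args)  # {'sft_type': '--sft_type', 'optim': '--optim'}
    return choices, defaults, arguments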
| swift/swift/ui/base.py/0 | {
"file_path": "swift/swift/ui/base.py",
"repo_id": "swift",
"token_count": 2952
} | 213 |
from typing import Type
import gradio as gr
from swift.ui.base import BaseUI
class Advanced(BaseUI):
group = 'llm_train'
locale_dict = {
'advanced_param': {
'label': {
'zh': '高级参数设置',
'en': 'Advanced settings'
},
},
'optim': {
'label': {
'zh': 'Optimizer类型',
'en': 'The Optimizer type'
},
'info': {
'zh': '设置Optimizer类型',
'en': 'Set the Optimizer type'
}
},
'weight_decay': {
'label': {
'zh': '权重衰减',
'en': 'Weight decay'
},
'info': {
'zh': '设置weight decay',
'en': 'Set the weight decay'
}
},
'logging_steps': {
'label': {
'zh': '日志打印步数',
'en': 'Logging steps'
},
'info': {
'zh': '设置日志打印的步数间隔',
'en': 'Set the logging interval'
}
},
'lr_scheduler_type': {
'label': {
'zh': 'LrScheduler类型',
'en': 'The LrScheduler type'
},
'info': {
'zh': '设置LrScheduler类型',
'en': 'Set the LrScheduler type'
}
},
'warmup_ratio': {
'label': {
'zh': '学习率warmup比例',
'en': 'Lr warmup ratio'
},
'info': {
'zh': '设置学习率warmup比例',
'en': 'Set the warmup ratio in total steps'
}
},
'more_params': {
'label': {
'zh': '其他高级参数',
'en': 'Other params'
},
'info': {
'zh': '以json格式输入其他超参数',
'en': 'Input in the json format'
}
},
}
@classmethod
def do_build_ui(cls, base_tab: Type['BaseUI']):
with gr.Accordion(elem_id='advanced_param', open=False):
with gr.Blocks():
with gr.Row():
gr.Textbox(elem_id='optim', lines=1, scale=20)
gr.Textbox(elem_id='weight_decay', lines=1, scale=20)
gr.Textbox(elem_id='logging_steps', lines=1, scale=20)
gr.Textbox(elem_id='lr_scheduler_type', lines=1, scale=20)
gr.Slider(elem_id='warmup_ratio', minimum=0.0, maximum=1.0, step=0.05, scale=20)
with gr.Row():
gr.Textbox(elem_id='more_params', lines=4, scale=20)
| swift/swift/ui/llm_train/advanced.py/0 | {
"file_path": "swift/swift/ui/llm_train/advanced.py",
"repo_id": "swift",
"token_count": 1787
} | 214 |
# Copyright (c) Alibaba, Inc. and its affiliates.
import importlib.util
import logging
import os
from typing import Optional
init_loggers = {}
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
def get_logger(log_file: Optional[str] = None, log_level: Optional[int] = None, file_mode: str = 'w'):
""" Get logging logger
Args:
log_file: Log filename, if specified, file handler will be added to
logger
log_level: Logging level.
        file_mode: Mode used to open ``log_file`` when it is specified
            (defaults to 'w').
"""
if log_level is None:
log_level = os.getenv('LOG_LEVEL', 'INFO').upper()
log_level = getattr(logging, log_level, logging.INFO)
logger_name = __name__.split('.')[0]
logger = logging.getLogger(logger_name)
logger.propagate = False
if logger_name in init_loggers:
add_file_handler_if_needed(logger, log_file, file_mode, log_level)
return logger
# handle duplicate logs to the console
# Starting in 1.8.0, PyTorch DDP attaches a StreamHandler <stderr> (NOTSET)
# to the root logger. As logger.propagate is True by default, this root
# level handler causes logging messages from rank>0 processes to
# unexpectedly show up on the console, creating much unwanted clutter.
# To fix this issue, we set the root logger's StreamHandler, if any, to log
# at the ERROR level.
for handler in logger.root.handlers:
if type(handler) is logging.StreamHandler:
handler.setLevel(logging.ERROR)
stream_handler = logging.StreamHandler()
handlers = [stream_handler]
if importlib.util.find_spec('torch') is not None:
is_worker0 = int(os.getenv('LOCAL_RANK', -1)) in {-1, 0}
else:
is_worker0 = True
if is_worker0 and log_file is not None:
file_handler = logging.FileHandler(log_file, file_mode)
handlers.append(file_handler)
for handler in handlers:
handler.setFormatter(formatter)
handler.setLevel(log_level)
logger.addHandler(handler)
if is_worker0:
logger.setLevel(log_level)
else:
logger.setLevel(logging.ERROR)
init_loggers[logger_name] = True
return logger
def add_file_handler_if_needed(logger, log_file, file_mode, log_level):
for handler in logger.handlers:
if isinstance(handler, logging.FileHandler):
return
if importlib.util.find_spec('torch') is not None:
is_worker0 = int(os.getenv('LOCAL_RANK', -1)) in {-1, 0}
else:
is_worker0 = True
if is_worker0 and log_file is not None:
file_handler = logging.FileHandler(log_file, file_mode)
file_handler.setFormatter(formatter)
file_handler.setLevel(log_level)
logger.addHandler(file_handler)
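def _example_get_logger_usage():
    """Illustrative sketch (not part of the original module): typical use of ``get_logger``."""
    logger = get_logger(log_file='train.log')  # the file handler is only attached on the rank-0 process
    logger.info('training started')
    # Subsequent calls return the same configured logger; add_file_handler_if_needed
    # guards against attaching duplicate file handlers.
    return get_logger()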
| swift/swift/utils/logger.py/0 | {
"file_path": "swift/swift/utils/logger.py",
"repo_id": "swift",
"token_count": 1127
} | 215 |
import os
import unittest
from swift.llm import (ModelType, get_default_template_type, get_model_tokenizer, get_template, inference,
inference_stream, limit_history_length, print_example)
from swift.utils import lower_bound, seed_everything
class TestLlmUtils(unittest.TestCase):
def test_count_startswith(self):
arr = [-100] * 1000 + list(range(1000))
self.assertTrue(lower_bound(0, len(arr), lambda i: arr[i] != -100) == 1000)
def test_count_endswith(self):
arr = list(range(1000)) + [-100] * 1000
self.assertTrue(lower_bound(0, len(arr), lambda i: arr[i] == -100) == 1000)
def test_inference(self):
model_type = ModelType.chatglm2_6b
model, tokenizer = get_model_tokenizer(model_type)
template_type = get_default_template_type(model_type)
template = get_template(template_type, tokenizer)
model.generation_config.max_length = 128
model.generation_config.do_sample = True
for query in ['你好', 'hello']:
seed_everything(42, True)
print('stream=True')
gen_text_stream, history = inference(model, template, query, stream=True, verbose=True)
print(f'[GEN]: {gen_text_stream}')
print(f'[HISTORY]: {history}')
#
seed_everything(42, True)
gen = inference_stream(model, template, query)
for gen_text_stream2, history2 in gen:
pass
print(f'[GEN]: {gen_text_stream2}')
print(f'[HISTORY]: {history2}')
#
seed_everything(42, True)
print('stream=False')
gen_text, history3 = inference(model, template, query, stream=False, verbose=True)
print(f'[GEN]: {gen_text}')
print(f'[HISTORY]: {history3}')
self.assertTrue(gen_text_stream == gen_text_stream2 == gen_text)
self.assertTrue(history == history2 == history3)
def test_print_example(self):
input_ids = [1000, 2000, 3000, 4000, 5000, 6000]
_, tokenizer = get_model_tokenizer(ModelType.chatglm3_6b, load_model=False)
from swift.llm.utils.utils import safe_tokenizer_decode
labels = [-100, -100, 1000, 2000, 3000, -100, -100, 4000, 5000, 6000]
print_example({'input_ids': input_ids, 'labels': labels}, tokenizer)
assert safe_tokenizer_decode(tokenizer, labels) == '[-100 * 2]before States appe[-100 * 2]innov developingishes'
labels = [-100, -100, -100]
print_example({'input_ids': input_ids, 'labels': labels}, tokenizer)
assert safe_tokenizer_decode(tokenizer, labels) == '[-100 * 3]'
labels = [1000, 2000, 3000, 4000, 5000, 6000]
print_example({'input_ids': input_ids, 'labels': labels}, tokenizer)
assert safe_tokenizer_decode(tokenizer, labels) == 'before States appe innov developingishes'
def test_limit_history_length(self):
model_type = ModelType.qwen_7b_chat
_, tokenizer = get_model_tokenizer(model_type, load_model=False)
template_type = get_default_template_type(model_type)
template = get_template(template_type, tokenizer)
old_history, new_history = limit_history_length(template, '你' * 100, [], 128)
self.assertTrue(len(old_history) == 0 and len(new_history) == 0)
old_history, new_history = limit_history_length(template, '你' * 100, [], 256)
self.assertTrue(len(old_history) == 0 and len(new_history) == 0)
self.assertTrue(len(tokenizer.encode('你' * 100)))
old_history, new_history = limit_history_length(template, '你' * 100, [['你' * 100, '你' * 100] for i in range(5)],
600)
self.assertTrue(len(old_history) == 3 and len(new_history) == 2)
if __name__ == '__main__':
unittest.main()
| swift/tests/llm/test_utils.py/0 | {
"file_path": "swift/tests/llm/test_utils.py",
"repo_id": "swift",
"token_count": 1697
} | 216 |
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import datasets
import pandas as pd
_CITATION = """\
@article{huang2023ceval,
title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},
author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian},
journal={arXiv preprint arXiv:2305.08322},
year={2023}
}
"""
_DESCRIPTION = """\
C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels.
"""
_HOMEPAGE = "https://cevalbenchmark.com"
_LICENSE = "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License"
_URL = "ceval.zip"
task_list = [
"computer_network",
"operating_system",
"computer_architecture",
"college_programming",
"college_physics",
"college_chemistry",
"advanced_mathematics",
"probability_and_statistics",
"discrete_mathematics",
"electrical_engineer",
"metrology_engineer",
"high_school_mathematics",
"high_school_physics",
"high_school_chemistry",
"high_school_biology",
"middle_school_mathematics",
"middle_school_biology",
"middle_school_physics",
"middle_school_chemistry",
"veterinary_medicine",
"college_economics",
"business_administration",
"marxism",
"mao_zedong_thought",
"education_science",
"teacher_qualification",
"high_school_politics",
"high_school_geography",
"middle_school_politics",
"middle_school_geography",
"modern_chinese_history",
"ideological_and_moral_cultivation",
"logic",
"law",
"chinese_language_and_literature",
"art_studies",
"professional_tour_guide",
"legal_professional",
"high_school_chinese",
"high_school_history",
"middle_school_history",
"civil_servant",
"sports_science",
"plant_protection",
"basic_medicine",
"clinical_medicine",
"urban_and_rural_planner",
"accountant",
"fire_engineer",
"environmental_impact_assessment_engineer",
"tax_accountant",
"physician",
]
class CevalConfig(datasets.BuilderConfig):
def __init__(self, **kwargs):
super().__init__(version=datasets.Version("1.0.0"), **kwargs)
class Ceval(datasets.GeneratorBasedBuilder):
BUILDER_CONFIGS = [
CevalConfig(
name=task_name,
)
for task_name in task_list
]
def _info(self):
features = datasets.Features(
{
"id": datasets.Value("int32"),
"question": datasets.Value("string"),
"A": datasets.Value("string"),
"B": datasets.Value("string"),
"C": datasets.Value("string"),
"D": datasets.Value("string"),
"answer": datasets.Value("string"),
"explanation": datasets.Value("string"),
}
)
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=features,
homepage=_HOMEPAGE,
license=_LICENSE,
citation=_CITATION,
)
def _split_generators(self, dl_manager):
data_dir = dl_manager.download_and_extract(_URL)
task_name = self.config.name
return [
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"filepath": os.path.join(data_dir, "test", f"{task_name}_test.csv"),
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
gen_kwargs={
"filepath": os.path.join(data_dir, "val", f"{task_name}_val.csv"),
},
),
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"filepath": os.path.join(data_dir, "dev", f"{task_name}_dev.csv"),
},
),
]
def _generate_examples(self, filepath):
df = pd.read_csv(filepath, encoding="utf-8")
for i, instance in enumerate(df.to_dict(orient="records")):
if "answer" not in instance.keys():
instance["answer"] = ""
if "explanation" not in instance.keys():
instance["explanation"] = ""
yield i, instance
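# Illustrative usage (assumption, not part of the original script): load one subject
# through this builder with the `datasets` library, given that the `ceval.zip` archive
# referenced by `_URL` sits next to this file.
#
#   from datasets import load_dataset
#   ceval = load_dataset("evaluation/ceval/ceval.py", "computer_network")
#   print(ceval["validation"][0]["question"])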
| LLaMA-Factory/evaluation/ceval/ceval.py/0 | {
"file_path": "LLaMA-Factory/evaluation/ceval/ceval.py",
"repo_id": "LLaMA-Factory",
"token_count": 2234
} | 0 |
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"zero_allow_untested_optimizer": true,
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
}
| LLaMA-Factory/examples/deepspeed/ds_z3_offload_config.json/0 | {
"file_path": "LLaMA-Factory/examples/deepspeed/ds_z3_offload_config.json",
"repo_id": "LLaMA-Factory",
"token_count": 432
} | 1 |
### model
model_name_or_path: saves/llama3-8b/full/sft
### method
stage: sft
do_predict: true
finetuning_type: full
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 50
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/full/predict
overwrite_output_dir: true
### eval
per_device_eval_batch_size: 1
predict_with_generate: true
| LLaMA-Factory/examples/train_full/llama3_full_predict.yaml/0 | {
"file_path": "LLaMA-Factory/examples/train_full/llama3_full_predict.yaml",
"repo_id": "LLaMA-Factory",
"token_count": 161
} | 2 |
# coding=utf-8
# Copyright 2024 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from collections import defaultdict
import fire
from tqdm import tqdm
from llamafactory.data import get_dataset
from llamafactory.hparams import get_train_args
from llamafactory.model import load_tokenizer
def length_cdf(
model_name_or_path: str,
dataset: str = "alpaca_en",
dataset_dir: str = "data",
template: str = "default",
interval: int = 1000,
):
r"""
Calculates the distribution of the input lengths in the dataset.
Usage: python length_cdf.py --model_name_or_path path_to_model --dataset alpaca_en --template default
"""
model_args, data_args, training_args, _, _ = get_train_args(
dict(
stage="sft",
model_name_or_path=model_name_or_path,
dataset=dataset,
dataset_dir=dataset_dir,
template=template,
cutoff_len=1_000_000,
output_dir="dummy_dir",
overwrite_cache=True,
)
)
tokenizer_module = load_tokenizer(model_args)
trainset = get_dataset(model_args, data_args, training_args, stage="sft", **tokenizer_module)
total_num = len(trainset)
length_dict = defaultdict(int)
for sample in tqdm(trainset["input_ids"]):
length_dict[len(sample) // interval * interval] += 1
length_tuples = list(length_dict.items())
length_tuples.sort()
count_accu, prob_accu = 0, 0
for length, count in length_tuples:
count_accu += count
prob_accu += count / total_num * 100
print("{:d} ({:.2f}%) samples have length < {}.".format(count_accu, prob_accu, length + interval))
if __name__ == "__main__":
fire.Fire(length_cdf)
| LLaMA-Factory/scripts/length_cdf.py/0 | {
"file_path": "LLaMA-Factory/scripts/length_cdf.py",
"repo_id": "LLaMA-Factory",
"token_count": 858
} | 3 |
# Copyright 2024 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING, Dict, Generator, List, Set, Tuple
if TYPE_CHECKING:
from gradio.components import Component
class Manager:
def __init__(self) -> None:
self._id_to_elem: Dict[str, "Component"] = {}
self._elem_to_id: Dict["Component", str] = {}
def add_elems(self, tab_name: str, elem_dict: Dict[str, "Component"]) -> None:
r"""
Adds elements to manager.
"""
for elem_name, elem in elem_dict.items():
elem_id = "{}.{}".format(tab_name, elem_name)
self._id_to_elem[elem_id] = elem
self._elem_to_id[elem] = elem_id
def get_elem_list(self) -> List["Component"]:
r"""
Returns the list of all elements.
"""
return list(self._id_to_elem.values())
def get_elem_iter(self) -> Generator[Tuple[str, "Component"], None, None]:
r"""
Returns an iterator over all elements with their names.
"""
for elem_id, elem in self._id_to_elem.items():
yield elem_id.split(".")[-1], elem
def get_elem_by_id(self, elem_id: str) -> "Component":
r"""
Gets element by id.
Example: top.lang, train.dataset
"""
return self._id_to_elem[elem_id]
def get_id_by_elem(self, elem: "Component") -> str:
r"""
Gets id by element.
"""
return self._elem_to_id[elem]
def get_base_elems(self) -> Set["Component"]:
r"""
Gets the base elements that are commonly used.
"""
return {
self._id_to_elem["top.lang"],
self._id_to_elem["top.model_name"],
self._id_to_elem["top.model_path"],
self._id_to_elem["top.finetuning_type"],
self._id_to_elem["top.checkpoint_path"],
self._id_to_elem["top.quantization_bit"],
self._id_to_elem["top.quantization_method"],
self._id_to_elem["top.template"],
self._id_to_elem["top.rope_scaling"],
self._id_to_elem["top.booster"],
self._id_to_elem["top.visual_inputs"],
}
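def _example_manager_usage():
    """Illustrative sketch (not part of the original file): register a component and
    look it up again through its dotted id."""
    import gradio as gr

    manager = Manager()
    lang_box = gr.Dropdown(choices=["en", "zh"])
    manager.add_elems("top", {"lang": lang_box})
    assert manager.get_elem_by_id("top.lang") is lang_box
    assert manager.get_id_by_elem(lang_box) == "top.lang"
    return manager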
| LLaMA-Factory/src/llamafactory/webui/manager.py/0 | {
"file_path": "LLaMA-Factory/src/llamafactory/webui/manager.py",
"repo_id": "LLaMA-Factory",
"token_count": 1198
} | 4 |
简体中文 | [English](README_en.md)
<div align="center">
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/160532560-34cf7a1f-d950-435e-90d2-4b0a679e5119.png" align="middle" width = "800" />
</p>
<p align="center">
<a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache%202-dfd.svg"></a>
<a href="https://github.com/PaddlePaddle/PaddleDetection/releases"><img src="https://img.shields.io/github/v/release/PaddlePaddle/PaddleDetection?color=ffa"></a>
<a href=""><img src="https://img.shields.io/badge/python-3.7+-aff.svg"></a>
<a href=""><img src="https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-pink.svg"></a>
<a href="https://github.com/PaddlePaddle/PaddleDetection/stargazers"><img src="https://img.shields.io/github/stars/PaddlePaddle/PaddleDetection?color=ccf"></a>
</p>
</div>
## 💌目录
- [💌目录](#目录)
- [🌈简介](#简介)
- [📣最新进展](#最新进展)
- [👫开源社区](#开源社区)
- [✨主要特性](#主要特性)
- [🧩模块化设计](#模块化设计)
- [📱丰富的模型库](#丰富的模型库)
- [🎗️产业特色模型|产业工具](#️产业特色模型产业工具)
- [💡🏆产业级部署实践](#产业级部署实践)
- [🍱安装](#安装)
- [🔥教程](#教程)
- [🔑FAQ](#faq)
- [🧩模块组件](#模块组件)
- [📱模型库](#模型库)
- [⚖️模型性能对比](#️模型性能对比)
- [🖥️服务器端模型性能对比](#️服务器端模型性能对比)
- [⌚️移动端模型性能对比](#️移动端模型性能对比)
- [🎗️产业特色模型|产业工具](#️产业特色模型产业工具-1)
- [💎PP-YOLOE 高精度目标检测模型](#pp-yoloe-高精度目标检测模型)
- [💎PP-YOLOE-R 高性能旋转框检测模型](#pp-yoloe-r-高性能旋转框检测模型)
- [💎PP-YOLOE-SOD 高精度小目标检测模型](#pp-yoloe-sod-高精度小目标检测模型)
- [💫PP-PicoDet 超轻量实时目标检测模型](#pp-picodet-超轻量实时目标检测模型)
- [📡PP-Tracking 实时多目标跟踪系统](#pp-tracking-实时多目标跟踪系统)
- [⛷️PP-TinyPose 人体骨骼关键点识别](#️pp-tinypose-人体骨骼关键点识别)
- [🏃🏻PP-Human 实时行人分析工具](#pp-human-实时行人分析工具)
- [🏎️PP-Vehicle 实时车辆分析工具](#️pp-vehicle-实时车辆分析工具)
- [💡产业实践范例](#产业实践范例)
- [🏆企业应用案例](#企业应用案例)
- [📝许可证书](#许可证书)
- [📌引用](#引用)
## 🌈简介
PaddleDetection是一个基于PaddlePaddle的目标检测端到端开发套件,在提供丰富的模型组件和测试基准的同时,注重端到端的产业落地应用,通过打造产业级特色模型|工具、建设产业应用范例等手段,帮助开发者实现数据准备、模型选型、模型训练、模型部署的全流程打通,快速进行落地应用。
主要模型效果示例如下(点击标题可快速跳转):
| [**通用目标检测**](#pp-yoloe-高精度目标检测模型) | [**小目标检测**](#pp-yoloe-sod-高精度小目标检测模型) | [**旋转框检测**](#pp-yoloe-r-高性能旋转框检测模型) | [**3D目标物检测**](https://github.com/PaddlePaddle/Paddle3D) |
| :--------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------: |
| <img src='https://user-images.githubusercontent.com/61035602/206095864-f174835d-4e9a-42f7-96b8-d684fc3a3687.png' height="126px" width="180px"> | <img src='https://user-images.githubusercontent.com/61035602/206095892-934be83a-f869-4a31-8e52-1074184149d1.jpg' height="126px" width="180px"> | <img src='https://user-images.githubusercontent.com/61035602/206111796-d9a9702a-c1a0-4647-b8e9-3e1307e9d34c.png' height="126px" width="180px"> | <img src='https://user-images.githubusercontent.com/61035602/206095622-cf6dbd26-5515-472f-9451-b39bbef5b1bf.gif' height="126px" width="180px"> |
| [**人脸检测**](#模型库) | [**2D关键点检测**](#️pp-tinypose-人体骨骼关键点识别) | [**多目标追踪**](#pp-tracking-实时多目标跟踪系统) | [**实例分割**](#模型库) |
| <img src='https://user-images.githubusercontent.com/61035602/206095684-72f42233-c9c7-4bd8-9195-e34859bd08bf.jpg' height="126px" width="180px"> | <img src='https://user-images.githubusercontent.com/61035602/206100220-ab01d347-9ff9-4f17-9718-290ec14d4205.gif' height="126px" width="180px"> | <img src='https://user-images.githubusercontent.com/61035602/206111753-836e7827-968e-4c80-92ef-7a78766892fc.gif' height="126px" width="180px" > | <img src='https://user-images.githubusercontent.com/61035602/206095831-cc439557-1a23-4a99-b6b0-b6f2e97e8c57.jpg' height="126px" width="180px"> |
| [**车辆分析——车牌识别**](#️pp-vehicle-实时车辆分析工具) | [**车辆分析——车流统计**](#️pp-vehicle-实时车辆分析工具) | [**车辆分析——违章检测**](#️pp-vehicle-实时车辆分析工具) | [**车辆分析——属性分析**](#️pp-vehicle-实时车辆分析工具) |
| <img src='https://user-images.githubusercontent.com/61035602/206099328-2a1559e0-3b48-4424-9bad-d68f9ba5ba65.gif' height="126px" width="180px"> | <img src='https://user-images.githubusercontent.com/61035602/206095918-d0e7ad87-7bbb-40f1-bcc1-37844e2271ff.gif' height="126px" width="180px"> | <img src='https://user-images.githubusercontent.com/61035602/206100295-7762e1ab-ffce-44fb-b69d-45fb93657fa0.gif' height="126px" width="180px" > | <img src='https://user-images.githubusercontent.com/61035602/206095905-8255776a-d8e6-4af1-b6e9-8d9f97e5059d.gif' height="126px" width="180px"> |
| [**行人分析——闯入分析**](#pp-human-实时行人分析工具) | [**行人分析——行为分析**](#pp-human-实时行人分析工具) | [**行人分析——属性分析**](#pp-human-实时行人分析工具) | [**行人分析——人流统计**](#pp-human-实时行人分析工具) |
| <img src='https://user-images.githubusercontent.com/61035602/206095792-ae0ac107-cd8e-492a-8baa-32118fc82b04.gif' height="126px" width="180px"> | <img src='https://user-images.githubusercontent.com/61035602/206095778-fdd73e5d-9f91-48c7-9d3d-6f2e02ec3f79.gif' height="126px" width="180px"> | <img src='https://user-images.githubusercontent.com/61035602/206095709-2c3a209e-6626-45dd-be16-7f0bf4d48a14.gif' height="126px" width="180px"> | <img src="https://user-images.githubusercontent.com/61035602/206113351-cc59df79-8672-4d76-b521-a15acf69ae78.gif" height="126px" width="180px"> |
同时,PaddleDetection提供了模型的在线体验功能,用户可以选择自己的数据进行在线推理。
`说明`:考虑到服务器负载压力,在线推理均为CPU推理,完整的模型开发实例以及产业部署实践代码示例请前往[🎗️产业特色模型|产业工具](#️产业特色模型产业工具-1)。
`传送门`:[模型在线体验](https://www.paddlepaddle.org.cn/models)
<div align="center">
<p align="center">
<img src="https://user-images.githubusercontent.com/61035602/206896755-bd0cd498-1149-4e94-ae30-da590ea78a7a.gif" align="middle"/>
</p>
</div>
## 📣最新进展
**🔥超越YOLOv8,飞桨推出精度最高的实时检测器RT-DETR!**
<div align="center">
<img src="https://github.com/PaddlePaddle/PaddleDetection/assets/17582080/196b0a10-d2e8-401c-9132-54b9126e0a33" height = "500" caption='' />
<p></p>
</div>
- `RT-DETR解读文章传送门`:
- [《超越YOLOv8,飞桨推出精度最高的实时检测器RT-DETR!》](https://mp.weixin.qq.com/s/o03QM2rZNjHVto36gcV0Yw)
- `代码传送门`:[RT-DETR](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rtdetr)
## 👫开源社区
- **📑项目合作:** 如果您是企业开发者且有明确的目标检测垂类应用需求,请扫描如下二维码入群,并联系`群管理员AI`后可免费与官方团队展开不同层次的合作。
- **🏅️社区贡献:** PaddleDetection非常欢迎你加入到飞桨社区的开源建设中,参与贡献方式可以参考[开源项目开发指南](docs/contribution/README.md)。
- **💻直播教程:** PaddleDetection会定期在飞桨直播间([B站:飞桨PaddlePaddle](https://space.bilibili.com/476867757)、[微信: 飞桨PaddlePaddle](https://mp.weixin.qq.com/s/6ji89VKqoXDY6SSGkxS8NQ)),针对发新内容、以及产业范例、使用教程等进行直播分享。
- **🎁加入社区:** **微信扫描二维码并填写问卷之后,可以及时获取如下信息,包括:**
- 社区最新文章、直播课等活动预告
- 往期直播录播&PPT
- 30+行人车辆等垂类高性能预训练模型
- 七大任务开源数据集下载链接汇总
- 40+前沿检测领域顶会算法
- 15+从零上手目标检测理论与实践视频课程
- 10+工业安防交通全流程项目实操(含源码)
<div align="center">
<img src="https://github.com/PaddlePaddle/PaddleDetection/assets/22989727/0466954b-ab4d-4984-bd36-796c37f0ee9c" width = "150" height = "150",caption='' />
<p>PaddleDetection官方交流群二维码</p>
</div>
## 📖 技术交流合作
- 飞桨低代码开发工具(PaddleX)—— 面向国内外主流AI硬件的飞桨精选模型一站式开发工具。包含如下核心优势:
- 【产业高精度模型库】:覆盖10个主流AI任务 40+精选模型,丰富齐全。
- 【特色模型产线】:提供融合大小模型的特色模型产线,精度更高,效果更好。
- 【低代码开发模式】:图形化界面支持统一开发范式,便捷高效。
- 【私有化部署多硬件支持】:适配国内外主流AI硬件,支持本地纯离线使用,满足企业安全保密需要。
- PaddleX官网地址:https://aistudio.baidu.com/intro/paddlex
- PaddleX官方交流频道:https://aistudio.baidu.com/community/channel/610
- **🎈社区近期活动**
- **🔥PaddleDetection v2.6版本更新解读**
<div align="center">
<img src="https://user-images.githubusercontent.com/61035602/224244188-da8495fc-eea9-432f-bc2d-6f0144c2dde9.png" height = "250" caption='' />
<p></p>
</div>
- `v2.6版本版本更新解读文章传送门`:[《PaddleDetection v2.6发布:目标小?数据缺?标注累?泛化差?PP新员逐一应对!》](https://mp.weixin.qq.com/s/SLITj5k120d_fQc7jEO8Vw)
- **🏆半监督检测**
- `文章传送门`:[CVPR 2023 | 单阶段半监督目标检测SOTA:ARSL](https://mp.weixin.qq.com/s/UZLIGL6va2KBfofC-nKG4g)
- `代码传送门`:[ARSL](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/semi_det)
<div align="center">
<img src="https://user-images.githubusercontent.com/61035602/230522850-21873665-ba79-4f8d-8dce-43d736111df8.png" height = "250" caption='' />
<p></p>
</div>
- **👀YOLO系列专题**
- `文章传送门`:[YOLOv8来啦!YOLO内卷期模型怎么选?9+款AI硬件如何快速部署?深度解析](https://mp.weixin.qq.com/s/rPwprZeHEpmGOe5wxrmO5g)
- `代码传送门`:[PaddleYOLO全系列](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/docs/feature_models/PaddleYOLO_MODEL.md)
<div align="center">
<img src="https://user-images.githubusercontent.com/61035602/213202797-3a1b24f3-53c0-4094-bb31-db2f84438fbc.jpeg" height = "250" caption='' />
<p></p>
</div>
- **🎯少目标迁移学习专题**
- `文章传送门`:[囿于数据少?泛化性差?PaddleDetection少样本迁移学习助你一键突围!](https://mp.weixin.qq.com/s/dFEQoxSzVCOaWVZPb3N7WA)
- **⚽️2022卡塔尔世界杯专题**
- `文章传送门`:[世界杯决赛号角吹响!趁周末来搭一套足球3D+AI量化分析系统吧!](https://mp.weixin.qq.com/s/koJxjWDPBOlqgI-98UsfKQ)
<div align="center">
<img src="https://user-images.githubusercontent.com/61035602/208036574-f151a7ff-a5f1-4495-9316-a47218a6576b.gif" height = "250" caption='' />
<p></p>
</div>
- **🔍旋转框小目标检测专题**
- `文章传送门`:[Yes, PP-YOLOE!80.73mAP、38.5mAP,旋转框、小目标检测能力双SOTA!](https://mp.weixin.qq.com/s/6ji89VKqoXDY6SSGkxS8NQ)
<div align="center">
<img src="https://user-images.githubusercontent.com/61035602/208037368-5b9f01f7-afd9-46d8-bc80-271ccb5db7bb.png" height = "220" caption='' />
<p></p>
</div>
- **🎊YOLO Vision世界学术交流大会**
- **PaddleDetection**受邀参与首个以**YOLO为主题**的**YOLO-VISION**世界大会,与全球AI领先开发者学习交流。
- `活动链接传送门`:[YOLO-VISION](https://ultralytics.com/yolo-vision)
<div align="center">
<img src="https://user-images.githubusercontent.com/48054808/192301374-940cf2fa-9661-419b-9c46-18a4570df381.jpeg" width="400"/>
</div>
- **🏅️社区贡献**
- `活动链接传送门`:[Yes, PP-YOLOE! 基于PP-YOLOE的算法开发](https://github.com/PaddlePaddle/PaddleDetection/issues/7345)
## ✨主要特性
#### 🧩模块化设计
PaddleDetection将检测模型解耦成不同的模块组件,通过自定义模块组件组合,用户可以便捷高效地完成检测模型的搭建。`传送门`:[🧩模块组件](#模块组件)。
#### 📱丰富的模型库
PaddleDetection支持大量的最新主流的算法基准以及预训练模型,涵盖2D/3D目标检测、实例分割、人脸检测、关键点检测、多目标跟踪、半监督学习等方向。`传送门`:[📱模型库](#模型库)、[⚖️模型性能对比](#️模型性能对比)。
#### 🎗️产业特色模型|产业工具
PaddleDetection打造产业级特色模型以及分析工具:PP-YOLOE+、PP-PicoDet、PP-TinyPose、PP-HumanV2、PP-Vehicle等,针对通用、高频垂类应用场景提供深度优化解决方案以及高度集成的分析工具,降低开发者的试错、选择成本,针对业务场景快速应用落地。`传送门`:[🎗️产业特色模型|产业工具](#️产业特色模型产业工具-1)。
#### 💡🏆产业级部署实践
PaddleDetection整理工业、农业、林业、交通、医疗、金融、能源电力等AI应用范例,打通数据标注-模型训练-模型调优-预测部署全流程,持续降低目标检测技术产业落地门槛。`传送门`:[💡产业实践范例](#产业实践范例)、[🏆企业应用案例](#企业应用案例)。
<div align="center">
<p align="center">
<img src="https://user-images.githubusercontent.com/61035602/206431371-912a14c8-ce1e-48ec-ae6f-7267016b308e.png" align="middle" width="1280"/>
</p>
</div>
## 🍱安装
参考[安装说明](docs/tutorials/INSTALL_cn.md)进行安装。
## 🔥教程
**深度学习入门教程**
- [零基础入门深度学习](https://www.paddlepaddle.org.cn/tutorials/projectdetail/4676538)
- [零基础入门目标检测](https://aistudio.baidu.com/aistudio/education/group/info/1617)
**快速开始**
- [快速体验](docs/tutorials/QUICK_STARTED_cn.md)
- [示例:30分钟快速开发交通标志检测模型](docs/tutorials/GETTING_STARTED_cn.md)
**数据准备**
- [数据准备](docs/tutorials/data/README.md)
- [数据处理模块](docs/advanced_tutorials/READER.md)
**配置文件说明**
- [RCNN参数说明](docs/tutorials/config_annotation/faster_rcnn_r50_fpn_1x_coco_annotation.md)
- [PP-YOLO参数说明](docs/tutorials/config_annotation/ppyolo_r50vd_dcn_1x_coco_annotation.md)
**模型开发**
- [新增检测模型](docs/advanced_tutorials/MODEL_TECHNICAL.md)
- 二次开发
- [目标检测](docs/advanced_tutorials/customization/detection.md)
- [关键点检测](docs/advanced_tutorials/customization/keypoint_detection.md)
- [多目标跟踪](docs/advanced_tutorials/customization/pphuman_mot.md)
- [行为识别](docs/advanced_tutorials/customization/action_recognotion/)
- [属性识别](docs/advanced_tutorials/customization/pphuman_attribute.md)
**部署推理**
- [模型导出教程](deploy/EXPORT_MODEL.md)
- [模型压缩](https://github.com/PaddlePaddle/PaddleSlim)
- [剪裁/量化/蒸馏教程](configs/slim)
- [Paddle Inference部署](deploy/README.md)
- [Python端推理部署](deploy/python)
- [C++端推理部署](deploy/cpp)
- [Paddle Lite部署](deploy/lite)
- [Paddle Serving部署](deploy/serving)
- [ONNX模型导出](deploy/EXPORT_ONNX_MODEL.md)
- [推理benchmark](deploy/BENCHMARK_INFER.md)
## 🔑FAQ
- [FAQ/常见问题汇总](docs/tutorials/FAQ)
## 🧩模块组件
<table align="center">
<tbody>
<tr align="center" valign="center">
<td>
<b>Backbones</b>
</td>
<td>
<b>Necks</b>
</td>
<td>
<b>Loss</b>
</td>
<td>
<b>Common</b>
</td>
<td>
<b>Data Augmentation</b>
</td>
</tr>
<tr valign="top">
<td>
<ul>
<li><a href="ppdet/modeling/backbones/resnet.py">ResNet</a></li>
<li><a href="ppdet/modeling/backbones/res2net.py">CSPResNet</a></li>
<li><a href="ppdet/modeling/backbones/senet.py">SENet</a></li>
<li><a href="ppdet/modeling/backbones/res2net.py">Res2Net</a></li>
<li><a href="ppdet/modeling/backbones/hrnet.py">HRNet</a></li>
<li><a href="ppdet/modeling/backbones/lite_hrnet.py">Lite-HRNet</a></li>
<li><a href="ppdet/modeling/backbones/darknet.py">DarkNet</a></li>
<li><a href="ppdet/modeling/backbones/csp_darknet.py">CSPDarkNet</a></li>
<li><a href="ppdet/modeling/backbones/mobilenet_v1.py">MobileNetV1</a></li>
<li><a href="ppdet/modeling/backbones/mobilenet_v3.py">MobileNetV1</a></li>
<li><a href="ppdet/modeling/backbones/shufflenet_v2.py">ShuffleNetV2</a></li>
<li><a href="ppdet/modeling/backbones/ghostnet.py">GhostNet</a></li>
<li><a href="ppdet/modeling/backbones/blazenet.py">BlazeNet</a></li>
<li><a href="ppdet/modeling/backbones/dla.py">DLA</a></li>
<li><a href="ppdet/modeling/backbones/hardnet.py">HardNet</a></li>
<li><a href="ppdet/modeling/backbones/lcnet.py">LCNet</a></li>
<li><a href="ppdet/modeling/backbones/esnet.py">ESNet</a></li>
<li><a href="ppdet/modeling/backbones/swin_transformer.py">Swin-Transformer</a></li>
<li><a href="ppdet/modeling/backbones/convnext.py">ConvNeXt</a></li>
<li><a href="ppdet/modeling/backbones/vgg.py">VGG</a></li>
<li><a href="ppdet/modeling/backbones/vision_transformer.py">Vision Transformer</a></li>
<li><a href="configs/convnext">ConvNext</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="ppdet/modeling/necks/bifpn.py">BiFPN</a></li>
<li><a href="ppdet/modeling/necks/blazeface_fpn.py">BlazeFace-FPN</a></li>
<li><a href="ppdet/modeling/necks/centernet_fpn.py">CenterNet-FPN</a></li>
<li><a href="ppdet/modeling/necks/csp_pan.py">CSP-PAN</a></li>
<li><a href="ppdet/modeling/necks/custom_pan.py">Custom-PAN</a></li>
<li><a href="ppdet/modeling/necks/fpn.py">FPN</a></li>
<li><a href="ppdet/modeling/necks/es_pan.py">ES-PAN</a></li>
<li><a href="ppdet/modeling/necks/hrfpn.py">HRFPN</a></li>
<li><a href="ppdet/modeling/necks/lc_pan.py">LC-PAN</a></li>
<li><a href="ppdet/modeling/necks/ttf_fpn.py">TTF-FPN</a></li>
<li><a href="ppdet/modeling/necks/yolo_fpn.py">YOLO-FPN</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="ppdet/modeling/losses/smooth_l1_loss.py">Smooth-L1</a></li>
<li><a href="ppdet/modeling/losses/detr_loss.py">Detr Loss</a></li>
<li><a href="ppdet/modeling/losses/fairmot_loss.py">Fairmot Loss</a></li>
<li><a href="ppdet/modeling/losses/fcos_loss.py">Fcos Loss</a></li>
<li><a href="ppdet/modeling/losses/gfocal_loss.py">GFocal Loss</a></li>
<li><a href="ppdet/modeling/losses/jde_loss.py">JDE Loss</a></li>
<li><a href="ppdet/modeling/losses/keypoint_loss.py">KeyPoint Loss</a></li>
<li><a href="ppdet/modeling/losses/solov2_loss.py">SoloV2 Loss</a></li>
<li><a href="ppdet/modeling/losses/focal_loss.py">Focal Loss</a></li>
<li><a href="ppdet/modeling/losses/iou_loss.py">GIoU/DIoU/CIoU</a></li>
<li><a href="ppdet/modeling/losses/iou_aware_loss.py">IoUAware</a></li>
<li><a href="ppdet/modeling/losses/sparsercnn_loss.py">SparseRCNN Loss</a></li>
<li><a href="ppdet/modeling/losses/ssd_loss.py">SSD Loss</a></li>
<li><a href="ppdet/modeling/losses/focal_loss.py">YOLO Loss</a></li>
<li><a href="ppdet/modeling/losses/yolo_loss.py">CT Focal Loss</a></li>
<li><a href="ppdet/modeling/losses/varifocal_loss.py">VariFocal Loss</a></li>
</ul>
</td>
<td>
</ul>
<li><b>Post-processing</b></li>
<ul>
<ul>
<li><a href="ppdet/modeling/post_process.py">SoftNMS</a></li>
<li><a href="ppdet/modeling/post_process.py">MatrixNMS</a></li>
</ul>
</ul>
<li><b>Training</b></li>
<ul>
<ul>
<li><a href="tools/train.py#L62">FP16 training</a></li>
<li><a href="docs/tutorials/DistributedTraining_cn.md">Multi-machine training </a></li>
</ul>
</ul>
<li><b>Common</b></li>
<ul>
<ul>
<li><a href="ppdet/modeling/backbones/resnet.py#L41">Sync-BN</a></li>
<li><a href="configs/gn/README.md">Group Norm</a></li>
<li><a href="configs/dcn/README.md">DCNv2</a></li>
<li><a href="ppdet/optimizer/ema.py">EMA</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="ppdet/data/transform/operators.py">Resize</a></li>
<li><a href="ppdet/data/transform/operators.py">Lighting</a></li>
<li><a href="ppdet/data/transform/operators.py">Flipping</a></li>
<li><a href="ppdet/data/transform/operators.py">Expand</a></li>
<li><a href="ppdet/data/transform/operators.py">Crop</a></li>
<li><a href="ppdet/data/transform/operators.py">Color Distort</a></li>
<li><a href="ppdet/data/transform/operators.py">Random Erasing</a></li>
<li><a href="ppdet/data/transform/operators.py">Mixup </a></li>
<li><a href="ppdet/data/transform/operators.py">AugmentHSV</a></li>
<li><a href="ppdet/data/transform/operators.py">Mosaic</a></li>
<li><a href="ppdet/data/transform/operators.py">Cutmix </a></li>
<li><a href="ppdet/data/transform/operators.py">Grid Mask</a></li>
<li><a href="ppdet/data/transform/operators.py">Auto Augment</a></li>
<li><a href="ppdet/data/transform/operators.py">Random Perspective</a></li>
</ul>
</td>
</tr>
</td>
</tr>
</tbody>
</table>
## 📱模型库
<table align="center">
<tbody>
<tr align="center" valign="center">
<td>
<b>2D Detection</b>
</td>
<td>
<b>Multi Object Tracking</b>
</td>
<td>
<b>KeyPoint Detection</b>
</td>
<td>
<b>Others</b>
</td>
</tr>
<tr valign="top">
<td>
<ul>
<li><a href="configs/faster_rcnn/README.md">Faster RCNN</a></li>
<li><a href="ppdet/modeling/necks/fpn.py">FPN</a></li>
<li><a href="configs/cascade_rcnn/README.md">Cascade-RCNN</a></li>
<li><a href="configs/rcnn_enhance">PSS-Det</a></li>
<li><a href="configs/retinanet/README.md">RetinaNet</a></li>
<li><a href="configs/yolov3/README.md">YOLOv3</a></li>
<li><a href="configs/yolof/README.md">YOLOF</a></li>
<li><a href="configs/yolox/README.md">YOLOX</a></li>
<li><a href="https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov5">YOLOv5</a></li>
<li><a href="https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov6">YOLOv6</a></li>
<li><a href="https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov7">YOLOv7</a></li>
<li><a href="https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/yolov8">YOLOv8</a></li>
<li><a href="https://github.com/PaddlePaddle/PaddleYOLO/tree/develop/configs/rtmdet">RTMDet</a></li>
<li><a href="configs/ppyolo/README_cn.md">PP-YOLO</a></li>
<li><a href="configs/ppyolo#pp-yolo-tiny">PP-YOLO-Tiny</a></li>
<li><a href="configs/picodet">PP-PicoDet</a></li>
<li><a href="configs/ppyolo/README_cn.md">PP-YOLOv2</a></li>
<li><a href="configs/ppyoloe/README_legacy.md">PP-YOLOE</a></li>
<li><a href="configs/ppyoloe/README_cn.md">PP-YOLOE+</a></li>
<li><a href="configs/smalldet">PP-YOLOE-SOD</a></li>
<li><a href="configs/rotate/README.md">PP-YOLOE-R</a></li>
<li><a href="configs/ssd/README.md">SSD</a></li>
<li><a href="configs/centernet">CenterNet</a></li>
<li><a href="configs/fcos">FCOS</a></li>
<li><a href="configs/rotate/fcosr">FCOSR</a></li>
<li><a href="configs/ttfnet">TTFNet</a></li>
<li><a href="configs/tood">TOOD</a></li>
<li><a href="configs/gfl">GFL</a></li>
<li><a href="configs/gfl/gflv2_r50_fpn_1x_coco.yml">GFLv2</a></li>
<li><a href="configs/detr">DETR</a></li>
<li><a href="configs/deformable_detr">Deformable DETR</a></li>
<li><a href="configs/sparse_rcnn">Sparse RCNN</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="configs/mot/jde">JDE</a></li>
<li><a href="configs/mot/fairmot">FairMOT</a></li>
<li><a href="configs/mot/deepsort">DeepSORT</a></li>
<li><a href="configs/mot/bytetrack">ByteTrack</a></li>
<li><a href="configs/mot/ocsort">OC-SORT</a></li>
<li><a href="configs/mot/botsort">BoT-SORT</a></li>
<li><a href="configs/mot/centertrack">CenterTrack</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="configs/keypoint/hrnet">HRNet</a></li>
<li><a href="configs/keypoint/higherhrnet">HigherHRNet</a></li>
<li><a href="configs/keypoint/lite_hrnet">Lite-HRNet</a></li>
<li><a href="configs/keypoint/tiny_pose">PP-TinyPose</a></li>
</ul>
</td>
<td>
</ul>
<li><b>Instance Segmentation</b></li>
<ul>
<ul>
<li><a href="configs/mask_rcnn">Mask RCNN</a></li>
<li><a href="configs/cascade_rcnn">Cascade Mask RCNN</a></li>
<li><a href="configs/solov2">SOLOv2</a></li>
</ul>
</ul>
<li><b>Face Detection</b></li>
<ul>
<ul>
<li><a href="configs/face_detection">BlazeFace</a></li>
</ul>
</ul>
<li><b>Semi-Supervised Detection</b></li>
<ul>
<ul>
<li><a href="configs/semi_det">DenseTeacher</a></li>
</ul>
</ul>
<li><b>3D Detection</b></li>
<ul>
<ul>
<li><a href="https://github.com/PaddlePaddle/Paddle3D">Smoke</a></li>
<li><a href="https://github.com/PaddlePaddle/Paddle3D">CaDDN</a></li>
<li><a href="https://github.com/PaddlePaddle/Paddle3D">PointPillars</a></li>
<li><a href="https://github.com/PaddlePaddle/Paddle3D">CenterPoint</a></li>
<li><a href="https://github.com/PaddlePaddle/Paddle3D">SequeezeSegV3</a></li>
<li><a href="https://github.com/PaddlePaddle/Paddle3D">IA-SSD</a></li>
<li><a href="https://github.com/PaddlePaddle/Paddle3D">PETR</a></li>
</ul>
</ul>
<li><b>Vehicle Analysis Toolbox</b></li>
<ul>
<ul>
<li><a href="deploy/pipeline/README.md">PP-Vehicle</a></li>
</ul>
</ul>
<li><b>Human Analysis Toolbox</b></li>
<ul>
<ul>
<li><a href="deploy/pipeline/README.md">PP-Human</a></li>
<li><a href="deploy/pipeline/README.md">PP-HumanV2</a></li>
</ul>
</ul>
<li><b>Sport Analysis Toolbox</b></li>
<ul>
<ul>
<li><a href="https://github.com/PaddlePaddle/PaddleSports">PP-Sports</a></li>
</ul>
</td>
</tr>
</tbody>
</table>
## ⚖️模型性能对比
#### 🖥️服务器端模型性能对比
各模型结构和骨干网络的代表模型在COCO数据集上精度mAP和单卡Tesla V100上预测速度(FPS)对比图。
<div align="center">
<img src="https://user-images.githubusercontent.com/61035602/206434766-caaa781b-b922-481f-af09-15faac9ed33b.png" width="800"/>
</div>
<details>
<summary><b> 测试说明(点击展开)</b></summary>
- ViT为ViT-Cascade-Faster-RCNN模型,COCO数据集mAP高达55.7%
- Cascade-Faster-RCNN为Cascade-Faster-RCNN-ResNet50vd-DCN,PaddleDetection将其优化到COCO数据mAP为47.8%时推理速度为20FPS
- PP-YOLOE是对PP-YOLO v2模型的进一步优化,L版本在COCO数据集mAP为51.6%,Tesla V100预测速度78.1FPS
- PP-YOLOE+是对PPOLOE模型的进一步优化,L版本在COCO数据集mAP为53.3%,Tesla V100预测速度78.1FPS
- YOLOX和YOLOv5均为基于PaddleDetection复现算法,YOLOv5代码在[PaddleYOLO](https://github.com/PaddlePaddle/PaddleYOLO)中,参照[PaddleYOLO_MODEL](docs/feature_models/PaddleYOLO_MODEL.md)
- 图中模型均可在[📱模型库](#模型库)中获取
</details>
#### ⌚️移动端模型性能对比
各移动端模型在COCO数据集上精度mAP和高通骁龙865处理器上预测速度(FPS)对比图。
<div align="center">
<img src="https://user-images.githubusercontent.com/61035602/206434741-10460690-8fc3-4084-a11a-16fe4ce2fc85.png" width="550"/>
</div>
<details>
<summary><b> 测试说明(点击展开)</b></summary>
- 测试数据均使用高通骁龙865(4xA77+4xA55)处理器,batch size为1, 开启4线程测试,测试使用NCNN预测库,测试脚本见[MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark)
- PP-PicoDet及PP-YOLO-Tiny为PaddleDetection自研模型,可在[📱模型库](#模型库)中获取,其余模型PaddleDetection暂未提供
</details>
## 🎗️产业特色模型|产业工具
产业特色模型|产业工具是PaddleDetection针对产业高频应用场景打造的兼顾精度和速度的模型以及工具箱,注重从数据处理-模型训练-模型调优-模型部署的端到端打通,且提供了实际生产环境中的实践范例代码,帮助拥有类似需求的开发者高效的完成产品开发落地应用。
该系列模型|工具均已PP前缀命名,具体介绍、预训练模型以及产业实践范例代码如下。
### 💎PP-YOLOE 高精度目标检测模型
<details>
<summary><b> 简介(点击展开)</b></summary>
PP-YOLOE是基于PP-YOLOv2的卓越的单阶段Anchor-free模型,超越了多种流行的YOLO模型。PP-YOLOE避免了使用诸如Deformable Convolution或者Matrix NMS之类的特殊算子,以使其能轻松地部署在多种多样的硬件上。其使用大规模数据集obj365预训练模型进行预训练,可以在不同场景数据集上快速调优收敛。
`传送门`:[PP-YOLOE说明](configs/ppyoloe/README_cn.md)。
`传送门`:[arXiv论文](https://arxiv.org/abs/2203.16250)。
</details>
<details>
<summary><b> 预训练模型(点击展开)</b></summary>
| 模型名称 | COCO精度(mAP) | V100 TensorRT FP16速度(FPS) | 推荐部署硬件 | 配置文件 | 模型下载 |
| :---------- | :-------------: | :-------------------------: | :----------: | :-----------------------------------------------------: | :-------------------------------------------------------------------------------------: |
| PP-YOLOE+_l | 53.3 | 149.2 | 服务器 | [链接](configs/ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_m_80e_coco.pdparams) |
`传送门`:[全部预训练模型](configs/ppyoloe/README_cn.md)。
</details>
<details>
<summary><b> 产业应用代码示例(点击展开)</b></summary>
| 行业 | 类别 | 亮点 | 文档说明 | 模型下载 |
| ---- | ----------------- | --------------------------------------------------------------------------------------------- | ------------------------------------------------------------- | --------------------------------------------------- |
| 农业 | 农作物检测 | 用于葡萄栽培中基于图像的监测和现场机器人技术,提供了来自5种不同葡萄品种的实地实例 | [PP-YOLOE+ 下游任务](./configs/ppyoloe/application/README.md) | [下载链接](./configs/ppyoloe/application/README.md) |
| 通用 | 低光场景检测 | 低光数据集使用ExDark,包括从极低光环境到暮光环境等10种不同光照条件下的图片。 | [PP-YOLOE+ 下游任务](./configs/ppyoloe/application/README.md) | [下载链接](./configs/ppyoloe/application/README.md) |
| 工业 | PCB电路板瑕疵检测 | 工业数据集使用PKU-Market-PCB,该数据集用于印刷电路板(PCB)的瑕疵检测,提供了6种常见的PCB缺陷 | [PP-YOLOE+ 下游任务](./configs/ppyoloe/application/README.md) | [下载链接](./configs/ppyoloe/application/README.md) |
</details>
### 💎PP-YOLOE-R 高性能旋转框检测模型
<details>
<summary><b> 简介(点击展开)</b></summary>
PP-YOLOE-R是一个高效的单阶段Anchor-free旋转框检测模型,基于PP-YOLOE+引入了一系列改进策略来提升检测精度。根据不同的硬件对精度和速度的要求,PP-YOLOE-R包含s/m/l/x四个尺寸的模型。在DOTA 1.0数据集上,PP-YOLOE-R-l和PP-YOLOE-R-x在单尺度训练和测试的情况下分别达到了78.14mAP和78.28 mAP,这在单尺度评估下超越了几乎所有的旋转框检测模型。通过多尺度训练和测试,PP-YOLOE-R-l和PP-YOLOE-R-x的检测精度进一步提升至80.02mAP和80.73 mAP,超越了所有的Anchor-free方法并且和最先进的Anchor-based的两阶段模型精度几乎相当。在保持高精度的同时,PP-YOLOE-R避免使用特殊的算子,例如Deformable Convolution或Rotated RoI Align,使其能轻松地部署在多种多样的硬件上。
`传送门`:[PP-YOLOE-R说明](configs/rotate/ppyoloe_r)。
`传送门`:[arXiv论文](https://arxiv.org/abs/2211.02386)。
</details>
<details>
<summary><b> 预训练模型(点击展开)</b></summary>
| 模型 | Backbone | mAP | V100 TRT FP16 (FPS) | RTX 2080 Ti TRT FP16 (FPS) | Params (M) | FLOPs (G) | 学习率策略 | 角度表示 | 数据增广 | GPU数目 | 每GPU图片数目 | 模型下载 | 配置文件 |
| :----------: | :------: | :---: | :-----------------: | :------------------------: | :--------: | :-------: | :--------: | :------: | :------: | :-----: | :-----------: | :---------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
| PP-YOLOE-R-l | CRN-l | 80.02 | 69.7 | 48.3 | 53.29 | 281.65 | 3x | oc | MS+RR | 4 | 2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_r_crn_l_3x_dota_ms.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/ppyoloe_r/ppyoloe_r_crn_l_3x_dota_ms.yml) |
`传送门`:[全部预训练模型](configs/rotate/ppyoloe_r)。
</details>
<details>
<summary><b> 产业应用代码示例(点击展开)</b></summary>
| 行业 | 类别 | 亮点 | 文档说明 | 模型下载 |
| ---- | ---------- | --------------------------------------------------------------------- | --------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
| 通用 | 旋转框检测 | 手把手教你上手PP-YOLOE-R旋转框检测,10分钟将脊柱数据集精度训练至95mAP | [基于PP-YOLOE-R的旋转框检测](https://aistudio.baidu.com/aistudio/projectdetail/5058293) | [下载链接](https://aistudio.baidu.com/aistudio/projectdetail/5058293) |
</details>
### 💎PP-YOLOE-SOD 高精度小目标检测模型
<details>
<summary><b> 简介(点击展开)</b></summary>
PP-YOLOE-SOD(Small Object Detection)是PaddleDetection团队针对小目标检测提出的检测方案,在VisDrone-DET数据集上单模型精度达到38.5mAP,达到了SOTA性能。其分别基于切图拼图流程优化的小目标检测方案以及基于原图模型算法优化的小目标检测方案。同时提供了数据集自动分析脚本,只需输入数据集标注文件,便可得到数据集统计结果,辅助判断数据集是否是小目标数据集以及是否需要采用切图策略,同时给出网络超参数参考值。
`传送门`:[PP-YOLOE-SOD 小目标检测模型](configs/smalldet)。
</details>
<details>
<summary><b> Pretrained Models (click to expand)</b></summary>
- Pretrained models on the VisDrone dataset
| Model | COCOAPI mAP<sup>val<br>0.5:0.95 | COCOAPI mAP<sup>val<br>0.5 | COCOAPI mAP<sup>test_dev<br>0.5:0.95 | COCOAPI mAP<sup>test_dev<br>0.5 | MatlabAPI mAP<sup>test_dev<br>0.5:0.95 | MatlabAPI mAP<sup>test_dev<br>0.5 | Download | Config |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| **PP-YOLOE+_SOD-l** | **31.9** | **52.1** | **25.6** | **43.5** | **30.25** | **51.18** | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_sod_crn_l_80e_visdrone.pdparams) | [Config](visdrone/ppyoloe_plus_sod_crn_l_80e_visdrone.yml) |
`Portal`: [All pretrained models](configs/smalldet).
</details>
<details>
<summary><b> Industrial Application Examples (click to expand)</b></summary>
| Industry | Category | Highlights | Documentation | Model Download |
| ---- | ---- | ---- | ---- | ---- |
| General | Small object detection | An end-to-end walkthrough of drone aerial image detection with PP-YOLOE-SOD. | [Drone aerial image detection with PP-YOLOE-SOD](https://aistudio.baidu.com/aistudio/projectdetail/5036782) | [Download](https://aistudio.baidu.com/aistudio/projectdetail/5036782) |
</details>
### 💫PP-PicoDet: Ultra-Lightweight Real-Time Object Detection Model
<details>
<summary><b> Introduction (click to expand)</b></summary>
PP-PicoDet is a brand-new series of lightweight models with outstanding performance on mobile devices, setting a new SOTA for lightweight object detection.
`Portal`: [PP-PicoDet documentation](configs/picodet/README.md).
`Portal`: [arXiv paper](https://arxiv.org/abs/2111.00902).
</details>
<details>
<summary><b> Pretrained Models (click to expand)</b></summary>
| Model | COCO mAP | Snapdragon 865 4-thread speed (FPS) | Recommended hardware | Config | Download |
| :----: | :----: | :----: | :----: | :----: | :----: |
| PicoDet-L | 36.1 | 39.7 | Mobile, embedded | [Config](configs/picodet/picodet_l_320_coco_lcnet.yml) | [Download](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) |
`Portal`: [All pretrained models](configs/picodet/README.md).
</details>
<details>
<summary><b> Industrial Application Examples (click to expand)</b></summary>
| Industry | Category | Highlights | Documentation | Model Download |
| ---- | ---- | ---- | ---- | ---- |
| Smart city | Road litter detection | Cameras mounted on municipal sanitation vehicles detect and analyze litter on the road, so spilled garbage can be monitored, logged, and reported to sanitation staff for cleanup, greatly improving their efficiency. | [Road litter detection with PP-PicoDet](https://aistudio.baidu.com/aistudio/projectdetail/3846170?channelType=0&channel=0) | [Download](https://aistudio.baidu.com/aistudio/projectdetail/3846170?channelType=0&channel=0) |
</details>
### 📡PP-Tracking: Real-Time Multi-Object Tracking System
<details>
<summary><b> Introduction (click to expand)</b></summary>
PP-Tracking is the industry's first open-source real-time multi-object tracking system based on the PaddlePaddle deep learning framework, featuring rich models, broad applications, and efficient deployment. It supports two modes, single-camera tracking (MOT) and multi-target multi-camera tracking (MTMCT). Addressing the difficulties and pain points of real business scenarios, it provides pedestrian tracking, vehicle tracking, multi-class tracking, small object tracking, traffic counting, cross-camera tracking, and other functions and applications. Deployment supports both API calls and a GUI, with Python and C++ implementations, on platforms such as Linux and NVIDIA Jetson.
`Portal`: [PP-Tracking documentation](configs/mot/README.md).
</details>
<details>
<summary><b> Pretrained Models (click to expand)</b></summary>
| Model | Description | Accuracy | Speed (FPS) | Recommended hardware | Config | Download |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| ByteTrack | SDE multi-object tracking algorithm, detection model only | MOT-17 test: 78.4 | - | Server, mobile, embedded | [Config](configs/mot/bytetrack/bytetrack_yolox.yml) | [Download](https://bj.bcebos.com/v1/paddledet/models/mot/yolox_x_24e_800x1440_mix_det.pdparams) |
| FairMOT | JDE multi-object tracking algorithm, multi-task joint learning | MOT-16 test: 75.0 | - | Server, mobile, embedded | [Config](configs/mot/fairmot/fairmot_dla34_30e_1088x608.yml) | [Download](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams) |
| OC-SORT | SDE multi-object tracking algorithm, detection model only | MOT-17 half val: 75.5 | - | Server, mobile, embedded | [Config](configs/mot/ocsort/ocsort_yolox.yml) | [Download](https://bj.bcebos.com/v1/paddledet/models/mot/yolox_x_24e_800x1440_mix_mot_ch.pdparams) |
</details>
<details>
<summary><b> Industrial Application Examples (click to expand)</b></summary>
| Industry | Category | Highlights | Documentation | Model Download |
| ---- | ---- | ---- | ---- | ---- |
| General | Multi-object tracking | Quick start for single-camera and multi-camera tracking | [Hands-on multi-object tracking with PP-Tracking](https://aistudio.baidu.com/aistudio/projectdetail/3022582) | [Download](https://aistudio.baidu.com/aistudio/projectdetail/3022582) |
</details>
### ⛷️PP-TinyPose: Human Keypoint Detection
<details>
<summary><b> Introduction (click to expand)</b></summary>
The keypoint detection part of PaddleDetection keeps up with state-of-the-art algorithms, covering both Top-Down and Bottom-Up approaches to meet different user needs. PaddleDetection also provides PP-TinyPose, a self-developed real-time keypoint detection model optimized for mobile devices.
`Portal`: [PP-TinyPose documentation](configs/keypoint/tiny_pose).
</details>
<details>
<summary><b> Pretrained Models (click to expand)</b></summary>
| Model | Description | COCO AP | Speed (FPS) | Recommended hardware | Config | Download |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| PP-TinyPose | Lightweight keypoint algorithm<br/>Input size 256x192 | 68.8 | Snapdragon 865, 4 threads: 158.7 FPS | Mobile, embedded | [Config](configs/keypoint/tiny_pose/tinypose_256x192.yml) | [Download](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) |
`Portal`: [All pretrained models](configs/keypoint/README.md).
</details>
<details>
<summary><b> Industrial Application Examples (click to expand)</b></summary>
| Industry | Category | Highlights | Documentation | Model Download |
| ---- | ---- | ---- | ---- | ---- |
| Sports | Fitness | A reusable end-to-end solution covering model selection, data preparation, training and tuning, post-processing logic, and deployment, enabling efficient recognition of complex fitness movements to build an AI virtual fitness coach! | [Smart fitness action recognition with enhanced PP-TinyPose](https://aistudio.baidu.com/aistudio/projectdetail/4385813) | [Download](https://aistudio.baidu.com/aistudio/projectdetail/4385813) |
</details>
### 🏃🏻PP-Human: Real-Time Pedestrian Analysis Tool
<details>
<summary><b> Introduction (click to expand)</b></summary>
PaddleDetection digs into high-frequency scenarios of core industries and provides an out-of-the-box pedestrian analysis tool. It accepts images, single-camera video, multi-camera video, and online video streams, and is widely used in smart transportation, smart city, industrial inspection, and other fields. It supports server-side deployment with TensorRT acceleration and runs in real time on a T4 server.
PP-Human supports four industrial-grade capabilities: recognition of five abnormal behaviors, analysis of 26 human attributes, real-time people counting, and cross-camera (ReID) tracking.
`Portal`: [PP-Human user guide](deploy/pipeline/README.md).
</details>
<details>
<summary><b> Pretrained Models (click to expand)</b></summary>
| Task | T4 TensorRT FP16 speed (FPS) | Recommended hardware | Download | Model size |
| :----: | :----: | :----: | :----: | :----: |
| Pedestrian detection (high accuracy) | 39.8 | Server | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Pedestrian tracking (high accuracy) | 31.4 | Server | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Attribute recognition (high accuracy) | 117.6 (single person) | Server | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [Attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | Object detection: 182M<br>Attribute recognition: 86M |
| Fall detection | 100 (single person) | Server | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) <br> [Keypoint detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) <br> [Keypoint-based action recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | Multi-object tracking: 182M<br>Keypoint detection: 101M<br>Keypoint-based action recognition: 21.8M |
| Intrusion detection | 31.4 | Server | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| Fight detection | 50.8 | Server | [Video classification](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 90M |
| Smoking detection | 340.1 | Server | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[ID-based object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | Object detection: 182M<br>ID-based object detection: 27M |
| Phone-call detection | 166.7 | Server | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[ID-based image classification](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | Object detection: 182M<br>ID-based image classification: 45M |
`Portal`: [Complete list of pretrained models](deploy/pipeline/README.md).
</details>
<details>
<summary><b> Industrial Application Examples (click to expand)</b></summary>
| Industry | Category | Highlights | Documentation | Model Download |
| ---- | ---- | ---- | ---- | ---- |
| Smart security | Fall detection | The fall recognition algorithm in PP-Human combines keypoints with a spatio-temporal graph convolutional network, placing no restrictions on fall posture or background environment. | [Fall detection with PP-Human v2](https://aistudio.baidu.com/aistudio/projectdetail/4606001) | [Download](https://aistudio.baidu.com/aistudio/projectdetail/4606001) |
| Smart security | Fight detection | This project trains a fight recognition model with the PaddleVideo toolkit and integrates it into PP-Human in PaddleDetection to support pedestrian behavior analysis. | [Fight detection with PP-Human](https://aistudio.baidu.com/aistudio/projectdetail/4086987?contributionType=1) | [Download](https://aistudio.baidu.com/aistudio/projectdetail/4086987?contributionType=1) |
| Smart security | Fall detection | An end-to-end visitor analysis workflow with PP-Human, covering two very common scenarios: 1. visitor attribute recognition (single-camera and cross-camera visualization); 2. visitor behavior recognition (fall detection). | [Visitor analysis tutorial with PP-Human](https://aistudio.baidu.com/aistudio/projectdetail/4537344) | [Download](https://aistudio.baidu.com/aistudio/projectdetail/4537344) |
</details>
### 🏎️PP-Vehicle: Real-Time Vehicle Analysis Tool
<details>
<summary><b> Introduction (click to expand)</b></summary>
PaddleDetection digs into high-frequency scenarios of core industries and provides an out-of-the-box vehicle analysis tool. It accepts images, single-camera video, multi-camera video, and online video streams, and is widely used in smart transportation, smart city, industrial inspection, and other fields. It supports server-side deployment with TensorRT acceleration and runs in real time on a T4 server.
PP-Vehicle covers four core capabilities for traffic scenarios: license plate recognition, attribute recognition, traffic flow counting, and violation detection.
`Portal`: [PP-Vehicle user guide](deploy/pipeline/README.md).
</details>
<details>
<summary><b> Pretrained Models (click to expand)</b></summary>
| Task | T4 TensorRT FP16 speed (FPS) | Recommended hardware | Model | Model size |
| :----: | :----: | :----: | :----: | :----: |
| Vehicle detection (high accuracy) | 38.9 | Server | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| Vehicle tracking (high accuracy) | 25 | Server | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) | 182M |
| License plate recognition | 213.7 | Server | [Plate detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz) <br> [Plate recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz) | Plate detection: 3.9M <br> Plate character recognition: 12M |
| Vehicle attributes | 136.8 | Server | [Attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) | 7.2M |
`Portal`: [Complete list of pretrained models](deploy/pipeline/README.md).
</details>
<details>
<summary><b> Industrial Application Examples (click to expand)</b></summary>
| Industry | Category | Highlights | Documentation | Model Download |
| ---- | ---- | ---- | ---- | ---- |
| Smart transportation | Traffic monitoring vehicle analysis | Based on PP-Vehicle, this project demonstrates the three most-needed smart transportation scenarios: traffic flow monitoring, illegal parking detection, and structured vehicle analysis (plate, model, color). | [Traffic monitoring analysis system with PP-Vehicle](https://aistudio.baidu.com/aistudio/projectdetail/4512254) | [Download](https://aistudio.baidu.com/aistudio/projectdetail/4512254) |
</details>
## 💡Industrial Practice Examples
Industrial practice examples are end-to-end development examples provided by PaddleDetection for high-frequency object detection application scenarios, helping developers work through the whole pipeline of data annotation, model training, model tuning, and inference deployment.
For each example we provide the project code and documentation on [AI-Studio](https://ai.baidu.com/ai-doc/AISTUDIO/Tk39ty6ho), so users can run it themselves.
`Portal`: [Full list of industrial practice examples](industrial_tutorial/README.md)
- [Rotated object detection with PP-YOLOE-R](https://aistudio.baidu.com/aistudio/projectdetail/5058293)
- [Drone aerial image detection with PP-YOLOE-SOD](https://aistudio.baidu.com/aistudio/projectdetail/5036782)
- [Traffic monitoring analysis system with PP-Vehicle](https://aistudio.baidu.com/aistudio/projectdetail/4512254)
- [Fall detection with PP-Human v2](https://aistudio.baidu.com/aistudio/projectdetail/4606001)
- [Smart fitness action recognition with enhanced PP-TinyPose](https://aistudio.baidu.com/aistudio/projectdetail/4385813)
- [Fight detection with PP-Human](https://aistudio.baidu.com/aistudio/projectdetail/4086987?contributionType=1)
- [Tile surface defect detection with Faster-RCNN](https://aistudio.baidu.com/aistudio/projectdetail/2571419)
- [PCB defect detection with PaddleDetection](https://aistudio.baidu.com/aistudio/projectdetail/2367089)
- [People counting with FairMOT](https://aistudio.baidu.com/aistudio/projectdetail/2421822)
- [Fall detection with YOLOv3](https://aistudio.baidu.com/aistudio/projectdetail/2500639)
- [Road litter detection with PP-PicoDetv2](https://aistudio.baidu.com/aistudio/projectdetail/3846170?channelType=0&channel=0)
- [Compliance detection based on human keypoint detection](https://aistudio.baidu.com/aistudio/projectdetail/4061642?contributionType=1)
- [Visitor analysis tutorial with PP-Human](https://aistudio.baidu.com/aistudio/projectdetail/4537344)
- Continuously updated...
## 🏆Enterprise Application Cases
Enterprise application cases show how companies apply PaddleDetection in real production environments. Compared with the industrial practice examples, they put more emphasis on overall solution design and can serve as references for developers when designing their own projects.
`Portal`: [Full list of enterprise application cases](https://www.paddlepaddle.org.cn/customercase)
- [China Southern Power Grid: Smart substation inspection](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2330)
- [China Railway Electric: Online intelligent track inspection system](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2280)
- [JD Logistics: Vehicle behavior recognition in logistics parks](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2611)
- [ZTE CLAA: Monitoring of traditional meters in factory areas](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2618)
- [CATL: High-precision quality inspection of power batteries](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2609)
- [Aerospace Information Research Institute, Chinese Academy of Sciences: Remote sensing monitoring of golf courses](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2483)
- [Yuhang Intelligence: Edge-based intelligent drone inspection](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2481)
- [GDU Drones: High-precision forest inspection](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2121)
- [Lingbang Intelligence: Contactless infrared temperature monitoring](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2615)
- [Beijing Subway: Face mask detection](https://mp.weixin.qq.com/s/znrqaJmtA7CcjG0yQESWig)
- [Yinzhida: Detection of staff rule violations in factories](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2288)
- [Huaxia Tianxin: Intelligent robot inspection of coal conveyor belts](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2331)
- [Youen IoT: Community resident classification for precise ad targeting](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2485)
- [Mantis Vision: Object segmentation and detection in indoor 3D point-cloud scenes](https://www.paddlepaddle.org.cn/support/news?action=detail&id=2599)
- Continuously updated...
## 📝License
This project is released under the [Apache 2.0 license](LICENSE).
## 📌Citation
```
@misc{ppdet2019,
title={PaddleDetection, Object detection and instance segmentation toolkit based on PaddlePaddle.},
author={PaddlePaddle Authors},
howpublished = {\url{https://github.com/PaddlePaddle/PaddleDetection}},
year={2019}
}
```
| PaddleDetection/README.md/0 | {
"file_path": "PaddleDetection/README.md",
"repo_id": "PaddleDetection",
"token_count": 39838
} | 5 |
_BASE_: [
'../datasets/coco_instance.yml',
'../runtime.yml',
'_base_/optimizer_1x.yml',
'_base_/cascade_mask_rcnn_r50_fpn.yml',
'_base_/cascade_mask_fpn_reader.yml',
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_vd_ssld_v2_pretrained.pdparams
weights: output/cascade_mask_rcnn_r50_vd_fpn_ssld_1x_coco/model_final
ResNet:
depth: 50
variant: d
norm_type: bn
freeze_at: 0
return_idx: [0,1,2,3]
num_stages: 4
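  # per-stage learning rate multipliers for the four backbone stages; smaller values keep the SSLD-pretrained early stages closer to their initialization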
lr_mult_list: [0.05, 0.05, 0.1, 0.15]
| PaddleDetection/configs/cascade_rcnn/cascade_mask_rcnn_r50_vd_fpn_ssld_1x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/cascade_rcnn/cascade_mask_rcnn_r50_vd_fpn_ssld_1x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 250
} | 6 |
_BASE_: [
'centernet_r50_140e_coco.yml'
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ShuffleNetV2_x1_0_pretrained.pdparams
weights: output/centernet_shufflenetv2_140e_coco/model_final
CenterNet:
backbone: ShuffleNetV2
neck: CenterNetDLAFPN
head: CenterNetHead
post_process: CenterNetPostProcess
ShuffleNetV2:
scale: 1.0
feature_maps: [5, 13, 17]
act: leaky_relu
CenterNetDLAFPN:
first_level: 0
last_level: 3
down_ratio: 8
dcn_v2: False
TrainReader:
batch_size: 32
TestReader:
sample_transforms:
- Decode: {}
- WarpAffine: {keep_res: False, input_h: 512, input_w: 512}
- NormalizeImage: {mean: [0.40789655, 0.44719303, 0.47026116], std: [0.2886383 , 0.27408165, 0.27809834]}
- Permute: {}
| PaddleDetection/configs/centernet/centernet_shufflenetv2_140e_coco.yml/0 | {
"file_path": "PaddleDetection/configs/centernet/centernet_shufflenetv2_140e_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 339
} | 7 |
_BASE_: [
'faster_rcnn_dcn_r50_fpn_1x_coco.yml',
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNeXt101_vd_64x4d_pretrained.pdparams
weights: output/faster_rcnn_dcn_x101_vd_64x4d_fpn_1x_coco/model_final
ResNet:
# for ResNeXt: groups, base_width, base_channels
depth: 101
groups: 64
base_width: 4
variant: d
norm_type: bn
freeze_at: 0
return_idx: [0,1,2,3]
num_stages: 4
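  # dcn_v2_stages [1,2,3] enables deformable conv v2 in res3-res5 (index 0 stands for res2)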
dcn_v2_stages: [1,2,3]
| PaddleDetection/configs/dcn/faster_rcnn_dcn_x101_vd_64x4d_fpn_1x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/dcn/faster_rcnn_dcn_x101_vd_64x4d_fpn_1x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 219
} | 8 |
architecture: DETR
# pretrain_weights: overridden by FocalNet.pretrained in ppdet/modeling/backbones/focalnet.py
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/focalnet_large_lrf_384_fl4_pretrained.pdparams
hidden_dim: 256
use_focal_loss: True
DETR:
backbone: FocalNet
transformer: DINOTransformer
detr_head: DINOHead
post_process: DETRPostProcess
FocalNet:
arch: 'focalnet_L_384_22k_fl4'
out_indices: [1, 2, 3]
pretrained: https://bj.bcebos.com/v1/paddledet/models/pretrained/focalnet_large_lrf_384_fl4_pretrained.pdparams
DINOTransformer:
num_queries: 900
position_embed_type: sine
num_levels: 4
nhead: 8
num_encoder_layers: 6
num_decoder_layers: 6
dim_feedforward: 2048
dropout: 0.0
activation: relu
pe_temperature: 20
pe_offset: 0.0
num_denoising: 100
label_noise_ratio: 0.5
box_noise_scale: 1.0
learnt_init_query: True
DINOHead:
loss:
name: DINOLoss
loss_coeff: {class: 1, bbox: 5, giou: 2}
aux_loss: True
matcher:
name: HungarianMatcher
matcher_coeff: {class: 2, bbox: 5, giou: 2}
DETRPostProcess:
num_top_queries: 300
| PaddleDetection/configs/dino/_base_/dino_focalnet.yml/0 | {
"file_path": "PaddleDetection/configs/dino/_base_/dino_focalnet.yml",
"repo_id": "PaddleDetection",
"token_count": 492
} | 9 |
_BASE_: [
'../datasets/wider_face.yml',
'../runtime.yml',
'_base_/optimizer_1000e.yml',
'_base_/blazeface_fpn.yml',
'_base_/face_reader.yml',
]
weights: output/blazeface_fpn_ssh_1000e/model_final
multi_scale_eval: True
| PaddleDetection/configs/face_detection/blazeface_fpn_ssh_1000e.yml/0 | {
"file_path": "PaddleDetection/configs/face_detection/blazeface_fpn_ssh_1000e.yml",
"repo_id": "PaddleDetection",
"token_count": 111
} | 10 |
_BASE_: [
'faster_rcnn_r34_fpn_1x_coco.yml',
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet34_pretrained.pdparams
weights: output/faster_rcnn_r34_fpn_multiscaletest_1x_coco/model_final
EvalReader:
sample_transforms:
- Decode: {}
# - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True}
- MultiscaleTestResize: {origin_target_size: [800, 1333], target_size: [700 , 900], use_flip: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
TestReader:
sample_transforms:
- Decode: {}
# - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True}
- MultiscaleTestResize: {origin_target_size: [800, 1333], target_size: [700 , 900], use_flip: False}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {} | PaddleDetection/configs/faster_rcnn/faster_rcnn_r34_fpn_multiscaletest_1x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/faster_rcnn/faster_rcnn_r34_fpn_multiscaletest_1x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 380
} | 11 |
worker_num: 2
TrainReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- RandomFlip: {}
batch_transforms:
- Permute: {}
- PadBatch: {pad_to_stride: 128}
- Gt2FCOSTarget:
object_sizes_boundary: [64, 128, 256, 512]
center_sampling_radius: 1.5
downsample_ratios: [8, 16, 32, 64, 128]
norm_reg_targets: True
batch_size: 2
shuffle: True
drop_last: True
use_shared_memory: True
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 128}
batch_size: 1
TestReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 128}
batch_size: 1
fuse_normalize: True
| PaddleDetection/configs/fcos/_base_/fcos_reader.yml/0 | {
"file_path": "PaddleDetection/configs/fcos/_base_/fcos_reader.yml",
"repo_id": "PaddleDetection",
"token_count": 523
} | 12 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'_base_/optimizer_1x.yml',
'_base_/faster_rcnn_r50_fpn.yml',
'_base_/faster_fpn_reader.yml',
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_fpn_1x_coco.pdparams
weights: output/faster_rcnn_r50_vd_fpn_1x_coco_cotuning_roadsign/model_final
snapshot_epoch: 5
ResNet:
# index 0 stands for res2
depth: 50
variant: d
norm_type: bn
freeze_at: 0
return_idx: [0,1,2,3]
num_stages: 4
epoch: 30
LearningRate:
base_lr: 0.001
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [8, 11]
- !LinearWarmup
start_factor: 0.1
steps: 1000
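# co-tuning: an auxiliary head over the cot_classes source categories (COCO's 80) is trained jointly with the 4-class target task through COTLoss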
use_cot: True
BBoxHead:
head: TwoFCHead
roi_extractor:
resolution: 7
sampling_ratio: 0
aligned: True
bbox_assigner: BBoxAssigner
cot_classes: 80
loss_cot:
name: COTLoss
cot_lambda: 1
cot_scale: 1
num_classes: 4
metric: COCO
map_type: integral
TrainDataset:
!COCODataSet
image_dir: images
anno_path: annotations/train_shots10.json
dataset_dir: dataset/roadsign_coco
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: images
anno_path: annotations/roadsign_valid.json
dataset_dir: dataset/roadsign_coco
TestDataset:
!ImageFolder
anno_path: annotations/roadsign_valid.json
dataset_dir: dataset/roadsign_coco | PaddleDetection/configs/few-shot/faster_rcnn_r50_vd_fpn_1x_coco_cotuning_roadsign.yml/0 | {
"file_path": "PaddleDetection/configs/few-shot/faster_rcnn_r50_vd_fpn_1x_coco_cotuning_roadsign.yml",
"repo_id": "PaddleDetection",
"token_count": 640
} | 13 |
_BASE_: [
'../datasets/coco_instance.yml',
'../runtime.yml',
'../cascade_rcnn/_base_/optimizer_1x.yml',
'../cascade_rcnn/_base_/cascade_mask_rcnn_r50_fpn.yml',
'../cascade_rcnn/_base_/cascade_mask_fpn_reader.yml',
]
weights: output/cascade_mask_rcnn_r50_fpn_gn_2x_coco/model_final
CascadeRCNN:
backbone: ResNet
neck: FPN
rpn_head: RPNHead
bbox_head: CascadeHead
mask_head: MaskHead
# post process
bbox_post_process: BBoxPostProcess
mask_post_process: MaskPostProcess
FPN:
out_channel: 256
norm_type: gn
CascadeHead:
head: CascadeXConvNormHead
roi_extractor:
resolution: 7
sampling_ratio: 0
aligned: True
bbox_assigner: BBoxAssigner
CascadeXConvNormHead:
num_convs: 4
out_channel: 1024
norm_type: gn
MaskHead:
head: MaskFeat
roi_extractor:
resolution: 14
sampling_ratio: 0
aligned: True
mask_assigner: MaskAssigner
share_bbox_feat: False
MaskFeat:
num_convs: 4
out_channel: 256
norm_type: gn
epoch: 24
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [16, 22]
- !LinearWarmup
start_factor: 0.1
steps: 1000
| PaddleDetection/configs/gn/cascade_mask_rcnn_r50_fpn_gn_2x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/gn/cascade_mask_rcnn_r50_fpn_gn_2x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 505
} | 14 |
# Keypoint Inference Benchmark
## Benchmark on Server
We benchmarked inference speed in different runtime environments. See the tables below for details.
| Model | CPU + MKLDNN (thread=1) | CPU + MKLDNN (thread=4) | GPU | TensorRT (FP32) | TensorRT (FP16) |
| :------------------------ | :------: | :------: | :-----: | :---: | :---: |
| LiteHRNet-18-256x192 | 88.8 ms | 40.7 ms | 4.4 ms | 2.0 ms | 1.8 ms |
| LiteHRNet-18-384x288 | 188.0 ms | 79.3 ms | 4.8 ms | 3.6 ms | 3.2 ms |
| LiteHRNet-30-256x192 | 148.4 ms | 69.0 ms | 7.1 ms | 3.1 ms | 2.8 ms |
| LiteHRNet-30-384x288 | 309.8 ms | 133.5 ms | 8.2 ms | 6.0 ms | 5.3 ms |
| PP-TinyPose-128x96 | 25.2 ms | 14.1 ms | 2.7 ms | 0.9 ms | 0.8 ms |
| PP-TinyPose-256x192 | 82.4 ms | 36.1 ms | 3.0 ms | 1.5 ms | 1.1 ms |
**Notes:**
- The tests above are based on the Python deployment.
- The environment is NVIDIA T4 / PaddlePaddle(commit: 7df301f2fc0602745e40fa3a7c43ccedd41786ca) / CUDA10.1 / CUDNN7 / Python3.7 / TensorRT6.
- The test uses deploy/python/det_keypoint_unite_infer.py with the image demo/000000014439.jpg, and the input batch size for the keypoint model is set to 8.
- The time only includes inference time.
| Model | CPU + MKLDNN (thread=1) | CPU + MKLDNN (thread=4) | GPU | TensorRT (FP32) | TensorRT (FP16) |
| :------------------------ | :------: | :------: | :-----: | :---: | :---: |
| DARK_HRNet_w32-256x192 | 363.93 ms | 97.38 ms | 4.13 ms | 3.74 ms | 1.75 ms |
| DARK_HRNet_w32-384x288 | 823.71 ms | 218.55 ms | 9.44 ms | 8.91 ms | 2.96 ms |
| HRNet_w32-256x192 | 363.67 ms | 97.64 ms | 4.11 ms | 3.71 ms | 1.72 ms |
| HRNet_w32-256x256_mpii | 485.56 ms | 131.48 ms | 4.81 ms | 4.26 ms | 2.00 ms |
| HRNet_w32-384x288 | 822.73 ms | 215.48 ms | 9.40 ms | 8.81 ms | 2.97 ms |
| PP-TinyPose-128x96 | 24.06 ms | 13.05 ms | 2.43 ms | 0.75 ms | 0.72 ms |
| PP-TinyPose-256x192 | 82.73 ms | 36.25 ms | 2.57 ms | 1.38 ms | 1.15 ms |
**Notes:**
- The tests above are based on the C++ deployment.
- The environment is NVIDIA T4 / PaddlePaddle(commit: 7df301f2fc0602745e40fa3a7c43ccedd41786ca) / CUDA10.1 / CUDNN7 / Python3.7 / TensorRT6.
- The test uses deploy/python/det_keypoint_unite_infer.py with the image demo/000000014439.jpg, and the input batch size for the keypoint model is set to 8.
- The time only includes inference time.
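Latencies in these tables are per image in milliseconds. As a small reading aid (not part of the original benchmark tooling), the helper below converts a latency to FPS and computes the speedup between two entries:

```python
def to_fps(latency_ms):
    """Convert a per-image latency in milliseconds to frames per second."""
    return 1000.0 / latency_ms

# Example with the PP-TinyPose-128x96 row of the first table:
cpu_ms, trt_fp16_ms = 25.2, 0.8
print(to_fps(trt_fp16_ms))   # ~1250 FPS with TensorRT FP16
print(cpu_ms / trt_fp16_ms)  # ~31.5x speedup over single-thread CPU + MKLDNN
```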
## Benchmark on Mobile
We also benchmarked on Kirin and Qualcomm Snapdragon devices. See the table below for details.
| Model | Kirin 980 (1-thread) | Kirin 980 (4-threads) | Qualcomm Snapdragon 845 (1-thread) | Qualcomm Snapdragon 845 (4-threads) | Qualcomm Snapdragon 660 (1-thread) | Qualcomm Snapdragon 660 (4-threads) |
| :------------------------ | :---: | :---: | :---: | :---: | :---: | :---: |
| PicoDet-s-192x192 (det) | 14.85 ms | 5.45 ms | 17.50 ms | 7.56 ms | 80.08 ms | 27.36 ms |
| PicoDet-s-320x320 (det) | 38.09 ms | 12.00 ms | 45.26 ms | 17.07 ms | 232.81 ms | 58.68 ms |
| PP-TinyPose-128x96 (pose) | 12.03 ms | 5.09 ms | 13.14 ms | 6.73 ms | 71.87 ms | 20.04 ms |
**Notes:**
- The tests above are based on Paddle Lite deployment, version v2.10-rc.
- The time only includes inference time.
| PaddleDetection/configs/keypoint/KeypointBenchmark.md/0 | {
"file_path": "PaddleDetection/configs/keypoint/KeypointBenchmark.md",
"repo_id": "PaddleDetection",
"token_count": 1224
} | 15 |
# This config is an assembled config for ByteTrack MOT, used as eval/infer mode for MOT.
_BASE_: [
'../bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml',
'../bytetrack/_base_/mot17.yml',
'../bytetrack/_base_/ppyoloe_mot_reader_640x640.yml'
]
weights: output/botsort_ppyoloe/model_final
log_iter: 20
snapshot_epoch: 2
metric: MOT # eval/infer mode, set 'COCO' can be training mode
num_classes: 1
architecture: ByteTrack
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/ppyoloe_crn_l_300e_coco.pdparams
ByteTrack:
detector: YOLOv3 # PPYOLOe version
reid: None
tracker: BOTSORTTracker
det_weights: https://bj.bcebos.com/v1/paddledet/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
reid_weights: None
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
# Tracking requires higher quality boxes, so NMS score_threshold will be higher
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: -1 # 100
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.1 # 0.01 in original detector
nms_threshold: 0.4 # 0.6 in original detector
BOTSORTTracker:
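  # BYTE-style two-round association: boxes above track_high_thresh are matched first; boxes between
  # track_low_thresh and track_high_thresh are used in a second round; unmatched boxes above new_track_thresh start new tracks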
track_high_thresh: 0.3
track_low_thresh: 0.2
new_track_thresh: 0.4
match_thresh: 0.7
track_buffer: 30
min_box_area: 0
camera_motion: False
cmc_method: 'sparseOptFlow' # used only when camera_motion is True
# options: sparseOptFlow | files (Vidstab GMC) | orb | ecc
# MOTDataset for MOT evaluation and inference
EvalMOTDataset:
!MOTImageFolder
dataset_dir: dataset/mot
data_root: MOT17/images/half
keep_ori_im: True # set as True in DeepSORT and ByteTrack
TestMOTDataset:
!MOTImageFolder
dataset_dir: dataset/mot
keep_ori_im: True # set True if save visualization images or video
| PaddleDetection/configs/mot/botsort/botsort_ppyoloe.yml/0 | {
"file_path": "PaddleDetection/configs/mot/botsort/botsort_ppyoloe.yml",
"repo_id": "PaddleDetection",
"token_count": 885
} | 16 |
Simplified Chinese | [English](README.md)
# Detectors for ByteTrack
## Introduction
[ByteTrack](https://arxiv.org/abs/2110.06864) (ByteTrack: Multi-Object Tracking by Associating Every Detection Box) tracks by associating every detection box rather than only the high-score ones. Configurations for several commonly used detectors are provided here for reference. Differences in training data, input scale, number of training epochs, NMS threshold settings, and so on all lead to differences in accuracy and performance, so please adapt them to your own needs.
## Model Zoo
### Detection results on the MOT17-half val dataset
| Backbone | Model | Input size | LR schedule | Inference time (fps) | Box AP | Download | Config |
| :-------------- | :------------- | :--------: | :---------: | :-----------: | :-----: | :------: | :-----: |
| DarkNet-53 | YOLOv3 | 608X608 | 40e | ---- | 42.7 | [Download](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolov3_darknet53_40e_608x608_mot17half.pdparams) | [Config](./yolov3_darknet53_40e_608x608_mot17half.yml) |
| CSPResNet | PPYOLOe | 640x640 | 36e | ---- | 52.9 | [Download](https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyoloe_crn_l_36e_640x640_mot17half.pdparams) | [Config](./ppyoloe_crn_l_36e_640x640_mot17half.yml) |
| CSPDarkNet | YOLOX-x(mix_mot_ch) | 800x1440 | 24e | ---- | 61.9 | [Download](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolox_x_24e_800x1440_mix_mot_ch.pdparams) | [Config](./yolox_x_24e_800x1440_mix_mot_ch.yml) |
| CSPDarkNet | YOLOX-x(mix_det) | 800x1440 | 24e | ---- | 65.4 | [Download](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolox_x_24e_800x1440_mix_det.pdparams) | [Config](./yolox_x_24e_800x1440_mix_det.yml) |
**Notes:**
- Except for YOLOX, the models above are trained on the **MOT17-half train** dataset, which can be downloaded from [this link](https://bj.bcebos.com/v1/paddledet/data/mot/MOT17.zip).
- **MOT17-half train** consists of the images and annotations of the first half of the frames of each of the 7 videos in the MOT17 train set. For accuracy validation, evaluate on the **MOT17-half val** dataset, which consists of the second half of the frames of each video; it can be downloaded from [this link](https://paddledet.bj.bcebos.com/data/mot/mot17half/annotations.zip) and should be extracted into the `dataset/mot/MOT17/images/` folder.
- YOLOX-x (mix_mot_ch) is trained on the **mix_mot_ch** dataset, a joint dataset of MOT17 and CrowdHuman; YOLOX-x (mix_det) is trained on the **mix_det** dataset, a joint dataset of MOT17, CrowdHuman, Cityscapes, and ETHZ. For the dataset format and directory layout, refer to [this link](https://github.com/ifzhang/ByteTrack#data-preparation); the data should end up under the `dataset/mot/` directory. For accuracy validation, evaluate on the **MOT17-half val** dataset.
- For pedestrian tracking, use a pedestrian detector together with a pedestrian ReID model. For vehicle tracking, use a vehicle detector together with a vehicle ReID model.
- When these models are used for ByteTrack tracking, post-processing settings such as the NMS thresholds differ from those of a pure detection task, as the sketch below illustrates.
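To make the association idea behind ByteTrack concrete, the sketch below splits detections into high- and low-score groups and matches the low-score ones only in a second round. It is a simplified illustration rather than the PaddleDetection implementation, and the `iou_match` helper is an assumed placeholder:

```python
import numpy as np

def byte_associate(tracks, boxes, scores, high_thresh=0.6, low_thresh=0.1, iou_match=None):
    """Simplified BYTE association. boxes is an (N, 4) array, scores an (N,) array.

    iou_match(tracks, boxes) is an assumed helper that returns
    (matched_pairs, unmatched_tracks, unmatched_boxes) based on IoU.
    """
    high = scores >= high_thresh
    low = (scores >= low_thresh) & ~high

    # Round 1: associate existing tracks with high-score detections.
    matches, leftover_tracks, new_candidates = iou_match(tracks, boxes[high])

    # Round 2: instead of discarding low-score detections, use them to
    # recover tracks (typically occluded objects) left unmatched in round 1.
    matches_low, lost_tracks, _ = iou_match(leftover_tracks, boxes[low])

    return matches + matches_low, lost_tracks, new_candidates
```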
## Quick Start
Launch training, evaluation, and model export with the following commands:
```bash
job_name=ppyoloe_crn_l_36e_640x640_mot17half
config=configs/mot/bytetrack/detector/${job_name}.yml
log_dir=log_dir/${job_name}
# 1. training
python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp
# 2. evaluation
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c ${config} -o weights=output/${job_name}/model_final.pdparams
# 3. export
CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c ${config} -o weights=output/${job_name}/model_final.pdparams
```
| PaddleDetection/configs/mot/bytetrack/detector/README.md/0 | {
"file_path": "PaddleDetection/configs/mot/bytetrack/detector/README.md",
"repo_id": "PaddleDetection",
"token_count": 2224
} | 17 |
# DeepSORT does not need to be trained on the MOT dataset; the dataset is only used for evaluation.
# Only the detector (e.g. YOLOv3) is trained on the MOT dataset, using the bounding boxes alone;
# the ground-truth IDs are not needed for training.
EvalMOTReader:
sample_transforms:
- Decode: {}
- LetterBoxResize: {target_size: [608, 1088]}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_size: 1
TestMOTReader:
inputs_def:
image_shape: [3, 608, 1088]
sample_transforms:
- Decode: {}
- LetterBoxResize: {target_size: [608, 1088]}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_size: 1
| PaddleDetection/configs/mot/deepsort/_base_/deepsort_reader_1088x608.yml/0 | {
"file_path": "PaddleDetection/configs/mot/deepsort/_base_/deepsort_reader_1088x608.yml",
"repo_id": "PaddleDetection",
"token_count": 269
} | 18 |
# This config represents a ReID only configuration of DeepSORT, it has two uses.
# One is used for loading the detection results and ReID model to get tracking results;
# Another is used for exporting the ReID model to deploy infer.
_BASE_: [
'../../../datasets/mot.yml',
'../../../runtime.yml',
'../_base_/deepsort_reader_1088x608.yml',
]
EvalMOTDataset:
!MOTImageFolder
dataset_dir: dataset/mot
data_root: MOT16/images/train
keep_ori_im: True # set as True in DeepSORT
det_weights: None
reid_weights: https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams
# A ReID only configuration of DeepSORT, detector should be None.
architecture: DeepSORT
pretrain_weights: None
DeepSORT:
detector: None
reid: PCBPyramid
tracker: DeepSORTTracker
PCBPyramid:
model_name: "ResNet101"
num_conv_out_channels: 128
num_classes: 751 # default 751 classes in Market-1501 dataset.
DeepSORTTracker:
input_size: [64, 192]
min_box_area: 0 # 0 means boxes are not filtered by area
vertical_ratio: -1 # -1 means no aspect-ratio filtering; usually set to 1.6 for pedestrians
budget: 100
max_age: 70
n_init: 3
metric_type: cosine
matching_threshold: 0.2
max_iou_distance: 0.9
motion: KalmanFilter
| PaddleDetection/configs/mot/deepsort/reid/deepsort_pcb_pyramid_r101.yml/0 | {
"file_path": "PaddleDetection/configs/mot/deepsort/reid/deepsort_pcb_pyramid_r101.yml",
"repo_id": "PaddleDetection",
"token_count": 458
} | 19 |
_BASE_: [
'../../datasets/mot.yml',
'../../runtime.yml',
'_base_/optimizer_30e.yml',
'_base_/fairmot_dla34.yml',
'_base_/fairmot_reader_1088x608.yml',
]
weights: output/fairmot_dla34_30e_1088x608_bytetracker/model_final
# for ablation study, MIX + MOT17-half
TrainDataset:
!MOTDataSet
dataset_dir: dataset/mot
image_lists: ['mot17.half', 'caltech.all', 'cuhksysu.train', 'prw.train', 'citypersons.train', 'eth.train']
data_fields: ['image', 'gt_bbox', 'gt_class', 'gt_ide']
# for MOT evaluation
# If you want to change the MOT evaluation dataset, please modify 'data_root'
EvalMOTDataset:
!MOTImageFolder
dataset_dir: dataset/mot
data_root: MOT17/images/half
keep_ori_im: False # set True if save visualization images or video, or used in DeepSORT
JDETracker:
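  # use_byte enables BYTE association: detections scoring between low_conf_thres and conf_thres are kept for a second matching round instead of being discarded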
use_byte: True
match_thres: 0.8
conf_thres: 0.4
low_conf_thres: 0.2
min_box_area: 200
vertical_ratio: 1.6 # for pedestrian
| PaddleDetection/configs/mot/fairmot/fairmot_dla34_30e_1088x608_bytetracker.yml/0 | {
"file_path": "PaddleDetection/configs/mot/fairmot/fairmot_dla34_30e_1088x608_bytetracker.yml",
"repo_id": "PaddleDetection",
"token_count": 388
} | 20 |
_BASE_: [
'../../_base_/picodet_esnet.yml',
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/LCNet_x2_5_ssld_pretrained.pdparams
weights: output/picodet_lcnet_x2_5_layout/model_final
find_unused_parameters: True
PicoDet:
backbone: LCNet
neck: CSPPAN
head: PicoHead
nms_cpu: True
LCNet:
scale: 2.5
feature_maps: [3, 4, 5]
CSPPAN:
spatial_scales: [0.125, 0.0625, 0.03125]
slim: Distill
slim_method: FGD
distill_loss: FGDFeatureLoss
distill_loss_name: ['neck_f_3', 'neck_f_2', 'neck_f_1', 'neck_f_0']
FGDFeatureLoss:
student_channels: 128
teacher_channels: 128
temp: 0.5
alpha_fgd: 0.001
beta_fgd: 0.0005
gamma_fgd: 0.0005
lambda_fgd: 0.000005
| PaddleDetection/configs/picodet/legacy_model/application/layout_analysis/picodet_lcnet_x2_5_layout.yml/0 | {
"file_path": "PaddleDetection/configs/picodet/legacy_model/application/layout_analysis/picodet_lcnet_x2_5_layout.yml",
"repo_id": "PaddleDetection",
"token_count": 330
} | 21 |
[English](README.md) | Simplified Chinese
# Featured Vertical-Domain Detection Models
We provide PaddlePaddle-based detection models for different scenarios; users can download and use them directly.
| Task | Algorithm | Accuracy (Box AP) | Download | Config |
|:----|:----:|:----:|:----:|:----:|
| Vehicle detection | YOLOv3 | 54.5 | [Download](https://paddledet.bj.bcebos.com/models/vehicle_yolov3_darknet.pdparams) | [Config](./vehicle_yolov3_darknet.yml) |
## Vehicle Detection
One of the main applications of vehicle detection is traffic surveillance. In such scenarios, the vehicles to be detected are mostly captured by cameras mounted on traffic light poles along the road.
### 1. Model Architecture
YOLOv3 with a Darknet53 backbone.
### 2. Training Configuration
PaddleDetection provides the configuration file [yolov3_darknet53_270e_coco.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) for training YOLOv3 on the COCO dataset. Compared with it, the following parameters are modified when training the vehicle detection model:
* num_classes: 6
* anchors: [[8, 9], [10, 23], [19, 15], [23, 33], [40, 25], [54, 50], [101, 80], [139, 145], [253, 224]]
* nms/nms_top_k: 400
* nms/score_threshold: 0.005
* dataset_dir: dataset/vehicle
### 3. Accuracy
On our internal dataset, the model achieves:
AP of 0.545 at IoU=.50:.05:.95.
AP of 0.764 at IoU=.5.
### 4. Inference
Users can run vehicle detection with our trained model:
```
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/ppvehicle/vehicle_yolov3/vehicle_yolov3_darknet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/vehicle_yolov3_darknet.pdparams \
--infer_dir configs/ppvehicle/vehicle_yolov3/demo \
--draw_threshold 0.2 \
--output_dir configs/ppvehicle/vehicle_yolov3/demo/output
```
Example of the prediction results:


| PaddleDetection/configs/ppvehicle/vehicle_yolov3/README_cn.md/0 | {
"file_path": "PaddleDetection/configs/ppvehicle/vehicle_yolov3/README_cn.md",
"repo_id": "PaddleDetection",
"token_count": 1340
} | 22 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'./_base_/ppyolov2_r50vd_dcn.yml',
'./_base_/optimizer_365e.yml',
'./_base_/ppyolov2_reader.yml',
]
snapshot_epoch: 8
weights: output/ppyolov2_r50vd_dcn_365e_coco/model_final
| PaddleDetection/configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml/0 | {
"file_path": "PaddleDetection/configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 128
} | 23 |
# PP-YOLOE+ Downstream Tasks
We validated the strong generalization ability of PP-YOLOE+: detection performance improves consistently on downstream tasks in different scenarios such as agriculture, low light, and industry!
The agriculture dataset is [Embrapa WGISD](https://github.com/thsant/wgisd), built for image-based monitoring and field robotics in viticulture and providing in-field instances from 5 different grape varieties.
After conversion to COCO format it contains 242 training images, 58 test images, and 5 classes; [download Embrapa WGISD in COCO format](https://bj.bcebos.com/v1/paddledet/data/wgisd.zip).
The low-light dataset is [ExDark](https://github.com/cs-chan/Exclusively-Dark-Image-Dataset/tree/master/Dataset), collected specifically for object detection in low-light environments and covering images under 10 lighting conditions ranging from extremely low light to twilight.
After conversion to COCO format it contains 5891 training images, 1472 test images, and 12 classes; [download ExDark in COCO format](https://bj.bcebos.com/v1/paddledet/data/Exdark.zip).
The industrial dataset is [PKU-Market-PCB](https://robotics.pkusz.edu.cn/resources/dataset/), built for defect detection on printed circuit boards (PCB) and covering 6 common PCB defects.
After conversion to COCO format it contains 555 training images, 138 test images, and 6 classes; [download PKU-Market-PCB in COCO format](https://bj.bcebos.com/v1/paddledet/data/PCB_coco.zip).
The retail dataset [SKU110k](https://github.com/eg4000/SKU110K_CVPR19) is a dense object detection dataset of supermarket scenes, containing 11,762 images and more than 1.7 million instances, with 8,233 images for training, 588 for validation, and 2,941 for testing.
## Experimental Results
| Model | Dataset | mAP<sup>val<br>0.5:0.95 | Download | Config |
|:---------|:---------------:|:-----------------------:|:---------:| :-----: |
|PP-YOLOE_m| Embrapa WGISD | 52.7 | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_80e_wgisd.pdparams) | [Config](./ppyoloe_crn_m_80e_wgisd.yml) |
|PP-YOLOE+_m<br>(obj365_pretrained)| Embrapa WGISD | 60.8(+8.1) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_m_80e_obj365_pretrained_wgisd.pdparams) | [Config](./ppyoloe_plus_crn_m_80e_obj365_pretrained_wgisd.yml) |
|PP-YOLOE+_m<br>(coco_pretrained)| Embrapa WGISD | 59.7(+7.0) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_m_80e_coco_pretrained_wgisd.pdparams) | [Config](./ppyoloe_plus_crn_m_80e_coco_pretrained_wgisd.yml) |
|PP-YOLOE_m| ExDark | 56.4 | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_80e_exdark.pdparams) | [Config](./ppyoloe_crn_m_80e_exdark.yml) |
|PP-YOLOE+_m<br>(obj365_pretrained)| ExDark | 57.7(+1.3) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_m_80e_obj365_pretrained_exdark.pdparams) | [Config](./ppyoloe_plus_crn_m_80e_obj365_pretrained_exdark.yml) |
|PP-YOLOE+_m<br>(coco_pretrained)| ExDark | 58.1(+1.7) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_m_80e_coco_pretrained_exdark.pdparams) | [Config](./ppyoloe_plus_crn_m_80e_coco_pretrained_exdark.yml) |
|PP-YOLOE_m| PKU-Market-PCB | 50.8 | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_80e_pcb.pdparams) | [Config](./ppyoloe_crn_m_80e_pcb.yml) |
|PP-YOLOE+_m<br>(obj365_pretrained)| PKU-Market-PCB | 52.7(+1.9) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_m_80e_obj365_pretrained_pcb.pdparams) | [Config](./ppyoloe_plus_crn_m_80e_obj365_pretrained_pcb.yml) |
|PP-YOLOE+_m<br>(coco_pretrained)| PKU-Market-PCB | 52.4(+1.6) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_m_80e_coco_pretrained_pcb.pdparams) | [Config](./ppyoloe_plus_crn_m_80e_coco_pretrained_pcb.yml) |
**Notes:**
- PP-YOLOE models are trained with 8 GPUs. If the **number of GPUs** or the **batch size** changes, adjust the learning rate according to the formula **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)**; the sketch after this list gives a worked example.
- For detailed usage, refer to [ppyoloe](../ppyoloe#getting-start).
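As a worked example of the linear scaling rule above, the small helper below (added here for illustration only; it is not part of the repository) computes the adjusted learning rate:

```python
def scale_lr(lr_default, bs_default, gpus_default, bs_new, gpus_new):
    """Linear scaling rule: scale the base LR by the ratio of total batch sizes."""
    return lr_default * (bs_new * gpus_new) / (bs_default * gpus_default)

# Example: a schedule tuned for 8 GPUs x 8 images per GPU, reused on 4 GPUs
# with the same per-GPU batch size, should run with half the learning rate.
print(scale_lr(lr_default=0.001, bs_default=8, gpus_default=8, bs_new=8, gpus_new=4))  # 0.0005
```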
## SKU110k Model ZOO
| Model | Epoch | GPU number | images/GPU | backbone | input shape | Box AP<sup>val<br>0.5:0.95 (maxDets=300) | Box AP<sup>test<br>0.5:0.95 (maxDets=300) | download | config |
|:--------------:|:-----:|:-------:|:----------:|:----------:| :-------:|:-------------------------:|:---------------------------:|:---------:|:------:|
| PP-YOLOE+_s | 80 | 8 | 8 | cspresnet-s | 960 | 57.4 | 58.8 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_sku110k.pdparams) | [config](./ppyoloe_plus_crn_s_80e_sku110k.yml) |
| PP-YOLOE+_m | 80 | 8 | 8 | cspresnet-m | 960 | 58.2 | 59.7 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_m_80e_sku110k.pdparams) | [config](./ppyoloe_plus_crn_m_80e_sku110k.yml) |
| PP-YOLOE+_l | 80 | 8 | 4 | cspresnet-l | 960 | 58.8 | 60.2 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_l_80e_sku110k.pdparams) | [config](./ppyoloe_plus_crn_l_80e_sku110k.yml) |
| PP-YOLOE+_x | 80 | 8 | 4 | cspresnet-x | 960 | 59.0 | 60.3 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_x_80e_sku110k.pdparams) | [config](./ppyoloe_plus_crn_x_80e_sku110k.yml) |
**Notes:**
- SKU110k models are trained with 8 GPUs. If the **number of GPUs** or the **batch size** changes, adjust the learning rate according to the formula **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)**.
- For the SKU110k dataset, mAP with **maxDets=300** is used as the evaluation metric.
- For detailed usage, refer to [ppyoloe](../ppyoloe#getting-start).
## Citation
```
@inproceedings{goldman2019dense,
author = {Eran Goldman and Roei Herzig and Aviv Eisenschtat and Jacob Goldberger and Tal Hassner},
title = {Precise Detection in Densely Packed Scenes},
booktitle = {Proc. Conf. Comput. Vision Pattern Recognition (CVPR)},
year = {2019}
}
@article{Exdark,
title={Getting to Know Low-light Images with The Exclusively Dark Dataset},
author={Loh, Yuen Peng and Chan, Chee Seng},
journal={Computer Vision and Image Understanding},
volume={178},
pages={30-42},
year={2019},
doi={https://doi.org/10.1016/j.cviu.2018.10.010}
}
```
| PaddleDetection/configs/ppyoloe/application/README.md/0 | {
"file_path": "PaddleDetection/configs/ppyoloe/application/README.md",
"repo_id": "PaddleDetection",
"token_count": 3592
} | 24 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'./_base_/optimizer_300e.yml',
'./_base_/ppyoloe_crn.yml',
'./_base_/ppyoloe_reader.yml',
]
log_iter: 100
snapshot_epoch: 10
weights: output/ppyoloe_crn_x_300e_coco/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/CSPResNetb_x_pretrained.pdparams
depth_mult: 1.33
width_mult: 1.25
| PaddleDetection/configs/ppyoloe/ppyoloe_crn_x_300e_coco.yml/0 | {
"file_path": "PaddleDetection/configs/ppyoloe/ppyoloe_crn_x_300e_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 181
} | 25 |
_BASE_: [
'../datasets/coco_instance.yml',
'../runtime.yml',
'_base_/optimizer_1x.yml',
'_base_/queryinst_r50_fpn.yml',
'_base_/queryinst_reader.yml',
]
log_iter: 50
find_unused_parameters: true
weights: output/queryinst_r50_fpn_1x_pro100_coco/model_final
| PaddleDetection/configs/queryinst/queryinst_r50_fpn_1x_pro100_coco.yml/0 | {
"file_path": "PaddleDetection/configs/queryinst/queryinst_r50_fpn_1x_pro100_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 128
} | 26 |
worker_num: 2
TrainReader:
sample_transforms:
- Decode: {}
- RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], keep_ratio: True, interp: 1}
- RandomFlip: {}
- NormalizeImage: {is_scale: True, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 2
shuffle: True
drop_last: True
collate_batch: False
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {is_scale: True, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 8
TestReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {is_scale: True, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 1
| PaddleDetection/configs/retinanet/_base_/retinanet_reader.yml/0 | {
"file_path": "PaddleDetection/configs/retinanet/_base_/retinanet_reader.yml",
"repo_id": "PaddleDetection",
"token_count": 491
} | 27 |
architecture: YOLOv3
norm_type: sync_bn
use_ema: true
ema_decay: 0.9998
YOLOv3:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOERHead
post_process: ~
CSPResNet:
layers: [3, 6, 6, 3]
channels: [64, 128, 256, 512, 1024]
return_idx: [1, 2, 3]
use_large_stem: True
use_alpha: True
CustomCSPPAN:
out_channels: [768, 384, 192]
stage_num: 1
block_num: 3
act: 'swish'
spp: true
use_alpha: True
PPYOLOERHead:
fpn_strides: [32, 16, 8]
grid_cell_offset: 0.5
use_varifocal_loss: true
static_assigner_epoch: -1
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.05}
static_assigner:
name: FCOSRAssigner
factor: 12
threshold: 0.23
boundary: [[512, 10000], [256, 512], [-1, 256]]
assigner:
name: RotatedTaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 2000
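    # keep_top_k -1 keeps every box that survives NMS (no global per-image cap)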
keep_top_k: -1
score_threshold: 0.1
nms_threshold: 0.1
normalized: False
| PaddleDetection/configs/rotate/ppyoloe_r/_base_/ppyoloe_r_crn.yml/0 | {
"file_path": "PaddleDetection/configs/rotate/ppyoloe_r/_base_/ppyoloe_r_crn.yml",
"repo_id": "PaddleDetection",
"token_count": 460
} | 28 |
_BASE_: [
'../../datasets/spine_coco.yml',
'../../runtime.yml',
'_base_/s2anet_optimizer_1x.yml',
'_base_/s2anet.yml',
'_base_/s2anet_reader.yml',
]
weights: output/s2anet_1x_spine/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams
# the learning rate below is set for 4 GPUs
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [7, 10]
- !LinearWarmup
start_factor: 0.3333333333333333
epochs: 5
S2ANetHead:
reg_loss_weight: [1.0, 1.0, 1.0, 1.0, 1.05]
cls_loss_weight: [1.05, 1.0]
| PaddleDetection/configs/rotate/s2anet/s2anet_1x_spine.yml/0 | {
"file_path": "PaddleDetection/configs/rotate/s2anet/s2anet_1x_spine.yml",
"repo_id": "PaddleDetection",
"token_count": 289
} | 29 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'_base_/optimizer_6x.yml',
'_base_/rtdetr_r50vd.yml',
'_base_/rtdetr_reader.yml',
]
weights: output/rtdetr_r101vd_6x_coco/model_final
find_unused_parameters: True
log_iter: 200
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet101_vd_ssld_pretrained.pdparams
ResNet:
# index 0 stands for res2
depth: 101
variant: d
norm_type: bn
freeze_at: 0
return_idx: [1, 2, 3]
lr_mult_list: [0.01, 0.01, 0.01, 0.01]
num_stages: 4
freeze_stem_only: True
HybridEncoder:
hidden_dim: 384
use_encoder_idx: [2]
num_encoder_layers: 1
encoder_layer:
name: TransformerLayer
d_model: 384
nhead: 8
dim_feedforward: 2048
dropout: 0.
activation: 'gelu'
expansion: 1.0
| PaddleDetection/configs/rtdetr/rtdetr_r101vd_6x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/rtdetr/rtdetr_r101vd_6x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 368
} | 30 |
# Weights of yolov3_mobilenet_v1_voc
pretrain_weights: https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_270e_voc.pdparams
slim: PrunerQAT
PrunerQAT:
criterion: fpgm
pruned_params: ['conv2d_27.w_0', 'conv2d_28.w_0', 'conv2d_29.w_0',
'conv2d_30.w_0', 'conv2d_31.w_0', 'conv2d_32.w_0',
'conv2d_34.w_0', 'conv2d_35.w_0', 'conv2d_36.w_0',
'conv2d_37.w_0', 'conv2d_38.w_0', 'conv2d_39.w_0',
'conv2d_41.w_0', 'conv2d_42.w_0', 'conv2d_43.w_0',
'conv2d_44.w_0', 'conv2d_45.w_0', 'conv2d_46.w_0']
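  # pruned_ratios apply one-to-one to the layers listed in pruned_params, in the same order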
pruned_ratios: [0.1,0.2,0.2,0.2,0.2,0.1,0.2,0.3,0.3,0.3,0.2,0.1,0.3,0.4,0.4,0.4,0.4,0.3]
print_prune_params: False
quant_config: {
'weight_quantize_type': 'channel_wise_abs_max', 'activation_quantize_type': 'moving_average_abs_max',
'weight_bits': 8, 'activation_bits': 8, 'dtype': 'int8', 'window_size': 10000, 'moving_rate': 0.9,
'quantizable_layer_type': ['Conv2D', 'Linear']}
print_qat_model: True
| PaddleDetection/configs/slim/extensions/yolov3_mobilenetv1_prune_qat.yml/0 | {
"file_path": "PaddleDetection/configs/slim/extensions/yolov3_mobilenetv1_prune_qat.yml",
"repo_id": "PaddleDetection",
"token_count": 591
} | 31 |
# Weights of yolov3_mobilenet_v1_voc
pretrain_weights: https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_270e_voc.pdparams
slim: Pruner
Pruner:
criterion: fpgm
pruned_params: ['conv2d_27.w_0', 'conv2d_28.w_0', 'conv2d_29.w_0',
'conv2d_30.w_0', 'conv2d_31.w_0', 'conv2d_32.w_0',
'conv2d_34.w_0', 'conv2d_35.w_0', 'conv2d_36.w_0',
'conv2d_37.w_0', 'conv2d_38.w_0', 'conv2d_39.w_0',
'conv2d_41.w_0', 'conv2d_42.w_0', 'conv2d_43.w_0',
'conv2d_44.w_0', 'conv2d_45.w_0', 'conv2d_46.w_0']
pruned_ratios: [0.1,0.2,0.2,0.2,0.2,0.1,0.2,0.3,0.3,0.3,0.2,0.1,0.3,0.4,0.4,0.4,0.4,0.3]
print_params: False
| PaddleDetection/configs/slim/prune/yolov3_prune_fpgm.yml/0 | {
"file_path": "PaddleDetection/configs/slim/prune/yolov3_prune_fpgm.yml",
"repo_id": "PaddleDetection",
"token_count": 464
} | 32 |
metric: COCO
num_classes: 15
TrainDataset:
!COCODataSet
image_dir: train_images_500_025
anno_path: train_500_025.json
dataset_dir: dataset/dota_sliced
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: val_images_500_025
anno_path: val_500_025.json
dataset_dir: dataset/dota_sliced
TestDataset:
!ImageFolder
anno_path: val_500_025.json
dataset_dir: dataset/dota_sliced
| PaddleDetection/configs/smalldet/_base_/DOTA_sliced_500_025_detection.yml/0 | {
"file_path": "PaddleDetection/configs/smalldet/_base_/DOTA_sliced_500_025_detection.yml",
"repo_id": "PaddleDetection",
"token_count": 210
} | 33 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'../ppyolo/_base_/ppyolo_r50vd_dcn.yml',
'../ppyolo/_base_/optimizer_1x.yml',
'../ppyolo/_base_/ppyolo_reader.yml',
]
snapshot_epoch: 8
use_ema: true
weights: output/ppyolo_r50vd_dcn_1x_visdrone/model_final
epoch: 192
LearningRate:
base_lr: 0.005
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 153
- 173
- !LinearWarmup
start_factor: 0.
steps: 4000
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
metric: COCO
num_classes: 9
TrainDataset:
!COCODataSet
image_dir: train
anno_path: annotations/train.json
dataset_dir: dataset/VisDrone2019_coco
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
!COCODataSet
image_dir: val
anno_path: annotations/val.json
dataset_dir: dataset/VisDrone2019_coco
TestDataset:
!ImageFolder
anno_path: annotations/val.json
| PaddleDetection/configs/sniper/ppyolo_r50vd_dcn_1x_visdrone.yml/0 | {
"file_path": "PaddleDetection/configs/sniper/ppyolo_r50vd_dcn_1x_visdrone.yml",
"repo_id": "PaddleDetection",
"token_count": 454
} | 34 |
# Enhanced Training of Query-Based Object Detection via Selective Query Recollection
## Introduction
This paper investigates a phenomenon where query-based object detectors mispredict at the last decoding stage while predicting correctly at an intermediate stage. It designs and presents Selective Query Recollection (SQR), a simple and effective training strategy for query-based object detectors: SQR cumulatively collects intermediate queries as the decoding stages go deeper and selectively forwards them to the downstream stages in addition to the sequential structure.
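Our reading of the recollection scheme can be sketched as follows. This is a toy illustration under the assumption that each stage also receives the query groups emitted one stage earlier; it is not the official implementation:

```python
def selective_query_recollection(decoder_stages, init_queries):
    """Toy sketch of SQR: intermediate queries are recollected downstream."""
    prev_prev = []                 # query groups emitted two stages ago
    prev = [init_queries]          # query groups emitted by the previous stage
    for stage in decoder_stages:
        inputs = prev + prev_prev  # selective recollection of the two prior stages
        outputs = [stage(q) for q in inputs]
        prev_prev, prev = prev, outputs
    return prev                    # every collected group is supervised at the end
```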
## Model Zoo
| Backbone | Model | Images/GPU | GPUs | Epochs | Box AP | Config | Download |
|:--------:|:-------------------:|:----------:|:----:|:------:|:------:|:------------------------------------------------:|:---------:|
| R-50 | Deformable DETR SQR | 1 | 4 | 12 | 32.9 | [config](./deformable_detr_sqr_r50_12e_coco.yml) |[model](https://bj.bcebos.com/v1/paddledet/models/deformable_detr_sqr_r50_12e_coco.pdparams) |
> The paper does not provide a config for the 12-epoch experiment, so we wrote one ourselves with reference to the standard 12-epoch config in mmdetection. With this [config](./deformable_detr_sqr_r50_12e_coco.yml), the same accuracy is obtained in the official project and in this project. <br> We have not finished validating the 50-epoch experiment yet; if you need that config, please refer to [here](https://pan.baidu.com/s/1eWavnAiRoFXm3mMlpn9WPw?pwd=3z6m).
## Citations
```
@InProceedings{Chen_2023_CVPR,
author = {Chen, Fangyi and Zhang, Han and Hu, Kai and Huang, Yu-Kai and Zhu, Chenchen and Savvides, Marios},
title = {Enhanced Training of Query-Based Object Detection via Selective Query Recollection},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023},
pages = {23756-23765}
}
```
| PaddleDetection/configs/sqr/README.md/0 | {
"file_path": "PaddleDetection/configs/sqr/README.md",
"repo_id": "PaddleDetection",
"token_count": 664
} | 35 |
# TOOD
## Introduction
[TOOD: Task-aligned One-stage Object Detection](https://arxiv.org/abs/2108.07755)
TOOD is a task-aligned one-stage object detection model. We reproduce the model described in the paper.
## Model Zoo
| Backbone | Model | Images/GPU | Inf time (fps) | Box AP | Config | Download |
|:------:|:--------:|:--------:|:--------------:|:------:|:------:|:--------:|
| R-50 | TOOD | 4 | --- | 42.5 | [config](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/tood/tood_r50_fpn_1x_coco.yml) | [model](https://paddledet.bj.bcebos.com/models/tood_r50_fpn_1x_coco.pdparams) |
**Notes:**
- TOOD is trained on the COCO train2017 dataset and evaluated on val2017; results are reported as `mAP(IoU=0.5:0.95)`.
- TOOD is trained for 12 epochs on 8 GPUs.
Multi-GPU training:
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/tood/tood_r50_fpn_1x_coco.yml --fleet
```
## Citations
```
@inproceedings{feng2021tood,
title={TOOD: Task-aligned One-stage Object Detection},
author={Feng, Chengjian and Zhong, Yujie and Gao, Yu and Scott, Matthew R and Huang, Weilin},
booktitle={ICCV},
year={2021}
}
```
| PaddleDetection/configs/tood/README.md/0 | {
"file_path": "PaddleDetection/configs/tood/README.md",
"repo_id": "PaddleDetection",
"token_count": 481
} | 36 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'_base_/optimizer_10x.yml',
'_base_/pafnet.yml',
'_base_/pafnet_reader.yml',
]
weights: output/pafnet_10x_coco/model_final
| PaddleDetection/configs/ttfnet/pafnet_10x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/ttfnet/pafnet_10x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 99
} | 37 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'_base_/optimizer_270e.yml',
'_base_/yolov3_darknet53.yml',
'_base_/yolov3_reader.yml',
]
snapshot_epoch: 5
weights: output/yolov3_darknet53_270e_coco/model_final
norm_type: bn
YOLOv3Loss:
ignore_thresh: 0.5
downsample: [32, 16, 8]
label_smooth: false
TrainReader:
inputs_def:
num_max_boxes: 50
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53], ratio: 2.0}
- RandomCrop: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeBox: {}
- PadBox: {num_max_boxes: 50}
- BboxXYXY2XYWH: {}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8], iou_thresh: 0.5}
batch_size: 8
shuffle: true
drop_last: true
mixup_epoch: -1
use_shared_memory: true
| PaddleDetection/configs/yolov3/yolov3_darknet53_original_270e_coco.yml/0 | {
"file_path": "PaddleDetection/configs/yolov3/yolov3_darknet53_original_270e_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 581
} | 38 |
_BASE_: [
'yolox_tiny_300e_coco.yml'
]
depth_mult: 0.33
width_mult: 0.375
log_iter: 100
snapshot_epoch: 10
weights: output/yolox_cdn_tiny_300e_coco/model_final
CSPDarkNet:
arch: "P5" # using the same backbone of YOLOv5 releases v6.0 and later version
return_idx: [2, 3, 4]
depthwise: False
| PaddleDetection/configs/yolox/yolox_cdn_tiny_300e_coco.yml/0 | {
"file_path": "PaddleDetection/configs/yolox/yolox_cdn_tiny_300e_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 134
} | 39 |
# Inference Benchmark
## 1. Prepare the Environment
- 1. Test Environment:
- CUDA 10.1
- CUDNN 7.6
- TensorRT-6.0.1
- PaddlePaddle v2.0.1
  - The GPUs are Tesla V100, GTX 1080 Ti and Jetson AGX Xavier
- 2. Test Method:
  - To compare the inference speed of different models, the input shape is 3x640x640, using `demo/000000014439_640x640.jpg`.
  - Batch_size=1
  - The first 100 warmup rounds are discarded; the average time over the next 100 rounds is reported in ms/image, including network computation time and the time to copy data back to the CPU (a minimal sketch of this averaging scheme follows this list).
  - The Fluid C++ inference engine is used, covering both native Fluid C++ inference and Fluid TensorRT inference; the tables below report Float32 (FP32) and Float16 (FP16) inference speed.
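The warmup-and-average scheme above can be illustrated with a minimal Python sketch; the `predict` callable is a placeholder and this is not the actual benchmark harness:
```python
import time

def benchmark(predict, image, warmup=100, repeats=100):
    """Average latency in ms/image, excluding the warmup rounds."""
    for _ in range(warmup):      # discard the first `warmup` rounds
        predict(image)
    start = time.time()
    for _ in range(repeats):     # timed rounds: network compute + copy back to CPU
        predict(image)
    return (time.time() - start) * 1000.0 / repeats
```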
**Attention:** For TensorRT, please refer to the [TensorRT tutorial](TENSOR_RT.md) for the difference between fixed and dynamic input shapes. Because fixed-shape support for two-stage models is incomplete, the Faster RCNN model is tested with dynamic shapes. Fixed and dynamic shapes do not fuse exactly the same set of OPs, so the same model may show slightly different performance under the two settings.
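For reference, enabling TensorRT with dynamic input shapes through the Paddle Inference Python API looks roughly like the sketch below; the model paths, input name and shape ranges are placeholders, and the authoritative options are described in the TensorRT tutorial linked above:
```python
from paddle.inference import Config, PrecisionType, create_predictor

config = Config("model/model.pdmodel", "model/model.pdiparams")  # placeholder paths
config.enable_use_gpu(200, 0)  # 200 MB initial GPU memory pool, GPU id 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Half,  # corresponds to trt_fp16
    use_static=False,
    use_calib_mode=False)
# Dynamic-shape models (e.g. Faster RCNN) must declare a shape range per input.
config.set_trt_dynamic_shape_info(
    {"image": [1, 3, 320, 320]},    # min shape
    {"image": [1, 3, 1333, 1333]},  # max shape
    {"image": [1, 3, 640, 640]})    # opt shape
predictor = create_predictor(config)
```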
## 2. Inference Speed
### 1. Linux System
#### (1) Tesla V100
| Model | backbone | Fixed size or not | The net size | paddle_inference | trt_fp32 | trt_fp16 |
| --------------- | ------------- | ----------------- | ------------ | ---------------- | -------- | -------- |
| Faster RCNN FPN | ResNet50 | no | 640x640 | 27.99 | 26.15 | 21.92 |
| Faster RCNN FPN | ResNet50 | no | 800x1312 | 32.49 | 25.54 | 21.70 |
| YOLOv3 | Mobilenet\_v1 | yes | 608x608 | 9.74 | 8.61 | 6.28 |
| YOLOv3 | Darknet53 | yes | 608x608 | 17.84 | 15.43 | 9.86 |
| PPYOLO | ResNet50 | yes | 608x608 | 20.77 | 18.40 | 13.53 |
| SSD | Mobilenet\_v1 | yes | 300x300 | 5.17 | 4.43 | 4.29 |
| TTFNet | Darknet53 | yes | 512x512 | 10.14 | 8.71 | 5.55 |
| FCOS | ResNet50 | yes | 640x640 | 35.47 | 35.02 | 34.24 |
#### (2) Jetson AGX Xavier
| Model | backbone | Fixed size or not | The net size | paddle_inference | trt_fp32 | trt_fp16 |
| --------------- | ------------- | ----------------- | ------------ | ---------------- | -------- | -------- |
| Faster RCNN FPN | ResNet50 | no | 640x640 | 169.45 | 158.92 | 119.25 |
| Faster RCNN FPN | ResNet50 | no | 800x1312 | 228.07 | 156.39 | 117.03 |
| YOLOv3 | Mobilenet\_v1 | yes | 608x608 | 48.76 | 43.83 | 18.41 |
| YOLOv3 | Darknet53 | yes | 608x608 | 121.61 | 110.30 | 42.38 |
| PPYOLO | ResNet50 | yes | 608x608 | 111.80 | 99.40 | 48.05 |
| SSD | Mobilenet\_v1 | yes | 300x300 | 10.52 | 8.84 | 8.77 |
| TTFNet | Darknet53 | yes | 512x512 | 73.77 | 64.03 | 31.46 |
| FCOS | ResNet50 | yes | 640x640 | 217.11 | 214.38 | 205.78 |
### 2. Windows System
#### (1) GTX 1080 Ti
| Model | backbone | Fixed size or not | The net size | paddle_inference | trt_fp32 | trt_fp16 |
| --------------- | ------------- | ----------------- | ------------ | ---------------- | -------- | -------- |
| Faster RCNN FPN | ResNet50 | no | 640x640 | 50.74 | 57.17 | 62.08 |
| Faster RCNN FPN | ResNet50 | no | 800x1312 | 50.31 | 57.61 | 62.05 |
| YOLOv3 | Mobilenet\_v1 | yes | 608x608 | 14.51 | 11.23 | 11.13 |
| YOLOv3 | Darknet53 | yes | 608x608 | 30.26 | 23.92 | 24.02 |
| PPYOLO | ResNet50 | yes | 608x608 | 38.06 | 31.40 | 31.94 |
| SSD | Mobilenet\_v1 | yes | 300x300 | 16.47 | 13.87 | 13.76 |
| TTFNet | Darknet53 | yes | 512x512 | 21.83 | 17.14 | 17.09 |
| FCOS | ResNet50 | yes | 640x640 | 71.88 | 69.93 | 69.52 |
| PaddleDetection/deploy/BENCHMARK_INFER_en.md/0 | {
"file_path": "PaddleDetection/deploy/BENCHMARK_INFER_en.md",
"repo_id": "PaddleDetection",
"token_count": 2436
} | 40 |
Global:
reader_config: configs/yolov8_reader.yml
include_nms: True
Evaluation: True
model_dir: ./yolov8_s_500e_coco_trt_nms/
model_filename: model.pdmodel
params_filename: model.pdiparams
Distillation:
alpha: 1.0
loss: soft_label
QuantAware:
onnx_format: true
activation_quantize_type: 'moving_average_abs_max'
quantize_op_types:
- conv2d
- depthwise_conv2d
TrainConfig:
train_iter: 8000
eval_iter: 1000
learning_rate:
type: CosineAnnealingDecay
learning_rate: 0.00003
T_max: 10000
optimizer_builder:
optimizer:
type: SGD
weight_decay: 4.0e-05
| PaddleDetection/deploy/auto_compression/configs/yolov8_s_qat_dis.yaml/0 | {
"file_path": "PaddleDetection/deploy/auto_compression/configs/yolov8_s_qat_dis.yaml",
"repo_id": "PaddleDetection",
"token_count": 259
} | 41 |
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include <ctime>
#include <memory>
#include <string>
#include <utility>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "paddle_inference_api.h" // NOLINT
#include "include/config_parser.h"
#include "include/keypoint_postprocess.h"
#include "include/preprocess_op.h"
using namespace paddle_infer;
namespace PaddleDetection {
// Visualization of keypoint results
cv::Mat VisualizeKptsResult(const cv::Mat& img,
const std::vector<KeyPointResult>& results,
const std::vector<int>& colormap);
class KeyPointDetector {
public:
explicit KeyPointDetector(const std::string& model_dir,
const std::string& device = "CPU",
bool use_mkldnn = false,
int cpu_threads = 1,
const std::string& run_mode = "paddle",
const int batch_size = 1,
const int gpu_id = 0,
const int trt_min_shape = 1,
const int trt_max_shape = 1280,
const int trt_opt_shape = 640,
bool trt_calib_mode = false,
bool use_dark = true) {
this->device_ = device;
this->gpu_id_ = gpu_id;
this->cpu_math_library_num_threads_ = cpu_threads;
this->use_mkldnn_ = use_mkldnn;
this->use_dark = use_dark;
this->trt_min_shape_ = trt_min_shape;
this->trt_max_shape_ = trt_max_shape;
this->trt_opt_shape_ = trt_opt_shape;
this->trt_calib_mode_ = trt_calib_mode;
config_.load_config(model_dir);
this->use_dynamic_shape_ = config_.use_dynamic_shape_;
this->min_subgraph_size_ = config_.min_subgraph_size_;
threshold_ = config_.draw_threshold_;
preprocessor_.Init(config_.preprocess_info_);
LoadModel(model_dir, batch_size, run_mode);
}
// Load Paddle inference model
void LoadModel(const std::string& model_dir,
const int batch_size = 1,
const std::string& run_mode = "paddle");
// Run predictor
void Predict(const std::vector<cv::Mat> imgs,
std::vector<std::vector<float>>& center,
std::vector<std::vector<float>>& scale,
const double threshold = 0.5,
const int warmup = 0,
const int repeats = 1,
std::vector<KeyPointResult>* result = nullptr,
std::vector<double>* times = nullptr);
// Get Model Label list
const std::vector<std::string>& GetLabelList() const {
return config_.label_list_;
}
private:
std::string device_ = "CPU";
int gpu_id_ = 0;
int cpu_math_library_num_threads_ = 1;
bool use_dark = true;
bool use_mkldnn_ = false;
int min_subgraph_size_ = 3;
bool use_dynamic_shape_ = false;
int trt_min_shape_ = 1;
int trt_max_shape_ = 1280;
int trt_opt_shape_ = 640;
bool trt_calib_mode_ = false;
// Preprocess image and copy data to input buffer
void Preprocess(const cv::Mat& image_mat);
// Postprocess result
void Postprocess(std::vector<float>& output,
std::vector<int> output_shape,
std::vector<int64_t>& idxout,
std::vector<int> idx_shape,
std::vector<KeyPointResult>* result,
std::vector<std::vector<float>>& center,
std::vector<std::vector<float>>& scale);
std::shared_ptr<Predictor> predictor_;
Preprocessor preprocessor_;
ImageBlob inputs_;
std::vector<float> output_data_;
std::vector<int64_t> idx_data_;
float threshold_;
ConfigPaser config_;
};
} // namespace PaddleDetection
| PaddleDetection/deploy/cpp/include/keypoint_detector.h/0 | {
"file_path": "PaddleDetection/deploy/cpp/include/keypoint_detector.h",
"repo_id": "PaddleDetection",
"token_count": 1949
} | 42 |
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <glog/logging.h>
#include <math.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <algorithm>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>
#ifdef _WIN32
#include <direct.h>
#include <io.h>
#elif LINUX
#include <stdarg.h>
#endif
#include <gflags/gflags.h>
#include "include/keypoint_detector.h"
#include "include/object_detector.h"
#include "include/preprocess_op.h"
DEFINE_string(model_dir, "", "Path of object detector inference model");
DEFINE_string(model_dir_keypoint,
"",
"Path of keypoint detector inference model");
DEFINE_string(image_file, "", "Path of input image");
DEFINE_string(image_dir,
"",
"Dir of input image, `image_file` has a higher priority.");
DEFINE_int32(batch_size, 1, "batch_size of object detector");
DEFINE_int32(batch_size_keypoint, 8, "batch_size of keypoint detector");
DEFINE_string(
video_file,
"",
"Path of input video, `video_file` or `camera_id` has a highest priority.");
DEFINE_int32(camera_id, -1, "Device id of camera to predict");
DEFINE_bool(
use_gpu,
false,
"Deprecated, please use `--device` to set the device you want to run.");
DEFINE_string(device,
"CPU",
"Choose the device you want to run, it can be: CPU/GPU/XPU, "
"default is CPU.");
DEFINE_double(threshold, 0.5, "Threshold of score.");
DEFINE_double(threshold_keypoint, 0.5, "Threshold of score.");
DEFINE_string(output_dir, "output", "Directory of output visualization files.");
DEFINE_string(run_mode,
"paddle",
"Mode of running(paddle/trt_fp32/trt_fp16/trt_int8)");
DEFINE_int32(gpu_id, 0, "Device id of GPU to execute");
DEFINE_bool(run_benchmark,
false,
"Whether to predict a image_file repeatedly for benchmark");
DEFINE_bool(use_mkldnn, false, "Whether use mkldnn with CPU");
DEFINE_int32(cpu_threads, 1, "Num of threads with CPU");
DEFINE_int32(trt_min_shape, 1, "Min shape of TRT dynamic shape");
DEFINE_int32(trt_max_shape, 1280, "Max shape of TRT dynamic shape");
DEFINE_int32(trt_opt_shape, 640, "Opt shape of TRT dynamic shape");
DEFINE_bool(trt_calib_mode,
false,
"If the model is produced by TRT offline quantitative calibration, "
"trt_calib_mode need to set True");
DEFINE_bool(use_dark, true, "Whether use dark decode in keypoint postprocess");
void PrintBenchmarkLog(std::vector<double> det_time, int img_num) {
LOG(INFO) << "----------------------- Config info -----------------------";
LOG(INFO) << "runtime_device: " << FLAGS_device;
LOG(INFO) << "ir_optim: "
<< "True";
LOG(INFO) << "enable_memory_optim: "
<< "True";
int has_trt = FLAGS_run_mode.find("trt");
if (has_trt >= 0) {
LOG(INFO) << "enable_tensorrt: "
<< "True";
std::string precision = FLAGS_run_mode.substr(4, 8);
LOG(INFO) << "precision: " << precision;
} else {
LOG(INFO) << "enable_tensorrt: "
<< "False";
LOG(INFO) << "precision: "
<< "fp32";
}
LOG(INFO) << "enable_mkldnn: " << (FLAGS_use_mkldnn ? "True" : "False");
LOG(INFO) << "cpu_math_library_num_threads: " << FLAGS_cpu_threads;
LOG(INFO) << "----------------------- Data info -----------------------";
LOG(INFO) << "batch_size: " << FLAGS_batch_size;
LOG(INFO) << "input_shape: "
<< "dynamic shape";
LOG(INFO) << "----------------------- Model info -----------------------";
FLAGS_model_dir.erase(FLAGS_model_dir.find_last_not_of(OS_PATH_SEP) + 1);
LOG(INFO) << "model_name: " << FLAGS_model_dir;
LOG(INFO) << "----------------------- Perf info ------------------------";
LOG(INFO) << "Total number of predicted data: " << img_num
<< " and total time spent(ms): "
<< std::accumulate(det_time.begin(), det_time.end(), 0.);
img_num = std::max(1, img_num);
LOG(INFO) << "preproce_time(ms): " << det_time[0] / img_num
<< ", inference_time(ms): " << det_time[1] / img_num
<< ", postprocess_time(ms): " << det_time[2] / img_num;
}
void PrintKptsBenchmarkLog(std::vector<double> det_time, int img_num) {
LOG(INFO) << "----------------------- Data info -----------------------";
LOG(INFO) << "batch_size_keypoint: " << FLAGS_batch_size_keypoint;
LOG(INFO) << "----------------------- Model info -----------------------";
FLAGS_model_dir_keypoint.erase(
FLAGS_model_dir_keypoint.find_last_not_of(OS_PATH_SEP) + 1);
LOG(INFO) << "keypoint_model_name: " << FLAGS_model_dir_keypoint;
LOG(INFO) << "----------------------- Perf info ------------------------";
LOG(INFO) << "Total number of predicted data: " << img_num
<< " and total time spent(ms): "
<< std::accumulate(det_time.begin(), det_time.end(), 0.);
img_num = std::max(1, img_num);
LOG(INFO) << "Average time cost per person:";
LOG(INFO) << "preproce_time(ms): " << det_time[0] / img_num
<< ", inference_time(ms): " << det_time[1] / img_num
<< ", postprocess_time(ms): " << det_time[2] / img_num;
}
static std::string DirName(const std::string& filepath) {
auto pos = filepath.rfind(OS_PATH_SEP);
if (pos == std::string::npos) {
return "";
}
return filepath.substr(0, pos);
}
static bool PathExists(const std::string& path) {
#ifdef _WIN32
struct _stat buffer;
return (_stat(path.c_str(), &buffer) == 0);
#else
struct stat buffer;
return (stat(path.c_str(), &buffer) == 0);
#endif // !_WIN32
}
static void MkDir(const std::string& path) {
if (PathExists(path)) return;
int ret = 0;
#ifdef _WIN32
ret = _mkdir(path.c_str());
#else
ret = mkdir(path.c_str(), 0755);
#endif // !_WIN32
if (ret != 0) {
std::string path_error(path);
path_error += " mkdir failed!";
throw std::runtime_error(path_error);
}
}
static void MkDirs(const std::string& path) {
if (path.empty()) return;
if (PathExists(path)) return;
MkDirs(DirName(path));
MkDir(path);
}
void PredictVideo(const std::string& video_path,
PaddleDetection::ObjectDetector* det,
PaddleDetection::KeyPointDetector* keypoint,
const std::string& output_dir = "output") {
// Open video
cv::VideoCapture capture;
std::string video_out_name = "output.mp4";
if (FLAGS_camera_id != -1) {
capture.open(FLAGS_camera_id);
} else {
capture.open(video_path.c_str());
video_out_name =
video_path.substr(video_path.find_last_of(OS_PATH_SEP) + 1);
}
if (!capture.isOpened()) {
printf("can not open video : %s\n", video_path.c_str());
return;
}
// Get Video info : resolution, fps, frame count
int video_width = static_cast<int>(capture.get(CV_CAP_PROP_FRAME_WIDTH));
int video_height = static_cast<int>(capture.get(CV_CAP_PROP_FRAME_HEIGHT));
int video_fps = static_cast<int>(capture.get(CV_CAP_PROP_FPS));
int video_frame_count =
static_cast<int>(capture.get(CV_CAP_PROP_FRAME_COUNT));
printf("fps: %d, frame_count: %d\n", video_fps, video_frame_count);
// Create VideoWriter for output
cv::VideoWriter video_out;
std::string video_out_path(output_dir);
if (output_dir.rfind(OS_PATH_SEP) != output_dir.size() - 1) {
video_out_path += OS_PATH_SEP;
}
video_out_path += video_out_name;
video_out.open(video_out_path.c_str(),
0x00000021,
video_fps,
cv::Size(video_width, video_height),
true);
if (!video_out.isOpened()) {
printf("create video writer failed!\n");
return;
}
PaddleDetection::PoseSmooth smoother =
PaddleDetection::PoseSmooth(video_width, video_height);
std::vector<PaddleDetection::ObjectResult> result;
std::vector<int> bbox_num;
std::vector<double> det_times;
auto labels = det->GetLabelList();
auto colormap = PaddleDetection::GenerateColorMap(labels.size());
// Store keypoint results
std::vector<PaddleDetection::KeyPointResult> result_kpts;
std::vector<cv::Mat> imgs_kpts;
std::vector<std::vector<float>> center_bs;
std::vector<std::vector<float>> scale_bs;
std::vector<int> colormap_kpts = PaddleDetection::GenerateColorMap(20);
// Capture all frames and do inference
cv::Mat frame;
int frame_id = 1;
bool is_rbox = false;
while (capture.read(frame)) {
if (frame.empty()) {
break;
}
std::vector<cv::Mat> imgs;
imgs.push_back(frame);
printf("detect frame: %d\n", frame_id);
det->Predict(imgs, FLAGS_threshold, 0, 1, &result, &bbox_num, &det_times);
std::vector<PaddleDetection::ObjectResult> out_result;
for (const auto& item : result) {
if (item.confidence < FLAGS_threshold || item.class_id == -1) {
continue;
}
out_result.push_back(item);
if (item.rect.size() > 6) {
is_rbox = true;
printf("class=%d confidence=%.4f rect=[%d %d %d %d %d %d %d %d]\n",
item.class_id,
item.confidence,
item.rect[0],
item.rect[1],
item.rect[2],
item.rect[3],
item.rect[4],
item.rect[5],
item.rect[6],
item.rect[7]);
} else {
printf("class=%d confidence=%.4f rect=[%d %d %d %d]\n",
item.class_id,
item.confidence,
item.rect[0],
item.rect[1],
item.rect[2],
item.rect[3]);
}
}
if (keypoint) {
result_kpts.clear();
int imsize = out_result.size();
for (int i = 0; i < imsize; i++) {
auto item = out_result[i];
cv::Mat crop_img;
std::vector<double> keypoint_times;
std::vector<int> rect = {
item.rect[0], item.rect[1], item.rect[2], item.rect[3]};
std::vector<float> center;
std::vector<float> scale;
if (item.class_id == 0) {
PaddleDetection::CropImg(frame, crop_img, rect, center, scale);
center_bs.emplace_back(center);
scale_bs.emplace_back(scale);
imgs_kpts.emplace_back(crop_img);
}
if (imgs_kpts.size() == FLAGS_batch_size_keypoint ||
((i == imsize - 1) && !imgs_kpts.empty())) {
keypoint->Predict(imgs_kpts,
center_bs,
scale_bs,
FLAGS_threshold,
0,
1,
&result_kpts,
&keypoint_times);
imgs_kpts.clear();
center_bs.clear();
scale_bs.clear();
}
}
if (result_kpts.size() == 1) {
for (int i = 0; i < result_kpts.size(); i++) {
result_kpts[i] = smoother.smooth_process(&(result_kpts[i]));
}
}
cv::Mat out_im = VisualizeKptsResult(frame, result_kpts, colormap_kpts);
video_out.write(out_im);
} else {
// Visualization result
cv::Mat out_im = PaddleDetection::VisualizeResult(
frame, out_result, labels, colormap, is_rbox);
video_out.write(out_im);
}
frame_id += 1;
}
capture.release();
video_out.release();
}
void PredictImage(const std::vector<std::string> all_img_paths,
const int batch_size,
const double threshold,
const bool run_benchmark,
PaddleDetection::ObjectDetector* det,
PaddleDetection::KeyPointDetector* keypoint,
const std::string& output_dir = "output") {
std::vector<double> det_t = {0, 0, 0};
int steps = ceil(static_cast<float>(all_img_paths.size()) / batch_size);
int kpts_imgs = 0;
std::vector<double> keypoint_t = {0, 0, 0};
printf("total images = %d, batch_size = %d, total steps = %d\n",
all_img_paths.size(),
batch_size,
steps);
for (int idx = 0; idx < steps; idx++) {
std::vector<cv::Mat> batch_imgs;
int left_image_cnt = all_img_paths.size() - idx * batch_size;
if (left_image_cnt > batch_size) {
left_image_cnt = batch_size;
}
for (int bs = 0; bs < left_image_cnt; bs++) {
std::string image_file_path = all_img_paths.at(idx * batch_size + bs);
cv::Mat im = cv::imread(image_file_path, 1);
batch_imgs.insert(batch_imgs.end(), im);
}
// Store all detected result
std::vector<PaddleDetection::ObjectResult> result;
std::vector<int> bbox_num;
std::vector<double> det_times;
// Store keypoint results
std::vector<PaddleDetection::KeyPointResult> result_kpts;
std::vector<cv::Mat> imgs_kpts;
std::vector<std::vector<float>> center_bs;
std::vector<std::vector<float>> scale_bs;
std::vector<int> colormap_kpts = PaddleDetection::GenerateColorMap(20);
bool is_rbox = false;
if (run_benchmark) {
det->Predict(
batch_imgs, threshold, 10, 10, &result, &bbox_num, &det_times);
} else {
det->Predict(batch_imgs, threshold, 0, 1, &result, &bbox_num, &det_times);
}
// get labels and colormap
auto labels = det->GetLabelList();
auto colormap = PaddleDetection::GenerateColorMap(labels.size());
int item_start_idx = 0;
for (int i = 0; i < left_image_cnt; i++) {
cv::Mat im = batch_imgs[i];
std::vector<PaddleDetection::ObjectResult> im_result;
int detect_num = 0;
for (int j = 0; j < bbox_num[i]; j++) {
PaddleDetection::ObjectResult item = result[item_start_idx + j];
if (item.confidence < threshold || item.class_id == -1) {
continue;
}
detect_num += 1;
im_result.push_back(item);
if (item.rect.size() > 6) {
is_rbox = true;
printf("class=%d confidence=%.4f rect=[%d %d %d %d %d %d %d %d]\n",
item.class_id,
item.confidence,
item.rect[0],
item.rect[1],
item.rect[2],
item.rect[3],
item.rect[4],
item.rect[5],
item.rect[6],
item.rect[7]);
} else {
printf("class=%d confidence=%.4f rect=[%d %d %d %d]\n",
item.class_id,
item.confidence,
item.rect[0],
item.rect[1],
item.rect[2],
item.rect[3]);
}
}
std::cout << all_img_paths.at(idx * batch_size + i)
<< " The number of detected box: " << detect_num << std::endl;
item_start_idx = item_start_idx + bbox_num[i];
std::vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_JPEG_QUALITY);
compression_params.push_back(95);
std::string output_path(output_dir);
if (output_dir.rfind(OS_PATH_SEP) != output_dir.size() - 1) {
output_path += OS_PATH_SEP;
}
std::string image_file_path = all_img_paths.at(idx * batch_size + i);
if (keypoint) {
int imsize = im_result.size();
for (int i = 0; i < imsize; i++) {
auto item = im_result[i];
cv::Mat crop_img;
std::vector<double> keypoint_times;
std::vector<int> rect = {
item.rect[0], item.rect[1], item.rect[2], item.rect[3]};
std::vector<float> center;
std::vector<float> scale;
if (item.class_id == 0) {
PaddleDetection::CropImg(im, crop_img, rect, center, scale);
center_bs.emplace_back(center);
scale_bs.emplace_back(scale);
imgs_kpts.emplace_back(crop_img);
kpts_imgs += 1;
}
if (imgs_kpts.size() == FLAGS_batch_size_keypoint ||
((i == imsize - 1) && !imgs_kpts.empty())) {
if (run_benchmark) {
keypoint->Predict(imgs_kpts,
center_bs,
scale_bs,
0.5,
10,
10,
&result_kpts,
&keypoint_times);
} else {
keypoint->Predict(imgs_kpts,
center_bs,
scale_bs,
0.5,
0,
1,
&result_kpts,
&keypoint_times);
}
imgs_kpts.clear();
center_bs.clear();
scale_bs.clear();
keypoint_t[0] += keypoint_times[0];
keypoint_t[1] += keypoint_times[1];
keypoint_t[2] += keypoint_times[2];
}
}
std::string kpts_savepath =
output_path + "keypoint_" +
image_file_path.substr(image_file_path.find_last_of(OS_PATH_SEP) + 1);
cv::Mat kpts_vis_img =
VisualizeKptsResult(im, result_kpts, colormap_kpts);
cv::imwrite(kpts_savepath, kpts_vis_img, compression_params);
printf("Visualized output saved as %s\n", kpts_savepath.c_str());
} else {
// Visualization result
cv::Mat vis_img = PaddleDetection::VisualizeResult(
im, im_result, labels, colormap, is_rbox);
std::string det_savepath =
output_path +
image_file_path.substr(image_file_path.find_last_of(OS_PATH_SEP) + 1);
cv::imwrite(det_savepath, vis_img, compression_params);
printf("Visualized output saved as %s\n", det_savepath.c_str());
}
}
det_t[0] += det_times[0];
det_t[1] += det_times[1];
det_t[2] += det_times[2];
}
PrintBenchmarkLog(det_t, all_img_paths.size());
if (keypoint) {
PrintKptsBenchmarkLog(keypoint_t, kpts_imgs);
}
}
int main(int argc, char** argv) {
// Parsing command-line
google::ParseCommandLineFlags(&argc, &argv, true);
if (FLAGS_model_dir.empty() ||
(FLAGS_image_file.empty() && FLAGS_image_dir.empty() &&
FLAGS_video_file.empty())) {
std::cout << "Usage: ./main --model_dir=/PATH/TO/INFERENCE_MODEL/ "
"(--model_dir_keypoint=/PATH/TO/INFERENCE_MODEL/)"
<< "--image_file=/PATH/TO/INPUT/IMAGE/" << std::endl;
return -1;
}
if (!(FLAGS_run_mode == "paddle" || FLAGS_run_mode == "trt_fp32" ||
FLAGS_run_mode == "trt_fp16" || FLAGS_run_mode == "trt_int8")) {
std::cout
<< "run_mode should be 'paddle', 'trt_fp32', 'trt_fp16' or 'trt_int8'.";
return -1;
}
transform(FLAGS_device.begin(),
FLAGS_device.end(),
FLAGS_device.begin(),
::toupper);
if (!(FLAGS_device == "CPU" || FLAGS_device == "GPU" ||
FLAGS_device == "XPU")) {
std::cout << "device should be 'CPU', 'GPU' or 'XPU'.";
return -1;
}
if (FLAGS_use_gpu) {
std::cout << "Deprecated, please use `--device` to set the device you want "
"to run.";
return -1;
}
// Load model and create a object detector
PaddleDetection::ObjectDetector det(FLAGS_model_dir,
FLAGS_device,
FLAGS_use_mkldnn,
FLAGS_cpu_threads,
FLAGS_run_mode,
FLAGS_batch_size,
FLAGS_gpu_id,
FLAGS_trt_min_shape,
FLAGS_trt_max_shape,
FLAGS_trt_opt_shape,
FLAGS_trt_calib_mode);
PaddleDetection::KeyPointDetector* keypoint = nullptr;
if (!FLAGS_model_dir_keypoint.empty()) {
keypoint = new PaddleDetection::KeyPointDetector(FLAGS_model_dir_keypoint,
FLAGS_device,
FLAGS_use_mkldnn,
FLAGS_cpu_threads,
FLAGS_run_mode,
FLAGS_batch_size_keypoint,
FLAGS_gpu_id,
FLAGS_trt_min_shape,
FLAGS_trt_max_shape,
FLAGS_trt_opt_shape,
FLAGS_trt_calib_mode,
FLAGS_use_dark);
}
// Do inference on input video or image
if (!PathExists(FLAGS_output_dir)) {
MkDirs(FLAGS_output_dir);
}
if (!FLAGS_video_file.empty() || FLAGS_camera_id != -1) {
PredictVideo(FLAGS_video_file, &det, keypoint, FLAGS_output_dir);
} else if (!FLAGS_image_file.empty() || !FLAGS_image_dir.empty()) {
std::vector<std::string> all_img_paths;
std::vector<cv::String> cv_all_img_paths;
if (!FLAGS_image_file.empty()) {
all_img_paths.push_back(FLAGS_image_file);
if (FLAGS_batch_size > 1) {
std::cout << "batch_size should be 1, when set `image_file`."
<< std::endl;
return -1;
}
} else {
cv::glob(FLAGS_image_dir, cv_all_img_paths);
for (const auto& img_path : cv_all_img_paths) {
all_img_paths.push_back(img_path);
}
}
PredictImage(all_img_paths,
FLAGS_batch_size,
FLAGS_threshold,
FLAGS_run_benchmark,
&det,
keypoint,
FLAGS_output_dir);
}
delete keypoint;
keypoint = nullptr;
return 0;
}
| PaddleDetection/deploy/cpp/src/main_keypoint.cc/0 | {
"file_path": "PaddleDetection/deploy/cpp/src/main_keypoint.cc",
"repo_id": "PaddleDetection",
"token_count": 11207
} | 43 |
[English](README.md) | 简体中文
# PP-PicoDet + PP-TinyPose (Pipeline) CPU-GPU Python Deployment Example
This directory provides `det_keypoint_unite_infer.py`, a `single-image multi-person keypoint detection` example that quickly deploys the multi-person pipeline PP-PicoDet + PP-TinyPose on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the scripts below to complete the deployment. **Note**: for standalone deployment of the PP-TinyPose single model, please refer to [PP-TinyPose single model](../README.md).
## 1. Prepare the Deployment Environment
Before deployment, confirm your hardware and software environment and download the precompiled deployment library. Please refer to the [FastDeploy installation document](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装) to install the precompiled FastDeploy library.
## 2. Prepare the Deployment Model
Before deployment, prepare the inference model you want to run. You can either use the [pre-exported inference models](../../README.md) or [export a PaddleDetection deployment model yourself](../../README.md).
## 3. Run the Deployment Example
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection/deploy/fastdeploy/cpu-gpu/python/det_keypoint_unite
# Note: if the fastdeploy test code below is not found on the current branch, switch to the develop branch
# git checkout develop
# Download the PP-TinyPose model files and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
tar -xvf PP_TinyPose_256x192_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
tar -xvf PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/000000018491.jpg
# CPU inference
python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image_file 000000018491.jpg --device cpu
# GPU inference
python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image_file 000000018491.jpg --device gpu
# Paddle-TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes some time; please be patient)
python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image_file 000000018491.jpg --device gpu --use_trt True
```
After running, the visualized result is shown below:
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/196393343-eeb6b68f-0bc6-4927-871f-5ac610da7293.jpeg", width=640px, height=427px />
</div>
- For how to use different inference backends through FastDeploy and how to use different hardware, please refer to: [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
## 4. Deployment Example Options
| Parameter | Description | Default |
|---|---|---|
|--tinypose_model_dir|Path of the keypoint model directory|None|
|--det_model_dir|Path of the detection model directory|None|
|--image_file|Path of the test image|None|
|--device|Hardware to run on; supported values are `[cpu, gpu]`. When set to cpu, it can run on x86 CPUs, ARM CPUs, etc.|cpu|
|--use_trt|Whether to use TensorRT; only effective when device is gpu|False|
## 5. PPTinyPose Pipeline Python Interface
```python
fd.pipeline.PPTinyPose(det_model=None, pptinypose_model=None)
```
Loads and initializes the PPTinyPose pipeline model, where det_model is a detection model initialized with `fd.vision.detection.PicoDet` and pptinypose_model is a keypoint detection model initialized with `fd.vision.keypointdetection.PPTinyPose`.
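A minimal usage sketch is shown below; the model directories and image name follow the download commands above, and the exact predict interface should be checked against the FastDeploy API documentation linked in the next section:
```python
import cv2
import fastdeploy as fd

det_model = fd.vision.detection.PicoDet(
    "PP_PicoDet_V2_S_Pedestrian_320x320_infer/model.pdmodel",
    "PP_PicoDet_V2_S_Pedestrian_320x320_infer/model.pdiparams",
    "PP_PicoDet_V2_S_Pedestrian_320x320_infer/infer_cfg.yml")
tinypose_model = fd.vision.keypointdetection.PPTinyPose(
    "PP_TinyPose_256x192_infer/model.pdmodel",
    "PP_TinyPose_256x192_infer/model.pdiparams",
    "PP_TinyPose_256x192_infer/infer_cfg.yml")

# Chain the detector and the keypoint model into one pipeline and run it on an image.
pipeline = fd.pipeline.PPTinyPose(
    det_model=det_model, pptinypose_model=tinypose_model)
im = cv2.imread("000000018491.jpg")
result = pipeline.predict(im)
print(result)
```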
## 6. More Guides
- [PaddleDetection Python API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/object_detection.html)
- [Overview of deploying PaddleDetection models with FastDeploy](../../../)
- [C++ deployment](../../cpp/)
## 7. FAQ
- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Using Intel GPUs (discrete/integrated graphics)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [Build the CPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
- [Build the GPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
- [Build the Jetson deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md) | PaddleDetection/deploy/fastdeploy/cpu-gpu/python/det_keypoint_unite/README.md/0 | {
"file_path": "PaddleDetection/deploy/fastdeploy/cpu-gpu/python/det_keypoint_unite/README.md",
"repo_id": "PaddleDetection",
"token_count": 2463
} | 44 |
import fastdeploy as fd
import cv2
import os
def parse_arguments():
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"--model_dir",
required=True,
help="path of PP-TinyPose model directory")
parser.add_argument(
"--image_file", required=True, help="path of test image file.")
return parser.parse_args()
args = parse_arguments()
runtime_option = fd.RuntimeOption()
runtime_option.use_kunlunxin()
tinypose_model_file = os.path.join(args.model_dir, "model.pdmodel")
tinypose_params_file = os.path.join(args.model_dir, "model.pdiparams")
tinypose_config_file = os.path.join(args.model_dir, "infer_cfg.yml")
# setup runtime
tinypose_model = fd.vision.keypointdetection.PPTinyPose(
tinypose_model_file,
tinypose_params_file,
tinypose_config_file,
runtime_option=runtime_option)
# predict
im = cv2.imread(args.image_file)
tinypose_result = tinypose_model.predict(im)
print("Paddle TinyPose Result:\n", tinypose_result)
# visualize
vis_im = fd.vision.vis_keypoint_detection(
im, tinypose_result, conf_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
print("TinyPose visualized result save in ./visualized_result.jpg")
| PaddleDetection/deploy/fastdeploy/kunlunxin/python/pptinypose_infer.py/0 | {
"file_path": "PaddleDetection/deploy/fastdeploy/kunlunxin/python/pptinypose_infer.py",
"repo_id": "PaddleDetection",
"token_count": 470
} | 45 |
# PaddleDetection SOPHGO C++ Deployment Example
This directory provides `infer.cc`, an example that quickly deploys PP-YOLOE with acceleration on the SOPHGO BM1684x board. The deployment logic of PP-YOLOv8 and PicoDet is similar; you only need to switch the model.
## 1. Prepare the Deployment Environment
Before deployment, you need to compile the inference library for SOPHGO hardware yourself; please refer to the document [SOPHGO hardware deployment environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#算能硬件部署环境).
## 2. Prepare the Deployment Model
Before deployment, prepare the inference model you want to run. You can either use the [pre-exported inference models](../README.md) or [export a PaddleDetection deployment model yourself](../README.md).
## 3. Directory Layout
This example consists of the following parts:
```text
.
├── CMakeLists.txt
├── fastdeploy-sophgo # build output folder
├── image # folder for test images
├── infer.cc
└── model # folder for model files
```
## 4. Run the Deployment Example
### 4.1 Compile and Copy the SDK to the thirdpartys Folder
Please refer to [Compiling the SOPHGO deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/sophgo.md) to compile the SDK. After compilation, the fastdeploy-sophgo directory will be generated under the build directory.
### 4.2 Copy the Model and Configuration Files to the model Folder
Convert the Paddle model to a SOPHGO bmodel; for the conversion steps, refer to the [document](../README.md).
Copy the converted SOPHGO bmodel files into the model folder.
### 4.3 Prepare Test Images in the image Folder
```bash
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
cp 000000014439.jpg ./images
```
### 4.4 Compile the Example
```bash
cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-sophgo
make
```
### 4.5 Run the Example
```bash
# PP-YOLOE inference example
./infer_demo model images/000000014439.jpg
```
## 5. More Guides
- [Overview of deploying PaddleDetection models with FastDeploy](../../)
- [Python deployment](../python)
- [Model conversion](../README.md) | PaddleDetection/deploy/fastdeploy/sophgo/cpp/README.md/0 | {
"file_path": "PaddleDetection/deploy/fastdeploy/sophgo/cpp/README.md",
"repo_id": "PaddleDetection",
"token_count": 1196
} | 46 |
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "include/config_parser.h"
namespace PaddleDetection {
void load_jsonf(std::string jsonfile, Json::Value &jsondata) {
std::ifstream ifs;
ifs.open(jsonfile);
Json::CharReaderBuilder builder;
builder["collectComments"] = true;
JSONCPP_STRING errs;
if (!parseFromStream(builder, ifs, &jsondata, &errs)) {
std::cout << errs << std::endl;
return;
}
}
} // namespace PaddleDetection
| PaddleDetection/deploy/lite/src/config_parser.cc/0 | {
"file_path": "PaddleDetection/deploy/lite/src/config_parser.cc",
"repo_id": "PaddleDetection",
"token_count": 318
} | 47 |
crop_thresh: 0.5
visual: True
warmup_frame: 50
DET:
model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip
batch_size: 1
MOT:
model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip
tracker_config: deploy/pipeline/config/tracker_config.yml
batch_size: 1
enable: True
VEHICLE_PLATE:
det_model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_det_infer.tar.gz
det_limit_side_len: 736
det_limit_type: "min"
rec_model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/ch_PP-OCRv3_rec_infer.tar.gz
rec_image_shape: [3, 48, 320]
rec_batch_num: 6
word_dict_path: deploy/pipeline/ppvehicle/rec_word_dict.txt
enable: True
| PaddleDetection/deploy/pipeline/config/examples/infer_cfg_vehicle_plate.yml/0 | {
"file_path": "PaddleDetection/deploy/pipeline/config/examples/infer_cfg_vehicle_plate.yml",
"repo_id": "PaddleDetection",
"token_count": 346
} | 48 |
English | [简体中文](pphuman_attribute.md)
# Attribute Recognition Modules of PP-Human
Pedestrian attribute recognition is widely used in smart communities, industry, and transportation monitoring. PP-Human integrates an attribute recognition module covering gender, age, hats, glasses, clothing and more, up to 26 attributes in total. Pre-trained models are also provided and can be downloaded and used directly.
| Task | Algorithm | Precision | Inference Speed(ms) | Download Link |
|:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: |
| High-Precision Model | PP-HGNet_small | mA: 95.4 | per person 1.54ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.tar) |
| Fast Model | PP-LCNet_x1_0 | mA: 94.5 | per person 0.54ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.tar) |
| Balanced Model | PP-HGNet_tiny | mA: 95.2 | per person 1.14ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_person_attribute_952_infer.tar) |
1. The precision of pedestrian attribute analysis is obtained by training and testing on a dataset consisting of [PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset), [RAPv2](http://www.rapdataset.com/rapv2.html), [PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html) and some business data.
2. The inference speed is measured on a V100 GPU with TensorRT FP16.
3. The attribute model runs on tracking results; please download a tracking model from the [MOT page](./pphuman_mot_en.md). Both the high-precision and the fast model are available.
4. Unzip the models and place them in the `PaddleDetection/output_inference/` directory.
## Instructions
1. Download the model from the link in the table above, unzip it to ```./output_inference```, and set "enable: True" for ATTR in infer_cfg_pphuman.yml.
The meaning of the related fields in `infer_cfg_pphuman.yml` is:
```
ATTR: #module name
model_dir: output_inference/PPLCNet_x1_0_person_attribute_945_infer/ #model path
  batch_size: 8  #maximum batch size for inference
enable: False #whether to enable this model
```
2. When the input is an image, run the following commands (please refer to [QUICK_STARTED-Parameters](./PPHuman_QUICK_STARTED.md#41-参数说明) for more details):
```python
#single image
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--image_file=test_image.jpg \
--device=gpu \
#image directory
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--image_dir=images/ \
--device=gpu \
```
3. When the input is a video, run the following commands:
```python
#a single video file
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_file=test_video.mp4 \
--device=gpu \
#directory of videos
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
--video_dir=test_videos/ \
--device=gpu \
```
4. If you want to change the model path, there are two methods:
- Method 1: configure the model paths in ```./deploy/pipeline/config/infer_cfg_pphuman.yml```; for attribute recognition models, modify the configuration under the ATTR field.
- Method 2: add `-o ATTR.model_dir` to the command line after --config to override the model path:
```python
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
-o ATTR.model_dir=output_inference/PPLCNet_x1_0_person_attribute_945_infer/\
--video_file=test_video.mp4 \
--device=gpu
```
The test result is:
<div width="600" align="center">
<img src="https://user-images.githubusercontent.com/48054808/159898428-5bda0831-7249-4889-babd-9165f26f664d.gif"/>
</div>
Data source and copyright: Skyinfor Technology. Thanks for providing real-world scenario data, which are used here for academic research only.
## Introduction to the Solution
1. The PP-YOLOE model is used for object detection / multi-object tracking to obtain detection boxes from the input images/videos. For details, please refer to the document [PP-YOLOE](../../../configs/ppyoloe).
2. Crop every pedestrian from the input images using the coordinates of the detection boxes.
3. Recognize the attributes of each pedestrian with the attribute recognition model. The labels are the same as those in the PA100k dataset, listed below:
```
- Gender
- Age: Less than 18; 18-60; Over 60
- Orientation: Front; Back; Side
- Accessories: Glasses; Hat; None
- HoldObjectsInFront: Yes; No
- Bag: BackPack; ShoulderBag; HandBag
- TopStyle: UpperStride; UpperLogo; UpperPlaid; UpperSplice
- BottomStyle: LowerStripe; LowerPattern
- ShortSleeve: Yes; No
- LongSleeve: Yes; No
- LongCoat: Yes; No
- Trousers: Yes; No
- Shorts: Yes; No
- Skirt&Dress: Yes; No
- Boots: Yes; No
```
4. The attribute recognition model is [StrongBaseline](https://arxiv.org/pdf/2107.03576.pdf), a multi-label classification network built on PP-HGNet / PP-LCNet, with a weighted BCE loss introduced to improve accuracy.
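A minimal sketch of a weighted multi-label BCE loss of the kind described above is given below; it is illustrative only and the actual StrongBaseline implementation may differ (the exp-based weighting by positive-label frequency is one common choice):
```python
import numpy as np

def weighted_bce(logits, labels, pos_ratio, eps=1e-12):
    """logits, labels: (N, 26) arrays; pos_ratio: (26,) positive frequency per attribute."""
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid per attribute
    w_pos = np.exp(1.0 - pos_ratio)         # up-weight rare positive labels
    w_neg = np.exp(pos_ratio)               # up-weight rare negative labels
    loss = -(w_pos * labels * np.log(probs + eps) +
             w_neg * (1.0 - labels) * np.log(1.0 - probs + eps))
    return loss.mean()
```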
## Reference
```
@article{jia2020rethinking,
title={Rethinking of pedestrian attribute recognition: Realistic datasets with efficient method},
author={Jia, Jian and Huang, Houjing and Yang, Wenjie and Chen, Xiaotang and Huang, Kaiqi},
journal={arXiv preprint arXiv:2005.11909},
year={2020}
}
```
| PaddleDetection/deploy/pipeline/docs/tutorials/pphuman_attribute_en.md/0 | {
"file_path": "PaddleDetection/deploy/pipeline/docs/tutorials/pphuman_attribute_en.md",
"repo_id": "PaddleDetection",
"token_count": 2662
} | 49 |
English | [简体中文](ppvehicle_retrograde.md)
# PP-Vehicle vehicle retrograde identification module
Vehicle retrograde (wrong-way driving) recognition is widely used in smart cities, intelligent transportation and related fields. PP-Vehicle integrates a vehicle retrograde recognition module to identify whether a vehicle is driving against the traffic direction.
| Task | Algorithm | Precision | Inference Speed | Download |
|-----------|------|-----------|----------|---------------|
| Vehicle detection/tracking | PP-YOLOE | mAP 63.9 | 38.67ms | [infer deploy model](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) |
| Lane line segmentation | PP-liteseg | mIou 32.69 | 47 ms | [infer deploy model](https://bj.bcebos.com/v1/paddledet/models/pipeline/pp_lite_stdc2_bdd100k.zip) |
Notes:
1. The inference speed of the vehicle detection/tracking model is measured on an NVIDIA T4 with TensorRT FP16, and includes data preprocessing, model inference and post-processing.
2. The vehicle detection/tracking model is trained and evaluated on [VeRi](https://www.v7labs.com/open-datasets/veri-dataset).
3. The inference speed of the lane line segmentation model is measured on a Tesla P40 with Python inference, and includes data preprocessing, model inference and post-processing.
4. The lane line model is trained and evaluated on [BDD100K-LaneSeg](https://bdd-data.berkeley.edu/portal.html#download) and [Apollo Scape](http://apolloscape.auto/lane_segmentation.html#to_dataset_href). The label data of the two datasets is available at [Lane_dataset_label](https://bj.bcebos.com/v1/paddledet/data/mot/bdd100k/lane_dataset_label.zip).
## Instructions
### Description of Configuration
The parameters related to vehicle retrograde in the [config file](../../config/infer_cfg_ppvehicle.yml) are as follows:
```
LANE_SEG:
lane_seg_config: deploy/pipeline/config/lane_seg_config.yml #lane line seg config file
model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/pp_lite_stdc2_bdd100k.zip #model path
VEHICLE_RETROGRADE:
frame_len: 8 #Number of sampling frames
sample_freq: 7 #sampling frequency
enable: True #Whether to enable the funcion
filter_horizontal_flag: False #Whether to filter vehicles in horizontal direction
  keep_right_flag: True     #Assume the right-hand driving rule; set to False where vehicles keep to the left
  deviation: 23             #Angle threshold for filtering vehicles moving in the horizontal direction; vehicles exceeding this angle are filtered
move_scale: 0.01 #Filter the threshold value of stationary vehicles. If the vehicle moving pixel is greater than the image diagonal * move_scale, the vehicle is considered moving, otherwise, the vehicle is stationary
fence_line: [] #Lane centerline coordinates, format[x1,y1,x2,y2] and y2>y1. If it is empty, the program will automatically judge according to the direction of traffic flow
```
The parameters related to lane line segmentation in the [lane line seg config file](../../config/lane_seg_config.yml) are as follows:
```
type: PLSLaneseg #Select segmentation Model
PLSLaneseg:
batch_size: 1 #image batch_size
device: gpu #device is gpu or cpu
filter_flag: True #Whether to filter the horizontal direction road route
horizontal_filtration_degree: 23 #Filter the threshold value of the lane line in the horizontal direction. When the difference between the maximum inclination angle and the minimum inclination angle of the segmented lane line is less than the threshold value, no filtering is performed
horizontal_filtering_threshold: 0.25 #Determine the threshold value for separating the vertical direction from the horizontal direction thr=(min_degree+max_degree) * 0.25 Divide the lane line into vertical direction and horizontal direction according to the comparison between the gradient angle of the lane line and thr
```
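The horizontal filtering controlled by `horizontal_filtration_degree` and `horizontal_filtering_threshold` can be sketched as follows; this is an illustration of the described rule rather than the deployed implementation, with lane lines assumed to be given as `[x1, y1, x2, y2]`:
```python
import math

def filter_horizontal_lanes(lanes, filtration_degree=23, threshold_ratio=0.25):
    """Drop near-horizontal lane lines when the spread of inclination angles is large enough."""
    angles = [abs(math.degrees(math.atan2(y2 - y1, x2 - x1))) for x1, y1, x2, y2 in lanes]
    angles = [a if a <= 90 else 180 - a for a in angles]   # fold into [0, 90] degrees
    if not angles or max(angles) - min(angles) < filtration_degree:
        return lanes                                       # small angle spread: keep everything
    thr = (min(angles) + max(angles)) * threshold_ratio    # split vertical vs. horizontal
    return [lane for lane, a in zip(lanes, angles) if a > thr]
```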
### How to Use
1. Download the 'vehicle detection/tracking' and 'lane line segmentation' inference deployment models from the model zoo above and unzip them into `./output_inference`. By default, the models are downloaded automatically; if you download them manually, set the model directory to your own storage path.
2. Set `enable: True` under `VEHICLE_RETROGRADE` in the config file to enable this function.
3. The vehicle retrograde recognition function requires video input; the start commands are as follows:
```bash
#For single video
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
-o VEHICLE_RETROGRADE.enable=true \
--video_file=test_video.mp4 \
--device=gpu
#For a folder containing one or multiple videos
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
-o VEHICLE_RETROGRADE.enable=true \
--video_dir=test_video \
--device=gpu
```
4. There are two ways to modify the model path:
- Method 1: set the path of each model in `./deploy/pipeline/config/infer_cfg_ppvehicle.yml`; for lane line segmentation, modify the path under the `LANE_SEG` field.
- Method 2: directly add `-o` to the command line to override the default model path in the configuration file:
```bash
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
--video_file=test_video.mp4 \
--device=gpu \
-o LANE_SEG.model_dir=output_inference/
VEHICLE_RETROGRADE.enable=true
```
The result is shown as follows:
<div width="1000" align="center">
<img src="https://raw.githubusercontent.com/LokeZhou/PaddleDetection/develop/deploy/pipeline/docs/images/vehicle_retrograde.gif"/>
</div>
**Note:**
- Condition for automatically determining the lane centerline: the sampled video segment contains two vehicles moving in opposite directions; once determined, the centerline is fixed and will not be updated;
- Due to the camera angle and the 2D perspective, the estimated lane centerline may be inaccurate;
- You can manually enter the centerline coordinates in the configuration file; see the example [infer_cfg_vehicle_violation.yml](../../config/examples/infer_cfg_vehicle_violation.yml).
## Introduction to the Solution
1. In the sampled video segment, whether a vehicle is retrograde is judged from the position of the lane centerline and the vehicle trajectory; a sketch of the underlying side-of-line test follows the flow chart below:
<div width="1000" align="center">
<img src="https://raw.githubusercontent.com/LokeZhou/PaddleDetection/develop/deploy/pipeline/docs/images/vehicle_retrograde_en.png"/>
</div>
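The core geometric test, deciding which side of the lane centerline a trajectory point lies on, reduces to a 2D cross product. The sketch below is illustrative only; the deployed module additionally filters stationary and horizontally moving vehicles as configured above:
```python
def side_of_line(fence_line, point):
    """fence_line: [x1, y1, x2, y2] with y2 > y1; point: (x, y) on the vehicle track.
    Returns +1 or -1 depending on which side of the centerline the point lies on."""
    x1, y1, x2, y2 = fence_line
    px, py = point
    cross = (x2 - x1) * (py - y1) - (px - x1) * (y2 - y1)
    return 1 if cross > 0 else -1

# A track whose side disagrees with the expected driving side (e.g. keep-right)
# over several sampled frames is flagged as retrograde.
```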
2. The lane line segmentation model uses the ultra-lightweight segmentation scheme of [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg). The training [labels](https://bj.bcebos.com/v1/paddledet/data/mot/bdd100k/lane_dataset_label.zip) are divided into four categories:
   0 Background
   1 Double yellow line
   2 Solid line
   3 Dashed line
   Dashed lines are filtered out during lane line recognition;
3. Lane lines are obtained by clustering the segmentation results, and horizontal lane lines are filtered out by default. If this is not desired, modify `filter_flag` in the [lane line seg config file](../../config/lane_seg_config.yml);
4. Horizontally moving vehicles are filtered out by default when judging retrograde vehicles. If this is not desired, modify `filter_horizontal_flag` in the [config file](../../config/infer_cfg_ppvehicle.yml);
5. Vehicles are judged under the right-hand driving rule by default. If this is not desired, modify `keep_right_flag` in the [config file](../../config/infer_cfg_ppvehicle.yml);
**Performance optimization measures:**
1. Depending on the camera's viewing angle, you can decide whether to filter horizontal lane lines and vehicles according to the actual scene;
2. The lane centerline can also be entered manually;
| PaddleDetection/deploy/pipeline/docs/tutorials/ppvehicle_retrograde_en.md/0 | {
"file_path": "PaddleDetection/deploy/pipeline/docs/tutorials/ppvehicle_retrograde_en.md",
"repo_id": "PaddleDetection",
"token_count": 2852
} | 50 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
import cv2
import numpy as np
# add deploy path of PaddleDetection to sys.path
parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
sys.path.insert(0, parent_path)
from python.infer import PredictConfig
from pptracking.python.det_infer import load_predictor
from python.utils import Timer
class ReID(object):
"""
ReID of SDE methods
Args:
pred_config (object): config of model, defined by `Config(model_dir)`
model_dir (str): root path of model.pdiparams, model.pdmodel and infer_cfg.yml
device (str): Choose the device you want to run, it can be: CPU/GPU/XPU/NPU, default is CPU
run_mode (str): mode of running(paddle/trt_fp32/trt_fp16)
batch_size (int): size of per batch in inference, default 50 means at most
50 sub images can be made a batch and send into ReID model
trt_min_shape (int): min shape for dynamic shape in trt
trt_max_shape (int): max shape for dynamic shape in trt
trt_opt_shape (int): opt shape for dynamic shape in trt
trt_calib_mode (bool): If the model is produced by TRT offline quantitative
calibration, trt_calib_mode need to set True
cpu_threads (int): cpu threads
enable_mkldnn (bool): whether to open MKLDNN
"""
def __init__(self,
model_dir,
device='CPU',
run_mode='paddle',
batch_size=50,
trt_min_shape=1,
trt_max_shape=1088,
trt_opt_shape=608,
trt_calib_mode=False,
cpu_threads=4,
enable_mkldnn=False):
self.pred_config = self.set_config(model_dir)
self.predictor, self.config = load_predictor(
model_dir,
run_mode=run_mode,
batch_size=batch_size,
min_subgraph_size=self.pred_config.min_subgraph_size,
device=device,
use_dynamic_shape=self.pred_config.use_dynamic_shape,
trt_min_shape=trt_min_shape,
trt_max_shape=trt_max_shape,
trt_opt_shape=trt_opt_shape,
trt_calib_mode=trt_calib_mode,
cpu_threads=cpu_threads,
enable_mkldnn=enable_mkldnn)
self.det_times = Timer()
self.cpu_mem, self.gpu_mem, self.gpu_util = 0, 0, 0
self.batch_size = batch_size
self.input_wh = (128, 256)
@classmethod
def init_with_cfg(cls, args, cfg):
return cls(model_dir=cfg['model_dir'],
batch_size=cfg['batch_size'],
device=args.device,
run_mode=args.run_mode,
trt_min_shape=args.trt_min_shape,
trt_max_shape=args.trt_max_shape,
trt_opt_shape=args.trt_opt_shape,
trt_calib_mode=args.trt_calib_mode,
cpu_threads=args.cpu_threads,
enable_mkldnn=args.enable_mkldnn)
def set_config(self, model_dir):
return PredictConfig(model_dir)
def check_img_quality(self, crop, bbox, xyxy):
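        # Heuristic crop quality: penalize boxes heavily overlapped by other
        # detections (occlusion) and boxes with unusual aspect ratio, then blend
        # with a brightness score; Laplacian sharpness is computed but not used
        # in the final score.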
if crop is None:
return None
#eclipse
eclipse_quality = 1.0
inner_rect = np.zeros(xyxy.shape)
inner_rect[:, :2] = np.maximum(xyxy[:, :2], bbox[None, :2])
inner_rect[:, 2:] = np.minimum(xyxy[:, 2:], bbox[None, 2:])
wh_array = inner_rect[:, 2:] - inner_rect[:, :2]
filt = np.logical_and(wh_array[:, 0] > 0, wh_array[:, 1] > 0)
wh_array = wh_array[filt]
if wh_array.shape[0] > 1:
eclipse_ratio = wh_array / (bbox[2:] - bbox[:2])
eclipse_area_ratio = eclipse_ratio[:, 0] * eclipse_ratio[:, 1]
ear_lst = eclipse_area_ratio.tolist()
ear_lst.sort(reverse=True)
eclipse_quality = 1.0 - ear_lst[1]
bbox_wh = (bbox[2:] - bbox[:2])
height_quality = bbox_wh[1] / (bbox_wh[0] * 2)
eclipse_quality = min(eclipse_quality, height_quality)
#definition
cropgray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
definition = int(cv2.Laplacian(cropgray, cv2.CV_64F, ksize=3).var())
brightness = int(cropgray.mean())
bd_quality = min(1., brightness / 50.)
eclipse_weight = 0.7
return eclipse_quality * eclipse_weight + bd_quality * (1 -
eclipse_weight)
def normal_crop(self, image, rect):
imgh, imgw, c = image.shape
label, conf, xmin, ymin, xmax, ymax = [int(x) for x in rect.tolist()]
xmin = max(0, xmin)
ymin = max(0, ymin)
xmax = min(imgw, xmax)
ymax = min(imgh, ymax)
if label != 0 or xmax <= xmin or ymax <= ymin:
print("Warning! label missed!!")
            return None  # keep the return type consistent with the success path
return image[ymin:ymax, xmin:xmax, :]
def crop_image_with_mot(self, image, mot_res):
res = mot_res['boxes']
crop_res = []
img_quality = []
rects = []
for box in res:
crop_image = self.normal_crop(image, box[1:])
quality_item = self.check_img_quality(crop_image, box[3:],
res[:, 3:])
if crop_image is not None:
crop_res.append(crop_image)
img_quality.append(quality_item)
rects.append(box)
return crop_res, img_quality, rects
def preprocess(self,
imgs,
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]):
im_batch = []
for img in imgs:
img = cv2.resize(img, self.input_wh)
img = img.astype('float32') / 255.
img -= np.array(mean)
img /= np.array(std)
im_batch.append(img.transpose((2, 0, 1)))
inputs = {}
inputs['x'] = np.array(im_batch).astype('float32')
return inputs
def predict(self, crops, repeats=1, add_timer=True, seq_name=''):
# preprocess
if add_timer:
self.det_times.preprocess_time_s.start()
inputs = self.preprocess(crops)
input_names = self.predictor.get_input_names()
for i in range(len(input_names)):
input_tensor = self.predictor.get_input_handle(input_names[i])
input_tensor.copy_from_cpu(inputs[input_names[i]])
if add_timer:
self.det_times.preprocess_time_s.end()
self.det_times.inference_time_s.start()
# model prediction
for i in range(repeats):
self.predictor.run()
output_names = self.predictor.get_output_names()
feature_tensor = self.predictor.get_output_handle(output_names[0])
pred_embs = feature_tensor.copy_to_cpu()
if add_timer:
self.det_times.inference_time_s.end(repeats=repeats)
self.det_times.postprocess_time_s.start()
if add_timer:
self.det_times.postprocess_time_s.end()
self.det_times.img_num += 1
return pred_embs
def predict_batch(self, imgs, batch_size=4):
batch_feat = []
for b in range(0, len(imgs), batch_size):
b_end = min(len(imgs), b + batch_size)
batch_imgs = imgs[b:b_end]
feat = self.predict(batch_imgs)
batch_feat.extend(feat.tolist())
return batch_feat
| PaddleDetection/deploy/pipeline/pphuman/reid.py/0 | {
"file_path": "PaddleDetection/deploy/pipeline/pphuman/reid.py",
"repo_id": "PaddleDetection",
"token_count": 4020
} | 51 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import math
class VehicleRetrogradeRecognizer(object):
def __init__(self, cfg):
self.cfg = cfg
self.filter_horizontal_flag = self.cfg['filter_horizontal_flag']
self.deviation = self.cfg['deviation']
self.move_scale = self.cfg['move_scale']
self.keep_right_flag = self.cfg['keep_right_flag']
        self.center_traj_retrograde = [{}]  # per-class record of track centers used by the retrograde recognizer
self.fence_line = None if len(self.cfg[
'fence_line']) == 0 else self.cfg['fence_line']
def update_center_traj(self, mot_res, max_len):
if mot_res is not None:
ids = mot_res['boxes'][:, 0]
scores = mot_res['boxes'][:, 2]
boxes = mot_res['boxes'][:, 3:]
boxes[:, 2] = boxes[:, 2] - boxes[:, 0]
boxes[:, 3] = boxes[:, 3] - boxes[:, 1]
else:
boxes = np.zeros([0, 4])
ids = np.zeros([0])
scores = np.zeros([0])
        # single class; still needs to be a defaultdict for plotting
num_classes = 1
online_tlwhs = defaultdict(list)
online_scores = defaultdict(list)
online_ids = defaultdict(list)
online_tlwhs[0] = boxes
online_ids[0] = ids
if mot_res is not None:
for cls_id in range(num_classes):
tlwhs = online_tlwhs[cls_id]
obj_ids = online_ids[cls_id]
for i, tlwh in enumerate(tlwhs):
x1, y1, w, h = tlwh
center = tuple(map(int, (x1 + w / 2., y1 + h)))
obj_id = int(obj_ids[i])
if self.center_traj_retrograde is not None:
if obj_id not in self.center_traj_retrograde[cls_id]:
self.center_traj_retrograde[cls_id][obj_id] = deque(
maxlen=max_len)
self.center_traj_retrograde[cls_id][obj_id].append(
center)
def get_angle(self, array):
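        # angle1: signed heading angle of the trajectory in degrees, in (-180, 180];
        # angle2: heading folded into [0, 90] so it can be compared with the lane direction.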
x1, y1, x2, y2 = array
a_x = x2 - x1
a_y = y2 - y1
angle1 = math.atan2(a_y, a_x)
angle1 = int(angle1 * 180 / math.pi)
a_x = x2 - x1 if y2 >= y1 else x1 - x2
a_y = y2 - y1 if y2 >= y1 else y1 - y2
angle2 = math.atan2(a_y, a_x)
angle2 = int(angle2 * 180 / math.pi)
if angle2 > 90:
angle2 = 180 - angle2
return angle1, angle2
def is_move(self, array, frame_shape):
x1, y1, x2, y2 = array
h, w, _ = frame_shape
if abs(x1 - x2) > w * self.move_scale or abs(y1 -
y2) > h * self.move_scale:
return True
else:
return False
def get_distance_point2line(self, point, line):
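        # Perpendicular distance from `point` to the infinite line through (line[0], line[1]) and (line[2], line[3]).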
line_point1, line_point2 = np.array(line[0:2]), np.array(line[2:])
vec1 = line_point1 - point
vec2 = line_point2 - point
distance = np.abs(np.cross(vec1, vec2)) / np.linalg.norm(line_point1 -
line_point2)
return distance
def driving_direction(self, line1, line2, is_init=False):
x1, y1 = line1[2] - line1[0], line1[3] - line1[1]
x2, y2 = line2[0] - line1[0], line2[1] - line1[1]
result = x1 * y2 - x2 * y1
distance = self.get_distance_point2line([x2, y2], line1)
        if result < 0:
            result = 1
        elif result == 0:
            # parallel/collinear case: fall back to the vertical direction of line2
            result = -1 if line2[3] >= line2[1] else 1
        else:
            result = -1
        return result, distance
def get_long_fence_line(self, h, w, line):
x1, y1, x2, y2 = line
if x1 == x2:
return [x1, 0, x1, h]
if y1 == y2:
return [0, y1, w, y1]
k = (y2 - y1) / (x2 - x1)
b = y1 - k * x1
if k == 1 and b == 0:
return [0, 0, w, h]
if k == -1 and b == 0:
return [w, 0, h, h]
top = [-b / k, 0]
left = [0, b]
right = [w, k * w + b]
bottom = [(h - b) / k, h]
candidate = np.array([top, left, right, bottom])
flag = np.array([0, 0, 0, 0])
if top[0] >= 0 and top[0] <= w:
flag[0] = 1
if left[1] > 0 and left[1] <= h:
flag[1] = 1
if right[1] > 0 and right[1] <= h:
flag[2] = 1
if bottom[0] > 0 and bottom[0] < w:
flag[3] = 1
ind = np.where(flag == 1)
candidate = candidate[ind]
candidate_sort = candidate[candidate[:, 1].argsort()]
return [
int(candidate_sort[0][0]), int(candidate_sort[0][1]),
int(candidate_sort[1][0]), int(candidate_sort[1][1])
]
def init_fence_line(self, lanes, pos_dir_traj, neg_dir_traj, frame_shape):
fence_lines_candidate = None
h, w, _ = frame_shape
abs_distance = h * h + w * w
        for lane in lanes[0]:
            pos_dir_distance = h * h + w * w
            neg_dir_distance = h * h + w * w
            pos_dir = 0
            neg_dir = 0
            for traj_line in pos_dir_traj:
                dir_result, distance = self.driving_direction(
                    lane, traj_line['traj_line'])
                if dir_result > 0:
                    pos_dir_distance = min(distance, pos_dir_distance)
                    pos_dir = 1
                else:
                    neg_dir_distance = min(distance, neg_dir_distance)
                    neg_dir = 1
            if pos_dir > 0 and neg_dir > 0:
                continue
            for traj_line in neg_dir_traj:
                dir_result, distance = self.driving_direction(
                    lane, traj_line['traj_line'])
                if dir_result > 0:
                    pos_dir_distance = min(distance, pos_dir_distance)
                    pos_dir = 1
                else:
                    neg_dir_distance = min(distance, neg_dir_distance)
                    neg_dir = 1
            if pos_dir > 0 and neg_dir > 0:
                diff_dir_distance = abs(pos_dir_distance - neg_dir_distance)
                if diff_dir_distance < abs_distance:
                    fence_lines_candidate = lane
                    abs_distance = diff_dir_distance
if fence_lines_candidate is None:
return None
fence_lines_candidate = self.get_long_fence_line(h, w,
fence_lines_candidate)
return fence_lines_candidate
def judge_retrograde(self, traj_line):
line1 = self.fence_line
x1, y1 = line1[2] - line1[0], line1[3] - line1[1]
line2 = traj_line['traj_line']
x2_start_point, y2_start_point = line2[0] - line1[0], line2[1] - line1[
1]
x2_end_point, y2_end_point = line2[2] - line1[0], line2[3] - line1[1]
start_point_dir = x1 * y2_start_point - x2_start_point * y1
end_point_dir = x1 * y2_end_point - x2_end_point * y1
if start_point_dir < 0:
start_point_dir = 1
elif start_point_dir == 0:
if line2[3] >= line2[1]:
start_point_dir = -1
else:
start_point_dir = 1
else:
start_point_dir = -1
if end_point_dir < 0:
end_point_dir = 1
elif end_point_dir == 0:
if line2[3] >= line2[1]:
end_point_dir = -1
else:
end_point_dir = 1
else:
end_point_dir = -1
if self.keep_right_flag:
driver_dir = -1 if (line2[3] - line2[1]) >= 0 else 1
else:
driver_dir = -1 if (line2[3] - line2[1]) <= 0 else 1
return start_point_dir == driver_dir and start_point_dir == end_point_dir
def mot_run(self, lanes_res, det_res, frame_shape):
det = det_res['boxes']
directions = lanes_res['directions']
lanes = lanes_res['output']
if len(directions) > 0:
direction = directions[0]
else:
return [], self.fence_line
if len(det) == 0:
return [], self.fence_line
traj_lines = []
pos_dir_traj = []
neg_dir_traj = []
for i in range(len(det)):
class_id = int(det[i][1])
mot_id = int(det[i][0])
traj_i = self.center_traj_retrograde[class_id][mot_id]
if len(traj_i) < 2:
continue
traj_line = {
'index': i,
'mot_id': mot_id,
'traj_line':
[traj_i[0][0], traj_i[0][1], traj_i[-1][0], traj_i[-1][1]]
}
if not self.is_move(traj_line['traj_line'], frame_shape):
continue
angle, angle_deviation = self.get_angle(traj_line['traj_line'])
if direction is not None and self.filter_horizontal_flag:
if abs(angle_deviation - direction) > self.deviation:
continue
traj_line['angle'] = angle
traj_lines.append(traj_line)
if self.fence_line is None:
if angle >= 0:
pos_dir_traj.append(traj_line)
else:
neg_dir_traj.append(traj_line)
if len(traj_lines) == 0:
return [], self.fence_line
if self.fence_line is None:
if len(pos_dir_traj) < 1 or len(neg_dir_traj) < 1:
return [], None
self.fence_line = self.init_fence_line(lanes, pos_dir_traj,
neg_dir_traj, frame_shape)
return [], self.fence_line
else:
retrograde_list = []
for traj_line in traj_lines:
if self.judge_retrograde(traj_line) == False:
retrograde_list.append(det[traj_line['index']][0])
return retrograde_list, self.fence_line
| PaddleDetection/deploy/pipeline/ppvehicle/vehicle_retrograde.py/0 | {
"file_path": "PaddleDetection/deploy/pipeline/ppvehicle/vehicle_retrograde.py",
"repo_id": "PaddleDetection",
"token_count": 5937
} | 52 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import division
import os
import cv2
import numpy as np
import PIL
from PIL import Image, ImageDraw, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
from collections import deque
def imagedraw_textsize_c(draw, text):
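    # Pillow >= 10.0 removed ImageDraw.textsize; fall back to textbbox there.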
if int(PIL.__version__.split('.')[0]) < 10:
tw, th = draw.textsize(text)
else:
left, top, right, bottom = draw.textbbox((0, 0), text)
tw, th = right - left, bottom - top
return tw, th
def visualize_box_mask(im, results, labels, threshold=0.5):
"""
Args:
im (str/np.ndarray): path of image/np.ndarray read by cv2
        results (dict): include 'boxes': np.ndarray: shape:[N,6], N: number of boxes,
                        matrix element:[class, score, x_min, y_min, x_max, y_max]
labels (list): labels:['class1', ..., 'classn']
threshold (float): Threshold of score.
Returns:
im (PIL.Image.Image): visualized image
"""
if isinstance(im, str):
im = Image.open(im).convert('RGB')
else:
im = Image.fromarray(im)
if 'boxes' in results and len(results['boxes']) > 0:
im = draw_box(im, results['boxes'], labels, threshold=threshold)
return im
def get_color_map_list(num_classes):
"""
Args:
num_classes (int): number of class
Returns:
color_map (list): RGB color list
"""
color_map = num_classes * [0, 0, 0]
for i in range(0, num_classes):
j = 0
lab = i
while lab:
color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j))
color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j))
color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j))
j += 1
lab >>= 3
color_map = [color_map[i:i + 3] for i in range(0, len(color_map), 3)]
return color_map
def draw_box(im, np_boxes, labels, threshold=0.5):
"""
Args:
im (PIL.Image.Image): PIL image
        np_boxes (np.ndarray): shape:[N,6], N: number of boxes,
                               matrix element:[class, score, x_min, y_min, x_max, y_max]
labels (list): labels:['class1', ..., 'classn']
threshold (float): threshold of box
Returns:
im (PIL.Image.Image): visualized image
"""
draw_thickness = min(im.size) // 320
draw = ImageDraw.Draw(im)
clsid2color = {}
color_list = get_color_map_list(len(labels))
expect_boxes = (np_boxes[:, 1] > threshold) & (np_boxes[:, 0] > -1)
np_boxes = np_boxes[expect_boxes, :]
for dt in np_boxes:
clsid, bbox, score = int(dt[0]), dt[2:], dt[1]
if clsid not in clsid2color:
clsid2color[clsid] = color_list[clsid]
color = tuple(clsid2color[clsid])
if len(bbox) == 4:
xmin, ymin, xmax, ymax = bbox
print('class_id:{:d}, confidence:{:.4f}, left_top:[{:.2f},{:.2f}],'
'right_bottom:[{:.2f},{:.2f}]'.format(
int(clsid), score, xmin, ymin, xmax, ymax))
# draw bbox
draw.line(
[(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin),
(xmin, ymin)],
width=draw_thickness,
fill=color)
elif len(bbox) == 8:
x1, y1, x2, y2, x3, y3, x4, y4 = bbox
draw.line(
[(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x1, y1)],
width=2,
fill=color)
xmin = min(x1, x2, x3, x4)
ymin = min(y1, y2, y3, y4)
# draw label
text = "{} {:.4f}".format(labels[clsid], score)
tw, th = imagedraw_textsize_c(draw, text)
draw.rectangle(
[(xmin + 1, ymin - th), (xmin + tw + 1, ymin)], fill=color)
draw.text((xmin + 1, ymin - th), text, fill=(255, 255, 255))
return im
def get_color(idx):
idx = idx * 3
color = ((37 * idx) % 255, (17 * idx) % 255, (29 * idx) % 255)
return color
def plot_tracking(image,
tlwhs,
obj_ids,
scores=None,
frame_id=0,
fps=0.,
ids2names=[],
do_entrance_counting=False,
entrance=None):
im = np.ascontiguousarray(np.copy(image))
im_h, im_w = im.shape[:2]
text_scale = max(0.5, image.shape[1] / 3000.)
text_thickness = 2
line_thickness = max(1, int(image.shape[1] / 500.))
cv2.putText(
im,
'frame: %d fps: %.2f num: %d' % (frame_id, fps, len(tlwhs)),
(0, int(15 * text_scale) + 5),
cv2.FONT_ITALIC,
text_scale, (0, 0, 255),
thickness=text_thickness)
for i, tlwh in enumerate(tlwhs):
x1, y1, w, h = tlwh
intbox = tuple(map(int, (x1, y1, x1 + w, y1 + h)))
obj_id = int(obj_ids[i])
id_text = 'ID: {}'.format(int(obj_id))
if ids2names != []:
assert len(
ids2names) == 1, "plot_tracking only supports single classes."
id_text = 'ID: {}_'.format(ids2names[0]) + id_text
_line_thickness = 1 if obj_id <= 0 else line_thickness
color = get_color(abs(obj_id))
cv2.rectangle(
im, intbox[0:2], intbox[2:4], color=color, thickness=line_thickness)
cv2.putText(
im,
id_text, (intbox[0], intbox[1] - 25),
cv2.FONT_ITALIC,
text_scale, (0, 255, 255),
thickness=text_thickness)
if scores is not None:
text = 'score: {:.2f}'.format(float(scores[i]))
cv2.putText(
im,
text, (intbox[0], intbox[1] - 6),
cv2.FONT_ITALIC,
text_scale, (0, 255, 0),
thickness=text_thickness)
if do_entrance_counting:
entrance_line = tuple(map(int, entrance))
cv2.rectangle(
im,
entrance_line[0:2],
entrance_line[2:4],
color=(0, 255, 255),
thickness=line_thickness)
return im
def plot_tracking_dict(image,
num_classes,
tlwhs_dict,
obj_ids_dict,
scores_dict,
frame_id=0,
fps=0.,
ids2names=[],
do_entrance_counting=False,
do_break_in_counting=False,
do_illegal_parking_recognition=False,
illegal_parking_dict=None,
entrance=None,
records=None,
center_traj=None):
im = np.ascontiguousarray(np.copy(image))
im_h, im_w = im.shape[:2]
if do_break_in_counting or do_illegal_parking_recognition:
entrance = np.array(entrance[:-1]) # last pair is [im_w, im_h]
text_scale = max(0.5, image.shape[1] / 3000.)
text_thickness = 2
line_thickness = max(1, int(image.shape[1] / 500.))
if num_classes == 1:
if records is not None:
start = records[-1].find('Total')
end = records[-1].find('In')
cv2.putText(
im,
records[-1][start:end], (0, int(40 * text_scale) + 10),
cv2.FONT_ITALIC,
text_scale, (0, 0, 255),
thickness=text_thickness)
if num_classes == 1 and do_entrance_counting:
entrance_line = tuple(map(int, entrance))
cv2.rectangle(
im,
entrance_line[0:2],
entrance_line[2:4],
color=(0, 255, 255),
thickness=line_thickness)
# find start location for entrance counting data
start = records[-1].find('In')
cv2.putText(
im,
records[-1][start:-1], (0, int(60 * text_scale) + 10),
cv2.FONT_ITALIC,
text_scale, (0, 0, 255),
thickness=text_thickness)
if num_classes == 1 and (do_break_in_counting or
do_illegal_parking_recognition):
np_masks = np.zeros((im_h, im_w, 1), np.uint8)
cv2.fillPoly(np_masks, [entrance], 255)
# Draw region mask
alpha = 0.3
im = np.array(im).astype('float32')
mask = np_masks[:, :, 0]
color_mask = [0, 0, 255]
idx = np.nonzero(mask)
color_mask = np.array(color_mask)
im[idx[0], idx[1], :] *= 1.0 - alpha
im[idx[0], idx[1], :] += alpha * color_mask
im = np.array(im).astype('uint8')
if do_break_in_counting:
# find start location for break in counting data
start = records[-1].find('Break_in')
cv2.putText(
im,
records[-1][start:-1],
(entrance[0][0] - 10, entrance[0][1] - 10),
cv2.FONT_ITALIC,
text_scale, (0, 0, 255),
thickness=text_thickness)
if illegal_parking_dict is not None and len(illegal_parking_dict) != 0:
for key, value in illegal_parking_dict.items():
x1, y1, w, h = value['bbox']
plate = value['plate']
if plate is None:
plate = ""
# red box
cv2.rectangle(im, (int(x1), int(y1)),
(int(x1 + w), int(y1 + h)), (0, 0, 255), 2)
cv2.putText(
im,
"illegal_parking:" + plate,
(int(x1) + 5, int(16 * text_scale + y1 + 15)),
cv2.FONT_ITALIC,
text_scale * 1.5, (0, 0, 255),
thickness=text_thickness)
for cls_id in range(num_classes):
tlwhs = tlwhs_dict[cls_id]
obj_ids = obj_ids_dict[cls_id]
scores = scores_dict[cls_id]
cv2.putText(
im,
'frame: %d fps: %.2f num: %d' % (frame_id, fps, len(tlwhs)),
(0, int(15 * text_scale) + 5),
cv2.FONT_ITALIC,
text_scale, (0, 0, 255),
thickness=text_thickness)
record_id = set()
for i, tlwh in enumerate(tlwhs):
x1, y1, w, h = tlwh
intbox = tuple(map(int, (x1, y1, x1 + w, y1 + h)))
center = tuple(map(int, (x1 + w / 2., y1 + h / 2.)))
obj_id = int(obj_ids[i])
if center_traj is not None:
record_id.add(obj_id)
if obj_id not in center_traj[cls_id]:
center_traj[cls_id][obj_id] = deque(maxlen=30)
center_traj[cls_id][obj_id].append(center)
id_text = '{}'.format(int(obj_id))
if ids2names != []:
id_text = '{}_{}'.format(ids2names[cls_id], id_text)
else:
id_text = 'class{}_{}'.format(cls_id, id_text)
_line_thickness = 1 if obj_id <= 0 else line_thickness
in_region = False
if do_break_in_counting:
center_x = min(x1 + w / 2., im_w - 1)
center_down_y = min(y1 + h, im_h - 1)
if in_quadrangle([center_x, center_down_y], entrance, im_h,
im_w):
in_region = True
color = get_color(abs(obj_id)) if in_region == False else (0, 0,
255)
cv2.rectangle(
im,
intbox[0:2],
intbox[2:4],
color=color,
thickness=line_thickness)
cv2.putText(
im,
id_text, (intbox[0], intbox[1] - 25),
cv2.FONT_ITALIC,
text_scale,
color,
thickness=text_thickness)
if do_break_in_counting and in_region:
cv2.putText(
im,
'Break in now.', (intbox[0], intbox[1] - 50),
cv2.FONT_ITALIC,
text_scale, (0, 0, 255),
thickness=text_thickness)
if scores is not None:
text = 'score: {:.2f}'.format(float(scores[i]))
cv2.putText(
im,
text, (intbox[0], intbox[1] - 6),
cv2.FONT_ITALIC,
text_scale,
color,
thickness=text_thickness)
if center_traj is not None:
for traj in center_traj:
for i in traj.keys():
if i not in record_id:
continue
for point in traj[i]:
cv2.circle(im, point, 3, (0, 0, 255), -1)
return im
def in_quadrangle(point, entrance, im_h, im_w):
mask = np.zeros((im_h, im_w, 1), np.uint8)
cv2.fillPoly(mask, [entrance], 255)
p = tuple(map(int, point))
if mask[p[1], p[0], :] > 0:
return True
else:
return False
| PaddleDetection/deploy/pptracking/python/mot/visualize.py/0 | {
"file_path": "PaddleDetection/deploy/pptracking/python/mot/visualize.py",
"repo_id": "PaddleDetection",
"token_count": 7756
} | 53 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import copy
import math
import time
import yaml
import cv2
import numpy as np
from collections import defaultdict
import paddle
from benchmark_utils import PaddleInferBenchmark
from utils import gaussian_radius, gaussian2D, draw_umich_gaussian
from preprocess import preprocess, decode_image, WarpAffine, NormalizeImage, Permute
from utils import argsparser, Timer, get_current_memory_mb
from infer import Detector, get_test_images, print_arguments, bench_log, PredictConfig
from keypoint_preprocess import get_affine_transform
# add python path
import sys
parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
sys.path.insert(0, parent_path)
from pptracking.python.mot import CenterTracker
from pptracking.python.mot.utils import MOTTimer, write_mot_results
from pptracking.python.mot.visualize import plot_tracking
def transform_preds_with_trans(coords, trans):
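    # Apply the 2x3 affine matrix `trans` to an (N, 2) array of points via homogeneous coordinates.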
target_coords = np.ones((coords.shape[0], 3), np.float32)
target_coords[:, :2] = coords
target_coords = np.dot(trans, target_coords.transpose()).transpose()
return target_coords[:, :2]
def affine_transform(pt, t):
new_pt = np.array([pt[0], pt[1], 1.]).T
new_pt = np.dot(t, new_pt)
return new_pt[:2]
def affine_transform_bbox(bbox, trans, width, height):
bbox = np.array(copy.deepcopy(bbox), dtype=np.float32)
bbox[:2] = affine_transform(bbox[:2], trans)
bbox[2:] = affine_transform(bbox[2:], trans)
bbox[[0, 2]] = np.clip(bbox[[0, 2]], 0, width - 1)
bbox[[1, 3]] = np.clip(bbox[[1, 3]], 0, height - 1)
return bbox
class CenterTrack(Detector):
"""
Args:
model_dir (str): root path of model.pdiparams, model.pdmodel and infer_cfg.yml
device (str): Choose the device you want to run, it can be: CPU/GPU/XPU/NPU, default is CPU
run_mode (str): mode of running(paddle/trt_fp32/trt_fp16)
        batch_size (int): batch size used in inference
trt_min_shape (int): min shape for dynamic shape in trt
trt_max_shape (int): max shape for dynamic shape in trt
trt_opt_shape (int): opt shape for dynamic shape in trt
trt_calib_mode (bool): If the model is produced by TRT offline quantitative
calibration, trt_calib_mode need to set True
cpu_threads (int): cpu threads
enable_mkldnn (bool): whether to open MKLDNN
output_dir (string): The path of output, default as 'output'
threshold (float): Score threshold of the detected bbox, default as 0.5
save_images (bool): Whether to save visualization image results, default as False
save_mot_txts (bool): Whether to save tracking results (txt), default as False
"""
def __init__(
self,
model_dir,
tracker_config=None,
device='CPU',
run_mode='paddle',
batch_size=1,
trt_min_shape=1,
trt_max_shape=960,
trt_opt_shape=544,
trt_calib_mode=False,
cpu_threads=1,
enable_mkldnn=False,
output_dir='output',
threshold=0.5,
save_images=False,
save_mot_txts=False, ):
super(CenterTrack, self).__init__(
model_dir=model_dir,
device=device,
run_mode=run_mode,
batch_size=batch_size,
trt_min_shape=trt_min_shape,
trt_max_shape=trt_max_shape,
trt_opt_shape=trt_opt_shape,
trt_calib_mode=trt_calib_mode,
cpu_threads=cpu_threads,
enable_mkldnn=enable_mkldnn,
output_dir=output_dir,
threshold=threshold, )
self.save_images = save_images
self.save_mot_txts = save_mot_txts
assert batch_size == 1, "MOT model only supports batch_size=1."
self.det_times = Timer(with_tracker=True)
self.num_classes = len(self.pred_config.labels)
# tracker config
cfg = self.pred_config.tracker
min_box_area = cfg.get('min_box_area', -1)
vertical_ratio = cfg.get('vertical_ratio', -1)
track_thresh = cfg.get('track_thresh', 0.4)
pre_thresh = cfg.get('pre_thresh', 0.5)
self.tracker = CenterTracker(
num_classes=self.num_classes,
min_box_area=min_box_area,
vertical_ratio=vertical_ratio,
track_thresh=track_thresh,
pre_thresh=pre_thresh)
self.pre_image = None
def get_additional_inputs(self, dets, meta, with_hm=True):
# Render input heatmap from previous trackings.
trans_input = meta['trans_input']
inp_width, inp_height = int(meta['inp_width']), int(meta['inp_height'])
input_hm = np.zeros((1, inp_height, inp_width), dtype=np.float32)
for det in dets:
if det['score'] < self.tracker.pre_thresh:
continue
bbox = affine_transform_bbox(det['bbox'], trans_input, inp_width,
inp_height)
h, w = bbox[3] - bbox[1], bbox[2] - bbox[0]
if (h > 0 and w > 0):
radius = gaussian_radius(
(math.ceil(h), math.ceil(w)), min_overlap=0.7)
radius = max(0, int(radius))
ct = np.array(
[(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2],
dtype=np.float32)
ct_int = ct.astype(np.int32)
if with_hm:
input_hm[0] = draw_umich_gaussian(input_hm[0], ct_int,
radius)
if with_hm:
input_hm = input_hm[np.newaxis]
return input_hm
def preprocess(self, image_list):
preprocess_ops = []
for op_info in self.pred_config.preprocess_infos:
new_op_info = op_info.copy()
op_type = new_op_info.pop('type')
preprocess_ops.append(eval(op_type)(**new_op_info))
assert len(image_list) == 1, 'MOT only support bs=1'
im_path = image_list[0]
im, im_info = preprocess(im_path, preprocess_ops)
#inputs = create_inputs(im, im_info)
inputs = {}
inputs['image'] = np.array((im, )).astype('float32')
inputs['im_shape'] = np.array((im_info['im_shape'], )).astype('float32')
inputs['scale_factor'] = np.array(
(im_info['scale_factor'], )).astype('float32')
inputs['trans_input'] = im_info['trans_input']
inputs['inp_width'] = im_info['inp_width']
inputs['inp_height'] = im_info['inp_height']
inputs['center'] = im_info['center']
inputs['scale'] = im_info['scale']
inputs['out_height'] = im_info['out_height']
inputs['out_width'] = im_info['out_width']
if self.pre_image is None:
self.pre_image = inputs['image']
# initializing tracker for the first frame
self.tracker.init_track([])
inputs['pre_image'] = self.pre_image
self.pre_image = inputs['image'] # Note: update for next image
# render input heatmap from tracker status
pre_hm = self.get_additional_inputs(
self.tracker.tracks, inputs, with_hm=True)
inputs['pre_hm'] = pre_hm #.to_tensor(pre_hm)
input_names = self.predictor.get_input_names()
for i in range(len(input_names)):
input_tensor = self.predictor.get_input_handle(input_names[i])
if input_names[i] == 'x':
input_tensor.copy_from_cpu(inputs['image'])
else:
input_tensor.copy_from_cpu(inputs[input_names[i]])
return inputs
def postprocess(self, inputs, result):
# postprocess output of predictor
np_bboxes = result['bboxes']
if np_bboxes.shape[0] <= 0:
            print('[WARNING] No object detected and tracked.')
result = {'bboxes': np.zeros([0, 6]), 'cts': None, 'tracking': None}
return result
result = {k: v for k, v in result.items() if v is not None}
return result
def centertrack_post_process(self, dets, meta, out_thresh):
if not ('bboxes' in dets):
return [{}]
preds = []
c, s = meta['center'], meta['scale']
h, w = meta['out_height'], meta['out_width']
trans = get_affine_transform(
center=c,
input_size=s,
rot=0,
output_size=[w, h],
shift=(0., 0.),
inv=True).astype(np.float32)
for i, dets_bbox in enumerate(dets['bboxes']):
if dets_bbox[1] < out_thresh:
break
item = {}
item['score'] = dets_bbox[1]
item['class'] = int(dets_bbox[0]) + 1
item['ct'] = transform_preds_with_trans(
dets['cts'][i].reshape([1, 2]), trans).reshape(2)
if 'tracking' in dets:
tracking = transform_preds_with_trans(
(dets['tracking'][i] + dets['cts'][i]).reshape([1, 2]),
trans).reshape(2)
item['tracking'] = tracking - item['ct']
if 'bboxes' in dets:
bbox = transform_preds_with_trans(
dets_bbox[2:6].reshape([2, 2]), trans).reshape(4)
item['bbox'] = bbox
preds.append(item)
return preds
def tracking(self, inputs, det_results):
result = self.centertrack_post_process(det_results, inputs,
self.tracker.out_thresh)
online_targets = self.tracker.update(result)
online_tlwhs, online_scores, online_ids = [], [], []
for t in online_targets:
bbox = t['bbox']
tlwh = [bbox[0], bbox[1], bbox[2] - bbox[0], bbox[3] - bbox[1]]
tscore = float(t['score'])
tid = int(t['tracking_id'])
if tlwh[2] * tlwh[3] > 0:
online_tlwhs.append(tlwh)
online_ids.append(tid)
online_scores.append(tscore)
return online_tlwhs, online_scores, online_ids
def predict(self, repeats=1):
'''
Args:
repeats (int): repeats number for prediction
Returns:
result (dict): include 'bboxes', 'cts' and 'tracking':
np.ndarray: shape:[N,6],[N,2] and [N,2], N: number of box
'''
# model prediction
np_bboxes, np_cts, np_tracking = None, None, None
for i in range(repeats):
self.predictor.run()
output_names = self.predictor.get_output_names()
bboxes_tensor = self.predictor.get_output_handle(output_names[0])
np_bboxes = bboxes_tensor.copy_to_cpu()
cts_tensor = self.predictor.get_output_handle(output_names[1])
np_cts = cts_tensor.copy_to_cpu()
tracking_tensor = self.predictor.get_output_handle(output_names[2])
np_tracking = tracking_tensor.copy_to_cpu()
result = dict(bboxes=np_bboxes, cts=np_cts, tracking=np_tracking)
return result
def predict_image(self,
image_list,
run_benchmark=False,
repeats=1,
visual=True,
seq_name=None):
mot_results = []
num_classes = self.num_classes
image_list.sort()
ids2names = self.pred_config.labels
data_type = 'mcmot' if num_classes > 1 else 'mot'
for frame_id, img_file in enumerate(image_list):
batch_image_list = [img_file] # bs=1 in MOT model
if run_benchmark:
# preprocess
inputs = self.preprocess(batch_image_list) # warmup
self.det_times.preprocess_time_s.start()
inputs = self.preprocess(batch_image_list)
self.det_times.preprocess_time_s.end()
# model prediction
result_warmup = self.predict(repeats=repeats) # warmup
self.det_times.inference_time_s.start()
result = self.predict(repeats=repeats)
self.det_times.inference_time_s.end(repeats=repeats)
# postprocess
result_warmup = self.postprocess(inputs, result) # warmup
self.det_times.postprocess_time_s.start()
det_result = self.postprocess(inputs, result)
self.det_times.postprocess_time_s.end()
# tracking
result_warmup = self.tracking(inputs, det_result)
self.det_times.tracking_time_s.start()
online_tlwhs, online_scores, online_ids = self.tracking(
inputs, det_result)
self.det_times.tracking_time_s.end()
self.det_times.img_num += 1
cm, gm, gu = get_current_memory_mb()
self.cpu_mem += cm
self.gpu_mem += gm
self.gpu_util += gu
else:
self.det_times.preprocess_time_s.start()
inputs = self.preprocess(batch_image_list)
self.det_times.preprocess_time_s.end()
self.det_times.inference_time_s.start()
result = self.predict()
self.det_times.inference_time_s.end()
self.det_times.postprocess_time_s.start()
det_result = self.postprocess(inputs, result)
self.det_times.postprocess_time_s.end()
# tracking process
self.det_times.tracking_time_s.start()
online_tlwhs, online_scores, online_ids = self.tracking(
inputs, det_result)
self.det_times.tracking_time_s.end()
self.det_times.img_num += 1
if visual:
if len(image_list) > 1 and frame_id % 10 == 0:
print('Tracking frame {}'.format(frame_id))
frame, _ = decode_image(img_file, {})
im = plot_tracking(
frame,
online_tlwhs,
online_ids,
online_scores,
frame_id=frame_id,
ids2names=ids2names)
if seq_name is None:
seq_name = image_list[0].split('/')[-2]
save_dir = os.path.join(self.output_dir, seq_name)
if not os.path.exists(save_dir):
os.makedirs(save_dir)
cv2.imwrite(
os.path.join(save_dir, '{:05d}.jpg'.format(frame_id)), im)
mot_results.append([online_tlwhs, online_scores, online_ids])
return mot_results
def predict_video(self, video_file, camera_id):
video_out_name = 'mot_output.mp4'
if camera_id != -1:
capture = cv2.VideoCapture(camera_id)
else:
capture = cv2.VideoCapture(video_file)
video_out_name = os.path.split(video_file)[-1]
# Get Video info : resolution, fps, frame count
width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(capture.get(cv2.CAP_PROP_FPS))
frame_count = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
print("fps: %d, frame_count: %d" % (fps, frame_count))
if not os.path.exists(self.output_dir):
os.makedirs(self.output_dir)
out_path = os.path.join(self.output_dir, video_out_name)
video_format = 'mp4v'
fourcc = cv2.VideoWriter_fourcc(*video_format)
writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
frame_id = 1
timer = MOTTimer()
        results = defaultdict(list)  # CenterTrack only supports a single class
num_classes = self.num_classes
data_type = 'mcmot' if num_classes > 1 else 'mot'
ids2names = self.pred_config.labels
while (1):
ret, frame = capture.read()
if not ret:
break
if frame_id % 10 == 0:
print('Tracking frame: %d' % (frame_id))
frame_id += 1
timer.tic()
seq_name = video_out_name.split('.')[0]
mot_results = self.predict_image(
[frame[:, :, ::-1]], visual=False, seq_name=seq_name)
timer.toc()
fps = 1. / timer.duration
online_tlwhs, online_scores, online_ids = mot_results[0]
results[0].append(
(frame_id + 1, online_tlwhs, online_scores, online_ids))
im = plot_tracking(
frame,
online_tlwhs,
online_ids,
online_scores,
frame_id=frame_id,
fps=fps,
ids2names=ids2names)
writer.write(im)
if camera_id != -1:
cv2.imshow('Mask Detection', im)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
if self.save_mot_txts:
result_filename = os.path.join(
self.output_dir, video_out_name.split('.')[-2] + '.txt')
write_mot_results(result_filename, results, data_type, num_classes)
writer.release()
def main():
detector = CenterTrack(
FLAGS.model_dir,
tracker_config=None,
device=FLAGS.device,
run_mode=FLAGS.run_mode,
batch_size=1,
trt_min_shape=FLAGS.trt_min_shape,
trt_max_shape=FLAGS.trt_max_shape,
trt_opt_shape=FLAGS.trt_opt_shape,
trt_calib_mode=FLAGS.trt_calib_mode,
cpu_threads=FLAGS.cpu_threads,
enable_mkldnn=FLAGS.enable_mkldnn,
output_dir=FLAGS.output_dir,
threshold=FLAGS.threshold,
save_images=FLAGS.save_images,
save_mot_txts=FLAGS.save_mot_txts)
# predict from video file or camera video stream
if FLAGS.video_file is not None or FLAGS.camera_id != -1:
detector.predict_video(FLAGS.video_file, FLAGS.camera_id)
else:
# predict from image
img_list = get_test_images(FLAGS.image_dir, FLAGS.image_file)
detector.predict_image(img_list, FLAGS.run_benchmark, repeats=10)
if not FLAGS.run_benchmark:
detector.det_times.info(average=True)
else:
mode = FLAGS.run_mode
model_dir = FLAGS.model_dir
model_info = {
'model_name': model_dir.strip('/').split('/')[-1],
'precision': mode.split('_')[-1]
}
bench_log(detector, img_list, model_info, name='MOT')
if __name__ == '__main__':
paddle.enable_static()
parser = argsparser()
FLAGS = parser.parse_args()
print_arguments(FLAGS)
FLAGS.device = FLAGS.device.upper()
assert FLAGS.device in ['CPU', 'GPU', 'XPU', 'NPU'
], "device should be CPU, GPU, NPU or XPU"
main()
| PaddleDetection/deploy/python/mot_centertrack_infer.py/0 | {
"file_path": "PaddleDetection/deploy/python/mot_centertrack_infer.py",
"repo_id": "PaddleDetection",
"token_count": 9916
} | 54 |
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
#include <stdio.h>
#include <tvm_runtime.h>
#include <tvmgen_picodet.h>
#include "uart.h"
// Header files generated by convert_image.py
#include "inputs.h"
#include "outputs.h"
int main(int argc, char **argv) {
uart_init();
printf("Starting PicoDet inference:\n");
struct tvmgen_picodet_outputs rec_outputs = {
.output0 = output0, .output1 = output1,
};
struct tvmgen_picodet_inputs rec_inputs = {
.image = input,
};
tvmgen_picodet_run(&rec_inputs, &rec_outputs);
// post process
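  // output0: candidate boxes laid out as (x0, y0, x1, y1); output1: class scores
  // stored per class in contiguous blocks of 2125 candidates (80 classes).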
for (int i = 0; i < output0_len / 4; i++) {
float score = 0;
int32_t class = 0;
for (int j = 0; j < 80; j++) {
if (output1[i + j * 2125] > score) {
score = output1[i + j * 2125];
class = j;
}
}
if (score > 0.1 && output0[i * 4] > 0 && output0[i * 4 + 1] > 0) {
printf("box: %f, %f, %f, %f, class: %d, score: %f\n", output0[i * 4] * 2,
output0[i * 4 + 1] * 2, output0[i * 4 + 2] * 2,
output0[i * 4 + 3] * 2, class, score);
}
}
return 0;
}
| PaddleDetection/deploy/third_engine/demo_avh/src/demo_bare_metal.c/0 | {
"file_path": "PaddleDetection/deploy/third_engine/demo_avh/src/demo_bare_metal.c",
"repo_id": "PaddleDetection",
"token_count": 693
} | 55 |
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// reference from https://github.com/RangiLyu/nanodet
#include "picodet_openvino.h"
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#define image_size 416
struct object_rect {
int x;
int y;
int width;
int height;
};
int resize_uniform(cv::Mat &src, cv::Mat &dst, cv::Size dst_size,
object_rect &effect_area) {
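  // Letterbox resize: scale the image while keeping its aspect ratio, pad the
  // remaining area with zeros and report the valid region in `effect_area`.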
int w = src.cols;
int h = src.rows;
int dst_w = dst_size.width;
int dst_h = dst_size.height;
dst = cv::Mat(cv::Size(dst_w, dst_h), CV_8UC3, cv::Scalar(0));
float ratio_src = w * 1.0 / h;
float ratio_dst = dst_w * 1.0 / dst_h;
int tmp_w = 0;
int tmp_h = 0;
if (ratio_src > ratio_dst) {
tmp_w = dst_w;
tmp_h = floor((dst_w * 1.0 / w) * h);
} else if (ratio_src < ratio_dst) {
tmp_h = dst_h;
tmp_w = floor((dst_h * 1.0 / h) * w);
} else {
cv::resize(src, dst, dst_size);
effect_area.x = 0;
effect_area.y = 0;
effect_area.width = dst_w;
effect_area.height = dst_h;
return 0;
}
cv::Mat tmp;
cv::resize(src, tmp, cv::Size(tmp_w, tmp_h));
if (tmp_w != dst_w) {
int index_w = floor((dst_w - tmp_w) / 2.0);
for (int i = 0; i < dst_h; i++) {
memcpy(dst.data + i * dst_w * 3 + index_w * 3, tmp.data + i * tmp_w * 3,
tmp_w * 3);
}
effect_area.x = index_w;
effect_area.y = 0;
effect_area.width = tmp_w;
effect_area.height = tmp_h;
} else if (tmp_h != dst_h) {
int index_h = floor((dst_h - tmp_h) / 2.0);
memcpy(dst.data + index_h * dst_w * 3, tmp.data, tmp_w * tmp_h * 3);
effect_area.x = 0;
effect_area.y = index_h;
effect_area.width = tmp_w;
effect_area.height = tmp_h;
} else {
printf("error\n");
}
return 0;
}
const int color_list[80][3] = {
{216, 82, 24}, {236, 176, 31}, {125, 46, 141}, {118, 171, 47},
{76, 189, 237}, {238, 19, 46}, {76, 76, 76}, {153, 153, 153},
{255, 0, 0}, {255, 127, 0}, {190, 190, 0}, {0, 255, 0},
{0, 0, 255}, {170, 0, 255}, {84, 84, 0}, {84, 170, 0},
{84, 255, 0}, {170, 84, 0}, {170, 170, 0}, {170, 255, 0},
{255, 84, 0}, {255, 170, 0}, {255, 255, 0}, {0, 84, 127},
{0, 170, 127}, {0, 255, 127}, {84, 0, 127}, {84, 84, 127},
{84, 170, 127}, {84, 255, 127}, {170, 0, 127}, {170, 84, 127},
{170, 170, 127}, {170, 255, 127}, {255, 0, 127}, {255, 84, 127},
{255, 170, 127}, {255, 255, 127}, {0, 84, 255}, {0, 170, 255},
{0, 255, 255}, {84, 0, 255}, {84, 84, 255}, {84, 170, 255},
{84, 255, 255}, {170, 0, 255}, {170, 84, 255}, {170, 170, 255},
{170, 255, 255}, {255, 0, 255}, {255, 84, 255}, {255, 170, 255},
{42, 0, 0}, {84, 0, 0}, {127, 0, 0}, {170, 0, 0},
{212, 0, 0}, {255, 0, 0}, {0, 42, 0}, {0, 84, 0},
{0, 127, 0}, {0, 170, 0}, {0, 212, 0}, {0, 255, 0},
{0, 0, 42}, {0, 0, 84}, {0, 0, 127}, {0, 0, 170},
{0, 0, 212}, {0, 0, 255}, {0, 0, 0}, {36, 36, 36},
{72, 72, 72}, {109, 109, 109}, {145, 145, 145}, {182, 182, 182},
{218, 218, 218}, {0, 113, 188}, {80, 182, 188}, {127, 127, 0},
};
void draw_bboxes(const cv::Mat &bgr, const std::vector<BoxInfo> &bboxes,
object_rect effect_roi) {
static const char *class_names[] = {
"person", "bicycle", "car",
"motorcycle", "airplane", "bus",
"train", "truck", "boat",
"traffic light", "fire hydrant", "stop sign",
"parking meter", "bench", "bird",
"cat", "dog", "horse",
"sheep", "cow", "elephant",
"bear", "zebra", "giraffe",
"backpack", "umbrella", "handbag",
"tie", "suitcase", "frisbee",
"skis", "snowboard", "sports ball",
"kite", "baseball bat", "baseball glove",
"skateboard", "surfboard", "tennis racket",
"bottle", "wine glass", "cup",
"fork", "knife", "spoon",
"bowl", "banana", "apple",
"sandwich", "orange", "broccoli",
"carrot", "hot dog", "pizza",
"donut", "cake", "chair",
"couch", "potted plant", "bed",
"dining table", "toilet", "tv",
"laptop", "mouse", "remote",
"keyboard", "cell phone", "microwave",
"oven", "toaster", "sink",
"refrigerator", "book", "clock",
"vase", "scissors", "teddy bear",
"hair drier", "toothbrush"};
cv::Mat image = bgr.clone();
int src_w = image.cols;
int src_h = image.rows;
int dst_w = effect_roi.width;
int dst_h = effect_roi.height;
float width_ratio = (float)src_w / (float)dst_w;
float height_ratio = (float)src_h / (float)dst_h;
for (size_t i = 0; i < bboxes.size(); i++) {
const BoxInfo &bbox = bboxes[i];
cv::Scalar color =
cv::Scalar(color_list[bbox.label][0], color_list[bbox.label][1],
color_list[bbox.label][2]);
cv::rectangle(image,
cv::Rect(cv::Point((bbox.x1 - effect_roi.x) * width_ratio,
(bbox.y1 - effect_roi.y) * height_ratio),
cv::Point((bbox.x2 - effect_roi.x) * width_ratio,
(bbox.y2 - effect_roi.y) * height_ratio)),
color);
char text[256];
sprintf(text, "%s %.1f%%", class_names[bbox.label], bbox.score * 100);
int baseLine = 0;
cv::Size label_size =
cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.4, 1, &baseLine);
int x = (bbox.x1 - effect_roi.x) * width_ratio;
int y =
(bbox.y1 - effect_roi.y) * height_ratio - label_size.height - baseLine;
if (y < 0)
y = 0;
if (x + label_size.width > image.cols)
x = image.cols - label_size.width;
cv::rectangle(image, cv::Rect(cv::Point(x, y),
cv::Size(label_size.width,
label_size.height + baseLine)),
color, -1);
cv::putText(image, text, cv::Point(x, y + label_size.height),
cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255, 255, 255));
}
cv::imwrite("../predict.jpg", image);
}
int image_demo(PicoDet &detector, const char *imagepath) {
std::vector<std::string> filenames;
cv::glob(imagepath, filenames, false);
for (auto img_name : filenames) {
cv::Mat image = cv::imread(img_name);
if (image.empty()) {
return -1;
}
object_rect effect_roi;
cv::Mat resized_img;
resize_uniform(image, resized_img, cv::Size(image_size, image_size),
effect_roi);
auto results = detector.detect(resized_img, 0.4, 0.5);
draw_bboxes(image, results, effect_roi);
}
return 0;
}
int webcam_demo(PicoDet &detector, int cam_id) {
cv::Mat image;
cv::VideoCapture cap(cam_id);
while (true) {
cap >> image;
object_rect effect_roi;
cv::Mat resized_img;
resize_uniform(image, resized_img, cv::Size(image_size, image_size),
effect_roi);
auto results = detector.detect(resized_img, 0.4, 0.5);
draw_bboxes(image, results, effect_roi);
cv::waitKey(1);
}
return 0;
}
int video_demo(PicoDet &detector, const char *path) {
cv::Mat image;
cv::VideoCapture cap(path);
while (true) {
cap >> image;
object_rect effect_roi;
cv::Mat resized_img;
resize_uniform(image, resized_img, cv::Size(image_size, image_size),
effect_roi);
auto results = detector.detect(resized_img, 0.4, 0.5);
draw_bboxes(image, results, effect_roi);
cv::waitKey(1);
}
return 0;
}
int benchmark(PicoDet &detector) {
int loop_num = 100;
int warm_up = 8;
double time_min = DBL_MAX;
double time_max = -DBL_MAX;
double time_avg = 0;
cv::Mat image(image_size, image_size, CV_8UC3, cv::Scalar(1, 1, 1));
for (int i = 0; i < warm_up + loop_num; i++) {
auto start = std::chrono::steady_clock::now();
std::vector<BoxInfo> results;
results = detector.detect(image, 0.4, 0.5);
auto end = std::chrono::steady_clock::now();
double time =
std::chrono::duration<double, std::milli>(end - start).count();
if (i >= warm_up) {
time_min = (std::min)(time_min, time);
time_max = (std::max)(time_max, time);
time_avg += time;
}
}
time_avg /= loop_num;
fprintf(stderr, "%20s min = %7.2f max = %7.2f avg = %7.2f\n", "picodet",
time_min, time_max, time_avg);
return 0;
}
int main(int argc, char **argv) {
if (argc != 3) {
fprintf(stderr, "usage: %s [mode] [path]. \n For webcam mode=0, path is "
"cam id; \n For image demo, mode=1, path=xxx/xxx/*.jpg; \n "
"For video, mode=2; \n For benchmark, mode=3 path=0.\n",
argv[0]);
return -1;
}
std::cout << "start init model" << std::endl;
auto detector = PicoDet("../weight/picodet_m_416.xml");
std::cout << "success" << std::endl;
int mode = atoi(argv[1]);
switch (mode) {
case 0: {
int cam_id = atoi(argv[2]);
webcam_demo(detector, cam_id);
break;
}
case 1: {
const char *images = argv[2];
image_demo(detector, images);
break;
}
case 2: {
const char *path = argv[2];
video_demo(detector, path);
break;
}
case 3: {
benchmark(detector);
break;
}
default: {
fprintf(stderr, "usage: %s [mode] [path]. \n For webcam mode=0, path is "
"cam id; \n For image demo, mode=1, path=xxx/xxx/*.jpg; \n "
"For video, mode=2; \n For benchmark, mode=3 path=0.\n",
argv[0]);
break;
}
}
}
| PaddleDetection/deploy/third_engine/demo_openvino/main.cpp/0 | {
"file_path": "PaddleDetection/deploy/third_engine/demo_openvino/main.cpp",
"repo_id": "PaddleDetection",
"token_count": 5222
} | 56 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import yaml
import argparse
import numpy as np
import glob
from onnxruntime import InferenceSession
from preprocess import Compose
# Global dictionary
SUPPORT_MODELS = {
'YOLO', 'PPYOLOE', 'RCNN', 'SSD', 'Face', 'FCOS', 'SOLOv2', 'TTFNet',
'S2ANet', 'JDE', 'FairMOT', 'DeepSORT', 'GFL', 'PicoDet', 'CenterNet',
'TOOD', 'RetinaNet', 'StrongBaseline', 'STGCN', 'YOLOX', 'HRNet'
}
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("--infer_cfg", type=str, help="infer_cfg.yml")
parser.add_argument(
'--onnx_file', type=str, default="model.onnx", help="onnx model file path")
parser.add_argument("--image_dir", type=str)
parser.add_argument("--image_file", type=str)
def get_test_images(infer_dir, infer_img):
"""
Get image path list in TEST mode
"""
assert infer_img is not None or infer_dir is not None, \
"--image_file or --image_dir should be set"
assert infer_img is None or os.path.isfile(infer_img), \
"{} is not a file".format(infer_img)
assert infer_dir is None or os.path.isdir(infer_dir), \
"{} is not a directory".format(infer_dir)
# infer_img has a higher priority
if infer_img and os.path.isfile(infer_img):
return [infer_img]
images = set()
infer_dir = os.path.abspath(infer_dir)
assert os.path.isdir(infer_dir), \
"infer_dir {} is not a directory".format(infer_dir)
exts = ['jpg', 'jpeg', 'png', 'bmp']
exts += [ext.upper() for ext in exts]
for ext in exts:
images.update(glob.glob('{}/*.{}'.format(infer_dir, ext)))
images = list(images)
assert len(images) > 0, "no image found in {}".format(infer_dir)
print("Found {} inference images in total.".format(len(images)))
return images
class PredictConfig(object):
"""set config of preprocess, postprocess and visualize
Args:
infer_config (str): path of infer_cfg.yml
"""
def __init__(self, infer_config):
# parsing Yaml config for Preprocess
with open(infer_config) as f:
yml_conf = yaml.safe_load(f)
self.check_model(yml_conf)
self.arch = yml_conf['arch']
self.preprocess_infos = yml_conf['Preprocess']
self.min_subgraph_size = yml_conf['min_subgraph_size']
self.label_list = yml_conf['label_list']
self.use_dynamic_shape = yml_conf['use_dynamic_shape']
self.draw_threshold = yml_conf.get("draw_threshold", 0.5)
self.mask = yml_conf.get("mask", False)
self.tracker = yml_conf.get("tracker", None)
self.nms = yml_conf.get("NMS", None)
self.fpn_stride = yml_conf.get("fpn_stride", None)
if self.arch == 'RCNN' and yml_conf.get('export_onnx', False):
print(
'The RCNN export model is used for ONNX and it only supports batch_size = 1'
)
self.print_config()
def check_model(self, yml_conf):
"""
Raises:
ValueError: loaded model not in supported model type
"""
for support_model in SUPPORT_MODELS:
if support_model in yml_conf['arch']:
return True
raise ValueError("Unsupported arch: {}, expect {}".format(yml_conf[
'arch'], SUPPORT_MODELS))
def print_config(self):
print('----------- Model Configuration -----------')
print('%s: %s' % ('Model Arch', self.arch))
print('%s: ' % ('Transform Order'))
for op_info in self.preprocess_infos:
print('--%s: %s' % ('transform op', op_info['type']))
print('--------------------------------------------')
def predict_image(infer_config, predictor, img_list):
# load preprocess transforms
transforms = Compose(infer_config.preprocess_infos)
# predict image
for img_path in img_list:
inputs = transforms(img_path)
inputs_name = [var.name for var in predictor.get_inputs()]
inputs = {k: inputs[k][None, ] for k in inputs_name}
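        # add a batch dimension and keep only the tensors expected by the ONNX graph inputs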
outputs = predictor.run(output_names=None, input_feed=inputs)
print("ONNXRuntime predict: ")
if infer_config.arch in ["HRNet"]:
print(np.array(outputs[0]))
else:
bboxes = np.array(outputs[0])
for bbox in bboxes:
if bbox[0] > -1 and bbox[1] > infer_config.draw_threshold:
print(f"{int(bbox[0])} {bbox[1]} "
f"{bbox[2]} {bbox[3]} {bbox[4]} {bbox[5]}")
if __name__ == '__main__':
FLAGS = parser.parse_args()
# load image list
img_list = get_test_images(FLAGS.image_dir, FLAGS.image_file)
# load predictor
predictor = InferenceSession(FLAGS.onnx_file)
# load infer config
infer_config = PredictConfig(FLAGS.infer_cfg)
predict_image(infer_config, predictor, img_list)
| PaddleDetection/deploy/third_engine/onnx/infer.py/0 | {
"file_path": "PaddleDetection/deploy/third_engine/onnx/infer.py",
"repo_id": "PaddleDetection",
"token_count": 2275
} | 57 |
[简体中文](./idbased_det.md) | English
# Development of Detection-Based Behavior Recognition (Human ID Based)
## Environment Preparation
The detection-based solution keyed on human IDs trains its model directly with [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection). Please follow the [installation guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL_cn.md) to set up the environment before the subsequent training and deployment steps.
## Data Preparation
For detection-based behavior recognition, data preparation is the same as for a general detection model; see [Object Detection Data Preparation](../../../tutorials/data/PrepareDetDataSet.md) for details. Organize the images and annotations into one of the formats supported by PaddleDetection.
**Note**: In actual inference, prediction is performed on single-person images. It is therefore recommended to crop the training images into single-person images before annotating the cigarette bounding boxes, which improves accuracy.
## Model Optimization
### Detection/Tracking Model Optimization
The performance of detection-based behavior recognition depends on the preceding detection and tracking results. If pedestrians cannot be located accurately in the actual scene, or person IDs cannot be assigned correctly across frames, the behavior recognition part will be limited. If you run into these problems, please refer to [Customized Object Detection](../detection.md) and [Customized Multi-Object Tracking](../pphuman_mot.md) to optimize the detection/tracking models.
### Larger Input Resolution
Cigarette detection under a surveillance view is a typical small-object detection problem; using a larger input resolution helps improve the overall detection accuracy of the model.
### Pretrained Model
Training with a pretrained model from the small-object dataset VisDrone improves the model mAP from 38.1 to 39.7.
## Adding New Behaviors
### Data Preparation
Refer to [Object Detection Data Preparation](../../../tutorials/data/PrepareDetDataSet.md) to prepare the training data.
After preparation, the data path looks like:
```
dataset/smoking
├── smoking # all images
│   ├── 1.jpg
│   ├── 2.jpg
├── smoking_test_cocoformat.json # test annotation file
├── smoking_train_cocoformat.json # training annotation file
```
Taking the `COCO` format as an example, the completed json annotation file looks like:
```json
# The "images" field contains the image path, id and the corresponding width/height.
"images": [
    {
        "file_name": "smoking/1.jpg",
        "id": 0, # image id, must be unique
        "height": 437,
        "width": 212
    },
    {
        "file_name": "smoking/2.jpg",
        "id": 1,
        "height": 655,
        "width": 365
    },
    ...
# The "categories" field contains all category information. To add more detection classes, extend it here, for example:
"categories": [
    {
        "supercategory": "cigarette",
        "id": 1,
        "name": "cigarette"
    },
    {
        "supercategory": "Class_Defined_by_Yourself",
        "id": 2,
        "name": "Class_Defined_by_Yourself"
    },
    ...
# The "annotations" field contains all object instances, including category, bounding box, id, the image id it belongs to, etc.
"annotations": [
    {
        "category_id": 1, # the defined category; here 1 stands for cigarette
        "bbox": [
            97.0181345931,
            332.7033243081,
            7.5943999555,
            16.4545332369
        ],
        "id": 0, # instance id, must be unique
        "image_id": 0, # id of the image this instance belongs to; may repeat when one image contains several instances
        "iscrowd": 0,
        "area": 124.96230648208665
    },
    {
        "category_id": 2, # the defined category; here 2 stands for Class_Defined_by_Yourself
        "bbox": [
            114.3895698372,
            221.9131122343,
            25.9530363697,
            50.5401234568
        ],
        "id": 1,
        "image_id": 1,
        "iscrowd": 0,
        "area": 1311.6696622034585
```
### Configuration Setup
Refer to the [configuration file](../../../../configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml); the key items to pay attention to are:
```yaml
metric: COCO
num_classes: 1 # modify here accordingly if you add more categories
# Set image_dir, anno_path and dataset_dir correctly:
# make sure dataset_dir + anno_path points to the annotation file
# make sure dataset_dir + image_dir + the image path inside the annotation file points to the image
TrainDataset:
  !COCODataSet
    image_dir: ""
    anno_path: smoking_train_cocoformat.json
    dataset_dir: dataset/smoking
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
EvalDataset:
  !COCODataSet
    image_dir: ""
    anno_path: smoking_test_cocoformat.json
    dataset_dir: dataset/smoking
TestDataset:
  !ImageFolder
    anno_path: smoking_test_cocoformat.json
    dataset_dir: dataset/smoking
```
### Model Training and Evaluation
#### Model Training
Refer to [PP-YOLOE](../../../../configs/ppyoloe/README_cn.md) and run the following command:
```bash
# At Root of PaddleDetection
python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml --eval
```
#### Model Evaluation
After training, evaluate the model metrics with the following command:
```bash
# At Root of PaddleDetection
python tools/eval.py -c configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml
```
### Model Export
Note: if inference will run under TensorRT, please enable `-o trt=True` for better performance.
```bash
# At Root of PaddleDetection
python tools/export_model.py -c configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml -o weights=output/ppyoloe_crn_s_80e_smoking_visdrone/best_model trt=True
```
After exporting, you get:
```
ppyoloe_crn_s_80e_smoking_visdrone/
├── infer_cfg.yml
├── model.pdiparams
├── model.pdiparams.info
└── model.pdmodel
```
At this point, the model can be used by PP-Human for actual prediction.
### Customized Behavior Output
In the detection-based behavior recognition solution keyed on human IDs, the task is converted into detecting a characteristic object in the image of the corresponding person. When the characteristic object is detected, the behavior is considered to be happening. Therefore, after training and deploying the customized model, the detection results still need to be converted into the final behavior recognition output, and the visualization has to be adapted accordingly.
#### Converting to Behavior Recognition Results
Please modify the [post-processing function](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pphuman/action_infer.py#L338) accordingly.
The core code is:
```python
# Parse the detection output and keep only the boxes whose confidence is above the threshold.
# Currently, class 0 is positive and class 1 is negative.
action_ret = {'class': 1.0, 'score': -1.0}
box_num = np_boxes_num[idx]
boxes = det_result['boxes'][cur_box_idx:cur_box_idx + box_num]
cur_box_idx += box_num
isvalid = (boxes[:, 1] > self.threshold) & (boxes[:, 0] == 0)
valid_boxes = boxes[isvalid, :]
if valid_boxes.shape[0] >= 1:
    # When a valid box exists, update the class and score of the behavior recognition result.
    action_ret['class'] = valid_boxes[0, 0]
    action_ret['score'] = valid_boxes[0, 1]
    # Since an action lasts for a while, a valid detection can be reused for a number of frames.
    self.result_history[
        tracker_id] = [0, self.frame_life, valid_boxes[0, 1]]
else:
    # When no valid box exists, decide the result of the current frame from the detection history.
    ...
```
#### Modifying the Visualization Output
The ID-based behavior recognition result is currently displayed according to the recognition output and the predefined class names; see [here](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043) for the detailed logic. If a customized behavior needs a different display name, modify this part accordingly so that the correct result is shown.
| PaddleDetection/docs/advanced_tutorials/customization/action_recognotion/idbased_det.md/0 | {
"file_path": "PaddleDetection/docs/advanced_tutorials/customization/action_recognotion/idbased_det.md",
"repo_id": "PaddleDetection",
"token_count": 4581
} | 58 |
[简体中文](ppvehicle_attribute.md) | English
# Customized Vehicle Attribute Recognition
## Data Preparation
### Data Format
We use the VeRi attribute annotation format, with a total of 10 color and 9 model attributes shown as follows.
```
# colors
- "yellow"
- "orange"
- "green"
- "gray"
- "red"
- "blue"
- "white"
- "golden"
- "brown"
- "black"
# models
- "sedan"
- "suv"
- "van"
- "hatchback"
- "mpv"
- "pickup"
- "bus"
- "truck"
- "estate"
```
A sequence of length 19 is used in the annotation file to represent the above attributes.
Examples:
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
In the first 10 bits, the value of bit index 0 is 1, indicating that the vehicle color is `"yellow"`.
In the last 9 bits, the value of bit index 11 is 1, indicating that the model is `"suv"`.
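As a quick illustration, such a vector can be decoded back into readable attributes with a few lines of Python. This is only a sketch for understanding the format; the color and model lists are the ones defined above and are not part of any PP-Vehicle API.
```python
# Minimal sketch: decode a 19-dim VeRi-style attribute vector into (color, model).
COLORS = ["yellow", "orange", "green", "gray", "red",
          "blue", "white", "golden", "brown", "black"]
MODELS = ["sedan", "suv", "van", "hatchback", "mpv",
          "pickup", "bus", "truck", "estate"]

def decode_attribute(vec):
    assert len(vec) == len(COLORS) + len(MODELS)  # 19 values in total
    color = COLORS[vec[:10].index(1)] if 1 in vec[:10] else None
    model = MODELS[vec[10:].index(1)] if 1 in vec[10:] else None
    return color, model

print(decode_attribute(
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]))  # ('yellow', 'suv')
```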
### Data Annotation
After knowing the purpose of the above `Data format`, we can start to annotate data. The essence is that each single-vehicle image creates a set of 19 annotation items, corresponding to the attribute values at 19 positions.
Examples:
For an original image:
1) Using bounding boxes to annotate the position of each vehicle in the picture.
2) Each detection box (corresponding to each vehicle) contains 19 attribute values which are represented by 0 or 1. It corresponds to the above 19 attributes. For example, if the color is 'orange', then the index 1 bit of the array is 1. If the model is 'sedan', then the index 10 bit of the array is 1.
After the annotation is completed, the model will use the detection box to intercept each vehicle into a single-vehicle picture, and its picture establishes a corresponding relationship with the 19 attribute annotation. It is also possible to cut into a single-vehicle image first and then annotate it. The results are the same.
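For example, cutting each annotated vehicle out of the original frame can be done with OpenCV as sketched below; the file names and box coordinates are illustrative only.
```python
# Minimal sketch: crop one annotated vehicle into a single-vehicle training image.
import cv2

image = cv2.imread("frame_0001.jpg")   # original annotated frame (illustrative name)
x1, y1, x2, y2 = 120, 80, 360, 300     # one annotated vehicle box (illustrative values)
crop = image[y1:y2, x1:x2]             # single-vehicle picture paired with its 19 labels
cv2.imwrite("data/00001.jpg", crop)
```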
## Model Training
Once the data is annotated, it can be used for model training to complete the optimization of the customized model.
There are two main steps: 1) Organize the data and annotated data into the training format. 2) Modify the configuration file to start training.
### Training Data Format
The training data includes the images used for training and a training list called train.txt. Its location is specified in the training configuration, with the following example:
```
Attribute/
|-- data                 Training images folder
|   |-- 00001.jpg
|   |-- 00002.jpg
|   `-- 0000x.jpg
`-- train.txt            List of training data
```
train.txt file contains the names of all training images (file path relative to the root path) + 19 annotation values
Each line of it represents a vehicle's image and annotation result. The format is as follows:
```
00001.jpg 0,0,1,0,....
```
Note: 1) the image path and the annotation values are separated by a tab (\t); 2) the annotation values are separated by commas (,). If the format is wrong, parsing will fail.
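The following snippet shows one way to sanity-check such a list before training; the file name `train.txt` and the 19-value length follow the format described above, everything else is illustrative.
```python
# Minimal sketch: validate a train list (image path \t 19 comma-separated 0/1 labels per line).
def check_train_list(list_path, num_attrs=19):
    with open(list_path) as f:
        for line_no, line in enumerate(f, 1):
            path, labels = line.rstrip("\n").split("\t")
            values = [int(v) for v in labels.split(",")]
            assert len(values) == num_attrs, \
                "line {}: expected {} labels, got {}".format(line_no, num_attrs, len(values))
            assert set(values) <= {0, 1}, "line {}: labels must be 0 or 1".format(line_no)

check_train_list("Attribute/train.txt")
```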
### Modify The Configuration To Start Training
First run the following command to download the training code (for more environmental issues, please refer to [Install_PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/installation/install_paddleclas_en.md)):
```
git clone https://github.com/PaddlePaddle/PaddleClas
```
You need to modify the following configuration in the configuration file `PaddleClas/ppcls/configs/PULC/vehicle_attribute/PPLCNet_x1_0.yaml`
```yaml
DataLoader:
Train:
dataset:
name: MultiLabelDataset
image_root: "dataset/VeRi/" # the root path of training images
cls_label_path: "dataset/VeRi/train_list.txt" # the location of the training list file
label_ratio: True
transform_ops:
...
Eval:
dataset:
name: MultiLabelDataset
image_root: "dataset/VeRi/" # the root path of evaluation images
      cls_label_path: "dataset/VeRi/val_list.txt" # the location of the evaluation list file
label_ratio: True
transform_ops:
...
```
Note:
1. The `image_root` path joined with the relative image path in train.txt forms the full path of each image.
2. If you change the number of attributes, the attribute count in the model configuration (`class_num`) must be changed accordingly.
```yaml
# model architecture
Arch:
name: "PPLCNet_x1_0"
pretrained: True
use_ssld: True
class_num: 19 # Number of attribute classes
```
Then run the following command to start training:
```bash
#Multi-card training
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/PULC/vehicle_attribute/PPLCNet_x1_0.yaml
#Single card training
python3 tools/train.py \
-c ./ppcls/configs/PULC/vehicle_attribute/PPLCNet_x1_0.yaml
```
You can run the following commands for performance evaluation after the training is completed:
```
#Multi-card evaluation
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/eval.py \
-c ./ppcls/configs/PULC/vehicle_attribute/PPLCNet_x1_0.yaml \
-o Global.pretrained_model=./output/PPLCNet_x1_0/best_model
#Single card evaluation
python3 tools/eval.py \
-c ./ppcls/configs/PULC/vehicle_attribute/PPLCNet_x1_0.yaml \
-o Global.pretrained_model=./output/PPLCNet_x1_0/best_model
```
### Model Export
Use the following command to export the trained model as an inference deployment model.
```
python3 tools/export_model.py \
-c ./ppcls/configs/PULC/vehicle_attribute/PPLCNet_x1_0.yaml \
-o Global.pretrained_model=output/PPLCNet_x1_0/best_model \
-o Global.save_inference_dir=deploy/models/PPLCNet_x1_0_vehicle_attribute_model
```
After exporting the model, if you want to use it in PP-Vehicle, you need to download the [deploy infer model](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) and copy `infer_cfg.yml` into the exported model folder `PPLCNet_x1_0_vehicle_attribute_model`.
When you use the model, set the new model path in the `model_dir` entry and set `enable: True` in the PP-Vehicle configuration file `./deploy/pipeline/config/infer_cfg_ppvehicle.yml`.
```
VEHICLE_ATTR:
model_dir: [YOUR_DEPLOY_MODEL_DIR]/PPLCNet_x1_0_vehicle_attribute_infer/ #The exported model location
enable: True #Whether to enable the function
```
At this point, the customized attribute recognition task is complete.
## Adding or deleting attributes
This is similar to the process of adding or removing pedestrian attributes.
If attributes need to be added or deleted, you need to:
1) Add or remove the corresponding attribute categories when annotating the data.
2) Update the number (and order) of attribute values in train.txt accordingly.
3) Update the training configuration, for example the number of attributes (`class_num`) in the `PaddleClas/ppcls/configs/PULC/vehicle_attribute/PPLCNet_x1_0.yaml` file; for details, please see the `Modify The Configuration To Start Training` section above.
Example of adding an attribute:
1. When annotating the data, append the new attribute annotation value(s) after the original 19 values.
2. Add the new attribute value(s) to the annotation values in the train.txt file as well.
3. The rest of the annotation and training process is the same as with 19 attributes.
<div width="500" align="center">
<img src="../../images/add_attribute.png"/>
</div>
The same applies to the deletion of attributes.
## Modifications to post-processing code
After modifying the attribute definition, the post-processing part of the pipeline also needs to be modified accordingly, which mainly affects the display results when the results are visualized.
The code is at [file](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/ppvehicle/vehicle_attr.py#L108), that is, the `postprocess` function.
The function implementation is described as follows:
```python
# The name of the color/model is defined in the initialization function of the class
self.color_list = [
"yellow", "orange", "green", "gray", "red", "blue", "white",
"golden", "brown", "black"
]
self.type_list = [
"sedan", "suv", "van", "hatchback", "mpv", "pickup", "bus", "truck",
"estate"
]
...
def postprocess(self, inputs, result):
# postprocess output of predictor
im_results = result['output']
batch_res = []
for res in im_results:
res = res.tolist()
attr_res = []
color_res_str = "Color: "
type_res_str = "Type: "
color_idx = np.argmax(res[:10]) # The first 10 items represent the color scores, and the item with the largest score is used as the color result
type_idx = np.argmax(res[10:]) # The last 9 items represent the model scores, and the item with the largest score is used as the model result.
# The score of color and model need to be larger than the corresponding threshold, otherwise it will be regarded as 'UnKnown'
if res[color_idx] >= self.color_threshold:
color_res_str += self.color_list[color_idx]
else:
color_res_str += "Unknown"
attr_res.append(color_res_str)
if res[type_idx + 10] >= self.type_threshold:
type_res_str += self.type_list[type_idx]
else:
type_res_str += "Unknown"
attr_res.append(type_res_str)
batch_res.append(attr_res)
result = {'output': batch_res}
return result
```
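If the attribute definition changes, the name lists and slice boundaries above must change with it. The following sketch (hypothetical and simplified, not the shipped implementation; the 11th color "purple" is made up) parameterizes the same decoding logic by the number of color bits:
```python
import numpy as np

# Hypothetical sketch: same decoding logic as postprocess, parameterized by
# the number of color attributes so that adding or removing attributes only
# requires touching the name lists.
COLOR_LIST = ["yellow", "orange", "green", "gray", "red", "blue", "white",
              "golden", "brown", "black", "purple"]  # "purple": made-up 11th color
TYPE_LIST = ["sedan", "suv", "van", "hatchback", "mpv", "pickup", "bus",
             "truck", "estate"]
NUM_COLORS = len(COLOR_LIST)

def decode_attr(res, color_threshold=0.5, type_threshold=0.5):
    color_idx = int(np.argmax(res[:NUM_COLORS]))
    type_idx = int(np.argmax(res[NUM_COLORS:]))
    color = COLOR_LIST[color_idx] if res[color_idx] >= color_threshold else "Unknown"
    vtype = TYPE_LIST[type_idx] if res[type_idx + NUM_COLORS] >= type_threshold else "Unknown"
    return ["Color: " + color, "Type: " + vtype]
```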
| PaddleDetection/docs/advanced_tutorials/customization/ppvehicle_attribute_en.md/0 | {"file_path": "PaddleDetection/docs/advanced_tutorials/customization/ppvehicle_attribute_en.md", "repo_id": "PaddleDetection", "token_count": 3375} | 59 |
# Object detection grad_cam heatmap
## 1.Introduction
Calculate the CAM (class activation map) of the predicted bounding boxes based on the backbone/RoI feature map. Networks based on the FasterRCNN/MaskRCNN series, the PPYOLOE series, as well as BlazeFace, SSD and RetinaNet, are currently supported.
## 2.Usage
* Taking PP-YOLOE as an example: after preparing the data, specify the network configuration file, the model weights, the image path and the output folder, then run tools/cam_ppdet.py to compute the Grad-CAM heat map of the prediction boxes. An example command is shown below.
```shell
python tools/cam_ppdet.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --infer_img demo/000000014439.jpg --cam_out cam_ppyoloe --target_feature_layer_name model.backbone -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
```
* **Arguments**
| FLAG | description |
| :----: | :---- |
| -c | Select config file |
| --infer_img | Image path |
| --cam_out | Directory for output |
| --target_feature_layer_name | The feature map position used for Grad-CAM, e.g. model.backbone or model.bbox_head.roi_extractor |
| -o | Set parameters in the config file, for example: -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams |
* result
<center>
<img src="../images/grad_cam_ppyoloe_demo.jpg" width="500" >
</center>
<br><center>cam_ppyoloe/225.jpg</center></br>
## 3.Supported networks: FasterRCNN/MaskRCNN series, PPYOLOE series, BlazeFace, SSD and RetinaNet
* PPYOLOE bbox heat map visualization script (with backbone featuremap)
```bash
python tools/cam_ppdet.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --infer_img demo/000000014439.jpg --cam_out cam_ppyoloe -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
```
* MaskRCNN bbox heat map visualization script (with roi featuremap)
```bash
python tools/cam_ppdet.py -c configs/mask_rcnn/mask_rcnn_r50_vd_fpn_2x_coco.yml --infer_img demo/000000014439.jpg --cam_out cam_mask_rcnn_roi --target_feature_layer_name model.bbox_head.roi_extractor -o weights=https://paddledet.bj.bcebos.com/models/mask_rcnn_r50_vd_fpn_2x_coco.pdparams
```
* MaskRCNN bbox heat map visualization script (with backbone featuremap)
```bash
python tools/cam_ppdet.py -c configs/mask_rcnn/mask_rcnn_r50_vd_fpn_2x_coco.yml --infer_img demo/000000014439.jpg --cam_out cam_mask_rcnn_backbone --target_feature_layer_name model.backbone -o weights=https://paddledet.bj.bcebos.com/models/mask_rcnn_r50_vd_fpn_2x_coco.pdparams
```
* FasterRCNN bbox heat map visualization script (with roi featuremap)
```bash
python tools/cam_ppdet.py -c configs/faster_rcnn/faster_rcnn_r50_vd_fpn_2x_coco.yml --infer_img demo/000000014439.jpg --cam_out cam_faster_rcnn_roi --target_feature_layer_name model.bbox_head.roi_extractor -o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams
```
* FasterRCNN bbox heat map visualization script (with backbone featuremap)
```bash
python tools/cam_ppdet.py -c configs/faster_rcnn/faster_rcnn_r50_vd_fpn_2x_coco.yml --infer_img demo/000000014439.jpg --cam_out cam_faster_rcnn_backbone --target_feature_layer_name model.backbone -o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams
```
* BlazeFace bbox heat map visualization script (with backbone featuremap)
```bash
python tools/cam_ppdet.py -c configs/face_detection/blazeface_1000e.yml --infer_img demo/hrnet_demo.jpg --cam_out cam_blazeface --target_feature_layer_name model.backbone -o weights=https://paddledet.bj.bcebos.com/models/blazeface_1000e.pdparams
```
* SSD bbox heat map visualization script (with backbone featuremap)
```bash
python tools/cam_ppdet.py -c configs/ssd/ssd_mobilenet_v1_300_120e_voc.yml --infer_img demo/000000014439.jpg --cam_out cam_ssd --target_feature_layer_name model.backbone -o weights=https://paddledet.bj.bcebos.com/models/ssd_mobilenet_v1_300_120e_voc.pdparams
```
* Retinanet bbox heat map visualization script (with backbone featuremap)
```bash
python tools/cam_ppdet.py -c configs/retinanet/retinanet_r50_fpn_2x_coco.yml --infer_img demo/000000014439.jpg --cam_out cam_retinanet --target_feature_layer_name model.backbone -o weights=https://bj.bcebos.com/v1/paddledet/models/retinanet_r50_fpn_2x_coco.pdparams
```
| PaddleDetection/docs/tutorials/GradCAM_en.md/0 | {"file_path": "PaddleDetection/docs/tutorials/GradCAM_en.md", "repo_id": "PaddleDetection", "token_count": 2254} | 60 |
[简体中文](KeyPointAnnoTools.md) | English
# Key Points Detection Annotation Tool
## Contents
[LabelMe](#LabelMe)
- [Instruction](#Instruction)
- [Installation](#Installation)
- [Notes of Key Points Data](#Notes-of-Key-Points-Data)
- [Annotation of LabelMe](#Annotation-of-LabelMe)
- [Annotation Format](#Annotation-Format)
- [Data Export Format](#Data-Export-Format)
- [Summary of Format Conversion](#Summary-of-Format-Conversion)
- [Annotation file(json)—>COCO Dataset](#annotation-filejsoncoco-dataset)
## [LabelMe](https://github.com/wkentaro/labelme)
### Instruction
#### Installation
Please refer to [The github of LabelMe](https://github.com/wkentaro/labelme) for installation details.
<details>
<summary><b> Ubuntu</b></summary>
```
sudo apt-get install labelme
# or
sudo pip3 install labelme
# or install standalone executable from:
# https://github.com/wkentaro/labelme/releases
```
</details>
<details>
<summary><b> macOS</b></summary>
```
brew install pyqt # maybe pyqt5
pip install labelme
# or
brew install wkentaro/labelme/labelme # command line interface
# brew install --cask wkentaro/labelme/labelme # app
# or install standalone executable/app from:
# https://github.com/wkentaro/labelme/releases
```
</details>
We recommend installing it with Anaconda:
```
conda create --name=labelme python=3
conda activate labelme
pip install pyqt5
pip install labelme
```
#### Notes of Key Points Data
COCO dataset needs to collect 17 key points.
```
keypoint indexes:
0: 'nose',
1: 'left_eye',
2: 'right_eye',
3: 'left_ear',
4: 'right_ear',
5: 'left_shoulder',
6: 'right_shoulder',
7: 'left_elbow',
8: 'right_elbow',
9: 'left_wrist',
10: 'right_wrist',
11: 'left_hip',
12: 'right_hip',
13: 'left_knee',
14: 'right_knee',
15: 'left_ankle',
16: 'right_ankle'
```
#### Annotation of LabelMe
After starting labelme, select an image or a folder of images.
Select `create polygons` in the toolbar. Draw an annotation area as shown in the following GIF (you can right-click on the image to choose a different shape). When finished, press the Enter/Return key and fill in the corresponding label in the popup box, for example `people`.
Click the save button in the toolbar, and an annotation file in json format will be generated.

### Annotation Format
#### Data Export Format
```
#generate an annotation file
png/jpeg/jpg-->labelme-->json
```
#### Summary of Format Conversion
```
#convert annotation file to COCO dataset format
json-->labelme2coco.py-->COCO dataset
```
#### Annotation file(json)—>COCO Dataset
Convert the data annotated by LabelMe to COCO dataset by this script [x2coco.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/tools/x2coco.py).
```bash
python tools/x2coco.py \
--dataset_type labelme \
--json_input_dir ./labelme_annos/ \
--image_input_dir ./labelme_imgs/ \
--output_dir ./cocome/ \
--train_proportion 0.8 \
--val_proportion 0.2 \
--test_proportion 0.0
```
After the user dataset is converted to COCO data, the directory structure is as follows (note: avoid Chinese characters in path and file names, as they may cause encoding errors):
```
dataset/xxx/
├── annotations
│ ├── train.json # Annotation file of coco data
│ ├── valid.json # Annotation file of coco data
├── images
│ ├── xxx1.jpg
│ ├── xxx2.jpg
│ ├── xxx3.jpg
│ | ...
...
```
| PaddleDetection/docs/tutorials/data/KeyPointAnnoTools_en.md/0 | {"file_path": "PaddleDetection/docs/tutorials/data/KeyPointAnnoTools_en.md", "repo_id": "PaddleDetection", "token_count": 1465} | 61 |
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
import inspect
import importlib
import re
try:
from docstring_parser import parse as doc_parse
except Exception:
def doc_parse(*args):
pass
try:
from typeguard import check_type
except Exception:
def check_type(*args):
pass
__all__ = ['SchemaValue', 'SchemaDict', 'SharedConfig', 'extract_schema']
class SchemaValue(object):
def __init__(self, name, doc='', type=None):
super(SchemaValue, self).__init__()
self.name = name
self.doc = doc
self.type = type
def set_default(self, value):
self.default = value
def has_default(self):
return hasattr(self, 'default')
class SchemaDict(dict):
def __init__(self, **kwargs):
super(SchemaDict, self).__init__()
self.schema = {}
self.strict = False
self.doc = ""
self.update(kwargs)
def __setitem__(self, key, value):
# XXX also update regular dict to SchemaDict??
if isinstance(value, dict) and key in self and isinstance(self[key],
SchemaDict):
self[key].update(value)
else:
super(SchemaDict, self).__setitem__(key, value)
def __missing__(self, key):
if self.has_default(key):
return self.schema[key].default
elif key in self.schema:
return self.schema[key]
else:
raise KeyError(key)
def copy(self):
newone = SchemaDict()
newone.__dict__.update(self.__dict__)
newone.update(self)
return newone
def set_schema(self, key, value):
assert isinstance(value, SchemaValue)
self.schema[key] = value
def set_strict(self, strict):
self.strict = strict
def has_default(self, key):
return key in self.schema and self.schema[key].has_default()
def is_default(self, key):
if not self.has_default(key):
return False
if hasattr(self[key], '__dict__'):
return True
else:
return key not in self or self[key] == self.schema[key].default
def find_default_keys(self):
return [
k for k in list(self.keys()) + list(self.schema.keys())
if self.is_default(k)
]
def mandatory(self):
return any([k for k in self.schema.keys() if not self.has_default(k)])
def find_missing_keys(self):
missing = [
k for k in self.schema.keys()
if k not in self and not self.has_default(k)
]
placeholders = [k for k in self if self[k] in ('<missing>', '<value>')]
return missing + placeholders
def find_extra_keys(self):
return list(set(self.keys()) - set(self.schema.keys()))
def find_mismatch_keys(self):
mismatch_keys = []
for arg in self.schema.values():
if arg.type is not None:
try:
check_type("{}.{}".format(self.name, arg.name),
self[arg.name], arg.type)
except Exception:
mismatch_keys.append(arg.name)
return mismatch_keys
def validate(self):
missing_keys = self.find_missing_keys()
if missing_keys:
raise ValueError("Missing param for class<{}>: {}".format(
self.name, ", ".join(missing_keys)))
extra_keys = self.find_extra_keys()
if extra_keys and self.strict:
raise ValueError("Extraneous param for class<{}>: {}".format(
self.name, ", ".join(extra_keys)))
mismatch_keys = self.find_mismatch_keys()
if mismatch_keys:
raise TypeError("Wrong param type for class<{}>: {}".format(
self.name, ", ".join(mismatch_keys)))
class SharedConfig(object):
"""
Representation class for `__shared__` annotations, which work as follows:
- if `key` is set for the module in config file, its value will take
precedence
- if `key` is not set for the module but present in the config file, its
value will be used
- otherwise, use the provided `default_value` as fallback
Args:
key: config[key] will be injected
default_value: fallback value
"""
def __init__(self, key, default_value=None):
super(SharedConfig, self).__init__()
self.key = key
self.default_value = default_value
def extract_schema(cls):
"""
Extract schema from a given class
Args:
cls (type): Class from which to extract.
Returns:
schema (SchemaDict): Extracted schema.
"""
ctor = cls.__init__
# python 2 compatibility
if hasattr(inspect, 'getfullargspec'):
argspec = inspect.getfullargspec(ctor)
annotations = argspec.annotations
has_kwargs = argspec.varkw is not None
else:
argspec = inspect.getfullargspec(ctor)
# python 2 type hinting workaround, see pep-3107
# however, since `typeguard` does not support python 2, type checking
# is still python 3 only for now
annotations = getattr(ctor, '__annotations__', {})
has_kwargs = argspec.varkw is not None
names = [arg for arg in argspec.args if arg != 'self']
defaults = argspec.defaults
num_defaults = argspec.defaults is not None and len(argspec.defaults) or 0
num_required = len(names) - num_defaults
docs = cls.__doc__
if docs is None and getattr(cls, '__category__', None) == 'op':
docs = cls.__call__.__doc__
try:
docstring = doc_parse(docs)
except Exception:
docstring = None
if docstring is None:
comments = {}
else:
comments = {}
for p in docstring.params:
match_obj = re.match('^([a-zA-Z_]+[a-zA-Z_0-9]*).*', p.arg_name)
if match_obj is not None:
comments[match_obj.group(1)] = p.description
schema = SchemaDict()
schema.name = cls.__name__
schema.doc = ""
if docs is not None:
start_pos = docs[0] == '\n' and 1 or 0
schema.doc = docs[start_pos:].split("\n")[0].strip()
# XXX handle paddle's weird doc convention
if '**' == schema.doc[:2] and '**' == schema.doc[-2:]:
schema.doc = schema.doc[2:-2].strip()
schema.category = hasattr(cls, '__category__') and getattr(
cls, '__category__') or 'module'
schema.strict = not has_kwargs
schema.pymodule = importlib.import_module(cls.__module__)
schema.inject = getattr(cls, '__inject__', [])
schema.shared = getattr(cls, '__shared__', [])
for idx, name in enumerate(names):
comment = name in comments and comments[name] or name
if name in schema.inject:
type_ = None
else:
type_ = name in annotations and annotations[name] or None
value_schema = SchemaValue(name, comment, type_)
if name in schema.shared:
assert idx >= num_required, "shared config must have default value"
default = defaults[idx - num_required]
value_schema.set_default(SharedConfig(name, default))
elif idx >= num_required:
default = defaults[idx - num_required]
value_schema.set_default(default)
schema.set_schema(name, value_schema)
return schema
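# ---------------------------------------------------------------------------
# Usage sketch (illustrative only; `MyHead` below is a hypothetical class, not
# part of PaddleDetection). `extract_schema` reads the constructor signature,
# type annotations and docstring of a module class and returns a SchemaDict
# recording defaults, shared keys and injected keys:
#
#     class MyHead(object):
#         """A toy head.
#
#         Args:
#             num_classes (int): number of classes
#             scale (float): loss scale
#         """
#         __shared__ = ['num_classes']
#
#         def __init__(self, num_classes=80, scale=1.0):
#             pass
#
#     schema = extract_schema(MyHead)
#     print(schema.name)       # 'MyHead'
#     print(schema['scale'])   # 1.0, filled from the constructor default
#     print(schema.shared)     # ['num_classes']
# ---------------------------------------------------------------------------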
| PaddleDetection/ppdet/core/config/schema.py/0 | {"file_path": "PaddleDetection/ppdet/core/config/schema.py", "repo_id": "PaddleDetection", "token_count": 3456} | 62 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
import cv2
import glob
import numpy as np
from collections import OrderedDict, defaultdict
try:
from collections.abc import Sequence
except Exception:
from collections import Sequence
from .dataset import DetDataset, _make_dataset, _is_valid_file
from ppdet.core.workspace import register, serializable
from ppdet.utils.logger import setup_logger
logger = setup_logger(__name__)
@register
@serializable
class MOTDataSet(DetDataset):
"""
Load dataset with MOT format, only support single class MOT.
Args:
dataset_dir (str): root directory for dataset.
image_lists (str|list): mot data image lists, muiti-source mot dataset.
data_fields (list): key name of data dictionary, at least have 'image'.
sample_num (int): number of samples to load, -1 means all.
repeat (int): repeat times for dataset, use in benchmark.
Notes:
MOT datasets root directory following this:
dataset/mot
|——————image_lists
| |——————caltech.train
| |——————caltech.val
| |——————mot16.train
| |——————mot17.train
| ......
|——————Caltech
|——————MOT17
|——————......
All the MOT datasets have the following structure:
Caltech
|——————images
| └——————00001.jpg
| |—————— ...
| └——————0000N.jpg
└——————labels_with_ids
└——————00001.txt
|—————— ...
└——————0000N.txt
or
MOT17
|——————images
| └——————train
| └——————test
└——————labels_with_ids
└——————train
"""
def __init__(self,
dataset_dir=None,
image_lists=[],
data_fields=['image'],
sample_num=-1,
repeat=1):
super(MOTDataSet, self).__init__(
dataset_dir=dataset_dir,
data_fields=data_fields,
sample_num=sample_num,
repeat=repeat)
self.dataset_dir = dataset_dir
self.image_lists = image_lists
if isinstance(self.image_lists, str):
self.image_lists = [self.image_lists]
self.roidbs = None
self.cname2cid = None
def get_anno(self):
if self.image_lists == []:
return
# only used to get categories and metric
# only check first data, but the label_list of all data should be same.
first_mot_data = self.image_lists[0].split('.')[0]
anno_file = os.path.join(self.dataset_dir, first_mot_data,
'label_list.txt')
return anno_file
def parse_dataset(self):
self.img_files = OrderedDict()
self.img_start_index = OrderedDict()
self.label_files = OrderedDict()
self.tid_num = OrderedDict()
self.tid_start_index = OrderedDict()
img_index = 0
for data_name in self.image_lists:
# check every data image list
image_lists_dir = os.path.join(self.dataset_dir, 'image_lists')
assert os.path.isdir(image_lists_dir), \
"The {} is not a directory.".format(image_lists_dir)
list_path = os.path.join(image_lists_dir, data_name)
assert os.path.exists(list_path), \
"The list path {} does not exist.".format(list_path)
# record img_files, filter out empty ones
with open(list_path, 'r') as file:
self.img_files[data_name] = file.readlines()
self.img_files[data_name] = [
os.path.join(self.dataset_dir, x.strip())
for x in self.img_files[data_name]
]
self.img_files[data_name] = list(
filter(lambda x: len(x) > 0, self.img_files[data_name]))
self.img_start_index[data_name] = img_index
img_index += len(self.img_files[data_name])
# record label_files
self.label_files[data_name] = [
x.replace('images', 'labels_with_ids').replace(
'.png', '.txt').replace('.jpg', '.txt')
for x in self.img_files[data_name]
]
for data_name, label_paths in self.label_files.items():
max_index = -1
for lp in label_paths:
lb = np.loadtxt(lp)
if len(lb) < 1:
continue
if len(lb.shape) < 2:
img_max = lb[1]
else:
img_max = np.max(lb[:, 1])
if img_max > max_index:
max_index = img_max
self.tid_num[data_name] = int(max_index + 1)
last_index = 0
for i, (k, v) in enumerate(self.tid_num.items()):
self.tid_start_index[k] = last_index
last_index += v
self.num_identities_dict = defaultdict(int)
self.num_identities_dict[0] = int(last_index + 1) # single class
self.num_imgs_each_data = [len(x) for x in self.img_files.values()]
self.total_imgs = sum(self.num_imgs_each_data)
logger.info('MOT dataset summary: ')
logger.info(self.tid_num)
logger.info('Total images: {}'.format(self.total_imgs))
logger.info('Image start index: {}'.format(self.img_start_index))
logger.info('Total identities: {}'.format(self.num_identities_dict[0]))
logger.info('Identity start index: {}'.format(self.tid_start_index))
records = []
cname2cid = mot_label()
for img_index in range(self.total_imgs):
for i, (k, v) in enumerate(self.img_start_index.items()):
if img_index >= v:
data_name = list(self.label_files.keys())[i]
start_index = v
img_file = self.img_files[data_name][img_index - start_index]
lbl_file = self.label_files[data_name][img_index - start_index]
if not os.path.exists(img_file):
logger.warning('Illegal image file: {}, and it will be ignored'.
format(img_file))
continue
if not os.path.isfile(lbl_file):
logger.warning('Illegal label file: {}, and it will be ignored'.
format(lbl_file))
continue
labels = np.loadtxt(lbl_file, dtype=np.float32).reshape(-1, 6)
# each row in labels (N, 6) is [gt_class, gt_identity, cx, cy, w, h]
cx, cy = labels[:, 2], labels[:, 3]
w, h = labels[:, 4], labels[:, 5]
gt_bbox = np.stack((cx, cy, w, h)).T.astype('float32')
gt_class = labels[:, 0:1].astype('int32')
gt_score = np.ones((len(labels), 1)).astype('float32')
gt_ide = labels[:, 1:2].astype('int32')
for i, _ in enumerate(gt_ide):
if gt_ide[i] > -1:
gt_ide[i] += self.tid_start_index[data_name]
mot_rec = {
'im_file': img_file,
'im_id': img_index,
} if 'image' in self.data_fields else {}
gt_rec = {
'gt_class': gt_class,
'gt_score': gt_score,
'gt_bbox': gt_bbox,
'gt_ide': gt_ide,
}
for k, v in gt_rec.items():
if k in self.data_fields:
mot_rec[k] = v
records.append(mot_rec)
if self.sample_num > 0 and img_index >= self.sample_num:
break
assert len(records) > 0, 'not found any mot record in %s' % (
self.image_lists)
self.roidbs, self.cname2cid = records, cname2cid
@register
@serializable
class MCMOTDataSet(DetDataset):
"""
Load dataset with MOT format, support multi-class MOT.
Args:
dataset_dir (str): root directory for dataset.
image_lists (list(str)): mcmot data image lists, muiti-source mcmot dataset.
data_fields (list): key name of data dictionary, at least have 'image'.
label_list (str): if use_default_label is False, will load
mapping between category and class index.
sample_num (int): number of samples to load, -1 means all.
Notes:
MCMOT datasets root directory following this:
dataset/mot
|——————image_lists
| |——————visdrone_mcmot.train
| |——————visdrone_mcmot.val
visdrone_mcmot
|——————images
| └——————train
| └——————val
└——————labels_with_ids
└——————train
"""
def __init__(self,
dataset_dir=None,
image_lists=[],
data_fields=['image'],
label_list=None,
sample_num=-1):
super(MCMOTDataSet, self).__init__(
dataset_dir=dataset_dir,
data_fields=data_fields,
sample_num=sample_num)
self.dataset_dir = dataset_dir
self.image_lists = image_lists
if isinstance(self.image_lists, str):
self.image_lists = [self.image_lists]
self.label_list = label_list
self.roidbs = None
self.cname2cid = None
def get_anno(self):
if self.image_lists == []:
return
# only used to get categories and metric
# only check first data, but the label_list of all data should be same.
first_mot_data = self.image_lists[0].split('.')[0]
anno_file = os.path.join(self.dataset_dir, first_mot_data,
'label_list.txt')
return anno_file
def parse_dataset(self):
self.img_files = OrderedDict()
self.img_start_index = OrderedDict()
self.label_files = OrderedDict()
self.tid_num = OrderedDict()
self.tid_start_idx_of_cls_ids = defaultdict(dict) # for MCMOT
img_index = 0
for data_name in self.image_lists:
# check every data image list
image_lists_dir = os.path.join(self.dataset_dir, 'image_lists')
assert os.path.isdir(image_lists_dir), \
"The {} is not a directory.".format(image_lists_dir)
list_path = os.path.join(image_lists_dir, data_name)
assert os.path.exists(list_path), \
"The list path {} does not exist.".format(list_path)
# record img_files, filter out empty ones
with open(list_path, 'r') as file:
self.img_files[data_name] = file.readlines()
self.img_files[data_name] = [
os.path.join(self.dataset_dir, x.strip())
for x in self.img_files[data_name]
]
self.img_files[data_name] = list(
filter(lambda x: len(x) > 0, self.img_files[data_name]))
self.img_start_index[data_name] = img_index
img_index += len(self.img_files[data_name])
# record label_files
self.label_files[data_name] = [
x.replace('images', 'labels_with_ids').replace(
'.png', '.txt').replace('.jpg', '.txt')
for x in self.img_files[data_name]
]
for data_name, label_paths in self.label_files.items():
# using max_ids_dict rather than max_index
max_ids_dict = defaultdict(int)
for lp in label_paths:
lb = np.loadtxt(lp)
if len(lb) < 1:
continue
lb = lb.reshape(-1, 6)
for item in lb:
if item[1] > max_ids_dict[int(item[0])]:
# item[0]: cls_id
# item[1]: track id
max_ids_dict[int(item[0])] = int(item[1])
# track id number
self.tid_num[data_name] = max_ids_dict
last_idx_dict = defaultdict(int)
for i, (k, v) in enumerate(self.tid_num.items()): # each sub dataset
for cls_id, id_num in v.items(): # v is a max_ids_dict
self.tid_start_idx_of_cls_ids[k][cls_id] = last_idx_dict[cls_id]
last_idx_dict[cls_id] += id_num
self.num_identities_dict = defaultdict(int)
for k, v in last_idx_dict.items():
self.num_identities_dict[k] = int(v) # total ids of each category
self.num_imgs_each_data = [len(x) for x in self.img_files.values()]
self.total_imgs = sum(self.num_imgs_each_data)
# cname2cid and cid2cname
cname2cid = {}
if self.label_list is not None:
# if use label_list for multi source mix dataset,
# please make sure label_list in the first sub_dataset at least.
sub_dataset = self.image_lists[0].split('.')[0]
label_path = os.path.join(self.dataset_dir, sub_dataset,
self.label_list)
if not os.path.exists(label_path):
logger.info(
"Note: label_list {} does not exists, use VisDrone 10 classes labels as default.".
format(label_path))
cname2cid = visdrone_mcmot_label()
else:
with open(label_path, 'r') as fr:
label_id = 0
for line in fr.readlines():
cname2cid[line.strip()] = label_id
label_id += 1
else:
cname2cid = visdrone_mcmot_label()
cid2cname = dict([(v, k) for (k, v) in cname2cid.items()])
logger.info('MCMOT dataset summary: ')
logger.info(self.tid_num)
logger.info('Total images: {}'.format(self.total_imgs))
logger.info('Image start index: {}'.format(self.img_start_index))
logger.info('Total identities of each category: ')
num_identities_dict = sorted(
self.num_identities_dict.items(), key=lambda x: x[0])
total_IDs_all_cats = 0
for (k, v) in num_identities_dict:
logger.info('Category {} [{}] has {} IDs.'.format(k, cid2cname[k],
v))
total_IDs_all_cats += v
logger.info('Total identities of all categories: {}'.format(
total_IDs_all_cats))
logger.info('Identity start index of each category: ')
for k, v in self.tid_start_idx_of_cls_ids.items():
sorted_v = sorted(v.items(), key=lambda x: x[0])
for (cls_id, start_idx) in sorted_v:
logger.info('Start index of dataset {} category {:d} is {:d}'
.format(k, cls_id, start_idx))
records = []
for img_index in range(self.total_imgs):
for i, (k, v) in enumerate(self.img_start_index.items()):
if img_index >= v:
data_name = list(self.label_files.keys())[i]
start_index = v
img_file = self.img_files[data_name][img_index - start_index]
lbl_file = self.label_files[data_name][img_index - start_index]
if not os.path.exists(img_file):
logger.warning('Illegal image file: {}, and it will be ignored'.
format(img_file))
continue
if not os.path.isfile(lbl_file):
logger.warning('Illegal label file: {}, and it will be ignored'.
format(lbl_file))
continue
labels = np.loadtxt(lbl_file, dtype=np.float32).reshape(-1, 6)
# each row in labels (N, 6) is [gt_class, gt_identity, cx, cy, w, h]
cx, cy = labels[:, 2], labels[:, 3]
w, h = labels[:, 4], labels[:, 5]
gt_bbox = np.stack((cx, cy, w, h)).T.astype('float32')
gt_class = labels[:, 0:1].astype('int32')
gt_score = np.ones((len(labels), 1)).astype('float32')
gt_ide = labels[:, 1:2].astype('int32')
for i, _ in enumerate(gt_ide):
if gt_ide[i] > -1:
cls_id = int(gt_class[i])
start_idx = self.tid_start_idx_of_cls_ids[data_name][cls_id]
gt_ide[i] += start_idx
mot_rec = {
'im_file': img_file,
'im_id': img_index,
} if 'image' in self.data_fields else {}
gt_rec = {
'gt_class': gt_class,
'gt_score': gt_score,
'gt_bbox': gt_bbox,
'gt_ide': gt_ide,
}
for k, v in gt_rec.items():
if k in self.data_fields:
mot_rec[k] = v
records.append(mot_rec)
if self.sample_num > 0 and img_index >= self.sample_num:
break
assert len(records) > 0, 'not found any mot record in %s' % (
self.image_lists)
self.roidbs, self.cname2cid = records, cname2cid
@register
@serializable
class MOTImageFolder(DetDataset):
"""
Load MOT dataset with MOT format from image folder or video .
Args:
video_file (str): path of the video file, default ''.
frame_rate (int): frame rate of the video, use cv2 VideoCapture if not set.
dataset_dir (str): root directory for dataset.
keep_ori_im (bool): whether to keep original image, default False.
Set True when used during MOT model inference while saving
images or video, or used in DeepSORT.
"""
def __init__(self,
video_file=None,
frame_rate=-1,
dataset_dir=None,
data_root=None,
image_dir=None,
sample_num=-1,
keep_ori_im=False,
anno_path=None,
**kwargs):
super(MOTImageFolder, self).__init__(
dataset_dir, image_dir, sample_num=sample_num)
self.video_file = video_file
self.data_root = data_root
self.keep_ori_im = keep_ori_im
self._imid2path = {}
self.roidbs = None
self.frame_rate = frame_rate
self.anno_path = anno_path
def check_or_download_dataset(self):
return
def parse_dataset(self, ):
if not self.roidbs:
if self.video_file is None:
self.frame_rate = 30 # set as default if infer image folder
self.roidbs = self._load_images()
else:
self.roidbs = self._load_video_images()
def _load_video_images(self):
if self.frame_rate == -1:
# if frame_rate is not set for video, use cv2.VideoCapture
cap = cv2.VideoCapture(self.video_file)
self.frame_rate = int(cap.get(cv2.CAP_PROP_FPS))
extension = self.video_file.split('.')[-1]
output_path = self.video_file.replace('.{}'.format(extension), '')
frames_path = video2frames(self.video_file, output_path,
self.frame_rate)
self.video_frames = sorted(
glob.glob(os.path.join(frames_path, '*.png')))
self.video_length = len(self.video_frames)
logger.info('Length of the video: {:d} frames.'.format(
self.video_length))
ct = 0
records = []
for image in self.video_frames:
assert image != '' and os.path.isfile(image), \
"Image {} not found".format(image)
if self.sample_num > 0 and ct >= self.sample_num:
break
rec = {'im_id': np.array([ct]), 'im_file': image}
if self.keep_ori_im:
rec.update({'keep_ori_im': 1})
self._imid2path[ct] = image
ct += 1
records.append(rec)
assert len(records) > 0, "No image file found"
return records
def _find_images(self):
image_dir = self.image_dir
if not isinstance(image_dir, Sequence):
image_dir = [image_dir]
images = []
for im_dir in image_dir:
if os.path.isdir(im_dir):
im_dir = os.path.join(self.dataset_dir, im_dir)
images.extend(_make_dataset(im_dir))
elif os.path.isfile(im_dir) and _is_valid_file(im_dir):
images.append(im_dir)
return images
def _load_images(self):
images = self._find_images()
ct = 0
records = []
for image in images:
assert image != '' and os.path.isfile(image), \
"Image {} not found".format(image)
if self.sample_num > 0 and ct >= self.sample_num:
break
rec = {'im_id': np.array([ct]), 'im_file': image}
if self.keep_ori_im:
rec.update({'keep_ori_im': 1})
self._imid2path[ct] = image
ct += 1
records.append(rec)
assert len(records) > 0, "No image file found"
return records
def get_imid2path(self):
return self._imid2path
def set_images(self, images):
self.image_dir = images
self.roidbs = self._load_images()
def set_video(self, video_file, frame_rate):
# update video_file and frame_rate by command line of tools/infer_mot.py
self.video_file = video_file
self.frame_rate = frame_rate
assert os.path.isfile(self.video_file) and _is_valid_video(self.video_file), \
"wrong or unsupported file format: {}".format(self.video_file)
self.roidbs = self._load_video_images()
def get_anno(self):
return self.anno_path
def _is_valid_video(f, extensions=('.mp4', '.avi', '.mov', '.rmvb', '.flv')):
return f.lower().endswith(extensions)
def video2frames(video_path, outpath, frame_rate, **kargs):
def _dict2str(kargs):
cmd_str = ''
for k, v in kargs.items():
cmd_str += (' ' + str(k) + ' ' + str(v))
return cmd_str
ffmpeg = ['ffmpeg ', ' -y -loglevel ', ' error ']
vid_name = os.path.basename(video_path).split('.')[0]
out_full_path = os.path.join(outpath, vid_name)
if not os.path.exists(out_full_path):
os.makedirs(out_full_path)
# video file name
outformat = os.path.join(out_full_path, '%08d.png')
cmd = ffmpeg
cmd = ffmpeg + [
' -i ', video_path, ' -r ', str(frame_rate), ' -f image2 ', outformat
]
cmd = ''.join(cmd) + _dict2str(kargs)
if os.system(cmd) != 0:
raise RuntimeError('ffmpeg process video: {} error'.format(video_path))
sys.exit(-1)
sys.stdout.flush()
return out_full_path
def mot_label():
labels_map = {'person': 0}
return labels_map
def visdrone_mcmot_label():
labels_map = {
'pedestrian': 0,
'people': 1,
'bicycle': 2,
'car': 3,
'van': 4,
'truck': 5,
'tricycle': 6,
'awning-tricycle': 7,
'bus': 8,
'motor': 9,
}
return labels_map
| PaddleDetection/ppdet/data/source/mot.py/0 | {"file_path": "PaddleDetection/ppdet/data/source/mot.py", "repo_id": "PaddleDetection", "token_count": 12445} | 63 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
try:
from collections.abc import Sequence
except Exception:
from collections import Sequence
from numbers import Number, Integral
import cv2
import numpy as np
import math
import copy
import os
from PIL import Image, ImageDraw  # used by VisibleRBox for debug visualization
from .operators import register_op, BaseOperator
from ppdet.modeling.rbox_utils import poly2rbox_le135_np, poly2rbox_oc_np, rbox2poly_np
from ppdet.utils.logger import setup_logger
from ppdet.utils.compact import imagedraw_textsize_c
logger = setup_logger(__name__)
@register_op
class RRotate(BaseOperator):
""" Rotate Image, Polygon, Box
Args:
scale (float): rotate scale
angle (float): rotate angle
fill_value (int, tuple): fill color
auto_bound (bool): whether auto bound or not
"""
def __init__(self, scale=1.0, angle=0., fill_value=0., auto_bound=True):
super(RRotate, self).__init__()
self.scale = scale
self.angle = angle
self.fill_value = fill_value
self.auto_bound = auto_bound
def get_rotated_matrix(self, angle, scale, h, w):
center = ((w - 1) * 0.5, (h - 1) * 0.5)
matrix = cv2.getRotationMatrix2D(center, -angle, scale)
# calculate the new size
cos = np.abs(matrix[0, 0])
sin = np.abs(matrix[0, 1])
new_w = h * sin + w * cos
new_h = h * cos + w * sin
# calculate offset
n_w = int(np.round(new_w))
n_h = int(np.round(new_h))
if self.auto_bound:
ratio = min(w / n_w, h / n_h)
matrix = cv2.getRotationMatrix2D(center, -angle, ratio)
else:
matrix[0, 2] += (new_w - w) * 0.5
matrix[1, 2] += (new_h - h) * 0.5
w = n_w
h = n_h
return matrix, h, w
def get_rect_from_pts(self, pts, h, w):
""" get minimum rectangle of points
"""
assert pts.shape[-1] % 2 == 0, 'the dim of input [pts] is not correct'
min_x, min_y = np.min(pts[:, 0::2], axis=1), np.min(pts[:, 1::2],
axis=1)
max_x, max_y = np.max(pts[:, 0::2], axis=1), np.max(pts[:, 1::2],
axis=1)
min_x, min_y = np.clip(min_x, 0, w), np.clip(min_y, 0, h)
max_x, max_y = np.clip(max_x, 0, w), np.clip(max_y, 0, h)
boxes = np.stack([min_x, min_y, max_x, max_y], axis=-1)
return boxes
def apply_image(self, image, matrix, h, w):
return cv2.warpAffine(
image, matrix, (w, h), borderValue=self.fill_value)
def apply_pts(self, pts, matrix, h, w):
assert pts.shape[-1] % 2 == 0, 'the dim of input [pts] is not correct'
# n is number of samples and m is two times the number of points due to (x, y)
_, m = pts.shape
# transpose points
pts_ = pts.reshape(-1, 2).T
# pad 1 to convert the points to homogeneous coordinates
padding = np.ones((1, pts_.shape[1]), pts.dtype)
rotated_pts = np.matmul(matrix, np.concatenate((pts_, padding), axis=0))
return rotated_pts[:2, :].T.reshape(-1, m)
def apply(self, sample, context=None):
image = sample['image']
h, w = image.shape[:2]
matrix, h, w = self.get_rotated_matrix(self.angle, self.scale, h, w)
sample['image'] = self.apply_image(image, matrix, h, w)
polys = sample['gt_poly']
# TODO: segment or keypoint to be processed
if len(polys) > 0:
pts = self.apply_pts(polys, matrix, h, w)
sample['gt_poly'] = pts
sample['gt_bbox'] = self.get_rect_from_pts(pts, h, w)
return sample
@register_op
class RandomRRotate(BaseOperator):
""" Random Rotate Image
Args:
scale (float, tuple, list): rotate scale
scale_mode (str): mode of scale, [range, value, None]
angle (float, tuple, list): rotate angle
angle_mode (str): mode of angle, [range, value, None]
fill_value (float, tuple, list): fill value
rotate_prob (float): probability of rotation
auto_bound (bool): whether auto bound or not
"""
def __init__(self,
scale=1.0,
scale_mode=None,
angle=0.,
angle_mode=None,
fill_value=0.,
rotate_prob=1.0,
auto_bound=True):
super(RandomRRotate, self).__init__()
self.scale = scale
self.scale_mode = scale_mode
self.angle = angle
self.angle_mode = angle_mode
self.fill_value = fill_value
self.rotate_prob = rotate_prob
self.auto_bound = auto_bound
def get_angle(self, angle, angle_mode):
assert not angle_mode or angle_mode in [
'range', 'value'
], 'angle mode should be in [range, value, None]'
if not angle_mode:
return angle
elif angle_mode == 'range':
low, high = angle
return np.random.rand() * (high - low) + low
elif angle_mode == 'value':
return np.random.choice(angle)
def get_scale(self, scale, scale_mode):
assert not scale_mode or scale_mode in [
'range', 'value'
], 'scale mode should be in [range, value, None]'
if not scale_mode:
return scale
elif scale_mode == 'range':
low, high = scale
return np.random.rand() * (high - low) + low
elif scale_mode == 'value':
return np.random.choice(scale)
def apply(self, sample, context=None):
if np.random.rand() > self.rotate_prob:
return sample
angle = self.get_angle(self.angle, self.angle_mode)
scale = self.get_scale(self.scale, self.scale_mode)
rotator = RRotate(scale, angle, self.fill_value, self.auto_bound)
return rotator(sample)
@register_op
class Poly2RBox(BaseOperator):
""" Polygon to Rotated Box, using new OpenCV definition since 4.5.1
Args:
filter_threshold (int, float): threshold to filter annotations
filter_mode (str): filter mode, ['area', 'edge']
rbox_type (str): rbox type, ['le135', 'oc']
"""
def __init__(self, filter_threshold=4, filter_mode=None, rbox_type='le135'):
super(Poly2RBox, self).__init__()
self.filter_fn = lambda size: self.filter(size, filter_threshold, filter_mode)
self.rbox_fn = poly2rbox_le135_np if rbox_type == 'le135' else poly2rbox_oc_np
def filter(self, size, threshold, mode):
if mode == 'area':
if size[0] * size[1] < threshold:
return True
elif mode == 'edge':
if min(size) < threshold:
return True
return False
def get_rbox(self, polys):
valid_ids, rboxes, bboxes = [], [], []
for i, poly in enumerate(polys):
cx, cy, w, h, angle = self.rbox_fn(poly)
if self.filter_fn((w, h)):
continue
rboxes.append(np.array([cx, cy, w, h, angle], dtype=np.float32))
valid_ids.append(i)
xmin, ymin = min(poly[0::2]), min(poly[1::2])
xmax, ymax = max(poly[0::2]), max(poly[1::2])
bboxes.append(np.array([xmin, ymin, xmax, ymax], dtype=np.float32))
if len(valid_ids) == 0:
rboxes = np.zeros((0, 5), dtype=np.float32)
bboxes = np.zeros((0, 4), dtype=np.float32)
else:
rboxes = np.stack(rboxes)
bboxes = np.stack(bboxes)
return rboxes, bboxes, valid_ids
def apply(self, sample, context=None):
rboxes, bboxes, valid_ids = self.get_rbox(sample['gt_poly'])
sample['gt_rbox'] = rboxes
sample['gt_bbox'] = bboxes
for k in ['gt_class', 'gt_score', 'gt_poly', 'is_crowd', 'difficult']:
if k in sample:
sample[k] = sample[k][valid_ids]
return sample
@register_op
class Poly2Array(BaseOperator):
""" convert gt_poly to np.array for rotated bboxes
"""
def __init__(self):
super(Poly2Array, self).__init__()
def apply(self, sample, context=None):
if 'gt_poly' in sample:
sample['gt_poly'] = np.array(
sample['gt_poly'], dtype=np.float32).reshape((-1, 8))
return sample
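# Usage sketch (illustrative only; concrete values are made up): these
# operators are typically chained in the sample_transforms section of a
# rotated-box reader config, e.g.
#
#   sample_transforms:
#     - Poly2Array: {}
#     - RandomRFlip: {}
#     - RandomRRotate: {angle_mode: 'value', angle: [0, 90, 180, -90]}
#     - Poly2RBox: {filter_threshold: 2, filter_mode: 'edge', rbox_type: 'le135'}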
@register_op
class RResize(BaseOperator):
def __init__(self, target_size, keep_ratio, interp=cv2.INTER_LINEAR):
"""
Resize image to target size. if keep_ratio is True,
resize the image's long side to the maximum of target_size
if keep_ratio is False, resize the image to target size(h, w)
Args:
target_size (int|list): image target size
keep_ratio (bool): whether keep_ratio or not, default true
interp (int): the interpolation method
"""
super(RResize, self).__init__()
self.keep_ratio = keep_ratio
self.interp = interp
if not isinstance(target_size, (Integral, Sequence)):
raise TypeError(
"Type of target_size is invalid. Must be Integer or List or Tuple, now is {}".
format(type(target_size)))
if isinstance(target_size, Integral):
target_size = [target_size, target_size]
self.target_size = target_size
def apply_image(self, image, scale):
im_scale_x, im_scale_y = scale
return cv2.resize(
image,
None,
None,
fx=im_scale_x,
fy=im_scale_y,
interpolation=self.interp)
def apply_pts(self, pts, scale, size):
im_scale_x, im_scale_y = scale
resize_w, resize_h = size
pts[:, 0::2] *= im_scale_x
pts[:, 1::2] *= im_scale_y
pts[:, 0::2] = np.clip(pts[:, 0::2], 0, resize_w)
pts[:, 1::2] = np.clip(pts[:, 1::2], 0, resize_h)
return pts
def apply(self, sample, context=None):
""" Resize the image numpy.
"""
im = sample['image']
if not isinstance(im, np.ndarray):
raise TypeError("{}: image type is not numpy.".format(self))
if len(im.shape) != 3:
raise ImageError('{}: image is not 3-dimensional.'.format(self))
# apply image
im_shape = im.shape
if self.keep_ratio:
im_size_min = np.min(im_shape[0:2])
im_size_max = np.max(im_shape[0:2])
target_size_min = np.min(self.target_size)
target_size_max = np.max(self.target_size)
im_scale = min(target_size_min / im_size_min,
target_size_max / im_size_max)
resize_h = im_scale * float(im_shape[0])
resize_w = im_scale * float(im_shape[1])
im_scale_x = im_scale
im_scale_y = im_scale
else:
resize_h, resize_w = self.target_size
im_scale_y = resize_h / im_shape[0]
im_scale_x = resize_w / im_shape[1]
im = self.apply_image(sample['image'], [im_scale_x, im_scale_y])
sample['image'] = im.astype(np.float32)
sample['im_shape'] = np.asarray([resize_h, resize_w], dtype=np.float32)
if 'scale_factor' in sample:
scale_factor = sample['scale_factor']
sample['scale_factor'] = np.asarray(
[scale_factor[0] * im_scale_y, scale_factor[1] * im_scale_x],
dtype=np.float32)
else:
sample['scale_factor'] = np.asarray(
[im_scale_y, im_scale_x], dtype=np.float32)
# apply bbox
if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
sample['gt_bbox'] = self.apply_pts(sample['gt_bbox'],
[im_scale_x, im_scale_y],
[resize_w, resize_h])
# apply polygon
if 'gt_poly' in sample and len(sample['gt_poly']) > 0:
sample['gt_poly'] = self.apply_pts(sample['gt_poly'],
[im_scale_x, im_scale_y],
[resize_w, resize_h])
return sample
@register_op
class RandomRFlip(BaseOperator):
def __init__(self, prob=0.5):
"""
Args:
prob (float): the probability of flipping image
"""
super(RandomRFlip, self).__init__()
self.prob = prob
if not (isinstance(self.prob, float)):
raise TypeError("{}: input type is invalid.".format(self))
def apply_image(self, image):
return image[:, ::-1, :]
def apply_pts(self, pts, width):
oldx = pts[:, 0::2].copy()
pts[:, 0::2] = width - oldx - 1
return pts
def apply(self, sample, context=None):
"""Filp the image and bounding box.
Operators:
1. Flip the image numpy.
2. Transform the bboxes' x coordinates.
(Must judge whether the coordinates are normalized!)
3. Transform the segmentations' x coordinates.
(Must judge whether the coordinates are normalized!)
Output:
sample: the image, bounding box and segmentation part
in sample are flipped.
"""
if np.random.uniform(0, 1) < self.prob:
im = sample['image']
height, width = im.shape[:2]
im = self.apply_image(im)
if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
sample['gt_bbox'] = self.apply_pts(sample['gt_bbox'], width)
if 'gt_poly' in sample and len(sample['gt_poly']) > 0:
sample['gt_poly'] = self.apply_pts(sample['gt_poly'], width)
sample['flipped'] = True
sample['image'] = im
return sample
@register_op
class VisibleRBox(BaseOperator):
"""
In debug mode, visualize images according to `gt_box`.
(Currently only supported when not cropping and flipping image.)
"""
def __init__(self, output_dir='debug'):
super(VisibleRBox, self).__init__()
self.output_dir = output_dir
if not os.path.isdir(output_dir):
os.makedirs(output_dir)
def apply(self, sample, context=None):
image = Image.fromarray(sample['image'].astype(np.uint8))
out_file_name = '{:012d}.jpg'.format(sample['im_id'][0])
width = sample['w']
height = sample['h']
# gt_poly = sample['gt_rbox']
gt_poly = sample['gt_poly']
gt_class = sample['gt_class']
draw = ImageDraw.Draw(image)
for i in range(gt_poly.shape[0]):
x1, y1, x2, y2, x3, y3, x4, y4 = gt_poly[i]
draw.line(
[(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x1, y1)],
width=2,
fill='green')
# draw label
xmin = min(x1, x2, x3, x4)
ymin = min(y1, y2, y3, y4)
text = str(gt_class[i][0])
tw, th = imagedraw_textsize_c(draw, text)
draw.rectangle(
[(xmin + 1, ymin - th), (xmin + tw + 1, ymin)], fill='green')
draw.text((xmin + 1, ymin - th), text, fill=(255, 255, 255))
if 'gt_keypoint' in sample.keys():
gt_keypoint = sample['gt_keypoint']
if self.is_normalized:
for i in range(gt_keypoint.shape[1]):
if i % 2:
gt_keypoint[:, i] = gt_keypoint[:, i] * height
else:
gt_keypoint[:, i] = gt_keypoint[:, i] * width
for i in range(gt_keypoint.shape[0]):
keypoint = gt_keypoint[i]
for j in range(int(keypoint.shape[0] / 2)):
x1 = round(keypoint[2 * j]).astype(np.int32)
y1 = round(keypoint[2 * j + 1]).astype(np.int32)
draw.ellipse(
(x1, y1, x1 + 5, y1 + 5), fill='green', outline='green')
save_path = os.path.join(self.output_dir, out_file_name)
image.save(save_path, quality=95)
return sample
@register_op
class Rbox2Poly(BaseOperator):
"""
Convert rbbox format to poly format.
"""
def __init__(self):
super(Rbox2Poly, self).__init__()
def apply(self, sample, context=None):
assert 'gt_rbox' in sample
assert sample['gt_rbox'].shape[1] == 5
rboxes = sample['gt_rbox']
polys = rbox2poly_np(rboxes)
sample['gt_poly'] = polys
xmin, ymin = polys[:, 0::2].min(1), polys[:, 1::2].min(1)
xmax, ymax = polys[:, 0::2].max(1), polys[:, 1::2].max(1)
        sample['gt_bbox'] = np.stack([xmin, ymin, xmax, ymax], axis=1)
return sample
| PaddleDetection/ppdet/data/transform/rotated_operators.py/0 | {"file_path": "PaddleDetection/ppdet/data/transform/rotated_operators.py", "repo_id": "PaddleDetection", "token_count": 8585} | 64 |
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// The code is based on
// https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/csrc/box_iou_rotated/
#include "paddle/extension.h"
#include "rbox_iou_utils.h"
// 2D block with 32 * 16 = 512 threads per block
const int BLOCK_DIM_X = 32;
const int BLOCK_DIM_Y = 16;
template <typename T>
__global__ void rbox_iou_cuda_kernel(const int rbox1_num, const int rbox2_num,
const T *rbox1_data_ptr,
const T *rbox2_data_ptr,
T *output_data_ptr) {
// get row_start and col_start
const int rbox1_block_idx = blockIdx.x * blockDim.x;
const int rbox2_block_idx = blockIdx.y * blockDim.y;
const int rbox1_thread_num = min(rbox1_num - rbox1_block_idx, blockDim.x);
const int rbox2_thread_num = min(rbox2_num - rbox2_block_idx, blockDim.y);
__shared__ T block_boxes1[BLOCK_DIM_X * 5];
__shared__ T block_boxes2[BLOCK_DIM_Y * 5];
// It's safe to copy using threadIdx.x since BLOCK_DIM_X >= BLOCK_DIM_Y
if (threadIdx.x < rbox1_thread_num && threadIdx.y == 0) {
block_boxes1[threadIdx.x * 5 + 0] =
rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 0];
block_boxes1[threadIdx.x * 5 + 1] =
rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 1];
block_boxes1[threadIdx.x * 5 + 2] =
rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 2];
block_boxes1[threadIdx.x * 5 + 3] =
rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 3];
block_boxes1[threadIdx.x * 5 + 4] =
rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 4];
}
// threadIdx.x < BLOCK_DIM_Y=rbox2_thread_num, just use same condition as
// above: threadIdx.y == 0
if (threadIdx.x < rbox2_thread_num && threadIdx.y == 0) {
block_boxes2[threadIdx.x * 5 + 0] =
rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 0];
block_boxes2[threadIdx.x * 5 + 1] =
rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 1];
block_boxes2[threadIdx.x * 5 + 2] =
rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 2];
block_boxes2[threadIdx.x * 5 + 3] =
rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 3];
block_boxes2[threadIdx.x * 5 + 4] =
rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 4];
}
// sync
__syncthreads();
if (threadIdx.x < rbox1_thread_num && threadIdx.y < rbox2_thread_num) {
int offset = (rbox1_block_idx + threadIdx.x) * rbox2_num + rbox2_block_idx +
threadIdx.y;
output_data_ptr[offset] = rbox_iou_single<T>(
block_boxes1 + threadIdx.x * 5, block_boxes2 + threadIdx.y * 5);
}
}
#define CHECK_INPUT_GPU(x) \
PD_CHECK(x.is_gpu(), #x " must be a GPU Tensor.")
std::vector<paddle::Tensor> RboxIouCUDAForward(const paddle::Tensor &rbox1,
const paddle::Tensor &rbox2) {
CHECK_INPUT_GPU(rbox1);
CHECK_INPUT_GPU(rbox2);
auto rbox1_num = rbox1.shape()[0];
auto rbox2_num = rbox2.shape()[0];
auto output =
paddle::empty({rbox1_num, rbox2_num}, rbox1.dtype(), paddle::GPUPlace());
const int blocks_x = CeilDiv(rbox1_num, BLOCK_DIM_X);
const int blocks_y = CeilDiv(rbox2_num, BLOCK_DIM_Y);
dim3 blocks(blocks_x, blocks_y);
dim3 threads(BLOCK_DIM_X, BLOCK_DIM_Y);
PD_DISPATCH_FLOATING_TYPES(
rbox1.type(), "rbox_iou_cuda_kernel", ([&] {
rbox_iou_cuda_kernel<data_t><<<blocks, threads, 0, rbox1.stream()>>>(
rbox1_num, rbox2_num, rbox1.data<data_t>(), rbox2.data<data_t>(),
output.data<data_t>());
}));
return {output};
}
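// Launch-geometry example (illustrative numbers): for rbox1_num = 1000 and
// rbox2_num = 500, blocks = (CeilDiv(1000, 32), CeilDiv(500, 16)) = (32, 32),
// so each thread block computes one 32 x 16 tile of the 1000 x 500 IoU
// matrix after staging its two tiles of boxes in shared memory.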
| PaddleDetection/ppdet/ext_op/csrc/rbox_iou/rbox_iou.cu/0 | {"file_path": "PaddleDetection/ppdet/ext_op/csrc/rbox_iou/rbox_iou.cu", "repo_id": "PaddleDetection", "token_count": 2034} | 65 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import cv2
import numpy as np
from collections import OrderedDict
import paddle
from ppdet.utils.logger import setup_logger
logger = setup_logger(__name__)
__all__ = ['face_eval_run', 'lmk2out']
def face_eval_run(model,
image_dir,
gt_file,
pred_dir='output/pred',
eval_mode='widerface',
multi_scale=False):
# load ground truth files
with open(gt_file, 'r') as f:
gt_lines = f.readlines()
imid2path = []
pos_gt = 0
while pos_gt < len(gt_lines):
name_gt = gt_lines[pos_gt].strip('\n\t').split()[0]
imid2path.append(name_gt)
pos_gt += 1
n_gt = int(gt_lines[pos_gt].strip('\n\t').split()[0])
pos_gt += 1 + n_gt
    logger.info('The ground truth file contains {} images'.format(len(imid2path)))
dets_dist = OrderedDict()
for iter_id, im_path in enumerate(imid2path):
image_path = os.path.join(image_dir, im_path)
if eval_mode == 'fddb':
image_path += '.jpg'
assert os.path.exists(image_path)
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if multi_scale:
shrink, max_shrink = get_shrink(image.shape[0], image.shape[1])
det0 = detect_face(model, image, shrink)
det1 = flip_test(model, image, shrink)
[det2, det3] = multi_scale_test(model, image, max_shrink)
det4 = multi_scale_test_pyramid(model, image, max_shrink)
det = np.row_stack((det0, det1, det2, det3, det4))
dets = bbox_vote(det)
else:
dets = detect_face(model, image, 1)
if eval_mode == 'widerface':
save_widerface_bboxes(image_path, dets, pred_dir)
else:
dets_dist[im_path] = dets
if iter_id % 100 == 0:
logger.info('Test iter {}'.format(iter_id))
if eval_mode == 'fddb':
save_fddb_bboxes(dets_dist, pred_dir)
logger.info("Finish evaluation.")
def detect_face(model, image, shrink):
image_shape = [image.shape[0], image.shape[1]]
if shrink != 1:
h, w = int(image_shape[0] * shrink), int(image_shape[1] * shrink)
image = cv2.resize(image, (w, h))
image_shape = [h, w]
img = face_img_process(image)
image_shape = np.asarray([image_shape])
scale_factor = np.asarray([[shrink, shrink]])
data = {
"image": paddle.to_tensor(
img, dtype='float32'),
"im_shape": paddle.to_tensor(
image_shape, dtype='float32'),
"scale_factor": paddle.to_tensor(
scale_factor, dtype='float32')
}
model.eval()
detection = model(data)
detection = detection['bbox'].numpy()
    # layout: xmin, ymin, xmax, ymax, score
if np.prod(detection.shape) == 1:
logger.info("No face detected")
return np.array([[0, 0, 0, 0, 0]])
det_conf = detection[:, 1]
det_xmin = detection[:, 2]
det_ymin = detection[:, 3]
det_xmax = detection[:, 4]
det_ymax = detection[:, 5]
det = np.column_stack((det_xmin, det_ymin, det_xmax, det_ymax, det_conf))
return det
def flip_test(model, image, shrink):
img = cv2.flip(image, 1)
det_f = detect_face(model, img, shrink)
det_t = np.zeros(det_f.shape)
img_width = image.shape[1]
det_t[:, 0] = img_width - det_f[:, 2]
det_t[:, 1] = det_f[:, 1]
det_t[:, 2] = img_width - det_f[:, 0]
det_t[:, 3] = det_f[:, 3]
det_t[:, 4] = det_f[:, 4]
return det_t
def multi_scale_test(model, image, max_shrink):
    # Shrink detection is only used to detect big faces
st = 0.5 if max_shrink >= 0.75 else 0.5 * max_shrink
det_s = detect_face(model, image, st)
index = np.where(
np.maximum(det_s[:, 2] - det_s[:, 0] + 1, det_s[:, 3] - det_s[:, 1] + 1)
> 30)[0]
det_s = det_s[index, :]
    # Enlarge the image once
bt = min(2, max_shrink) if max_shrink > 1 else (st + max_shrink) / 2
det_b = detect_face(model, image, bt)
# Enlarge small image x times for small faces
if max_shrink > 2:
bt *= 2
while bt < max_shrink:
det_b = np.row_stack((det_b, detect_face(model, image, bt)))
bt *= 2
det_b = np.row_stack((det_b, detect_face(model, image, max_shrink)))
# Enlarged images are only used to detect small faces.
if bt > 1:
index = np.where(
np.minimum(det_b[:, 2] - det_b[:, 0] + 1,
det_b[:, 3] - det_b[:, 1] + 1) < 100)[0]
det_b = det_b[index, :]
    # Shrunken images are only used to detect big faces.
else:
index = np.where(
np.maximum(det_b[:, 2] - det_b[:, 0] + 1,
det_b[:, 3] - det_b[:, 1] + 1) > 30)[0]
det_b = det_b[index, :]
return det_s, det_b
def multi_scale_test_pyramid(model, image, max_shrink):
# Use image pyramids to detect faces
det_b = detect_face(model, image, 0.25)
index = np.where(
np.maximum(det_b[:, 2] - det_b[:, 0] + 1, det_b[:, 3] - det_b[:, 1] + 1)
> 30)[0]
det_b = det_b[index, :]
st = [0.75, 1.25, 1.5, 1.75]
for i in range(len(st)):
if st[i] <= max_shrink:
det_temp = detect_face(model, image, st[i])
# Enlarged images are only used to detect small faces.
if st[i] > 1:
index = np.where(
np.minimum(det_temp[:, 2] - det_temp[:, 0] + 1,
det_temp[:, 3] - det_temp[:, 1] + 1) < 100)[0]
det_temp = det_temp[index, :]
            # Shrunken images are only used to detect big faces.
else:
index = np.where(
np.maximum(det_temp[:, 2] - det_temp[:, 0] + 1,
det_temp[:, 3] - det_temp[:, 1] + 1) > 30)[0]
det_temp = det_temp[index, :]
det_b = np.row_stack((det_b, det_temp))
return det_b
def to_chw(image):
"""
Transpose image from HWC to CHW.
Args:
image (np.array): an image with HWC layout.
"""
# HWC to CHW
if len(image.shape) == 3:
image = np.swapaxes(image, 1, 2)
image = np.swapaxes(image, 1, 0)
return image
def face_img_process(image,
mean=[104., 117., 123.],
std=[127.502231, 127.502231, 127.502231]):
img = np.array(image)
img = to_chw(img)
img = img.astype('float32')
img -= np.array(mean)[:, np.newaxis, np.newaxis].astype('float32')
img /= np.array(std)[:, np.newaxis, np.newaxis].astype('float32')
img = [img]
img = np.array(img)
return img
def get_shrink(height, width):
"""
Args:
height (int): image height.
width (int): image width.
"""
# avoid out of memory
max_shrink_v1 = (0x7fffffff / 577.0 / (height * width))**0.5
max_shrink_v2 = ((678 * 1024 * 2.0 * 2.0) / (height * width))**0.5
def get_round(x, loc):
str_x = str(x)
if '.' in str_x:
str_before, str_after = str_x.split('.')
len_after = len(str_after)
if len_after >= 3:
str_final = str_before + '.' + str_after[0:loc]
return float(str_final)
else:
return x
max_shrink = get_round(min(max_shrink_v1, max_shrink_v2), 2) - 0.3
if max_shrink >= 1.5 and max_shrink < 2:
max_shrink = max_shrink - 0.1
elif max_shrink >= 2 and max_shrink < 3:
max_shrink = max_shrink - 0.2
elif max_shrink >= 3 and max_shrink < 4:
max_shrink = max_shrink - 0.3
elif max_shrink >= 4 and max_shrink < 5:
max_shrink = max_shrink - 0.4
elif max_shrink >= 5:
max_shrink = max_shrink - 0.5
elif max_shrink <= 0.1:
max_shrink = 0.1
shrink = max_shrink if max_shrink < 1 else 1
return shrink, max_shrink
def bbox_vote(det):
order = det[:, 4].ravel().argsort()[::-1]
det = det[order, :]
if det.shape[0] == 0:
dets = np.array([[10, 10, 20, 20, 0.002]])
det = np.empty(shape=[0, 5])
while det.shape[0] > 0:
# IOU
area = (det[:, 2] - det[:, 0] + 1) * (det[:, 3] - det[:, 1] + 1)
xx1 = np.maximum(det[0, 0], det[:, 0])
yy1 = np.maximum(det[0, 1], det[:, 1])
xx2 = np.minimum(det[0, 2], det[:, 2])
yy2 = np.minimum(det[0, 3], det[:, 3])
w = np.maximum(0.0, xx2 - xx1 + 1)
h = np.maximum(0.0, yy2 - yy1 + 1)
inter = w * h
o = inter / (area[0] + area[:] - inter)
# nms
merge_index = np.where(o >= 0.3)[0]
det_accu = det[merge_index, :]
det = np.delete(det, merge_index, 0)
if merge_index.shape[0] <= 1:
if det.shape[0] == 0:
try:
dets = np.row_stack((dets, det_accu))
except:
dets = det_accu
continue
det_accu[:, 0:4] = det_accu[:, 0:4] * np.tile(det_accu[:, -1:], (1, 4))
max_score = np.max(det_accu[:, 4])
det_accu_sum = np.zeros((1, 5))
det_accu_sum[:, 0:4] = np.sum(det_accu[:, 0:4],
axis=0) / np.sum(det_accu[:, -1:])
det_accu_sum[:, 4] = max_score
try:
dets = np.row_stack((dets, det_accu_sum))
except:
dets = det_accu_sum
dets = dets[0:750, :]
keep_index = np.where(dets[:, 4] >= 0.01)[0]
dets = dets[keep_index, :]
return dets
def save_widerface_bboxes(image_path, bboxes_scores, output_dir):
image_name = image_path.split('/')[-1]
image_class = image_path.split('/')[-2]
odir = os.path.join(output_dir, image_class)
if not os.path.exists(odir):
os.makedirs(odir)
ofname = os.path.join(odir, '%s.txt' % (image_name[:-4]))
f = open(ofname, 'w')
f.write('{:s}\n'.format(image_class + '/' + image_name))
f.write('{:d}\n'.format(bboxes_scores.shape[0]))
for box_score in bboxes_scores:
xmin, ymin, xmax, ymax, score = box_score
f.write('{:.1f} {:.1f} {:.1f} {:.1f} {:.3f}\n'.format(xmin, ymin, (
xmax - xmin + 1), (ymax - ymin + 1), score))
f.close()
logger.info("The predicted result is saved as {}".format(ofname))
def save_fddb_bboxes(bboxes_scores,
output_dir,
output_fname='pred_fddb_res.txt'):
if not os.path.exists(output_dir):
os.makedirs(output_dir)
predict_file = os.path.join(output_dir, output_fname)
f = open(predict_file, 'w')
    for image_path, dets in bboxes_scores.items():
f.write('{:s}\n'.format(image_path))
f.write('{:d}\n'.format(dets.shape[0]))
for box_score in dets:
xmin, ymin, xmax, ymax, score = box_score
width, height = xmax - xmin, ymax - ymin
f.write('{:.1f} {:.1f} {:.1f} {:.1f} {:.3f}\n'
.format(xmin, ymin, width, height, score))
logger.info("The predicted result is saved as {}".format(predict_file))
return predict_file
def lmk2out(results, is_bbox_normalized=False):
"""
Args:
results: request a dict, should include: `landmark`, `im_id`,
if is_bbox_normalized=True, also need `im_shape`.
is_bbox_normalized: whether or not landmark is normalized.
"""
xywh_res = []
for t in results:
bboxes = t['bbox'][0]
lengths = t['bbox'][1][0]
im_ids = np.array(t['im_id'][0]).flatten()
        if bboxes is None or bboxes.shape == (1, 1):
continue
face_index = t['face_index'][0]
prior_box = t['prior_boxes'][0]
predict_lmk = t['landmark'][0]
prior = np.reshape(prior_box, (-1, 4))
predictlmk = np.reshape(predict_lmk, (-1, 10))
k = 0
for a in range(len(lengths)):
num = lengths[a]
im_id = int(im_ids[a])
for i in range(num):
score = bboxes[k][1]
theindex = face_index[i][0]
me_prior = prior[theindex, :]
lmk_pred = predictlmk[theindex, :]
prior_w = me_prior[2] - me_prior[0]
prior_h = me_prior[3] - me_prior[1]
prior_w_center = (me_prior[2] + me_prior[0]) / 2
prior_h_center = (me_prior[3] + me_prior[1]) / 2
lmk_decode = np.zeros((10))
for j in [0, 2, 4, 6, 8]:
lmk_decode[j] = lmk_pred[j] * 0.1 * prior_w + prior_w_center
for j in [1, 3, 5, 7, 9]:
lmk_decode[j] = lmk_pred[j] * 0.1 * prior_h + prior_h_center
im_shape = t['im_shape'][0][a].tolist()
image_h, image_w = int(im_shape[0]), int(im_shape[1])
if is_bbox_normalized:
lmk_decode = lmk_decode * np.array([
image_w, image_h, image_w, image_h, image_w, image_h,
image_w, image_h, image_w, image_h
])
lmk_res = {
'image_id': im_id,
'landmark': lmk_decode,
'score': score,
}
xywh_res.append(lmk_res)
k += 1
return xywh_res
| PaddleDetection/ppdet/metrics/widerface_utils.py/0 | {
"file_path": "PaddleDetection/ppdet/metrics/widerface_utils.py",
"repo_id": "PaddleDetection",
"token_count": 7277
} | 66 |
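The `bbox_vote` routine above fuses overlapping detections: boxes whose IoU with the current top-scoring box is at least 0.3 are merged by averaging their coordinates with scores as weights, and the merged box keeps the group's maximum score. A small hedged sketch of that merge step with two made-up boxes:

```python
# Hedged illustration of the score-weighted merge inside bbox_vote; the two
# boxes here are invented. Coordinates are averaged with scores as weights and
# the merged detection keeps the highest score of the group.
import numpy as np

det_accu = np.array([[10., 10., 50., 50., 0.9],
                     [12., 12., 52., 52., 0.6]])
weighted = det_accu[:, 0:4] * det_accu[:, -1:]               # score-weighted coords
merged_xyxy = weighted.sum(axis=0) / det_accu[:, -1:].sum()  # normalize by score sum
merged = np.append(merged_xyxy, det_accu[:, 4].max())
print(merged)  # [10.8 10.8 50.8 50.8  0.9]
```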
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle
from .meta_arch import BaseArch
from ppdet.core.workspace import register, create
__all__ = ['DETR']
# Deformable DETR and DINO use the same architecture as DETR
@register
class DETR(BaseArch):
__category__ = 'architecture'
__inject__ = ['post_process', 'post_process_semi']
__shared__ = ['with_mask', 'exclude_post_process']
def __init__(self,
backbone,
transformer='DETRTransformer',
detr_head='DETRHead',
neck=None,
post_process='DETRPostProcess',
post_process_semi=None,
with_mask=False,
exclude_post_process=False):
super(DETR, self).__init__()
self.backbone = backbone
self.transformer = transformer
self.detr_head = detr_head
self.neck = neck
self.post_process = post_process
self.with_mask = with_mask
self.exclude_post_process = exclude_post_process
self.post_process_semi = post_process_semi
@classmethod
def from_config(cls, cfg, *args, **kwargs):
# backbone
backbone = create(cfg['backbone'])
# neck
kwargs = {'input_shape': backbone.out_shape}
neck = create(cfg['neck'], **kwargs) if cfg['neck'] else None
# transformer
if neck is not None:
kwargs = {'input_shape': neck.out_shape}
transformer = create(cfg['transformer'], **kwargs)
# head
kwargs = {
'hidden_dim': transformer.hidden_dim,
'nhead': transformer.nhead,
'input_shape': backbone.out_shape
}
detr_head = create(cfg['detr_head'], **kwargs)
return {
'backbone': backbone,
'transformer': transformer,
"detr_head": detr_head,
"neck": neck
}
def _forward(self):
# Backbone
body_feats = self.backbone(self.inputs)
# Neck
if self.neck is not None:
body_feats = self.neck(body_feats)
# Transformer
pad_mask = self.inputs.get('pad_mask', None)
out_transformer = self.transformer(body_feats, pad_mask, self.inputs)
# DETR Head
if self.training:
detr_losses = self.detr_head(out_transformer, body_feats,
self.inputs)
detr_losses.update({
'loss': paddle.add_n(
[v for k, v in detr_losses.items() if 'log' not in k])
})
return detr_losses
else:
preds = self.detr_head(out_transformer, body_feats)
if self.exclude_post_process:
bbox, bbox_num, mask = preds
else:
bbox, bbox_num, mask = self.post_process(
preds, self.inputs['im_shape'], self.inputs['scale_factor'],
paddle.shape(self.inputs['image'])[2:])
output = {'bbox': bbox, 'bbox_num': bbox_num}
if self.with_mask:
output['mask'] = mask
return output
def get_loss(self):
return self._forward()
def get_pred(self):
return self._forward()
| PaddleDetection/ppdet/modeling/architectures/detr.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/architectures/detr.py",
"repo_id": "PaddleDetection",
"token_count": 1825
} | 67 |
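In the training branch of `DETR._forward` above, the total loss is the sum of every entry in the loss dict except keys containing 'log', which are kept for monitoring only. A tiny sketch of that filtering, with invented key names and plain floats standing in for paddle tensors:

```python
# Hedged sketch of the loss aggregation in DETR._forward; keys and values are
# made up for illustration, and Python floats replace paddle tensors.
detr_losses = {'loss_class': 1.2, 'loss_bbox': 0.8, 'loss_giou': 0.5, 'log_stat': 0.01}
detr_losses['loss'] = sum(v for k, v in detr_losses.items() if 'log' not in k)
print(detr_losses['loss'])  # 2.5 -- 'log_stat' is excluded from the total
```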
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import paddle
from ppdet.core.workspace import register, create
from .meta_arch import BaseArch
__all__ = ['PPYOLOE', 'PPYOLOEWithAuxHead']
# PP-YOLOE and PP-YOLOE+ are recommended to use this architecture, especially when using distillation or an aux head
# PP-YOLOE and PP-YOLOE+ can also use the same architecture as YOLOv3 in yolo.py when not using distillation or an aux head
@register
class PPYOLOE(BaseArch):
"""
PPYOLOE network, see https://arxiv.org/abs/2203.16250
Args:
backbone (nn.Layer): backbone instance
neck (nn.Layer): neck instance
yolo_head (nn.Layer): anchor_head instance
post_process (object): `BBoxPostProcess` instance
ssod_loss (object): 'SSODPPYOLOELoss' instance, only used for semi-det(ssod)
for_distill (bool): whether for distillation
feat_distill_place (str): distill which feature for distillation
for_mot (bool): whether return other features for multi-object tracking
models, default False in pure object detection models.
"""
__category__ = 'architecture'
__shared__ = ['for_distill']
__inject__ = ['post_process', 'ssod_loss']
def __init__(self,
backbone='CSPResNet',
neck='CustomCSPPAN',
yolo_head='PPYOLOEHead',
post_process='BBoxPostProcess',
ssod_loss='SSODPPYOLOELoss',
for_distill=False,
feat_distill_place='neck_feats',
for_mot=False):
super(PPYOLOE, self).__init__()
self.backbone = backbone
self.neck = neck
self.yolo_head = yolo_head
self.post_process = post_process
self.for_mot = for_mot
# for ssod, semi-det
self.is_teacher = False
self.ssod_loss = ssod_loss
# distill
self.for_distill = for_distill
self.feat_distill_place = feat_distill_place
if for_distill:
assert feat_distill_place in ['backbone_feats', 'neck_feats']
@classmethod
def from_config(cls, cfg, *args, **kwargs):
backbone = create(cfg['backbone'])
kwargs = {'input_shape': backbone.out_shape}
neck = create(cfg['neck'], **kwargs)
kwargs = {'input_shape': neck.out_shape}
yolo_head = create(cfg['yolo_head'], **kwargs)
return {
'backbone': backbone,
'neck': neck,
"yolo_head": yolo_head,
}
def _forward(self):
body_feats = self.backbone(self.inputs)
neck_feats = self.neck(body_feats, self.for_mot)
self.is_teacher = self.inputs.get('is_teacher', False) # for semi-det
if self.training or self.is_teacher:
yolo_losses = self.yolo_head(neck_feats, self.inputs)
if self.for_distill:
if self.feat_distill_place == 'backbone_feats':
self.yolo_head.distill_pairs['backbone_feats'] = body_feats
elif self.feat_distill_place == 'neck_feats':
self.yolo_head.distill_pairs['neck_feats'] = neck_feats
else:
raise ValueError
return yolo_losses
else:
yolo_head_outs = self.yolo_head(neck_feats)
if self.post_process is not None:
bbox, bbox_num, nms_keep_idx = self.post_process(
yolo_head_outs, self.yolo_head.mask_anchors,
self.inputs['im_shape'], self.inputs['scale_factor'])
else:
bbox, bbox_num, nms_keep_idx = self.yolo_head.post_process(
yolo_head_outs, self.inputs['scale_factor'])
if self.use_extra_data:
                extra_data = {} # record the bbox output before nms, such as scores and nms_keep_idx
"""extra_data:{
'scores': predict scores,
'nms_keep_idx': bbox index before nms,
}
"""
extra_data['scores'] = yolo_head_outs[0] # predict scores (probability)
extra_data['nms_keep_idx'] = nms_keep_idx
output = {'bbox': bbox, 'bbox_num': bbox_num, 'extra_data': extra_data}
else:
output = {'bbox': bbox, 'bbox_num': bbox_num}
return output
def get_loss(self):
return self._forward()
def get_pred(self):
return self._forward()
def get_loss_keys(self):
return ['loss_cls', 'loss_iou', 'loss_dfl', 'loss_contrast']
def get_ssod_loss(self, student_head_outs, teacher_head_outs, train_cfg):
ssod_losses = self.ssod_loss(student_head_outs, teacher_head_outs,
train_cfg)
return ssod_losses
@register
class PPYOLOEWithAuxHead(BaseArch):
__category__ = 'architecture'
__inject__ = ['post_process']
def __init__(self,
backbone='CSPResNet',
neck='CustomCSPPAN',
yolo_head='PPYOLOEHead',
aux_head='SimpleConvHead',
post_process='BBoxPostProcess',
for_mot=False,
detach_epoch=5):
"""
PPYOLOE network, see https://arxiv.org/abs/2203.16250
Args:
backbone (nn.Layer): backbone instance
neck (nn.Layer): neck instance
yolo_head (nn.Layer): anchor_head instance
post_process (object): `BBoxPostProcess` instance
for_mot (bool): whether return other features for multi-object tracking
models, default False in pure object detection models.
"""
super(PPYOLOEWithAuxHead, self).__init__()
self.backbone = backbone
self.neck = neck
self.aux_neck = copy.deepcopy(self.neck)
self.yolo_head = yolo_head
self.aux_head = aux_head
self.post_process = post_process
self.for_mot = for_mot
self.detach_epoch = detach_epoch
@classmethod
def from_config(cls, cfg, *args, **kwargs):
# backbone
backbone = create(cfg['backbone'])
# fpn
kwargs = {'input_shape': backbone.out_shape}
neck = create(cfg['neck'], **kwargs)
aux_neck = copy.deepcopy(neck)
# head
kwargs = {'input_shape': neck.out_shape}
yolo_head = create(cfg['yolo_head'], **kwargs)
aux_head = create(cfg['aux_head'], **kwargs)
return {
'backbone': backbone,
'neck': neck,
"yolo_head": yolo_head,
'aux_head': aux_head,
}
def _forward(self):
body_feats = self.backbone(self.inputs)
neck_feats = self.neck(body_feats, self.for_mot)
if self.training:
if self.inputs['epoch_id'] >= self.detach_epoch:
aux_neck_feats = self.aux_neck([f.detach() for f in body_feats])
dual_neck_feats = (paddle.concat(
[f.detach(), aux_f], axis=1) for f, aux_f in
zip(neck_feats, aux_neck_feats))
else:
aux_neck_feats = self.aux_neck(body_feats)
dual_neck_feats = (paddle.concat(
[f, aux_f], axis=1) for f, aux_f in
zip(neck_feats, aux_neck_feats))
aux_cls_scores, aux_bbox_preds = self.aux_head(dual_neck_feats)
loss = self.yolo_head(
neck_feats,
self.inputs,
aux_pred=[aux_cls_scores, aux_bbox_preds])
return loss
else:
yolo_head_outs = self.yolo_head(neck_feats)
if self.post_process is not None:
bbox, bbox_num, nms_keep_idx = self.post_process(
yolo_head_outs, self.yolo_head.mask_anchors,
self.inputs['im_shape'], self.inputs['scale_factor'])
else:
bbox, bbox_num, nms_keep_idx = self.yolo_head.post_process(
yolo_head_outs, self.inputs['scale_factor'])
if self.use_extra_data:
                extra_data = {} # record the bbox output before nms, such as scores and nms_keep_idx
"""extra_data:{
'scores': predict scores,
'nms_keep_idx': bbox index before nms,
}
"""
extra_data['scores'] = yolo_head_outs[0] # predict scores (probability)
# Todo: get logits output
extra_data['nms_keep_idx'] = nms_keep_idx
output = {'bbox': bbox, 'bbox_num': bbox_num, 'extra_data': extra_data}
else:
output = {'bbox': bbox, 'bbox_num': bbox_num}
return output
def get_loss(self):
return self._forward()
def get_pred(self):
return self._forward()
| PaddleDetection/ppdet/modeling/architectures/ppyoloe.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/architectures/ppyoloe.py",
"repo_id": "PaddleDetection",
"token_count": 4867
} | 68 |
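`PPYOLOEWithAuxHead._forward` above switches the auxiliary branch to detached backbone features once the current epoch reaches `detach_epoch`, so the aux head stops back-propagating into the backbone. A minimal sketch of that switch, assuming toy feature tensors with gradients enabled; `aux_inputs` is a hypothetical helper, not part of the model:

```python
# Hedged sketch of the detach_epoch switch; the feature shapes are arbitrary.
import paddle

def aux_inputs(body_feats, epoch_id, detach_epoch):
    if epoch_id >= detach_epoch:
        return [f.detach() for f in body_feats]  # cut gradients to the backbone
    return body_feats

feats = [paddle.rand([1, 8, 4, 4]) for _ in range(3)]
for f in feats:
    f.stop_gradient = False
print(aux_inputs(feats, epoch_id=6, detach_epoch=5)[0].stop_gradient)  # True
print(aux_inputs(feats, epoch_id=2, detach_epoch=5)[0].stop_gradient)  # False
```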
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
try:
from scipy.optimize import linear_sum_assignment
except ImportError:
linear_sum_assignment = None
import paddle
from ppdet.core.workspace import register
__all__ = ['PoseHungarianAssigner', 'PseudoSampler']
class AssignResult:
"""Stores assignments between predicted and truth boxes.
Attributes:
num_gts (int): the number of truth boxes considered when computing this
assignment
gt_inds (LongTensor): for each predicted box indicates the 1-based
index of the assigned truth box. 0 means unassigned and -1 means
ignore.
max_overlaps (FloatTensor): the iou between the predicted box and its
assigned truth box.
labels (None | LongTensor): If specified, for each predicted box
indicates the category label of the assigned truth box.
"""
def __init__(self, num_gts, gt_inds, max_overlaps, labels=None):
self.num_gts = num_gts
self.gt_inds = gt_inds
self.max_overlaps = max_overlaps
self.labels = labels
# Interface for possible user-defined properties
self._extra_properties = {}
@property
def num_preds(self):
"""int: the number of predictions in this assignment"""
return len(self.gt_inds)
def set_extra_property(self, key, value):
"""Set user-defined new property."""
assert key not in self.info
self._extra_properties[key] = value
def get_extra_property(self, key):
"""Get user-defined property."""
return self._extra_properties.get(key, None)
@property
def info(self):
"""dict: a dictionary of info about the object"""
basic_info = {
'num_gts': self.num_gts,
'num_preds': self.num_preds,
'gt_inds': self.gt_inds,
'max_overlaps': self.max_overlaps,
'labels': self.labels,
}
basic_info.update(self._extra_properties)
return basic_info
@register
class PoseHungarianAssigner:
"""Computes one-to-one matching between predictions and ground truth.
This class computes an assignment between the targets and the predictions
based on the costs. The costs are weighted sum of three components:
classification cost, regression L1 cost and regression oks cost. The
targets don't include the no_object, so generally there are more
predictions than targets. After the one-to-one matching, the un-matched
are treated as backgrounds. Thus each query prediction will be assigned
with `0` or a positive integer indicating the ground truth index:
- 0: negative sample, no assigned gt.
- positive integer: positive sample, index (1-based) of assigned gt.
    Args:
        cls_cost (object): classification cost module, injected by name.
            Default 'ClassificationCost'.
        kpt_cost (object): keypoint regression L1 cost module, injected by
            name. Default 'KptL1Cost'.
        oks_cost (object): keypoint regression OKS cost module, injected by
            name. Default 'OksCost'.
    """
__inject__ = ['cls_cost', 'kpt_cost', 'oks_cost']
def __init__(self,
cls_cost='ClassificationCost',
kpt_cost='KptL1Cost',
oks_cost='OksCost'):
self.cls_cost = cls_cost
self.kpt_cost = kpt_cost
self.oks_cost = oks_cost
def assign(self,
cls_pred,
kpt_pred,
gt_labels,
gt_keypoints,
gt_areas,
img_meta,
eps=1e-7):
"""Computes one-to-one matching based on the weighted costs.
This method assign each query prediction to a ground truth or
background. The `assigned_gt_inds` with -1 means don't care,
0 means negative sample, and positive number is the index (1-based)
of assigned gt.
The assignment is done in the following steps, the order matters.
1. assign every prediction to -1
2. compute the weighted costs
3. do Hungarian matching on CPU based on the costs
4. assign all to 0 (background) first, then for each matched pair
between predictions and gts, treat this prediction as foreground
and assign the corresponding gt index (plus 1) to it.
Args:
cls_pred (Tensor): Predicted classification logits, shape
[num_query, num_class].
kpt_pred (Tensor): Predicted keypoints with normalized coordinates
(x_{i}, y_{i}), which are all in range [0, 1]. Shape
[num_query, K*2].
gt_labels (Tensor): Label of `gt_keypoints`, shape (num_gt,).
gt_keypoints (Tensor): Ground truth keypoints with unnormalized
coordinates [p^{1}_x, p^{1}_y, p^{1}_v, ..., \
p^{K}_x, p^{K}_y, p^{K}_v]. Shape [num_gt, K*3].
gt_areas (Tensor): Ground truth mask areas, shape (num_gt,).
img_meta (dict): Meta information for current image.
eps (int | float, optional): A value added to the denominator for
numerical stability. Default 1e-7.
Returns:
:obj:`AssignResult`: The assigned result.
"""
num_gts, num_kpts = gt_keypoints.shape[0], kpt_pred.shape[0]
if not gt_keypoints.astype('bool').any():
num_gts = 0
# 1. assign -1 by default
assigned_gt_inds = paddle.full((num_kpts, ), -1, dtype="int64")
assigned_labels = paddle.full((num_kpts, ), -1, dtype="int64")
if num_gts == 0 or num_kpts == 0:
# No ground truth or keypoints, return empty assignment
if num_gts == 0:
# No ground truth, assign all to background
assigned_gt_inds[:] = 0
return AssignResult(
num_gts, assigned_gt_inds, None, labels=assigned_labels)
img_h, img_w, _ = img_meta['img_shape']
factor = paddle.to_tensor(
[img_w, img_h, img_w, img_h], dtype=gt_keypoints.dtype).reshape(
(1, -1))
# 2. compute the weighted costs
# classification cost
cls_cost = self.cls_cost(cls_pred, gt_labels)
# keypoint regression L1 cost
gt_keypoints_reshape = gt_keypoints.reshape((gt_keypoints.shape[0], -1,
3))
valid_kpt_flag = gt_keypoints_reshape[..., -1]
kpt_pred_tmp = kpt_pred.clone().detach().reshape((kpt_pred.shape[0], -1,
2))
normalize_gt_keypoints = gt_keypoints_reshape[
..., :2] / factor[:, :2].unsqueeze(0)
kpt_cost = self.kpt_cost(kpt_pred_tmp, normalize_gt_keypoints,
valid_kpt_flag)
# keypoint OKS cost
kpt_pred_tmp = kpt_pred.clone().detach().reshape((kpt_pred.shape[0], -1,
2))
kpt_pred_tmp = kpt_pred_tmp * factor[:, :2].unsqueeze(0)
oks_cost = self.oks_cost(kpt_pred_tmp, gt_keypoints_reshape[..., :2],
valid_kpt_flag, gt_areas)
# weighted sum of above three costs
cost = cls_cost + kpt_cost + oks_cost
# 3. do Hungarian matching on CPU using linear_sum_assignment
cost = cost.detach().cpu()
if linear_sum_assignment is None:
raise ImportError('Please run "pip install scipy" '
'to install scipy first.')
matched_row_inds, matched_col_inds = linear_sum_assignment(cost)
matched_row_inds = paddle.to_tensor(matched_row_inds)
matched_col_inds = paddle.to_tensor(matched_col_inds)
# 4. assign backgrounds and foregrounds
# assign all indices to backgrounds first
assigned_gt_inds[:] = 0
# assign foregrounds based on matching results
assigned_gt_inds[matched_row_inds] = matched_col_inds + 1
assigned_labels[matched_row_inds] = gt_labels[matched_col_inds][
..., 0].astype("int64")
return AssignResult(
num_gts, assigned_gt_inds, None, labels=assigned_labels)
class SamplingResult:
"""Bbox sampling result.
"""
def __init__(self, pos_inds, neg_inds, bboxes, gt_bboxes, assign_result,
gt_flags):
self.pos_inds = pos_inds
self.neg_inds = neg_inds
if pos_inds.size > 0:
self.pos_bboxes = bboxes[pos_inds]
self.neg_bboxes = bboxes[neg_inds]
self.pos_is_gt = gt_flags[pos_inds]
self.num_gts = gt_bboxes.shape[0]
self.pos_assigned_gt_inds = assign_result.gt_inds[pos_inds] - 1
if gt_bboxes.numel() == 0:
# hack for index error case
assert self.pos_assigned_gt_inds.numel() == 0
self.pos_gt_bboxes = paddle.zeros(
gt_bboxes.shape, dtype=gt_bboxes.dtype).reshape((-1, 4))
else:
if len(gt_bboxes.shape) < 2:
gt_bboxes = gt_bboxes.reshape((-1, 4))
self.pos_gt_bboxes = paddle.index_select(
gt_bboxes,
self.pos_assigned_gt_inds.astype('int64'),
axis=0)
if assign_result.labels is not None:
self.pos_gt_labels = assign_result.labels[pos_inds]
else:
self.pos_gt_labels = None
@property
def bboxes(self):
"""paddle.Tensor: concatenated positive and negative boxes"""
return paddle.concat([self.pos_bboxes, self.neg_bboxes])
def __nice__(self):
data = self.info.copy()
data['pos_bboxes'] = data.pop('pos_bboxes').shape
data['neg_bboxes'] = data.pop('neg_bboxes').shape
parts = [f"'{k}': {v!r}" for k, v in sorted(data.items())]
body = ' ' + ',\n '.join(parts)
return '{\n' + body + '\n}'
@property
def info(self):
"""Returns a dictionary of info about the object."""
return {
'pos_inds': self.pos_inds,
'neg_inds': self.neg_inds,
'pos_bboxes': self.pos_bboxes,
'neg_bboxes': self.neg_bboxes,
'pos_is_gt': self.pos_is_gt,
'num_gts': self.num_gts,
'pos_assigned_gt_inds': self.pos_assigned_gt_inds,
}
@register
class PseudoSampler:
"""A pseudo sampler that does not do sampling actually."""
def __init__(self, **kwargs):
pass
def _sample_pos(self, **kwargs):
"""Sample positive samples."""
raise NotImplementedError
def _sample_neg(self, **kwargs):
"""Sample negative samples."""
raise NotImplementedError
def sample(self, assign_result, bboxes, gt_bboxes, *args, **kwargs):
"""Directly returns the positive and negative indices of samples.
Args:
assign_result (:obj:`AssignResult`): Assigned results
bboxes (paddle.Tensor): Bounding boxes
gt_bboxes (paddle.Tensor): Ground truth boxes
Returns:
:obj:`SamplingResult`: sampler results
"""
pos_inds = paddle.nonzero(
assign_result.gt_inds > 0, as_tuple=False).squeeze(-1)
neg_inds = paddle.nonzero(
assign_result.gt_inds == 0, as_tuple=False).squeeze(-1)
gt_flags = paddle.zeros([bboxes.shape[0]], dtype='int32')
sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes,
assign_result, gt_flags)
return sampling_result
| PaddleDetection/ppdet/modeling/assigners/hungarian_assigner.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/assigners/hungarian_assigner.py",
"repo_id": "PaddleDetection",
"token_count": 5721
} | 69 |
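Steps 3 and 4 of `PoseHungarianAssigner.assign` above run `linear_sum_assignment` on the summed cost matrix, set every prediction to background (0), and then write the 1-based ground-truth index for each matched prediction. A small hedged illustration with a made-up 3x2 cost matrix:

```python
# Hedged illustration of the Hungarian matching and index bookkeeping; the cost
# values are invented, and NumPy arrays stand in for paddle tensors.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[0.2, 0.9],   # 3 predictions x 2 ground truths
                 [0.8, 0.1],
                 [0.5, 0.6]])
rows, cols = linear_sum_assignment(cost)        # minimum-cost one-to-one matching
assigned_gt_inds = np.zeros(cost.shape[0], dtype=np.int64)  # 0 = background
assigned_gt_inds[rows] = cols + 1               # matched preds get 1-based gt index
print(assigned_gt_inds)  # [1 2 0]
```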
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from ppdet.core.workspace import register, serializable
from ppdet.modeling.layers import ConvNormLayer
from ..shape_spec import ShapeSpec
DLA_cfg = {34: ([1, 1, 1, 2, 2, 1], [16, 32, 64, 128, 256, 512]), }
class BasicBlock(nn.Layer):
def __init__(self, ch_in, ch_out, stride=1):
super(BasicBlock, self).__init__()
self.conv1 = ConvNormLayer(
ch_in,
ch_out,
filter_size=3,
stride=stride,
bias_on=False,
norm_decay=None)
self.conv2 = ConvNormLayer(
ch_out,
ch_out,
filter_size=3,
stride=1,
bias_on=False,
norm_decay=None)
def forward(self, inputs, residual=None):
if residual is None:
residual = inputs
out = self.conv1(inputs)
out = F.relu(out)
out = self.conv2(out)
out = paddle.add(x=out, y=residual)
out = F.relu(out)
return out
class Root(nn.Layer):
def __init__(self, ch_in, ch_out, kernel_size, residual):
super(Root, self).__init__()
self.conv = ConvNormLayer(
ch_in,
ch_out,
filter_size=1,
stride=1,
bias_on=False,
norm_decay=None)
self.residual = residual
def forward(self, inputs):
children = inputs
out = self.conv(paddle.concat(inputs, axis=1))
if self.residual:
out = paddle.add(x=out, y=children[0])
out = F.relu(out)
return out
class Tree(nn.Layer):
def __init__(self,
level,
block,
ch_in,
ch_out,
stride=1,
level_root=False,
root_dim=0,
root_kernel_size=1,
root_residual=False):
super(Tree, self).__init__()
if root_dim == 0:
root_dim = 2 * ch_out
if level_root:
root_dim += ch_in
if level == 1:
self.tree1 = block(ch_in, ch_out, stride)
self.tree2 = block(ch_out, ch_out, 1)
else:
self.tree1 = Tree(
level - 1,
block,
ch_in,
ch_out,
stride,
root_dim=0,
root_kernel_size=root_kernel_size,
root_residual=root_residual)
self.tree2 = Tree(
level - 1,
block,
ch_out,
ch_out,
1,
root_dim=root_dim + ch_out,
root_kernel_size=root_kernel_size,
root_residual=root_residual)
if level == 1:
self.root = Root(root_dim, ch_out, root_kernel_size, root_residual)
self.level_root = level_root
self.root_dim = root_dim
self.downsample = None
self.project = None
self.level = level
if stride > 1:
self.downsample = nn.MaxPool2D(stride, stride=stride)
if ch_in != ch_out:
self.project = ConvNormLayer(
ch_in,
ch_out,
filter_size=1,
stride=1,
bias_on=False,
norm_decay=None)
def forward(self, x, residual=None, children=None):
children = [] if children is None else children
bottom = self.downsample(x) if self.downsample else x
residual = self.project(bottom) if self.project else bottom
if self.level_root:
children.append(bottom)
x1 = self.tree1(x, residual)
if self.level == 1:
x2 = self.tree2(x1)
x = self.root([x2, x1] + children)
else:
children.append(x1)
x = self.tree2(x1, children=children)
return x
@register
@serializable
class DLA(nn.Layer):
"""
DLA, see https://arxiv.org/pdf/1707.06484.pdf
Args:
depth (int): DLA depth, only support 34 now.
        residual_root (bool): whether to use a residual layer in the root block
pre_img (bool): add pre_img, only used in CenterTrack
pre_hm (bool): add pre_hm, only used in CenterTrack
"""
def __init__(self,
depth=34,
residual_root=False,
pre_img=False,
pre_hm=False):
super(DLA, self).__init__()
assert depth == 34, 'Only support DLA with depth of 34 now.'
if depth == 34:
block = BasicBlock
levels, channels = DLA_cfg[depth]
self.channels = channels
self.num_levels = len(levels)
self.base_layer = nn.Sequential(
ConvNormLayer(
3,
channels[0],
filter_size=7,
stride=1,
bias_on=False,
norm_decay=None),
nn.ReLU())
self.level0 = self._make_conv_level(channels[0], channels[0], levels[0])
self.level1 = self._make_conv_level(
channels[0], channels[1], levels[1], stride=2)
self.level2 = Tree(
levels[2],
block,
channels[1],
channels[2],
2,
level_root=False,
root_residual=residual_root)
self.level3 = Tree(
levels[3],
block,
channels[2],
channels[3],
2,
level_root=True,
root_residual=residual_root)
self.level4 = Tree(
levels[4],
block,
channels[3],
channels[4],
2,
level_root=True,
root_residual=residual_root)
self.level5 = Tree(
levels[5],
block,
channels[4],
channels[5],
2,
level_root=True,
root_residual=residual_root)
if pre_img:
self.pre_img_layer = nn.Sequential(
ConvNormLayer(
3,
channels[0],
filter_size=7,
stride=1,
bias_on=False,
norm_decay=None),
nn.ReLU())
if pre_hm:
self.pre_hm_layer = nn.Sequential(
ConvNormLayer(
1,
channels[0],
filter_size=7,
stride=1,
bias_on=False,
norm_decay=None),
nn.ReLU())
self.pre_img = pre_img
self.pre_hm = pre_hm
def _make_conv_level(self, ch_in, ch_out, conv_num, stride=1):
modules = []
for i in range(conv_num):
modules.extend([
ConvNormLayer(
ch_in,
ch_out,
filter_size=3,
stride=stride if i == 0 else 1,
bias_on=False,
norm_decay=None), nn.ReLU()
])
ch_in = ch_out
return nn.Sequential(*modules)
@property
def out_shape(self):
return [
ShapeSpec(channels=self.channels[i]) for i in range(self.num_levels)
]
def forward(self, inputs):
outs = []
feats = self.base_layer(inputs['image'])
if self.pre_img and 'pre_image' in inputs and inputs[
'pre_image'] is not None:
feats = feats + self.pre_img_layer(inputs['pre_image'])
if self.pre_hm and 'pre_hm' in inputs and inputs['pre_hm'] is not None:
feats = feats + self.pre_hm_layer(inputs['pre_hm'])
for i in range(self.num_levels):
feats = getattr(self, 'level{}'.format(i))(feats)
outs.append(feats)
return outs
| PaddleDetection/ppdet/modeling/backbones/dla.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/backbones/dla.py",
"repo_id": "PaddleDetection",
"token_count": 4722
} | 70 |
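`DLA._make_conv_level` above stacks `conv_num` conv+ReLU pairs, applying the requested stride only to the first conv and switching the input width to `ch_out` afterwards. A rough sketch of the resulting layer plan, with tuples standing in for the real `ConvNormLayer`/`nn.ReLU` modules:

```python
# Hedged sketch of the layer plan built by DLA._make_conv_level; tuples stand in
# for real layers, so this only mirrors the (stride, channel) bookkeeping.
def make_conv_level_plan(ch_in, ch_out, conv_num, stride=1):
    plan = []
    for i in range(conv_num):
        plan.append(('conv3x3', ch_in, ch_out, stride if i == 0 else 1))
        plan.append(('relu',))
        ch_in = ch_out
    return plan

print(make_conv_level_plan(16, 32, conv_num=2, stride=2))
# [('conv3x3', 16, 32, 2), ('relu',), ('conv3x3', 32, 32, 1), ('relu',)]
```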