text | id | metadata | __index_level_0__ |
---|---|---|---|
{"url": "https://modelscope.oss-cn-beijing.aliyuncs.com/open_data/mmlu/data.tar", "etag": null} | swift/benchmarks/modelscope_mmlu/downloads/d64b26c10c449adcb348e2555cb778bdcec57076d255c5cd72f1ac43a98c3da3.json/0 | {
"file_path": "swift/benchmarks/modelscope_mmlu/downloads/d64b26c10c449adcb348e2555cb778bdcec57076d255c5cd72f1ac43a98c3da3.json",
"repo_id": "swift",
"token_count": 40
} | 171 |
MIT | swift/benchmarks/modelscope_mmlu/mmlu/anatomy/1.0.0/fedfb5e4f551779e93567fbaaa992d74323de5ed8041b2a38b33dc9af632e3f5/LICENSE/0 | {
"file_path": "swift/benchmarks/modelscope_mmlu/mmlu/anatomy/1.0.0/fedfb5e4f551779e93567fbaaa992d74323de5ed8041b2a38b33dc9af632e3f5/LICENSE",
"repo_id": "swift",
"token_count": 1
} | 172 |
MIT | swift/benchmarks/modelscope_mmlu/mmlu/global_facts/1.0.0/fedfb5e4f551779e93567fbaaa992d74323de5ed8041b2a38b33dc9af632e3f5/LICENSE/0 | {
"file_path": "swift/benchmarks/modelscope_mmlu/mmlu/global_facts/1.0.0/fedfb5e4f551779e93567fbaaa992d74323de5ed8041b2a38b33dc9af632e3f5/LICENSE",
"repo_id": "swift",
"token_count": 1
} | 173 |
MIT | swift/benchmarks/modelscope_mmlu/mmlu/human_sexuality/1.0.0/fedfb5e4f551779e93567fbaaa992d74323de5ed8041b2a38b33dc9af632e3f5/LICENSE/0 | {
"file_path": "swift/benchmarks/modelscope_mmlu/mmlu/human_sexuality/1.0.0/fedfb5e4f551779e93567fbaaa992d74323de5ed8041b2a38b33dc9af632e3f5/LICENSE",
"repo_id": "swift",
"token_count": 1
} | 174 |
MIT | swift/benchmarks/modelscope_mmlu/mmlu/professional_medicine/1.0.0/fedfb5e4f551779e93567fbaaa992d74323de5ed8041b2a38b33dc9af632e3f5/LICENSE/0 | {
"file_path": "swift/benchmarks/modelscope_mmlu/mmlu/professional_medicine/1.0.0/fedfb5e4f551779e93567fbaaa992d74323de5ed8041b2a38b33dc9af632e3f5/LICENSE",
"repo_id": "swift",
"token_count": 1
} | 175 |
# Web-UI Training and Inference
SWIFT now supports training and inference through a web interface, with the same parameter support as script-based training. After installing SWIFT, run:
```shell
swift web-ui
```
to launch web-based training and inference.
The web-ui command takes no arguments; everything configurable is controlled from the interface. However, several environment variables are available:
> WEBUI_SHARE=1/0, default 0: controls whether gradio runs in share mode
>
> SWIFT_UI_LANG=en/zh: controls the web-ui interface language
>
> WEBUI_SERVER: the server_name parameter, i.e. the web-ui host IP; 0.0.0.0 means accessible from any IP, 127.0.0.1 means local access only
>
> WEBUI_PORT: the port number of the web-ui
>
> USE_INFERENCE=1/0, default 0: controls whether the gradio inference page loads the model directly for inference (USE_INFERENCE=1) or deploys it as a service (USE_INFERENCE=0)
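For example, a minimal sketch that launches an English-language web-ui reachable from other machines (the port value here is only an illustration; any free port works):
```shell
# illustrative invocation combining the variables documented above
SWIFT_UI_LANG=en WEBUI_SERVER=0.0.0.0 WEBUI_PORT=7860 swift web-ui
```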
| swift/docs/source/GetStarted/界面训练推理.md/0 | {
"file_path": "swift/docs/source/GetStarted/界面训练推理.md",
"repo_id": "swift",
"token_count": 489
} | 176 |
# VLLM Inference Acceleration and Deployment
## Table of Contents
- [Environment Preparation](#environment-preparation)
- [Inference Acceleration](#inference-acceleration)
- [Web-UI Acceleration](#web-ui-acceleration)
- [Deployment](#deployment)
- [VLLM & LoRA](#vllm--lora)
## Environment Preparation
GPU devices: A10, 3090, V100, and A100 all work.
```bash
# Set the global pip mirror (for faster downloads)
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
# Install ms-swift
pip install 'ms-swift[llm]' -U
# vllm versions are tied to specific CUDA versions; choose a version according to `https://docs.vllm.ai/en/latest/getting_started/installation.html`
pip install vllm
pip install openai -U
# Environment alignment (usually unnecessary. If you encounter errors, run the code below; the repository is tested against the latest environment)
pip install -r requirements/framework.txt -U
pip install -r requirements/llm.txt -U
```
## Inference Acceleration
vllm does not support bnb-quantized models. The models supported by vllm are listed in [Supported Models](支持的模型和数据集.md#模型).
### qwen-7b-chat
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from swift.llm import (
ModelType, get_vllm_engine, get_default_template_type,
get_template, inference_vllm
)
model_type = ModelType.qwen_7b_chat
llm_engine = get_vllm_engine(model_type)
template_type = get_default_template_type(model_type)
template = get_template(template_type, llm_engine.hf_tokenizer)
# An interface similar to `transformers.GenerationConfig`
llm_engine.generation_config.max_new_tokens = 256
request_list = [{'query': '你好!'}, {'query': '浙江的省会在哪?'}]
resp_list = inference_vllm(llm_engine, template, request_list)
for request, resp in zip(request_list, resp_list):
print(f"query: {request['query']}")
print(f"response: {resp['response']}")
history1 = resp_list[1]['history']
request_list = [{'query': '这有什么好吃的', 'history': history1}]
resp_list = inference_vllm(llm_engine, template, request_list)
for request, resp in zip(request_list, resp_list):
print(f"query: {request['query']}")
print(f"response: {resp['response']}")
print(f"history: {resp['history']}")
"""Out[0]
query: 你好!
response: 你好!很高兴为你服务。有什么我可以帮助你的吗?
query: 浙江的省会在哪?
response: 浙江省会是杭州市。
query: 这有什么好吃的
response: 杭州是一个美食之城,拥有许多著名的菜肴和小吃,例如西湖醋鱼、东坡肉、叫化童子鸡等。此外,杭州还有许多小吃店,可以品尝到各种各样的本地美食。
history: [('浙江的省会在哪?', '浙江省会是杭州市。'), ('这有什么好吃的', '杭州是一个美食之城,拥有许多著名的菜肴和小吃,例如西湖醋鱼、东坡肉、叫化童子鸡等。此外,杭州还有许多小吃店,可以品尝到各种各样的本地美食。')]
"""
```
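The `generation_config` shown above can also be used to tune sampling before calling `inference_vllm`. A small sketch, assuming the config exposes the usual sampling fields such as `temperature` and `top_p` (mirroring `transformers.GenerationConfig`; treat the exact fields as an assumption):
```python
llm_engine.generation_config.max_new_tokens = 256
llm_engine.generation_config.temperature = 0.3  # assumption: standard sampling field
llm_engine.generation_config.top_p = 0.7        # assumption: standard sampling field
```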
### Streaming Output
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from swift.llm import (
ModelType, get_vllm_engine, get_default_template_type,
get_template, inference_stream_vllm
)
model_type = ModelType.qwen_7b_chat
llm_engine = get_vllm_engine(model_type)
template_type = get_default_template_type(model_type)
template = get_template(template_type, llm_engine.hf_tokenizer)
# An interface similar to `transformers.GenerationConfig`
llm_engine.generation_config.max_new_tokens = 256
request_list = [{'query': '你好!'}, {'query': '浙江的省会在哪?'}]
gen = inference_stream_vllm(llm_engine, template, request_list)
query_list = [request['query'] for request in request_list]
print(f"query_list: {query_list}")
for resp_list in gen:
response_list = [resp['response'] for resp in resp_list]
print(f'response_list: {response_list}')
history1 = resp_list[1]['history']
request_list = [{'query': '这有什么好吃的', 'history': history1}]
gen = inference_stream_vllm(llm_engine, template, request_list)
query = request_list[0]['query']
print(f"query: {query}")
for resp_list in gen:
response = resp_list[0]['response']
print(f'response: {response}')
history = resp_list[0]['history']
print(f'history: {history}')
"""Out[0]
query_list: ['你好!', '浙江的省会在哪?']
...
response_list: ['你好!很高兴为你服务。有什么我可以帮助你的吗?', '浙江省会是杭州市。']
query: 这有什么好吃的
...
response: 杭州是一个美食之城,拥有许多著名的菜肴和小吃,例如西湖醋鱼、东坡肉、叫化童子鸡等。此外,杭州还有许多小吃店,可以品尝到各种各样的本地美食。
history: [('浙江的省会在哪?', '浙江省会是杭州市。'), ('这有什么好吃的', '杭州是一个美食之城,拥有许多著名的菜肴和小吃,例如西湖醋鱼、东坡肉、叫化童子鸡等。此外,杭州还有许多小吃店,可以品尝到各种各样的本地美食。')]
"""
```
### chatglm3
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from swift.llm import (
ModelType, get_vllm_engine, get_default_template_type,
get_template, inference_vllm
)
model_type = ModelType.chatglm3_6b
llm_engine = get_vllm_engine(model_type)
template_type = get_default_template_type(model_type)
template = get_template(template_type, llm_engine.hf_tokenizer)
# An interface similar to `transformers.GenerationConfig`
llm_engine.generation_config.max_new_tokens = 256
request_list = [{'query': '你好!'}, {'query': '浙江的省会在哪?'}]
resp_list = inference_vllm(llm_engine, template, request_list)
for request, resp in zip(request_list, resp_list):
print(f"query: {request['query']}")
print(f"response: {resp['response']}")
history1 = resp_list[1]['history']
request_list = [{'query': '这有什么好吃的', 'history': history1}]
resp_list = inference_vllm(llm_engine, template, request_list)
for request, resp in zip(request_list, resp_list):
print(f"query: {request['query']}")
print(f"response: {resp['response']}")
print(f"history: {resp['history']}")
"""Out[0]
query: 你好!
response: 您好,我是人工智能助手。很高兴为您服务!请问有什么问题我可以帮您解答?
query: 浙江的省会在哪?
response: 浙江的省会是杭州。
query: 这有什么好吃的
response: 浙江有很多美食,其中一些非常有名的包括杭州的龙井虾仁、东坡肉、西湖醋鱼、叫化童子鸡等。另外,浙江还有很多特色小吃和糕点,比如宁波的汤团、年糕,温州的炒螃蟹、温州肉圆等。
history: [('浙江的省会在哪?', '浙江的省会是杭州。'), ('这有什么好吃的', '浙江有很多美食,其中一些非常有名的包括杭州的龙井虾仁、东坡肉、西湖醋鱼、叫化童子鸡等。另外,浙江还有很多特色小吃和糕点,比如宁波的汤团、年糕,温州的炒螃蟹、温州肉圆等。')]
"""
```
### Using the CLI
```bash
# qwen
CUDA_VISIBLE_DEVICES=0 swift infer --model_type qwen-7b-chat --infer_backend vllm
# yi
CUDA_VISIBLE_DEVICES=0 swift infer --model_type yi-6b-chat --infer_backend vllm
# gptq
CUDA_VISIBLE_DEVICES=0 swift infer --model_type qwen1half-7b-chat-int4 --infer_backend vllm
```
### Fine-tuned Models
**Single-sample inference**:
For models fine-tuned with LoRA, you first need to run [merge-lora](LLM微调文档.md#merge-lora) to produce a complete checkpoint directory.
Models fine-tuned with full parameters can seamlessly use VLLM for inference acceleration.
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from swift.llm import (
ModelType, get_vllm_engine, get_default_template_type,
get_template, inference_vllm
)
ckpt_dir = 'vx-xxx/checkpoint-100-merged'
model_type = ModelType.qwen_7b_chat
template_type = get_default_template_type(model_type)
llm_engine = get_vllm_engine(model_type, model_id_or_path=ckpt_dir)
tokenizer = llm_engine.hf_tokenizer
template = get_template(template_type, tokenizer)
query = '你好'
resp = inference_vllm(llm_engine, template, [{'query': query}])[0]
print(f"response: {resp['response']}")
print(f"history: {resp['history']}")
```
**Using the CLI**:
```bash
# Merge the LoRA delta weights and use vllm for inference acceleration
# If you need quantization, you can specify `--quant_bits 4`.
CUDA_VISIBLE_DEVICES=0 swift export \
--ckpt_dir 'xxx/vx-xxx/checkpoint-xxx' --merge_lora true

# Evaluate on the dataset
CUDA_VISIBLE_DEVICES=0 swift infer \
--ckpt_dir 'xxx/vx-xxx/checkpoint-xxx-merged' \
--infer_backend vllm \
--load_dataset_config true

# Manual evaluation
CUDA_VISIBLE_DEVICES=0 swift infer \
--ckpt_dir 'xxx/vx-xxx/checkpoint-xxx-merged' \
--infer_backend vllm
```
## Web-UI Acceleration
### Original Model
```bash
CUDA_VISIBLE_DEVICES=0 swift app-ui --model_type qwen-7b-chat --infer_backend vllm
```
### Fine-tuned Model
```bash
# Merge the LoRA delta weights and use vllm as the backend to build the app-ui
# If you need quantization, you can specify `--quant_bits 4`.
CUDA_VISIBLE_DEVICES=0 swift export \
--ckpt_dir 'xxx/vx-xxx/checkpoint-xxx' --merge_lora true
CUDA_VISIBLE_DEVICES=0 swift app-ui --ckpt_dir 'xxx/vx-xxx/checkpoint-xxx-merged' --infer_backend vllm
```
## Deployment
swift uses VLLM as the inference backend and is compatible with the OpenAI API style.
For server-side deployment command-line arguments, see: [deploy command-line arguments](命令行参数.md#deploy-参数).
For client-side OpenAI API parameters, see: https://platform.openai.com/docs/api-reference/introduction.
### Original Model
#### qwen-7b-chat
**Server:**
```bash
CUDA_VISIBLE_DEVICES=0 swift deploy --model_type qwen-7b-chat
# Multi-GPU deployment
RAY_memory_monitor_refresh_ms=0 CUDA_VISIBLE_DEVICES=0,1,2,3 swift deploy --model_type qwen-7b-chat --tensor_parallel_size 4
```
**Client:**
Test:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "qwen-7b-chat",
"messages": [{"role": "user", "content": "晚上睡不着觉怎么办?"}],
"max_tokens": 256,
"temperature": 0
}'
```
Using swift's synchronous client interface:
```python
from swift.llm import get_model_list_client, XRequestConfig, inference_client
model_list = get_model_list_client()
model_type = model_list.data[0].id
print(f'model_type: {model_type}')
query = '浙江的省会在哪里?'
request_config = XRequestConfig(seed=42)
resp = inference_client(model_type, query, request_config=request_config)
response = resp.choices[0].message.content
print(f'query: {query}')
print(f'response: {response}')
history = [(query, response)]
query = '这有什么好吃的?'
request_config = XRequestConfig(stream=True, seed=42)
stream_resp = inference_client(model_type, query, history, request_config=request_config)
print(f'query: {query}')
print('response: ', end='')
for chunk in stream_resp:
print(chunk.choices[0].delta.content, end='', flush=True)
print()
"""Out[0]
model_type: qwen-7b-chat
query: 浙江的省会在哪里?
response: 浙江省的省会是杭州市。
query: 这有什么好吃的?
response: 杭州有许多美食,例如西湖醋鱼、东坡肉、龙井虾仁、叫化童子鸡等。此外,杭州还有许多特色小吃,如西湖藕粉、杭州小笼包、杭州油条等。
"""
```
Using swift's asynchronous client interface:
```python
import asyncio
from swift.llm import get_model_list_client, XRequestConfig, inference_client_async
model_list = get_model_list_client()
model_type = model_list.data[0].id
print(f'model_type: {model_type}')
query = '浙江的省会在哪里?'
request_config = XRequestConfig(seed=42)
resp = asyncio.run(inference_client_async(model_type, query, request_config=request_config))
response = resp.choices[0].message.content
print(f'query: {query}')
print(f'response: {response}')
async def _stream():
global query
history = [(query, response)]
query = '这有什么好吃的?'
request_config = XRequestConfig(stream=True, seed=42)
stream_resp = await inference_client_async(model_type, query, history, request_config=request_config)
print(f'query: {query}')
print('response: ', end='')
async for chunk in stream_resp:
print(chunk.choices[0].delta.content, end='', flush=True)
print()
asyncio.run(_stream())
"""Out[0]
model_type: qwen-7b-chat
query: 浙江的省会在哪里?
response: 浙江省的省会是杭州市。
query: 这有什么好吃的?
response: 杭州有许多美食,例如西湖醋鱼、东坡肉、龙井虾仁、叫化童子鸡等。此外,杭州还有许多特色小吃,如西湖藕粉、杭州小笼包、杭州油条等。
"""
```
Using openai (synchronous):
```python
from openai import OpenAI
client = OpenAI(
api_key='EMPTY',
base_url='http://localhost:8000/v1',
)
model_type = client.models.list().data[0].id
print(f'model_type: {model_type}')
query = '浙江的省会在哪里?'
messages = [{
'role': 'user',
'content': query
}]
resp = client.chat.completions.create(
model=model_type,
messages=messages,
seed=42)
response = resp.choices[0].message.content
print(f'query: {query}')
print(f'response: {response}')
# Streaming
messages.append({'role': 'assistant', 'content': response})
query = '这有什么好吃的?'
messages.append({'role': 'user', 'content': query})
stream_resp = client.chat.completions.create(
model=model_type,
messages=messages,
stream=True,
seed=42)
print(f'query: {query}')
print('response: ', end='')
for chunk in stream_resp:
print(chunk.choices[0].delta.content, end='', flush=True)
print()
"""Out[0]
model_type: qwen-7b-chat
query: 浙江的省会在哪里?
response: 浙江省的省会是杭州市。
query: 这有什么好吃的?
response: 杭州有许多美食,例如西湖醋鱼、东坡肉、龙井虾仁、叫化童子鸡等。此外,杭州还有许多特色小吃,如西湖藕粉、杭州小笼包、杭州油条等。
"""
```
#### qwen-7b
**Server:**
```bash
CUDA_VISIBLE_DEVICES=0 swift deploy --model_type qwen-7b
# Multi-GPU deployment
RAY_memory_monitor_refresh_ms=0 CUDA_VISIBLE_DEVICES=0,1,2,3 swift deploy --model_type qwen-7b --tensor_parallel_size 4
```
**Client:**
Test:
```bash
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "qwen-7b",
"prompt": "浙江 -> 杭州\n安徽 -> 合肥\n四川 ->",
"max_tokens": 32,
"temperature": 0.1,
"seed": 42
}'
```
Using swift's synchronous client interface:
```python
from swift.llm import get_model_list_client, XRequestConfig, inference_client
model_list = get_model_list_client()
model_type = model_list.data[0].id
print(f'model_type: {model_type}')
query = '浙江 -> 杭州\n安徽 -> 合肥\n四川 ->'
request_config = XRequestConfig(max_tokens=32, temperature=0.1, seed=42)
resp = inference_client(model_type, query, request_config=request_config)
response = resp.choices[0].text
print(f'query: {query}')
print(f'response: {response}')
request_config.stream = True
stream_resp = inference_client(model_type, query, request_config=request_config)
print(f'query: {query}')
print('response: ', end='')
for chunk in stream_resp:
print(chunk.choices[0].text, end='', flush=True)
print()
"""Out[0]
model_type: qwen-7b
query: 浙江 -> 杭州
安徽 -> 合肥
四川 ->
response: 成都
广东 -> 广州
江苏 -> 南京
浙江 -> 杭州
安徽 -> 合肥
四川 -> 成都
query: 浙江 -> 杭州
安徽 -> 合肥
四川 ->
response: 成都
广东 -> 广州
江苏 -> 南京
浙江 -> 杭州
安徽 -> 合肥
四川 -> 成都
"""
```
Using swift's asynchronous client interface:
```python
import asyncio
from swift.llm import get_model_list_client, XRequestConfig, inference_client_async
model_list = get_model_list_client()
model_type = model_list.data[0].id
print(f'model_type: {model_type}')
query = '浙江 -> 杭州\n安徽 -> 合肥\n四川 ->'
request_config = XRequestConfig(max_tokens=32, temperature=0.1, seed=42)
resp = asyncio.run(inference_client_async(model_type, query, request_config=request_config))
response = resp.choices[0].text
print(f'query: {query}')
print(f'response: {response}')
async def _stream():
request_config.stream = True
stream_resp = await inference_client_async(model_type, query, request_config=request_config)
print(f'query: {query}')
print('response: ', end='')
async for chunk in stream_resp:
print(chunk.choices[0].text, end='', flush=True)
print()
asyncio.run(_stream())
"""Out[0]
model_type: qwen-7b
query: 浙江 -> 杭州
安徽 -> 合肥
四川 ->
response: 成都
广东 -> 广州
江苏 -> 南京
浙江 -> 杭州
安徽 -> 合肥
四川 -> 成都
query: 浙江 -> 杭州
安徽 -> 合肥
四川 ->
response: 成都
广东 -> 广州
江苏 -> 南京
浙江 -> 杭州
安徽 -> 合肥
四川 -> 成都
"""
```
Using openai (synchronous):
```python
from openai import OpenAI
client = OpenAI(
api_key='EMPTY',
base_url='http://localhost:8000/v1',
)
model_type = client.models.list().data[0].id
print(f'model_type: {model_type}')
query = '浙江 -> 杭州\n安徽 -> 合肥\n四川 ->'
kwargs = {'model': model_type, 'prompt': query, 'seed': 42, 'temperature': 0.1, 'max_tokens': 32}
resp = client.completions.create(**kwargs)
response = resp.choices[0].text
print(f'query: {query}')
print(f'response: {response}')
# Streaming
stream_resp = client.completions.create(stream=True, **kwargs)
print(f'query: {query}')
print('response: ', end='')
for chunk in stream_resp:
print(chunk.choices[0].text, end='', flush=True)
print()
"""Out[0]
model_type: qwen-7b
query: 浙江 -> 杭州
安徽 -> 合肥
四川 ->
response: 成都
广东 -> 广州
江苏 -> 南京
浙江 -> 杭州
安徽 -> 合肥
四川 -> 成都
query: 浙江 -> 杭州
安徽 -> 合肥
四川 ->
response: 成都
广东 -> 广州
江苏 -> 南京
浙江 -> 杭州
安徽 -> 合肥
四川 -> 成都
"""
```
### Fine-tuned Model
Server:
```bash
# Merge the LoRA delta weights and deploy
# If you need quantization, you can specify `--quant_bits 4`.
CUDA_VISIBLE_DEVICES=0 swift export \
--ckpt_dir 'xxx/vx-xxx/checkpoint-xxx' --merge_lora true
CUDA_VISIBLE_DEVICES=0 swift deploy --ckpt_dir 'xxx/vx-xxx/checkpoint-xxx-merged'
```
The client example code is the same as for the original model.
## Multi-LoRA Deployment
Deployment with the pt backend now supports multi-LoRA deployment via `peft>=0.10.0`. Specifically:
- Make sure `merge_lora` is `False` at deployment time
- Use the `--lora_modules` argument; see the [command-line documentation](命令行参数.md)
- At inference time, specify the name of the LoRA tuner in the model field
Example:
```shell
# Suppose a LoRA model named Kakarot (卡卡罗特) was trained from llama3-8b-instruct
# Server
swift deploy --ckpt_dir /mnt/ckpt-1000 --infer_backend pt --lora_modules my_tuner=/mnt/my-tuner
# This loads two tuners: `default-lora` from `/mnt/ckpt-1000` and `my_tuner` from `/mnt/my-tuner`
# Client
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "my_tuner",
"messages": [{"role": "user", "content": "who are you?"}],
"max_tokens": 256,
"temperature": 0
}'
# resp: I am Kakarot...
# If model='llama3-8b-instruct' is specified instead, it returns I'm llama3..., i.e. the original model's answer
```
> [!NOTE]
>
> If the `--ckpt_dir` argument is a LoRA path, the original default tuner is loaded as `default-lora`, and any other tuners must be loaded via `lora_modules`.
## VLLM & LoRA
The models supported by VLLM & LoRA are listed at: https://docs.vllm.ai/en/latest/models/supported_models.html
### Preparing the LoRA
```shell
# Experimental environment: 4 * A100
# 4 * 30GB GPU memory
CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=4 \
swift sft \
--model_type llama2-7b-chat \
--dataset self-cognition#500 sharegpt-gpt4:default#1000 \
--logging_steps 5 \
--max_length 4096 \
--learning_rate 1e-4 \
--output_dir output \
--lora_target_modules ALL \
--model_name 小黄 'Xiao Huang' \
--model_author 魔搭 ModelScope \
```
### VLLM推理加速
Inference:
```shell
CUDA_VISIBLE_DEVICES=0 swift infer \
--ckpt_dir output/llama2-7b-chat/vx-xxx/checkpoint-xxx \
--infer_backend vllm \
--vllm_enable_lora true
```
Result:
```python
"""
<<< who are you?
I am an artificial intelligence language model developed by ModelScope. I am designed to assist and communicate with users in a helpful and respectful manner. I can answer questions, provide information, and engage in conversation. How can I help you?
"""
```
Single-sample inference:
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import torch
from swift.llm import (
ModelType, get_vllm_engine, get_default_template_type,
get_template, inference_stream_vllm, LoRARequest, inference_vllm
)
lora_checkpoint = 'output/llama2-7b-chat/vx-xxx/checkpoint-xxx'
lora_request = LoRARequest('default-lora', 1, lora_checkpoint)
model_type = ModelType.llama2_7b_chat
llm_engine = get_vllm_engine(model_type, torch.float16, enable_lora=True,
max_loras=1, max_lora_rank=16)
template_type = get_default_template_type(model_type)
template = get_template(template_type, llm_engine.hf_tokenizer)
# An interface similar to `transformers.GenerationConfig`
llm_engine.generation_config.max_new_tokens = 256
# use lora
request_list = [{'query': 'who are you?'}]
query = request_list[0]['query']
resp_list = inference_vllm(llm_engine, template, request_list, lora_request=lora_request)
response = resp_list[0]['response']
print(f'query: {query}')
print(f'response: {response}')
# no lora
gen = inference_stream_vllm(llm_engine, template, request_list)
query = request_list[0]['query']
print(f'query: {query}\nresponse: ', end='')
print_idx = 0
for resp_list in gen:
response = resp_list[0]['response']
print(response[print_idx:], end='', flush=True)
print_idx = len(response)
print()
"""
query: who are you?
response: I am an artificial intelligence language model developed by ModelScope. I can understand and respond to text-based questions and prompts, and provide information and assistance on a wide range of topics.
query: who are you?
response: Hello! I'm just an AI assistant, here to help you with any questions or tasks you may have. I'm designed to be helpful, respectful, and honest in my responses, and I strive to provide socially unbiased and positive answers. I'm not a human, but a machine learning model trained on a large dataset of text to generate responses to a wide range of questions and prompts. I'm here to help you in any way I can, while always ensuring that my answers are safe and respectful. Is there anything specific you'd like to know or discuss?
"""
```
### Deployment
**Server**:
```shell
CUDA_VISIBLE_DEVICES=0 swift deploy \
--ckpt_dir output/llama2-7b-chat/vx-xxx/checkpoint-xxx \
--infer_backend vllm \
--vllm_enable_lora true
```
**Client**:
Test:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "default-lora",
"messages": [{"role": "user", "content": "who are you?"}],
"max_tokens": 256,
"temperature": 0
}'
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "llama2-7b-chat",
"messages": [{"role": "user", "content": "who are you?"}],
"max_tokens": 256,
"temperature": 0
}'
```
Output:
```python
"""
{"model":"default-lora","choices":[{"index":0,"message":{"role":"assistant","content":"I am an artificial intelligence language model developed by ModelScope. I am designed to assist and communicate with users in a helpful, respectful, and honest manner. I can answer questions, provide information, and engage in conversation. How can I assist you?"},"finish_reason":"stop"}],"usage":{"prompt_tokens":141,"completion_tokens":53,"total_tokens":194},"id":"chatcmpl-fb95932dcdab4ce68f4be49c9946b306","object":"chat.completion","created":1710820459}
{"model":"llama2-7b-chat","choices":[{"index":0,"message":{"role":"assistant","content":" Hello! I'm just an AI assistant, here to help you with any questions or concerns you may have. I'm designed to provide helpful, respectful, and honest responses, while ensuring that my answers are socially unbiased and positive in nature. I'm not capable of providing harmful, unethical, racist, sexist, toxic, dangerous, or illegal content, and I will always do my best to explain why I cannot answer a question if it does not make sense or is not factually coherent. If I don't know the answer to a question, I will not provide false information. My goal is to assist and provide accurate information to the best of my abilities. Is there anything else I can help you with?"},"finish_reason":"stop"}],"usage":{"prompt_tokens":141,"completion_tokens":163,"total_tokens":304},"id":"chatcmpl-d867a3a52bb7451588d4f73e1df4ba95","object":"chat.completion","created":1710820557}
"""
```
Using openai:
```python
from openai import OpenAI
client = OpenAI(
api_key='EMPTY',
base_url='http://localhost:8000/v1',
)
model_type_list = [model.id for model in client.models.list().data]
print(f'model_type_list: {model_type_list}')
query = 'who are you?'
messages = [{
'role': 'user',
'content': query
}]
resp = client.chat.completions.create(
model='default-lora',
messages=messages,
seed=42)
response = resp.choices[0].message.content
print(f'query: {query}')
print(f'response: {response}')
# Streaming
stream_resp = client.chat.completions.create(
model='llama2-7b-chat',
messages=messages,
stream=True,
seed=42)
print(f'query: {query}')
print('response: ', end='')
for chunk in stream_resp:
print(chunk.choices[0].delta.content, end='', flush=True)
print()
"""Out[0]
model_type_list: ['llama2-7b-chat', 'default-lora']
query: who are you?
response: I am an artificial intelligence language model developed by ModelScope. I am designed to assist and communicate with users in a helpful, respectful, and honest manner. I can answer questions, provide information, and engage in conversation. How can I assist you?
query: who are you?
response: Hello! I'm just an AI assistant, here to help you with any questions or concerns you may have. I'm designed to provide helpful, respectful, and honest responses, while ensuring that my answers are socially unbiased and positive in nature. I'm not capable of providing harmful, unethical, racist, sexist, toxic, dangerous, or illegal content, and I will always do my best to explain why I cannot answer a question if it does not make sense or is not factually coherent. If I don't know the answer to a question, I will not provide false information. Is there anything else I can help you with?
"""
```
| swift/docs/source/LLM/VLLM推理加速与部署.md/0 | {
"file_path": "swift/docs/source/LLM/VLLM推理加速与部署.md",
"repo_id": "swift",
"token_count": 12072
} | 177 |
.. swift documentation file,
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Swift DOCUMENTATION
========================
.. toctree::
:maxdepth: 2
:caption: Get Started
GetStarted/SWIFT安装.md
GetStarted/界面训练推理.md
GetStarted/使用tuners.md
GetStarted/ResTuning.md
GetStarted/SCEdit.md
GetStarted/在SWIFT内使用PEFT.md
.. toctree::
:maxdepth: 2
:caption: LLM Training and Inference
LLM/LLM推理文档.md
LLM/LLM微调文档.md
LLM/DPO训练文档.md
LLM/LLM评测文档.md
LLM/LLM量化文档.md
LLM/VLLM推理加速与部署.md
LLM/LLM实验文档.md
LLM/命令行参数.md
LLM/支持的模型和数据集.md
LLM/自定义与拓展.md
LLM/自我认知微调最佳实践.md
LLM/Agent微调最佳实践.md
LLM/Agent部署最佳实践.md
LLM/Qwen1.5全流程最佳实践.md
LLM/NPU推理与微调最佳实践.md
LLM/Grok训练和推理.md
LLM/ORPO算法最佳实践.md
LLM/SimPO算法最佳实践.md
LLM/HuggingFace生态兼容.md
LLM/Benchmark.md
.. toctree::
:maxdepth: 2
:caption: Multi-Modal LLM Training and Inference
Multi-Modal/qwen-vl最佳实践.md
Multi-Modal/qwen-audio最佳实践.md
Multi-Modal/deepseek-vl最佳实践.md
Multi-Modal/internlm-xcomposer2最佳实践.md
Multi-Modal/phi3-vision最佳实践.md
Multi-Modal/llava最佳实践.md
Multi-Modal/yi-vl最佳实践.md
Multi-Modal/mplug-owl2最佳实践.md
Multi-Modal/cogvlm最佳实践.md
Multi-Modal/cogvlm2最佳实践.md
Multi-Modal/minicpm-v最佳实践.md
Multi-Modal/minicpm-v-2最佳实践.md
Multi-Modal/minicpm-v-2.5最佳实践.md
Multi-Modal/internvl最佳实践.md
Multi-Modal/MLLM部署文档.md
.. toctree::
:maxdepth: 2
:caption: AIGC Training and Inference
AIGC/AnimateDiff微调推理文档.md
.. toctree::
:maxdepth: 2
:caption: API Doc
Hub <api/swift.hub>
Trainer <api/swift.trainers>
Tuner <api/swift.tuners>
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| swift/docs/source/index.rst/0 | {
"file_path": "swift/docs/source/index.rst",
"repo_id": "swift",
"token_count": 1196
} | 178 |
# Human Preference Alignment Training Documentation
This document provides training scripts for various human preference alignment algorithms. For more detailed information on the algorithms and how to choose among them, please refer to the [documentation](https://github.com/modelscope/modelscope-classroom/blob/main/LLM-tutorial/M.%E4%BA%BA%E7%B1%BB%E5%81%8F%E5%A5%BD%E5%AF%B9%E9%BD%90%E8%AE%AD%E7%BB%83.md).
## Table of Contents
- [Environment Setup](#environment-setup)
- [Dataset](#dataset)
- [DPO](#dpo)
- [KTO](#kto)
- [CPO](#cpo)
- [ORPO](#orpo)
- [SimPO](#simpo)
- [Custom Data](#custom-data)
## Environment Setup
```bash
# Set pip global mirror (for faster downloads)
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
# Install ms-swift
git clone https://github.com/modelscope/swift.git
cd swift
pip install -e '.[llm]'
# Environment alignment (usually not necessary. If you encounter errors, run the code below; the repository is tested against the latest environment)
pip install -r requirements/framework.txt -U
pip install -r requirements/llm.txt -U
```
## Dataset
Human preference alignment training typically requires data in the format $(x,y_w,y_l)$, where $x$ represents the model input and $y_w,y_l$ represent the preferred and rejected answers according to human preference.
Data for the KTO algorithm is somewhat special: it requires only data in the format $(x,y,\text{label})$, where $x$ represents the model input, $y$ represents the model output, and the label indicates whether the answer aligns with human preferences.
KTO can also be trained using the first data format; see the KTO section for the differences in training scripts.
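As an illustration, a minimal JSON Lines sketch of the two formats. The `query`/`response`/`rejected_response`/`label` field names follow the custom-dataset conventions described in the customization documentation; treat them as an assumption and check [Custom Data](#custom-data) for the authoritative schema:
```jsonl
{"query": "What is 1+1?", "response": "It is 2.", "rejected_response": "It is 3."}
{"query": "What is 1+1?", "response": "It is 2.", "label": true}
```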
**Training Tips**:
- If you are training a base model with data containing history, you need to specify a template that supports multi-turn dialogue (base models often do not support multi-turn dialogue); for this situation we have set chatml as the default template, but you can also use --model_type to select the template of the model being trained
- For training with a custom dataset, please refer to [Customization](Customization.md)
- The following training scripts use --lora_target_modules ALL to train all linear layers of the model; you can instead set --lora_target_modules DEFAULT to train only the model's QKV matrices
## DPO
[Paper (arXiv)](https://arxiv.org/abs/2305.18290)
Hyperparameters
- `beta`: KL regularization coefficient; the higher the value, the greater the penalty for deviations from the reference model. Default is 0.1
Before starting DPO training, it is recommended to first train on the preferred-answer portion of the preference dataset, so that the data fits the distribution requirements of the DPO algorithm.
We also mix the sft loss into the DPO loss to stabilize training; you can adjust the sft loss coefficient with the hyperparameter `sft_beta` (default 0.1).
For the training script, we provide single-GPU, multi-GPU device-map, and multi-GPU DDP versions; for brevity, only the single-GPU version is given for the subsequent algorithms.
```bash
# Experimental environment: A100
# Memory usage: 40G
CUDA_VISIBLE_DEVICES=0 \
swift rlhf \
--rlhf_type dpo \
--model_type llama3-8b-instruct \
--beta 0.1 \
--sft_beta 0.1 \
--sft_type lora \
--dataset shareai-llama3-dpo-zh-en-emoji \
--num_train_epochs 2 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--learning_rate 5e-5 \
--gradient_accumulation_steps 16 \
--warmup_ratio 0.03 \
--save_total_limit 2
# MP(device map)
# Memory usage: 2*24G
CUDA_VISIBLE_DEVICES=0,1 \
swift rlhf \
--rlhf_type dpo \
--model_type llama3-8b-instruct \
--beta 0.1 \
--sft_beta 0.1 \
--sft_type lora \
--dataset shareai-llama3-dpo-zh-en-emoji \
--num_train_epochs 2 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--learning_rate 5e-5 \
--gradient_accumulation_steps 16 \
--warmup_ratio 0.03 \
--save_total_limit 2
# DDP + MP
# Memory usage: 4*24G
CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=2 \
swift rlhf \
--rlhf_type dpo \
--model_type llama3-8b-instruct \
--beta 0.1 \
--sft_beta 0.1 \
--sft_type lora \
--dataset shareai-llama3-dpo-zh-en-emoji \
--num_train_epochs 2 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--learning_rate 5e-5 \
--gradient_accumulation_steps $(expr 16 / $NPROC_PER_NODE) \
--warmup_ratio 0.03 \
--save_total_limit 2
```
For model inference and deployment after training, refer to the [LLM Inference Document](./LLM-Inference.md) and the [VLLM Inference Acceleration and Deployment Document](./VLLM-inference-acceleration-and-deployment.md)
## KTO
[Paper (arXiv)](https://arxiv.org/abs/2402.01306)
Hyperparameters
- beta: KL regularization coefficient; the higher the value, the greater the penalty for deviations from the reference model. Default is 0.1
- desirable_weight: the $\lambda_D$ term in the loss function, the loss weight for preferred-answer samples. Default is 1.0
- undesirable_weight: the $\lambda_U$ term in the loss function, the loss weight for rejected-answer samples. Default is 1.0
Let $n_D$ and $n_U$ denote the numbers of preferred and rejected answers in the dataset, respectively. For the hyperparameters $\lambda_D$ and $\lambda_U$, the authors recommend setting $\frac{\lambda_D n_D}{\lambda_U n_U}\in[1,\frac{4}{3}]$.
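For example, under this recommendation, a dataset with $n_D=1000$ preferred and $n_U=3000$ rejected samples, keeping $\lambda_U=1$, would suggest choosing $\lambda_D\in[3,4]$.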
Training script using $(x,y,\text{label})$ format data
```bash
CUDA_VISIBLE_DEVICES=0 \
swift rlhf \
--rlhf_type kto \
--model_type llama3-8b-instruct \
--beta 0.1 \
--desirable_weight 1.0 \
--undesirable_weight 1.0 \
--sft_type lora \
--dataset ultrafeedback-kto \
--num_train_epochs 2 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--learning_rate 5e-5 \
--gradient_accumulation_steps 16 \
--warmup_ratio 0.03 \
--save_total_limit 2
```
Training script using $(x,y_w,y_l)$ format data
```bash
CUDA_VISIBLE_DEVICES=0 \
swift rlhf \
--rlhf_type dpo \
--loss_type kto_pair \
--model_type llama3-8b-instruct \
--beta 0.1 \
--desirable_weight 1.0 \
--undesirable_weight 1.0 \
--sft_type lora \
--dataset shareai-llama3-dpo-zh-en-emoji \
--num_train_epochs 2 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--learning_rate 5e-5 \
--gradient_accumulation_steps 16 \
--warmup_ratio 0.03 \
--save_total_limit 2
```
## CPO
[Paper (arXiv)](https://arxiv.org/abs/2401.08417)
Hyperparameters
- beta: the beta factor in the CPO loss. Default is 0.1
Training script
```bash
CUDA_VISIBLE_DEVICES=0 \
swift rlhf \
--rlhf_type cpo \
--model_type llama3-8b-instruct \
--beta 0.1 \
--sft_type lora \
--dataset shareai-llama3-dpo-zh-en-emoji \
--num_train_epochs 2 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--learning_rate 5e-5 \
--gradient_accumulation_steps 16 \
--warmup_ratio 0.03 \
--save_total_limit 2
```
## ORPO
[Paper (arXiv)](https://arxiv.org/abs/2403.07691)
Hyperparameters
- lambda: coefficient of the odds-ratio loss
**Note**: ORPO uses the `beta` argument to pass the hyperparameter lambda
```bash
CUDA_VISIBLE_DEVICES=0 \
swift rlhf \
--rlhf_type orpo \
--model_type llama3-8b-instruct \
--beta 0.1 \
--sft_type lora \
--dataset shareai-llama3-dpo-zh-en-emoji \
--num_train_epochs 2 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--learning_rate 5e-5 \
--gradient_accumulation_steps 16 \
--warmup_ratio 0.03 \
--save_total_limit 2
```
## SimPO
[Paper (arXiv)](https://arxiv.org/abs/2405.14734)
Hyperparameters
- beta: coefficient of the implicit reward. Default is 2.0
- simpo_gamma: reward margin term. Default is 1.0
Training script
```bash
CUDA_VISIBLE_DEVICES=0 \
swift rlhf \
--rlhf_type simpo \
--model_type llama3-8b-instruct \
--beta 2.0 \
--simpo_gamma 1.0 \
--sft_type lora \
--dataset shareai-llama3-dpo-zh-en-emoji \
--num_train_epochs 2 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--learning_rate 5e-5 \
--gradient_accumulation_steps 16 \
--warmup_ratio 0.03 \
--save_total_limit 2
```
| swift/docs/source_en/LLM/Human-Preference-Alignment-Training-Documentation.md/0 | {
"file_path": "swift/docs/source_en/LLM/Human-Preference-Alignment-Training-Documentation.md",
"repo_id": "swift",
"token_count": 3328
} | 179 |
# Deepseek-VL Best Practice
## Table of Contents
- [Environment Preparation](#environment-preparation)
- [Inference](#inference)
- [Fine-tuning](#fine-tuning)
- [Inference After Fine-tuning](#inference-after-fine-tuning)
## Environment Preparation
```shell
pip install 'ms-swift[llm]' -U
pip install attrdict
```
Model Link:
- deepseek-vl-1_3b-chat: [https://www.modelscope.cn/models/deepseek-ai/deepseek-vl-1.3b-chat/summary](https://www.modelscope.cn/models/deepseek-ai/deepseek-vl-1.3b-chat/summary)
- deepseek-vl-7b-chat: [https://www.modelscope.cn/models/deepseek-ai/deepseek-vl-7b-chat/summary](https://www.modelscope.cn/models/deepseek-ai/deepseek-vl-7b-chat/summary)
## Inference
Inference for deepseek-vl-7b-chat:
```shell
# Experimental environment: A100
# 30GB GPU memory
CUDA_VISIBLE_DEVICES=0 swift infer --model_type deepseek-vl-7b-chat
# If you want to run it on a 3090, you can use the 1.3b model for inference
CUDA_VISIBLE_DEVICES=0 swift infer --model_type deepseek-vl-1_3b-chat
```
Demonstration of the 7b model (supports passing local paths or URLs):
```python
"""
<<< Who are you?
I am an AI language model, designed to understand and generate human-like text based on the input I receive. I am not a human, but I am here to help answer your questions and assist you with any tasks you may have.
--------------------------------------------------
<<< <img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/animal.png</img><img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/cat.png</img>What is the difference between these two images?
The image provided is a close-up of a kitten with big blue eyes, looking directly at the camera. The kitten appears to be a domestic cat, specifically a kitten, given its small size and youthful features. The image is a high-resolution, detailed photograph that captures the kitten's facial features and fur texture.
The second image is a cartoon illustration of three sheep standing in a grassy field with mountains in the background. The sheep are white with brown faces and legs, and they have large, round eyes. The illustration is stylized with a flat, two-dimensional appearance, and the colors are bright and vibrant. The sheep are evenly spaced and facing forward, giving the impression of a peaceful, pastoral scene.
The differences between the two images are primarily in their artistic styles and subjects. The first image is a realistic photograph of a kitten, while the second image is a stylized cartoon illustration of sheep. The first image is a photograph with a focus on the kitten's facial features and fur texture, while the second image is a cartoon with a focus on the sheep's characters and the setting. The first image is a single subject, while the second image features multiple subjects in a group.
--------------------------------------------------
<<< <img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/animal.png</img>How many sheep are there in the picture?
There are four sheep in the picture.
--------------------------------------------------
<<< <img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/math.png</img>What is the result of the calculation?
The result of the calculation is 1452 + 45304 = 46756.
--------------------------------------------------
<<< <img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/poem.png</img>Write a poem based on the content of the picture
In the tranquil waters, a boat gently floats,
A beacon of light, a lone candle's soft glow.
The night is vast, a canvas of stars above,
A serene scene, a moment of peace, it seems to offer.
The boat, a vessel of wood and a mast so tall,
Carries a passenger, a figure so still.
The water's surface, a mirror of the night sky,
Reflects the boat's silhouette, a sight so divine.
The trees, standing tall, their forms in the distance,
A forest of mystery, a silent chorus.
The stars, scattered like diamonds in the heavens,
Illuminate the night, a celestial dance.
The boat, a symbol of journey and adventure,
In the quiet of the night, it's a sight to behold.
A moment frozen in time, a memory to cherish,
In the picture of the night, a boat on the water.
"""
```
Sample images are as follows:
cat:
<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/cat.png" width="250" style="display: inline-block;">
animal:
<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/animal.png" width="250" style="display: inline-block;">
math:
<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/math.png" width="250" style="display: inline-block;">
poem:
<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/poem.png" width="250" style="display: inline-block;">
**Single sample inference**
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from swift.llm import (
get_model_tokenizer, get_template, inference, ModelType,
get_default_template_type, inference_stream
)
from swift.utils import seed_everything
import torch
model_type = ModelType.deepseek_vl_7b_chat
template_type = get_default_template_type(model_type)
print(f'template_type: {template_type}')
model, tokenizer = get_model_tokenizer(model_type, torch.float16,
model_kwargs={'device_map': 'auto'})
model.generation_config.max_new_tokens = 256
template = get_template(template_type, tokenizer)
seed_everything(42)
query = '<img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png</img>How far is it from each city?'
response, history = inference(model, template, query)
print(f'query: {query}')
print(f'response: {response}')
# Streaming
query = 'Which city is the farthest?'
gen = inference_stream(model, template, query, history)
print_idx = 0
print(f'query: {query}\nresponse: ', end='')
for response, history in gen:
delta = response[print_idx:]
print(delta, end='', flush=True)
print_idx = len(response)
print()
print(f'history: {history}')
"""
query: <img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png</img>How far is it from each city?
response: The distance from each city is as follows:
- From "Mata", it is 14 km away.
- From "Yangjiang", it is 62 km away.
- From "Guangzhou", it is 293 km away.
These distances are clearly indicated on the green road sign with white text, providing the necessary information for travelers to gauge the distance to each city from the current location.
query: Which city is the farthest?
response: The farthest city from the current location is "Guangzhou", which is 293 km away. This is indicated by the longest number on the green road sign, which is larger than the distances to the other cities listed.
history: [['<img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png</img>How far is it from each city?', 'The distance from each city is as follows:\n\n- From "Mata", it is 14 km away.\n- From "Yangjiang", it is 62 km away.\n- From "Guangzhou", it is 293 km away.\n\nThese distances are clearly indicated on the green road sign with white text, providing the necessary information for travelers to gauge the distance to each city from the current location.'], ['Which city is the farthest?', 'The farthest city from the current location is "Guangzhou", which is 293 km away. This is indicated by the longest number on the green road sign, which is larger than the distances to the other cities listed.']]
"""
```
Sample image is as follows:
road:
<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png" width="250" style="display: inline-block;">
## Fine-tuning
Multi-modal large model fine-tuning usually uses **custom datasets**. Here is a runnable demo:
LoRA fine-tuning:
(By default, only lora fine-tuning is performed on the qkv part of the LLM. If you want to fine-tune all linear parts including the vision model, you can specify `--lora_target_modules ALL`)
```shell
# Experimental environment: A10, 3090, V100
# 20GB GPU memory
CUDA_VISIBLE_DEVICES=0 swift sft \
--model_type deepseek-vl-7b-chat \
--dataset coco-en-mini
```
Full parameter fine-tuning:
```shell
# Experimental environment: 4 * A100
# 4 * 70GB GPU memory
NPROC_PER_NODE=4 CUDA_VISIBLE_DEVICES=0,1,2,3 swift sft \
--model_type deepseek-vl-7b-chat \
--dataset coco-en-mini \
--sft_type full \
--use_flash_attn true \
--deepspeed default-zero2
```
[Custom datasets](../LLM/Customization.md#-Recommended-Command-line-arguments) support json and jsonl formats. Below is an example of a custom dataset:
(Multi-turn conversations are supported; each turn may include multiple images or none; local paths or URLs may be passed)
```json
[
{"conversations": [
{"from": "user", "value": "<img>img_path</img>11111"},
{"from": "assistant", "value": "22222"}
]},
{"conversations": [
{"from": "user", "value": "<img>img_path</img><img>img_path2</img><img>img_path3</img>aaaaa"},
{"from": "assistant", "value": "bbbbb"},
{"from": "user", "value": "<img>img_path</img>ccccc"},
{"from": "assistant", "value": "ddddd"}
]},
{"conversations": [
{"from": "user", "value": "AAAAA"},
{"from": "assistant", "value": "BBBBB"},
{"from": "user", "value": "CCCCC"},
{"from": "assistant", "value": "DDDDD"}
]}
]
```
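A hypothetical way to wire such a file into fine-tuning, assuming it is saved as `my_data.json` (the `--custom_train_dataset_path` argument is described in the Customization document linked above; the file name is illustrative):
```shell
# illustrative: train on the custom dataset defined above
CUDA_VISIBLE_DEVICES=0 swift sft \
    --model_type deepseek-vl-7b-chat \
    --custom_train_dataset_path my_data.json
```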
## Inference After Fine-tuning
Direct inference:
```shell
CUDA_VISIBLE_DEVICES=0 swift infer \
--ckpt_dir output/deepseek-vl-7b-chat/vx-xxx/checkpoint-xxx \
--load_dataset_config true
```
**merge-lora** and inference:
```shell
CUDA_VISIBLE_DEVICES=0 swift export \
--ckpt_dir output/deepseek-vl-7b-chat/vx-xxx/checkpoint-xxx \
--merge_lora true
CUDA_VISIBLE_DEVICES=0 swift infer \
--ckpt_dir output/deepseek-vl-7b-chat/vx-xxx/checkpoint-xxx-merged \
--load_dataset_config true
```
| swift/docs/source_en/Multi-Modal/deepseek-vl-best-practice.md/0 | {
"file_path": "swift/docs/source_en/Multi-Modal/deepseek-vl-best-practice.md",
"repo_id": "swift",
"token_count": 3177
} | 180 |
# Experimental environment: A10, 3090
# 10GB GPU memory
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python llm_sft.py \
--model_id_or_path ZhipuAI/chatglm3-6b-32k \
--model_revision master \
--sft_type lora \
--tuner_backend peft \
--template_type AUTO \
--dtype AUTO \
--output_dir output \
--dataset agent-instruct-all-en \
--num_train_epochs 1 \
--max_length 4096 \
--check_dataset_strategy warning \
--quantization_bit 4 \
--bnb_4bit_comp_dtype AUTO \
--lora_rank 8 \
--lora_alpha 32 \
--lora_dropout_p 0.05 \
--lora_target_modules DEFAULT \
--gradient_checkpointing true \
--batch_size 1 \
--weight_decay 0.1 \
--learning_rate 1e-4 \
--gradient_accumulation_steps 16 \
--max_grad_norm 0.5 \
--warmup_ratio 0.03 \
--eval_steps 100 \
--save_steps 100 \
--save_total_limit 2 \
--logging_steps 10
| swift/examples/pytorch/llm/scripts/chatglm3_6b_32k/qlora/sft.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/chatglm3_6b_32k/qlora/sft.sh",
"repo_id": "swift",
"token_count": 413
} | 181 |
# Experimental environment: V100, A10, 3090
# 12GB GPU memory
CUDA_VISIBLE_DEVICES=0 \
swift sft \
--model_id_or_path AI-ModelScope/gemma-2b-it \
--sft_type lora \
--tuner_backend peft \
--template_type AUTO \
--dtype AUTO \
--output_dir output \
--dataset hc3-zh \
--train_dataset_sample 5000 \
--num_train_epochs 1 \
--max_length 2048 \
--check_dataset_strategy warning \
--lora_rank 8 \
--lora_alpha 32 \
--lora_dropout_p 0.05 \
--lora_target_modules ALL \
--gradient_checkpointing true \
--batch_size 1 \
--weight_decay 0.1 \
--learning_rate 1e-4 \
--gradient_accumulation_steps 16 \
--max_grad_norm 0.5 \
--warmup_ratio 0.1 \
--eval_steps 100 \
--save_steps 100 \
--save_total_limit 2 \
--logging_steps 10
| swift/examples/pytorch/llm/scripts/gemma_2b_instruct/lora/sft.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/gemma_2b_instruct/lora/sft.sh",
"repo_id": "swift",
"token_count": 370
} | 182 |
# Experimental environment: A100
# 80GB GPU memory
CUDA_VISIBLE_DEVICES=0 \
swift sft \
--model_type qwen1half-7b-chat \
--sft_type full \
--train_dataset_sample -1 \
--eval_steps 1000 \
--output_dir output \
--num_train_epochs 1 \
--max_length 4096 \
--learning_rate 1e-5 \
--use_flash_attn true \
--save_only_model true \
--dataset codefuse-evol-instruction-zh \
--preprocess_num_proc 4
| swift/examples/pytorch/llm/scripts/qwen1half_7b_chat/full/sft.sh/0 | {
"file_path": "swift/examples/pytorch/llm/scripts/qwen1half_7b_chat/full/sft.sh",
"repo_id": "swift",
"token_count": 190
} | 183 |
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0 \
python infer_dreambooth_lora.py \
--base_model_path "AI-ModelScope/stable-diffusion-v1-5" \
--lora_model_path "train_dreambooth_lora" \
--prompt "A picture of a sks dog in a bucket" \
--image_save_path "dog-bucket.png" \
--torch_dtype "fp16" \
| swift/examples/pytorch/sdxl/scripts/run_infer_dreambooth_lora.sh/0 | {
"file_path": "swift/examples/pytorch/sdxl/scripts/run_infer_dreambooth_lora.sh",
"repo_id": "swift",
"token_count": 139
} | 184 |
# Copyright (c) Alibaba, Inc. and its affiliates.
from swift.aigc import train_controlnet_sdxl
if __name__ == '__main__':
train_controlnet_sdxl()
| swift/examples/pytorch/sdxl/train_controlnet_sdxl.py/0 | {
"file_path": "swift/examples/pytorch/sdxl/train_controlnet_sdxl.py",
"repo_id": "swift",
"token_count": 56
} | 185 |
{
"cmd": "sft",
"requirements":{
"gpu": "1",
"ddp": "1"
},
"eval_requirements": {
"gpu": "1"
},
"eval_dataset": ["ceval", "gsm8k", "arc"],
"args": {
"model_type": "llama2-7b-aqlm-2bit-1x16",
"dataset": "dureader-robust-zh",
"batch_size": 1,
"max_length": 1024,
"gradient_accumulation_steps": 16,
"learning_rate": 5e-5,
"use_flash_attn": true,
"eval_steps": 1000,
"save_steps": 1000,
"train_dataset_sample": 100000,
"val_dataset_sample": 3000,
"num_train_epochs": 2,
"check_dataset_strategy": "none",
"gradient_checkpointing": true,
"weight_decay": 0.01,
"max_grad_norm": 1.0,
"warmup_ratio": 0.03,
"save_total_limit": 2,
"logging_steps": 10,
"sft_type": "lora",
"lora_target_modules": "ALL",
"lora_rank": 8,
"lora_alpha": 32
},
"experiment": [
{
"name": "llama2-7b-aqlm-2bit-1x16"
}
]
}
| swift/scripts/benchmark/config/aqlm.json/0 | {
"file_path": "swift/scripts/benchmark/config/aqlm.json",
"repo_id": "swift",
"token_count": 558
} | 186 |
import os
from datasets import concatenate_datasets
from swift.llm import (DATASET_MAPPING, DatasetName, ModelType, dataset_map, get_dataset, get_default_template_type,
get_model_tokenizer, get_template)
from swift.utils import stat_array
def write_dataset_info() -> None:
fpaths = ['docs/source/LLM/支持的模型和数据集.md', 'docs/source_en/LLM/Supported-models-datasets.md']
pre_texts = []
for fpath in fpaths:
if os.path.exists(fpath):
with open(fpath, 'r', encoding='utf-8') as f:
text = f.read()
idx = text.find('| Dataset Name |')
pre_texts.append(text[:idx])
text = text[idx:]
text_list = [t for t in text.split('\n') if len(t.strip()) > 0]
else:
text_list = []
pre_texts.append('')
res_text_list = []
res_text_list.append(
'| Dataset Name | Dataset ID | Subsets | Dataset Size | Statistic (token) | Tags | HF Dataset ID |')
res_text_list.append(
'| ------------ | ---------- | ------- |------------- | ----------------- | ---- | ------------- |')
if len(text_list) >= 2:
text_list = text_list[2:]
else:
text_list = []
ignore_dataset = {text.split('|', 2)[1].lstrip('🔥 '): text for text in text_list}
all_keys = set(DATASET_MAPPING.keys())
py_keys = DatasetName.get_dataset_name_list()
json_keys = list(all_keys - set(py_keys))
json_keys.sort()
dataset_name_list = py_keys + json_keys
mapping = {}
_iter = zip(
['llm', 'vision', 'audio'],
[ModelType.qwen_7b_chat, ModelType.qwen_vl_chat, ModelType.qwen_audio_chat],
)
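# Build one chat template per modality (llm / vision / audio) so token statistics use the right encoding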
for task_type, model_type in _iter:
_, tokenizer = get_model_tokenizer(model_type, load_model=False)
template_type = get_default_template_type(model_type)
template = get_template(template_type, tokenizer)
mapping[task_type] = template
for dataset_name in dataset_name_list:
try:
dataset_info = DATASET_MAPPING[dataset_name]
tags = dataset_info.get('tags', [])
subsets = dataset_info.get('subsets', [])
subsets = '<br>'.join(subsets)
if 'audio' in tags:
template = mapping['audio']
elif 'vision' in tags:
template = mapping['vision']
else:
template = mapping['llm']
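# Reuse the size/statistics already recorded in the existing doc for known datasets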
if dataset_name in ignore_dataset:
dataset_size, stat_str = ignore_dataset[dataset_name].split('|')[4:6]
else:
dataset_info = DATASET_MAPPING[dataset_name]
if dataset_info.get('huge_dataset', False):
dataset_size = '-'
stat_str = 'Dataset is too huge, please click the original link to view the dataset stat.'
else:
train_dataset, val_dataset = get_dataset([dataset_name],
model_name=['小黄', 'Xiao Huang'],
model_author=['魔搭', 'ModelScope'])
dataset_size = len(train_dataset)
assert val_dataset is None
raw_dataset = train_dataset
if val_dataset is not None:
raw_dataset = concatenate_datasets([raw_dataset, val_dataset])
if len(raw_dataset) < 5000:
num_proc = 1
else:
num_proc = 4
dataset = dataset_map(raw_dataset, template.encode, num_proc=num_proc)
_token_len = []
input_ids = dataset['input_ids']
for i in range(len(dataset)):
_token_len.append(len(input_ids[i]))
stat = stat_array(_token_len)[0]
stat_str = f"{stat['mean']:.1f}±{stat['std']:.1f}, min={stat['min']}, max={stat['max']}"
ms_url = f"https://modelscope.cn/datasets/{dataset_info['dataset_id_or_path']}/summary"
if '🔥' in tags:
tags.remove('🔥')
dataset_name = '🔥' + dataset_name
tags_str = ', '.join(tags)
if len(tags_str) == 0:
tags_str = '-'
hf_dataset_id = dataset_info.get('hf_dataset_id')
if hf_dataset_id is None:
hf_dataset_id_str = '-'
else:
hf_url = f'https://huggingface.co/datasets/{hf_dataset_id}'
hf_dataset_id_str = f'[{hf_dataset_id}]({hf_url})'
res_text_list.append(f"|{dataset_name}|[{dataset_info['dataset_id_or_path']}]({ms_url})|{subsets}|"
f'{dataset_size}|{stat_str}|{tags_str}|{hf_dataset_id_str}|')
except Exception:
import traceback
print(traceback.format_exc())
break
for idx in range(len(fpaths)):
text = '\n'.join(res_text_list)
text = pre_texts[idx] + text + '\n'
with open(fpaths[idx], 'w', encoding='utf-8') as f:
f.write(text)
print(f'Total number of datasets: {len(dataset_name_list)}')
if __name__ == '__main__':
write_dataset_info()
| swift/scripts/utils/run_dataset_info.py/0 | {
"file_path": "swift/scripts/utils/run_dataset_info.py",
"repo_id": "swift",
"token_count": 2881
} | 187 |
# Copyright (c) Alibaba, Inc. and its affiliates.
import os
from dataclasses import dataclass, field
from typing import Literal, Optional, Union
import torch
import torch.distributed as dist
from swift import get_logger
from swift.utils import broadcast_string, get_dist_setting, is_dist
logger = get_logger()
@dataclass
class AnimateDiffArguments:
motion_adapter_id_or_path: Optional[str] = None
motion_adapter_revision: Optional[str] = None
model_id_or_path: str = None
model_revision: str = None
dataset_sample_size: int = None
sft_type: str = field(default='lora', metadata={'choices': ['lora', 'full']})
output_dir: str = 'output'
ddp_backend: str = field(default='nccl', metadata={'choices': ['nccl', 'gloo', 'mpi', 'ccl', 'hccl']})
seed: int = 42
lora_rank: int = 8
lora_alpha: int = 32
lora_dropout_p: float = 0.05
lora_dtype: Literal['fp16', 'bf16', 'fp32', 'AUTO'] = 'fp32'
gradient_checkpointing: bool = False
batch_size: int = 1
num_train_epochs: int = 1
# if max_steps >= 0, override num_train_epochs
max_steps: int = -1
learning_rate: Optional[float] = None
weight_decay: float = 0.01
gradient_accumulation_steps: int = 16
max_grad_norm: float = 1.
lr_scheduler_type: str = 'cosine'
warmup_ratio: float = 0.05
eval_steps: int = 50
save_steps: Optional[int] = None
dataloader_num_workers: int = 1
push_to_hub: bool = False
# 'user_name/repo_name' or 'repo_name'
hub_model_id: Optional[str] = None
hub_private_repo: bool = False
push_hub_strategy: str = field(default='push_best', metadata={'choices': ['push_best', 'push_last', 'all_checkpoints']})
# None: use env var `MODELSCOPE_API_TOKEN`
hub_token: Optional[str] = field(
default=None, metadata={'help': 'SDK token can be found in https://modelscope.cn/my/myaccesstoken'})
ignore_args_error: bool = False # True: notebook compatibility
text_dropout_rate: float = 0.1
validation_prompts_path: str = field(
default=None, metadata={'help': 'The validation prompts file path, use llm/configs/ad_validation.txt is None'})
trainable_modules: str = field(
default='.*motion_modules.*',
metadata={'help': 'The trainable modules, by default, the .*motion_modules.* will be trained'})
mixed_precision: bool = True
enable_xformers_memory_efficient_attention: bool = True
num_inference_steps: int = 25
guidance_scale: float = 8.
sample_size: int = 256
sample_stride: int = 4
sample_n_frames: int = 16
csv_path: str = None
video_folder: str = None
motion_num_attention_heads: int = 8
motion_max_seq_length: int = 32
num_train_timesteps: int = 1000
beta_start: float = 0.00085
beta_end: float = 0.012
beta_schedule: str = 'linear'
steps_offset: int = 1
clip_sample: bool = False
use_wandb: bool = False
def __post_init__(self) -> None:
handle_compatibility(self)
current_dir = os.path.dirname(__file__)
if self.validation_prompts_path is None:
self.validation_prompts_path = os.path.join(current_dir, 'configs/animatediff', 'validation.txt')
if self.learning_rate is None:
self.learning_rate = 1e-4
if self.save_steps is None:
self.save_steps = self.eval_steps
if is_dist():
rank, local_rank, _, _ = get_dist_setting()
torch.cuda.set_device(local_rank)
self.seed += rank # Avoid the same dropout
# Initialize in advance
if not dist.is_initialized():
dist.init_process_group(backend=self.ddp_backend)
# Make sure to set the same output_dir when using DDP.
self.output_dir = broadcast_string(self.output_dir)
@dataclass
class AnimateDiffInferArguments:
motion_adapter_id_or_path: Optional[str] = None
motion_adapter_revision: Optional[str] = None
model_id_or_path: str = None
model_revision: str = None
sft_type: str = field(default='lora', metadata={'choices': ['lora', 'full']})
ckpt_dir: Optional[str] = field(default=None, metadata={'help': '/path/to/your/vx-xxx/checkpoint-xxx'})
eval_human: bool = False # False: eval val_dataset
seed: int = 42
# other
ignore_args_error: bool = False # True: notebook compatibility
validation_prompts_path: str = None
output_path: str = './generated'
enable_xformers_memory_efficient_attention: bool = True
num_inference_steps: int = 25
guidance_scale: float = 7.5
sample_size: int = 256
sample_stride: int = 4
sample_n_frames: int = 16
motion_num_attention_heads: int = 8
motion_max_seq_length: int = 32
num_train_timesteps: int = 1000
beta_start: float = 0.00085
beta_end: float = 0.012
beta_schedule: str = 'linear'
steps_offset: int = 1
clip_sample: bool = False
merge_lora: bool = False
replace_if_exists: bool = False
# compatibility. (Deprecated)
merge_lora_and_save: Optional[bool] = None
def __post_init__(self) -> None:
handle_compatibility(self)
def handle_compatibility(args: Union[AnimateDiffArguments, AnimateDiffInferArguments]) -> None:
if isinstance(args, AnimateDiffInferArguments):
if args.merge_lora_and_save is not None:
args.merge_lora = args.merge_lora_and_save
| swift/swift/aigc/utils/argument.py/0 | {
"file_path": "swift/swift/aigc/utils/argument.py",
"repo_id": "swift",
"token_count": 2179
} | 188 |
# Copyright (c) Alibaba, Inc. and its affiliates.
import concurrent.futures
import os
import shutil
from multiprocessing import Manager, Process, Value
from swift.utils.logger import get_logger
from .api import HubApi
from .constants import DEFAULT_REPOSITORY_REVISION, ModelVisibility
logger = get_logger()
_executor = concurrent.futures.ProcessPoolExecutor(max_workers=8)
_queues = dict()
_flags = dict()
_tasks = dict()
_manager = None
def _api_push_to_hub(repo_name,
output_dir,
token,
private=True,
commit_message='',
tag=None,
source_repo='',
ignore_file_pattern=None,
revision=DEFAULT_REPOSITORY_REVISION):
try:
api = HubApi()
api.login(token)
api.push_model(
repo_name,
output_dir,
visibility=ModelVisibility.PUBLIC if not private else ModelVisibility.PRIVATE,
chinese_name=repo_name,
commit_message=commit_message,
tag=tag,
original_model_id=source_repo,
ignore_file_pattern=ignore_file_pattern,
revision=revision)
commit_message = commit_message or 'No commit message'
logger.info(f'Successfully upload the model to {repo_name} with message: {commit_message}')
return True
except Exception as e:
logger.error(f'Error happens when uploading model {repo_name} with message: {commit_message}: {e}')
return False
def push_to_hub(repo_name,
output_dir,
token=None,
private=True,
retry=3,
commit_message='',
tag=None,
source_repo='',
ignore_file_pattern=None,
revision=DEFAULT_REPOSITORY_REVISION):
"""
Args:
repo_name: The repo name for the modelhub repo
output_dir: The local output_dir for the checkpoint
token: The user api token, function will check the `MODELSCOPE_API_TOKEN` variable if this argument is None
private: If is a private repo, default True
retry: Retry times if something error in uploading, default 3
commit_message: The commit message
tag: The tag of this commit
source_repo: The source repo (model id) which this model comes from
ignore_file_pattern: The file pattern to be ignored in uploading.
revision: The branch to commit to
Returns:
The boolean value to represent whether the model is uploaded.
"""
if token is None:
token = os.environ.get('MODELSCOPE_API_TOKEN')
if ignore_file_pattern is None:
ignore_file_pattern = os.environ.get('UPLOAD_IGNORE_FILE_PATTERN')
assert repo_name is not None
assert token is not None, 'Either pass in a token or to set `MODELSCOPE_API_TOKEN` in the environment variables.'
assert os.path.isdir(output_dir)
assert 'configuration.json' in os.listdir(output_dir) or 'configuration.yaml' in os.listdir(output_dir) \
or 'configuration.yml' in os.listdir(output_dir)
logger.info(f'Uploading {output_dir} to {repo_name} with message {commit_message}')
for i in range(retry):
if _api_push_to_hub(repo_name, output_dir, token, private, commit_message, tag, source_repo,
ignore_file_pattern, revision):
return True
return False
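# A hypothetical usage sketch (placeholder repo and path values):
#   push_to_hub('my_org/my_model', './output/checkpoint-100',
#               commit_message='initial upload', private=False)
# When `token` is None, `MODELSCOPE_API_TOKEN` must be set in the environment.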
def push_to_hub_async(repo_name,
output_dir,
token=None,
private=True,
commit_message='',
tag=None,
source_repo='',
ignore_file_pattern=None,
revision=DEFAULT_REPOSITORY_REVISION):
"""
Args:
repo_name: The repo name for the modelhub repo
output_dir: The local output_dir for the checkpoint
token: The user api token, function will check the `MODELSCOPE_API_TOKEN` variable if this argument is None
private: If is a private repo, default True
commit_message: The commit message
tag: The tag of this commit
source_repo: The source repo (model id) which this model comes from
ignore_file_pattern: The file pattern to be ignored in uploading
revision: The branch to commit to
Returns:
A handler to check the result and the status
"""
if token is None:
token = os.environ.get('MODELSCOPE_API_TOKEN')
if ignore_file_pattern is None:
ignore_file_pattern = os.environ.get('UPLOAD_IGNORE_FILE_PATTERN')
assert repo_name is not None
assert token is not None, 'Either pass in a token or to set `MODELSCOPE_API_TOKEN` in the environment variables.'
assert os.path.isdir(output_dir)
assert 'configuration.json' in os.listdir(output_dir) or 'configuration.yaml' in os.listdir(output_dir) \
or 'configuration.yml' in os.listdir(output_dir)
logger.info(f'Uploading {output_dir} to {repo_name} with message {commit_message}')
return _executor.submit(_api_push_to_hub, repo_name, output_dir, token, private, commit_message, tag, source_repo,
ignore_file_pattern, revision)
def submit_task(q, b):
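    # Worker loop: `b` flags whether an upload is in flight; a queue item with
    # `done=True` terminates the loop, any other item is forwarded to push_to_hub.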
while True:
b.value = False
item = q.get()
logger.info(item)
b.value = True
if not item.pop('done', False):
delete_dir = item.pop('delete_dir', False)
output_dir = item.get('output_dir')
try:
push_to_hub(**item)
if delete_dir and os.path.exists(output_dir):
shutil.rmtree(output_dir)
except Exception as e:
logger.error(e)
else:
break
class UploadStrategy:
cancel = 'cancel'
wait = 'wait'
def push_to_hub_in_queue(queue_name, strategy=UploadStrategy.cancel, **kwargs):
assert queue_name is not None and len(queue_name) > 0, 'Please specify a valid queue name!'
global _manager
if _manager is None:
_manager = Manager()
if queue_name not in _queues:
_queues[queue_name] = _manager.Queue()
_flags[queue_name] = Value('b', False)
process = Process(target=submit_task, args=(_queues[queue_name], _flags[queue_name]))
process.start()
_tasks[queue_name] = process
queue = _queues[queue_name]
flag: Value = _flags[queue_name]
if kwargs.get('done', False):
queue.put(kwargs)
elif flag.value and strategy == UploadStrategy.cancel:
logger.error(f'Another uploading is running, '
f'this uploading with message {kwargs.get("commit_message")} will be canceled.')
else:
queue.put(kwargs)
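# A hypothetical usage sketch: enqueue uploads under one queue name, signal
# completion with `done=True`, then block until the worker finishes:
#   push_to_hub_in_queue('ckpt_queue', repo_name='my_org/my_model', output_dir='./ckpt-1')
#   push_to_hub_in_queue('ckpt_queue', done=True)
#   wait_for_done('ckpt_queue')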
def wait_for_done(queue_name):
process: Process = _tasks.pop(queue_name, None)
if process is None:
return
process.join()
_queues.pop(queue_name)
_flags.pop(queue_name)
| swift/swift/hub/push_to_hub.py/0 | {
"file_path": "swift/swift/hub/push_to_hub.py",
"repo_id": "swift",
"token_count": 3132
} | 189 |
# Copyright (c) Alibaba, Inc. and its affiliates.
import asyncio
import datetime as dt
import os
import time
from typing import Any, Dict, List, Optional, Tuple
import json
from llmuses.models.custom import CustomModel
from modelscope import GenerationConfig
from tqdm import tqdm
from swift.utils import append_to_jsonl, get_logger, get_main, seed_everything
from .infer import merge_lora, prepare_model_template
from .utils import EvalArguments, XRequestConfig, inference, inference_client_async
logger = get_logger()
class EvalModel(CustomModel):
def __init__(self, args: EvalArguments, model_name: str, **kwargs) -> None:
if args.eval_url is None:
if args.merge_lora:
merge_lora(args, device_map=args.merge_device_map)
if args.infer_backend == 'vllm':
from .utils import prepare_vllm_engine_template
self.llm_engine, self.template = prepare_vllm_engine_template(args)
else:
self.model, self.template = prepare_model_template(args)
self.args = args
super().__init__(config={'model_id': model_name}, **kwargs)
self.model_name = model_name
@staticmethod
async def _call_openai(model_type: str, query: str, eval_url: str, *, is_chat_model: bool,
request_config: XRequestConfig, idx: int) -> Tuple[str, Optional[int]]:
# idx: maintain the order
resp = await inference_client_async(
model_type, query, is_chat_request=is_chat_model, request_config=request_config, url=eval_url)
if is_chat_model:
response = resp.choices[0].message.content
else:
response = resp.choices[0].text
return response, idx
async def call_openai_batched(self, prompts: List[str], request_config: XRequestConfig) -> List[str]:
assert self.args.eval_is_chat_model is not None
        use_tqdm = len(prompts) >= 20
prog_bar = tqdm(total=len(prompts), dynamic_ncols=True, disable=not use_tqdm)
tasks = []
for i, prompt in enumerate(prompts):
tasks.append(
self._call_openai(
self.args.model_type,
prompt,
self.args.eval_url,
is_chat_model=self.args.eval_is_chat_model,
request_config=request_config,
idx=i))
response_list: List[Optional[str]] = [None] * len(prompts)
for coro in asyncio.as_completed(tasks):
response, i = await coro
response_list[i] = response
prog_bar.update()
prog_bar.close()
return response_list
def predict(self, prompts: List[str], **kwargs) -> List[Dict[str, Any]]:
infer_cfg = kwargs['infer_cfg'].copy()
infer_cfg.pop('limit', None)
infer_cfg.pop('max_length', None)
assert infer_cfg.get('max_new_tokens') is not None, f'infer_cfg: {infer_cfg}'
do_sample = infer_cfg.pop('do_sample', None)
if self.args.eval_url is not None:
if do_sample is False:
infer_cfg['temperature'] = 0
max_new_tokens = infer_cfg.pop('max_new_tokens', None)
if max_new_tokens is not None:
infer_cfg['max_tokens'] = max_new_tokens
request_config = XRequestConfig(**infer_cfg)
response_list = asyncio.run(self.call_openai_batched(prompts, request_config))
elif self.args.infer_backend == 'vllm':
from .utils import inference_vllm, VllmGenerationConfig
if do_sample is False:
infer_cfg['temperature'] = 0
generation_config = VllmGenerationConfig(**infer_cfg)
request_list = [{'query': prompt} for prompt in prompts]
            use_tqdm = len(request_list) >= 20
resp_list = inference_vllm(
self.llm_engine, self.template, request_list, generation_config=generation_config, use_tqdm=use_tqdm)
response_list = [resp['response'] for resp in resp_list]
else:
if do_sample is False:
                # use neutral sampling params to avoid the transformers warning when do_sample is False
infer_cfg['temperature'] = 1.
infer_cfg['top_p'] = 1.
infer_cfg['top_k'] = 50
if do_sample is not None:
infer_cfg['do_sample'] = do_sample
response_list = []
generation_config = GenerationConfig(**infer_cfg)
            use_tqdm = len(prompts) >= 5
prog_bar = tqdm(total=len(prompts), dynamic_ncols=True, disable=not use_tqdm)
for prompt in prompts:
response, _ = inference(self.model, self.template, prompt, generation_config=generation_config)
response_list.append(response)
prog_bar.update()
prog_bar.close()
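        # Wrap each response in an OpenAI-style `chat.completion` payload; this is
        # the structure that llmuses' CustomModel.predict is expected to return.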
res_d = []
for response in response_list:
res_d.append({
'choices': [{
'index': 0,
'message': {
'content': response,
'role': 'assistant'
}
}],
'created': int(time.time()),
'model': self.model_name,
'object': 'chat.completion',
})
return res_d
def llm_eval(args: EvalArguments) -> List[Dict[str, Any]]:
from llmuses.run import run_task
from llmuses.config import TaskConfig
from llmuses.summarizer import Summarizer
logger.info(f'args: {args}')
seed_everything(args.seed)
model_name = args.model_type
if args.name:
model_name += f'-{args.name}'
custom_names = []
if args.custom_eval_config is not None:
assert os.path.isfile(args.custom_eval_config)
with open(args.custom_eval_config, 'r') as f:
custom_eval = json.load(f)
for _ds in custom_eval:
custom_names.append(_ds['name'])
TaskConfig.registry(_ds['name'], _ds['pattern'], _ds['dataset'], subset_list=_ds.get('subset_list'))
eval_model = EvalModel(args, model_name)
task_configs = TaskConfig.load(custom_model=eval_model, tasks=args.eval_dataset + custom_names)
for task_config in task_configs:
task_config.use_cache = args.eval_use_cache
if args.eval_limit is not None:
task_config.limit = args.eval_limit
if args.eval_few_shot is not None:
for dataset in task_config.datasets:
if not task_config.dataset_args.get(dataset):
task_config.dataset_args[dataset] = {}
task_config.dataset_args[dataset]['few_shot_num'] = args.eval_few_shot
run_task(task_cfg=task_configs)
final_report: List[dict] = Summarizer.get_report_from_cfg(task_cfg=task_configs)
    logger.info(f'Final report: {final_report}\n')
if args.save_result:
result_dir = args.ckpt_dir
if result_dir is None:
result_dir = eval_model.llm_engine.model_dir if args.infer_backend == 'vllm' else eval_model.model.model_dir
assert result_dir is not None
jsonl_path = os.path.join(result_dir, 'eval_result.jsonl')
result = {report['name']: report['score'] for report in final_report}
logger.info(f'result: {result}')
result_info = {
'result': result,
'time': dt.datetime.now().strftime('%Y%m%d-%H%M%S'),
}
append_to_jsonl(jsonl_path, result_info)
logger.info(f'save_result_path: {jsonl_path}')
return final_report
eval_main = get_main(EvalArguments, llm_eval)
| swift/swift/llm/eval.py/0 | {
"file_path": "swift/swift/llm/eval.py",
"repo_id": "swift",
"token_count": 3658
} | 190 |
# copy dependencies from transformers/optimization.py
# code borrowed from https://github.com/jiaweizzhao/GaLore
import math
import torch
from torch.optim import Optimizer
from transformers.utils.versions import require_version
from .galore_projector import GaLoreProjector
class Adafactor(Optimizer):
"""
AdaFactor pytorch implementation can be used as a drop in replacement for Adam original fairseq code:
https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py
Paper: *Adafactor: Adaptive Learning Rates with Sublinear Memory Cost* https://arxiv.org/abs/1804.04235 Note that
this optimizer internally adjusts the learning rate depending on the `scale_parameter`, `relative_step` and
`warmup_init` options. To use a manual (external) learning rate schedule you should set `scale_parameter=False` and
`relative_step=False`.
Arguments:
params (`Iterable[nn.parameter.Parameter]`):
Iterable of parameters to optimize or dictionaries defining parameter groups.
lr (`float`, *optional*):
The external learning rate.
eps (`Tuple[float, float]`, *optional*, defaults to `(1e-30, 0.001)`):
Regularization constants for square gradient and parameter scale respectively
clip_threshold (`float`, *optional*, defaults to 1.0):
Threshold of root mean square of final gradient update
decay_rate (`float`, *optional*, defaults to -0.8):
Coefficient used to compute running averages of square
beta1 (`float`, *optional*):
Coefficient used for computing running averages of gradient
weight_decay (`float`, *optional*, defaults to 0.0):
Weight decay (L2 penalty)
scale_parameter (`bool`, *optional*, defaults to `True`):
If True, learning rate is scaled by root mean square
relative_step (`bool`, *optional*, defaults to `True`):
If True, time-dependent learning rate is computed instead of external learning rate
warmup_init (`bool`, *optional*, defaults to `False`):
Time-dependent learning rate computation depends on whether warm-up initialization is being used
    This implementation handles low-precision (FP16, bfloat) values, but it has not been thoroughly tested.
Recommended T5 finetuning settings (https://discuss.huggingface.co/t/t5-finetuning-tips/684/3):
- Training without LR warmup or clip_threshold is not recommended.
- use scheduled LR warm-up to fixed LR
- use clip_threshold=1.0 (https://arxiv.org/abs/1804.04235)
- Disable relative updates
- Use scale_parameter=False
- Additional optimizer operations like gradient clipping should not be used alongside Adafactor
Example:
```python
Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3)
```
Others reported the following combination to work well:
```python
Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
```
When using `lr=None` with [`Trainer`] you will most likely need to use [`~optimization.AdafactorSchedule`]
scheduler as following:
```python
from transformers.optimization import Adafactor, AdafactorSchedule
optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
lr_scheduler = AdafactorSchedule(optimizer)
trainer = Trainer(..., optimizers=(optimizer, lr_scheduler))
```
Usage:
```python
# replace AdamW with Adafactor
optimizer = Adafactor(
model.parameters(),
lr=1e-3,
eps=(1e-30, 1e-3),
clip_threshold=1.0,
decay_rate=-0.8,
beta1=None,
weight_decay=0.0,
relative_step=False,
scale_parameter=False,
warmup_init=False,
)
```"""
def __init__(
self,
params,
lr=None,
eps=(1e-30, 1e-3),
clip_threshold=1.0,
decay_rate=-0.8,
beta1=None,
weight_decay=0.0,
scale_parameter=True,
relative_step=True,
warmup_init=False,
):
require_version('torch>=1.5.0') # add_ with alpha
if lr is not None and relative_step:
raise ValueError('Cannot combine manual `lr` and `relative_step=True` options')
if warmup_init and not relative_step:
raise ValueError('`warmup_init=True` requires `relative_step=True`')
defaults = {
'lr': lr,
'eps': eps,
'clip_threshold': clip_threshold,
'decay_rate': decay_rate,
'beta1': beta1,
'weight_decay': weight_decay,
'scale_parameter': scale_parameter,
'relative_step': relative_step,
'warmup_init': warmup_init,
}
super().__init__(params, defaults)
@staticmethod
def _get_lr(param_group, param_state):
rel_step_sz = param_group['lr']
if param_group['relative_step']:
min_step = 1e-6 * param_state['step'] if param_group['warmup_init'] else 1e-2
rel_step_sz = min(min_step, 1.0 / math.sqrt(param_state['step']))
param_scale = 1.0
if param_group['scale_parameter']:
param_scale = max(param_group['eps'][1], param_state['RMS'])
return param_scale * rel_step_sz
@staticmethod
def _get_options(param_group, param_shape):
factored = len(param_shape) >= 2
use_first_moment = param_group['beta1'] is not None
return factored, use_first_moment
@staticmethod
def _rms(tensor):
return tensor.norm(2) / (tensor.numel()**0.5)
@staticmethod
def _approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col):
# copy from fairseq's adafactor implementation:
# https://github.com/huggingface/transformers/blob/8395f14de6068012787d83989c3627c3df6a252b/src/transformers/optimization.py#L505
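        # Factored estimate: V_hat[i, j] ~= row_mean[i] * col_mean[j] / mean(row_mean).
        # The product below is its elementwise inverse square root, i.e. 1/sqrt(V_hat).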
r_factor = (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)).rsqrt_().unsqueeze(-1)
c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt()
return torch.mul(r_factor, c_factor)
@torch.no_grad()
def step(self, closure=None):
"""
Performs a single optimization step
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad
if grad.dtype in {torch.float16, torch.bfloat16}:
grad = grad.float()
if grad.is_sparse:
raise RuntimeError('Adafactor does not support sparse gradients.')
state = self.state[p]
if 'step' not in state:
state['step'] = 0
# GaLore Projection
if 'rank' in group:
if 'projector' not in state:
state['projector'] = GaLoreProjector(
group['rank'],
update_proj_gap=group['update_proj_gap'],
scale=group['scale'],
proj_type=group['proj_type'])
grad = state['projector'].project(grad, state['step'])
grad_shape = grad.shape
factored, use_first_moment = self._get_options(group, grad_shape)
# State Initialization
if 'RMS' not in state:
state['step'] = 0
if use_first_moment:
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(grad)
if factored:
state['exp_avg_sq_row'] = torch.zeros(grad_shape[:-1]).to(grad)
state['exp_avg_sq_col'] = torch.zeros(grad_shape[:-2] + grad_shape[-1:]).to(grad)
else:
state['exp_avg_sq'] = torch.zeros_like(grad)
state['RMS'] = 0
else:
if use_first_moment:
state['exp_avg'] = state['exp_avg'].to(grad)
if factored:
state['exp_avg_sq_row'] = state['exp_avg_sq_row'].to(grad)
state['exp_avg_sq_col'] = state['exp_avg_sq_col'].to(grad)
else:
state['exp_avg_sq'] = state['exp_avg_sq'].to(grad)
p_data_fp32 = p
if p.dtype in {torch.float16, torch.bfloat16}:
p_data_fp32 = p_data_fp32.float()
state['step'] += 1
state['RMS'] = self._rms(p_data_fp32)
lr = self._get_lr(group, state)
beta2t = 1.0 - math.pow(state['step'], group['decay_rate'])
update = (grad**2) + group['eps'][0]
if factored:
exp_avg_sq_row = state['exp_avg_sq_row']
exp_avg_sq_col = state['exp_avg_sq_col']
exp_avg_sq_row.mul_(beta2t).add_(update.mean(dim=-1), alpha=(1.0 - beta2t))
exp_avg_sq_col.mul_(beta2t).add_(update.mean(dim=-2), alpha=(1.0 - beta2t))
# Approximation of exponential moving average of square of gradient
update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)
update.mul_(grad)
else:
exp_avg_sq = state['exp_avg_sq']
exp_avg_sq.mul_(beta2t).add_(update, alpha=(1.0 - beta2t))
update = exp_avg_sq.rsqrt().mul_(grad)
update.div_((self._rms(update) / group['clip_threshold']).clamp_(min=1.0))
update.mul_(lr)
if use_first_moment:
exp_avg = state['exp_avg']
exp_avg.mul_(group['beta1']).add_(update, alpha=(1 - group['beta1']))
update = exp_avg
# GaLore Projection Back
if 'rank' in group:
update = state['projector'].project_back(update)
if group['weight_decay'] != 0:
p_data_fp32.add_(p_data_fp32, alpha=(-group['weight_decay'] * lr))
p_data_fp32.add_(-update)
if p.dtype in {torch.float16, torch.bfloat16}:
p.copy_(p_data_fp32)
return loss
GaLoreAdafactor = Adafactor
| swift/swift/trainers/optimizers/galore/adafactor.py/0 | {
"file_path": "swift/swift/trainers/optimizers/galore/adafactor.py",
"repo_id": "swift",
"token_count": 5250
} | 191 |
# Copyright (c) Alibaba, Inc. and its affiliates.
from .scetuning import SCETuning, SCETuningConfig
| swift/swift/tuners/scetuning/__init__.py/0 | {
"file_path": "swift/swift/tuners/scetuning/__init__.py",
"repo_id": "swift",
"token_count": 29
} | 192 |
from typing import Type
import gradio as gr
from swift.ui.base import BaseUI
class Hyper(BaseUI):
group = 'llm_train'
locale_dict = {
'hyper_param': {
'label': {
'zh': '超参数设置',
'en': 'Hyper settings',
},
},
'batch_size': {
'label': {
'zh': '训练batch size',
'en': 'Train batch size',
},
'info': {
'zh': '训练的batch size',
'en': 'Set the train batch size',
}
},
'eval_batch_size': {
'label': {
'zh': '验证batch size',
'en': 'Val batch size',
},
'info': {
'zh': '验证的batch size',
'en': 'Set the val batch size',
}
},
'learning_rate': {
'label': {
'zh': '学习率',
'en': 'Learning rate',
},
'info': {
'zh': '设置学习率',
'en': 'Set the learning rate',
}
},
'eval_steps': {
'label': {
'zh': '交叉验证步数',
'en': 'Eval steps',
},
'info': {
'zh': '设置每隔多少步数进行一次验证',
'en': 'Set the step interval to validate',
}
},
'num_train_epochs': {
'label': {
'zh': '数据集迭代轮次',
'en': 'Train epoch',
},
'info': {
'zh': '设置对数据集训练多少轮次',
'en': 'Set the max train epoch',
}
},
'max_steps': {
'label': {
'zh': '最大迭代步数',
'en': 'Max steps',
},
'info': {
'zh': '设置最大迭代步数,该值如果大于零则数据集迭代次数不生效',
'en': 'Set the max steps, if the value > 0 then num_train_epochs has no effects',
}
},
'gradient_accumulation_steps': {
'label': {
'zh': '梯度累计步数',
'en': 'Gradient accumulation steps',
},
'info': {
'zh': '设置梯度累计步数以减小显存占用',
'en': 'Set the gradient accumulation steps',
}
},
'max_grad_norm': {
'label': {
'zh': '梯度裁剪',
'en': 'Max grad norm',
},
'info': {
'zh': '设置梯度裁剪',
'en': 'Set the max grad norm',
}
},
'predict_with_generate': {
'label': {
'zh': '使用生成指标代替loss',
'en': 'Use generate metric instead of loss',
},
'info': {
'zh': '验证时使用generate/Rouge代替loss',
'en': 'Use model.generate/Rouge instead of loss',
}
},
'use_flash_attn': {
'label': {
'zh': '使用Flash Attention',
'en': 'Use Flash Attention',
},
'info': {
'zh': '使用Flash Attention减小显存占用',
'en': 'Use Flash Attention to reduce memory',
}
},
'neftune_noise_alpha': {
'label': {
'zh': 'neftune_noise_alpha',
'en': 'neftune_noise_alpha'
},
'info': {
'zh': '使用neftune提升训练效果, 一般设置为5或者10',
'en': 'Use neftune to improve performance, normally the value should be 5 or 10'
}
},
}
@classmethod
def do_build_ui(cls, base_tab: Type['BaseUI']):
with gr.Accordion(elem_id='hyper_param', open=True):
with gr.Blocks():
with gr.Row():
gr.Slider(elem_id='batch_size', minimum=1, maximum=256, step=2, scale=20)
gr.Textbox(elem_id='learning_rate', value='1e-4', lines=1, scale=20)
gr.Textbox(elem_id='num_train_epochs', lines=1, scale=20)
gr.Textbox(elem_id='max_steps', lines=1, scale=20)
gr.Slider(elem_id='gradient_accumulation_steps', minimum=1, maximum=256, step=2, value=16, scale=20)
with gr.Row():
gr.Slider(elem_id='eval_batch_size', minimum=1, maximum=256, step=2, scale=20)
gr.Textbox(elem_id='eval_steps', lines=1, value='500', scale=20)
gr.Textbox(elem_id='max_grad_norm', lines=1, scale=20)
gr.Checkbox(elem_id='predict_with_generate', scale=20)
gr.Checkbox(elem_id='use_flash_attn', scale=20)
gr.Slider(elem_id='neftune_noise_alpha', minimum=0.0, maximum=20.0, step=0.5, scale=4)
@staticmethod
def update_lr(sft_type):
if sft_type == 'full':
return 1e-5
else:
return 1e-4
| swift/swift/ui/llm_train/hyper.py/0 | {
"file_path": "swift/swift/ui/llm_train/hyper.py",
"repo_id": "swift",
"token_count": 3254
} | 193 |
from datetime import datetime
from typing import Callable, List, Type, TypeVar, Union
from .logger import get_logger
from .utils import parse_args
logger = get_logger()
_TArgsClass = TypeVar('_TArgsClass')
_T = TypeVar('_T')
NoneType = type(None)
def get_main(args_class: Type[_TArgsClass],
llm_x: Callable[[_TArgsClass], _T]) -> Callable[[Union[List[str], _TArgsClass, NoneType]], _T]:
def x_main(argv: Union[List[str], _TArgsClass, NoneType] = None, **kwargs) -> _T:
logger.info(f'Start time of running main: {datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f")}')
if not isinstance(argv, (list, tuple, NoneType)):
args, remaining_argv = argv, []
else:
args, remaining_argv = parse_args(args_class, argv)
if len(remaining_argv) > 0:
if getattr(args, 'ignore_args_error', False):
logger.warning(f'remaining_argv: {remaining_argv}')
else:
raise ValueError(f'remaining_argv: {remaining_argv}')
result = llm_x(args, **kwargs)
logger.info(f'End time of running main: {datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f")}')
return result
return x_main
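# Usage sketch, mirroring real call sites in this repo (e.g.
# `eval_main = get_main(EvalArguments, llm_eval)`): the returned callable
# parses argv into the argument dataclass and forwards it to `llm_x`.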
| swift/swift/utils/run_utils.py/0 | {
"file_path": "swift/swift/utils/run_utils.py",
"repo_id": "swift",
"token_count": 550
} | 194 |
#!/usr/bin/env python
# Copyright (c) Alibaba, Inc. and its affiliates.
import argparse
import datetime
import math
import os
import subprocess
import sys
sys.path.insert(0, '/swift')
import tempfile
import time
import unittest
from fnmatch import fnmatch
from pathlib import Path
from unittest import TextTestResult
import pandas
# NOTICE: Tensorflow 1.15 seems not so compatible with pytorch.
# A segmentation fault may be raised by the pytorch cpp library
# if 'import tensorflow' appears before 'import torch'.
# Putting an 'import torch' here bypasses this incompatibility.
import torch
import yaml
from model_tag import ModelTag, commit_model_ut_result
from test_utils import get_case_model_info
from swift.utils.logger import get_logger
logger = get_logger()
def test_cases_result_to_df(result_list):
table_header = [
'Name', 'Result', 'Info', 'Start time', 'Stop time',
'Time cost(seconds)'
]
df = pandas.DataFrame(
result_list, columns=table_header).sort_values(
by=['Start time'], ascending=True)
return df
def statistics_test_result(df):
total_cases = df.shape[0]
# yapf: disable
success_cases = df.loc[df['Result'] == 'Success'].shape[0]
error_cases = df.loc[df['Result'] == 'Error'].shape[0]
failures_cases = df.loc[df['Result'] == 'Failures'].shape[0]
expected_failure_cases = df.loc[df['Result'] == 'ExpectedFailures'].shape[0]
unexpected_success_cases = df.loc[df['Result'] == 'UnexpectedSuccesses'].shape[0]
skipped_cases = df.loc[df['Result'] == 'Skipped'].shape[0]
# yapf: enable
if failures_cases > 0 or \
error_cases > 0 or \
unexpected_success_cases > 0:
final_result = 'FAILED'
else:
final_result = 'SUCCESS'
    result_msg = ('%s (Runs=%s,success=%s,failures=%s,errors=%s,'
                  'skipped=%s,expected failures=%s,unexpected successes=%s)') % (
                      final_result, total_cases, success_cases, failures_cases,
                      error_cases, skipped_cases, expected_failure_cases,
                      unexpected_success_cases)
model_cases = get_case_model_info()
for model_name, case_info in model_cases.items():
cases = df.loc[df['Name'].str.contains('|'.join(list(case_info)))]
results = cases['Result']
result = None
if any(results == 'Error') or any(results == 'Failures') or any(
results == 'UnexpectedSuccesses'):
result = ModelTag.MODEL_FAIL
elif any(results == 'Success'):
result = ModelTag.MODEL_PASS
elif all(results == 'Skipped'):
result = ModelTag.MODEL_SKIP
else:
            print(f'Invalid results for {model_name}:\n{results}')
if result is not None:
commit_model_ut_result(model_name, result)
print('Testing result summary.')
print(result_msg)
if final_result == 'FAILED':
sys.exit(1)
def gather_test_suites_in_files(test_dir, case_file_list, list_tests):
test_suite = unittest.TestSuite()
for case in case_file_list:
test_case = unittest.defaultTestLoader.discover(
start_dir=test_dir, pattern=case)
test_suite.addTest(test_case)
if hasattr(test_case, '__iter__'):
for subcase in test_case:
if list_tests:
print(subcase)
else:
if list_tests:
print(test_case)
return test_suite
def gather_test_suites_files(test_dir, pattern):
case_file_list = []
for dirpath, dirnames, filenames in os.walk(test_dir):
for file in filenames:
if fnmatch(file, pattern):
case_file_list.append(file)
return case_file_list
def collect_test_results(case_results):
result_list = [
] # each item is Case, Result, Start time, Stop time, Time cost
for case_result in case_results.successes:
result_list.append(
(case_result.test_full_name, 'Success', '', case_result.start_time,
case_result.stop_time, case_result.time_cost))
for case_result in case_results.errors:
result_list.append(
(case_result[0].test_full_name, 'Error', case_result[1],
case_result[0].start_time, case_result[0].stop_time,
case_result[0].time_cost))
for case_result in case_results.skipped:
result_list.append(
(case_result[0].test_full_name, 'Skipped', case_result[1],
case_result[0].start_time, case_result[0].stop_time,
case_result[0].time_cost))
for case_result in case_results.expectedFailures:
result_list.append(
(case_result[0].test_full_name, 'ExpectedFailures', case_result[1],
case_result[0].start_time, case_result[0].stop_time,
case_result[0].time_cost))
for case_result in case_results.failures:
result_list.append(
(case_result[0].test_full_name, 'Failures', case_result[1],
case_result[0].start_time, case_result[0].stop_time,
case_result[0].time_cost))
for case_result in case_results.unexpectedSuccesses:
result_list.append((case_result.test_full_name, 'UnexpectedSuccesses',
'', case_result.start_time, case_result.stop_time,
case_result.time_cost))
return result_list
def run_command_with_popen(cmd):
with subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
bufsize=1,
encoding='utf8') as sub_process:
for line in iter(sub_process.stdout.readline, ''):
sys.stdout.write(line)
def async_run_command_with_popen(cmd, device_id):
logger.info('Worker id: %s args: %s' % (device_id, cmd))
env = os.environ.copy()
env['CUDA_VISIBLE_DEVICES'] = '%s' % device_id
sub_process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
bufsize=1,
universal_newlines=True,
env=env,
encoding='utf8')
return sub_process
def save_test_result(df, args):
if args.result_dir is not None:
file_name = str(int(datetime.datetime.now().timestamp() * 1000))
os.umask(0)
Path(args.result_dir).mkdir(mode=0o777, parents=True, exist_ok=True)
Path(os.path.join(args.result_dir, file_name)).touch(
mode=0o666, exist_ok=True)
df.to_pickle(os.path.join(args.result_dir, file_name))
def run_command(cmd):
logger.info('Running command: %s' % ' '.join(cmd))
response = subprocess.run(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
response.check_returncode()
logger.info(response.stdout.decode('utf8'))
except subprocess.CalledProcessError as error:
logger.error(
'stdout: %s, stderr: %s' %
(response.stdout.decode('utf8'), error.stderr.decode('utf8')))
def install_packages(pkgs):
cmd = [sys.executable, '-m', 'pip', 'install']
for pkg in pkgs:
cmd.append(pkg)
run_command(cmd)
def install_requirements(requirements):
for req in requirements:
cmd = [
sys.executable, '-m', 'pip', 'install', '-r',
'requirements/%s' % req, '-f',
'https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html'
]
run_command(cmd)
def wait_for_free_worker(workers):
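    # Poll the worker slots, draining stdout of running processes, and return
    # the index of the first free slot (an empty slot or a finished process).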
while True:
for idx, worker in enumerate(workers):
if worker is None:
logger.info('return free worker: %s' % (idx))
return idx
if worker.poll() is None: # running, get output
for line in iter(worker.stdout.readline, ''):
if line != '':
sys.stdout.write(line)
else:
break
else: # worker process completed.
logger.info('Process end: %s' % (idx))
workers[idx] = None
return idx
time.sleep(0.001)
def wait_for_workers(workers):
while True:
for idx, worker in enumerate(workers):
if worker is None:
continue
# check worker is completed.
if worker.poll() is None:
for line in iter(worker.stdout.readline, ''):
if line != '':
sys.stdout.write(line)
else:
break
else:
logger.info('Process idx: %s end!' % (idx))
workers[idx] = None
is_all_completed = True
for idx, worker in enumerate(workers):
if worker is not None:
is_all_completed = False
break
if is_all_completed:
            logger.info('All subprocesses are completed!')
break
time.sleep(0.001)
def parallel_run_case_in_env(env_name, env, test_suite_env_map, isolated_cases,
result_dir, parallel):
logger.info('Running case in env: %s' % env_name)
# install requirements and deps # run_config['envs'][env]
if 'requirements' in env:
install_requirements(env['requirements'])
if 'dependencies' in env:
install_packages(env['dependencies'])
# case worker processes
worker_processes = [None] * parallel
for test_suite_file in isolated_cases: # run case in subprocess
if test_suite_file in test_suite_env_map and test_suite_env_map[
test_suite_file] == env_name:
cmd = [
'python',
'tests/run.py',
'--pattern',
test_suite_file,
'--result_dir',
result_dir,
]
worker_idx = wait_for_free_worker(worker_processes)
worker_process = async_run_command_with_popen(cmd, worker_idx)
os.set_blocking(worker_process.stdout.fileno(), False)
worker_processes[worker_idx] = worker_process
else:
pass # case not in run list.
# run remain cases in a process.
remain_suite_files = []
for k, v in test_suite_env_map.items():
if k not in isolated_cases and v == env_name:
remain_suite_files.append(k)
if len(remain_suite_files) == 0:
wait_for_workers(worker_processes)
return
# roughly split case in parallel
part_count = math.ceil(len(remain_suite_files) / parallel)
suites_chunks = [
remain_suite_files[x:x + part_count]
for x in range(0, len(remain_suite_files), part_count)
]
for suites_chunk in suites_chunks:
worker_idx = wait_for_free_worker(worker_processes)
cmd = [
'python', 'tests/run.py', '--result_dir', result_dir, '--suites'
]
for suite in suites_chunk:
cmd.append(suite)
worker_process = async_run_command_with_popen(cmd, worker_idx)
os.set_blocking(worker_process.stdout.fileno(), False)
worker_processes[worker_idx] = worker_process
wait_for_workers(worker_processes)
def run_case_in_env(env_name, env, test_suite_env_map, isolated_cases,
result_dir):
# install requirements and deps # run_config['envs'][env]
if 'requirements' in env:
install_requirements(env['requirements'])
if 'dependencies' in env:
install_packages(env['dependencies'])
for test_suite_file in isolated_cases: # run case in subprocess
if test_suite_file in test_suite_env_map and test_suite_env_map[
test_suite_file] == env_name:
cmd = [
'python',
'tests/run.py',
'--pattern',
test_suite_file,
'--result_dir',
result_dir,
]
run_command_with_popen(cmd)
else:
pass # case not in run list.
# run remain cases in a process.
remain_suite_files = []
for k, v in test_suite_env_map.items():
if k not in isolated_cases and v == env_name:
remain_suite_files.append(k)
if len(remain_suite_files) == 0:
return
cmd = ['python', 'tests/run.py', '--result_dir', result_dir, '--suites']
for suite in remain_suite_files:
cmd.append(suite)
run_command_with_popen(cmd)
def run_non_parallelizable_test_suites(suites, result_dir):
cmd = ['python', 'tests/run.py', '--result_dir', result_dir, '--suites']
for suite in suites:
cmd.append(suite)
run_command_with_popen(cmd)
# Parse the 'Selected cases:' line printed by tests/run_analysis.py.
def get_selected_cases():
cmd = ['python', '-u', 'tests/run_analysis.py']
selected_cases = []
with subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
bufsize=1,
encoding='utf8') as sub_process:
for line in iter(sub_process.stdout.readline, ''):
sys.stdout.write(line)
if line.startswith('Selected cases:'):
line = line.replace('Selected cases:', '').strip()
selected_cases = line.split(',')
sub_process.wait()
if sub_process.returncode != 0:
msg = 'Run analysis exception, returncode: %s!' % sub_process.returncode
logger.error(msg)
raise Exception(msg)
return selected_cases
def run_in_subprocess(args):
    # Only cases listed in args.isolated_cases run in their own subprocess; all other cases are batched into shared subprocesses.
if not args.no_diff: # run based on git diff
try:
test_suite_files = get_selected_cases()
logger.info('Tests suite to run: ')
for f in test_suite_files:
logger.info(f)
except Exception:
logger.error(
'Get test suite based diff exception!, will run all cases.')
test_suite_files = gather_test_suites_files(
os.path.abspath(args.test_dir), args.pattern)
if len(test_suite_files) == 0:
logger.error('Get no test suite based on diff, run all the cases.')
test_suite_files = gather_test_suites_files(
os.path.abspath(args.test_dir), args.pattern)
else:
test_suite_files = gather_test_suites_files(
os.path.abspath(args.test_dir), args.pattern)
non_parallelizable_suites = []
test_suite_files = [
x for x in test_suite_files if x not in non_parallelizable_suites
]
run_config = None
isolated_cases = []
test_suite_env_map = {}
# put all the case in default env.
for test_suite_file in test_suite_files:
test_suite_env_map[test_suite_file] = 'default'
if args.run_config is not None and Path(args.run_config).exists():
with open(args.run_config, encoding='utf-8') as f:
run_config = yaml.load(f, Loader=yaml.FullLoader)
if 'isolated' in run_config:
isolated_cases = run_config['isolated']
if 'envs' in run_config:
for env in run_config['envs']:
if env != 'default':
for test_suite in run_config['envs'][env]['tests']:
if test_suite in test_suite_env_map:
test_suite_env_map[test_suite] = env
if args.subprocess: # run all case in subprocess
isolated_cases = test_suite_files
with tempfile.TemporaryDirectory() as temp_result_dir:
# first run cases that nonparallelizable
run_non_parallelizable_test_suites(non_parallelizable_suites,
temp_result_dir)
# run case parallel in envs
for env in set(test_suite_env_map.values()):
parallel_run_case_in_env(env, run_config['envs'][env],
test_suite_env_map, isolated_cases,
temp_result_dir, args.parallel)
result_dfs = []
result_path = Path(temp_result_dir)
for result in result_path.iterdir():
if Path.is_file(result):
df = pandas.read_pickle(result)
result_dfs.append(df)
result_pd = pandas.concat(
result_dfs) # merge result of every test suite.
print_table_result(result_pd)
print_abnormal_case_info(result_pd)
statistics_test_result(result_pd)
def get_object_full_name(obj):
klass = obj.__class__
module = klass.__module__
if module == 'builtins':
return klass.__qualname__
return module + '.' + klass.__qualname__
class TimeCostTextTestResult(TextTestResult):
"""Record test case time used!"""
def __init__(self, stream, descriptions, verbosity):
self.successes = []
return super(TimeCostTextTestResult,
self).__init__(stream, descriptions, verbosity)
def startTest(self, test):
test.start_time = datetime.datetime.now()
test.test_full_name = get_object_full_name(
test) + '.' + test._testMethodName
self.stream.writeln('Test case: %s start at: %s' %
(test.test_full_name, test.start_time))
return super(TimeCostTextTestResult, self).startTest(test)
def stopTest(self, test):
TextTestResult.stopTest(self, test)
test.stop_time = datetime.datetime.now()
test.time_cost = (test.stop_time - test.start_time).total_seconds()
self.stream.writeln(
'Test case: %s stop at: %s, cost time: %s(seconds)' %
(test.test_full_name, test.stop_time, test.time_cost))
if torch.cuda.is_available(
) and test.time_cost > 5.0: # print nvidia-smi
cmd = ['nvidia-smi']
run_command_with_popen(cmd)
super(TimeCostTextTestResult, self).stopTest(test)
def addSuccess(self, test):
self.successes.append(test)
super(TextTestResult, self).addSuccess(test)
class TimeCostTextTestRunner(unittest.runner.TextTestRunner):
resultclass = TimeCostTextTestResult
def run(self, test):
return super(TimeCostTextTestRunner, self).run(test)
def _makeResult(self):
result = super(TimeCostTextTestRunner, self)._makeResult()
return result
def gather_test_cases(test_dir, pattern, list_tests):
case_list = []
for dirpath, dirnames, filenames in os.walk(test_dir):
for file in filenames:
if fnmatch(file, pattern):
case_list.append(file)
test_suite = unittest.TestSuite()
for case in case_list:
test_case = unittest.defaultTestLoader.discover(
start_dir=test_dir, pattern=case)
test_suite.addTest(test_case)
if hasattr(test_case, '__iter__'):
for subcase in test_case:
if list_tests:
print(subcase)
else:
if list_tests:
print(test_case)
return test_suite
def print_abnormal_case_info(df):
df = df.loc[(df['Result'] == 'Error') | (df['Result'] == 'Failures')]
for _, row in df.iterrows():
print('Case %s run result: %s, msg:\n%s' %
(row['Name'], row['Result'], row['Info']))
def print_table_result(df):
df = df.loc[df['Result'] != 'Skipped']
df = df.drop('Info', axis=1)
formatters = {
'Name': '{{:<{}s}}'.format(df['Name'].str.len().max()).format,
'Result': '{{:<{}s}}'.format(df['Result'].str.len().max()).format,
}
with pandas.option_context('display.max_rows', None, 'display.max_columns',
None, 'display.width', None):
print(df.to_string(justify='left', formatters=formatters, index=False))
def main(args):
runner = TimeCostTextTestRunner()
if args.suites is not None and len(args.suites) > 0:
logger.info('Running: %s' % ' '.join(args.suites))
test_suite = gather_test_suites_in_files(args.test_dir, args.suites,
args.list_tests)
else:
test_suite = gather_test_cases(
os.path.abspath(args.test_dir), args.pattern, args.list_tests)
if not args.list_tests:
result = runner.run(test_suite)
logger.info('Running case completed, pid: %s, suites: %s' %
(os.getpid(), args.suites))
result = collect_test_results(result)
df = test_cases_result_to_df(result)
if args.result_dir is not None:
save_test_result(df, args)
else:
print_table_result(df)
print_abnormal_case_info(df)
statistics_test_result(df)
if __name__ == '__main__':
parser = argparse.ArgumentParser('test runner')
parser.add_argument(
'--list_tests', action='store_true', help='list all tests')
parser.add_argument(
'--pattern', default='test_*.py', help='test file pattern')
parser.add_argument(
'--test_dir', default='tests', help='directory to be tested')
parser.add_argument(
'--level', default=0, type=int, help='2 -- all, 1 -- p1, 0 -- p0')
parser.add_argument(
'--profile', action='store_true', help='enable profiling')
parser.add_argument(
'--run_config',
default=None,
help='specified case run config file(yaml file)')
parser.add_argument(
'--subprocess',
action='store_true',
help='run all test suite in subprocess')
parser.add_argument(
'--result_dir',
default=None,
help='Save result to directory, internal use only')
parser.add_argument(
'--parallel',
default=1,
type=int,
help='Set case parallels, default single process, set with gpu number.'
)
parser.add_argument(
'--no-diff',
action='store_true',
        help='By default, cases are selected based on git diff (against master); pass --no-diff to disable'
    )
parser.add_argument(
'--suites',
nargs='*',
help='Run specified test suites(test suite files list split by space)')
args = parser.parse_args()
print(args)
if args.run_config is not None or args.subprocess:
run_in_subprocess(args)
else:
main(args)
| swift/tests/run.py/0 | {
"file_path": "swift/tests/run.py",
"repo_id": "swift",
"token_count": 10407
} | 195 |
from swift.llm import merge_lora_main
if __name__ == '__main__':
merge_lora_main(replace_if_exists=True)
| swift/tools/merge_lora_weights_to_model.py/0 | {
"file_path": "swift/tools/merge_lora_weights_to_model.py",
"repo_id": "swift",
"token_count": 45
} | 196 |
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ChangeListManager">
<list default="true" id="55d14196-ee61-43d0-9830-f39d82d19e5d" name="更改" comment="">
<change afterPath="$PROJECT_DIR$/insightface/recognition/arcface_paddle/Dockerfile" afterDir="false" />
<change beforePath="$PROJECT_DIR$/PaddleDetection/deploy/pipeline/database_model.py" beforeDir="false" afterPath="$PROJECT_DIR$/PaddleDetection/deploy/pipeline/database_model.py" afterDir="false" />
<change beforePath="$PROJECT_DIR$/PaddleDetection/deploy/pipeline/pipeline.py" beforeDir="false" afterPath="$PROJECT_DIR$/PaddleDetection/deploy/pipeline/pipeline.py" afterDir="false" />
<change beforePath="$PROJECT_DIR$/PaddleDetection/deploy/pipeline/pphuman/mtmct.py" beforeDir="false" afterPath="$PROJECT_DIR$/PaddleDetection/deploy/pipeline/pphuman/mtmct.py" afterDir="false" />
<change beforePath="$PROJECT_DIR$/PaddleDetection/deploy/pptracking/python/mot/utils.py" beforeDir="false" afterPath="$PROJECT_DIR$/PaddleDetection/deploy/pptracking/python/mot/utils.py" afterDir="false" />
<change beforePath="$PROJECT_DIR$/fauxpilot/proxy.Dockerfile" beforeDir="false" afterPath="$PROJECT_DIR$/fauxpilot/proxy.Dockerfile" afterDir="false" />
<change beforePath="$PROJECT_DIR$/fauxpilot/setup.sh" beforeDir="false" afterPath="$PROJECT_DIR$/fauxpilot/setup.sh" afterDir="false" />
<change beforePath="$PROJECT_DIR$/insightface/recognition/arcface_paddle/tools/test_recognition.py" beforeDir="false" afterPath="$PROJECT_DIR$/insightface/recognition/arcface_paddle/tools/test_recognition.py" afterDir="false" />
<change beforePath="$PROJECT_DIR$/mybatis-native-demo/pom.xml" beforeDir="false" afterPath="$PROJECT_DIR$/mybatis-native-demo/pom.xml" afterDir="false" />
<change beforePath="$PROJECT_DIR$/mybatis-native-demo/src/main/java/com/example/nativedemo/controller/DemoController.java" beforeDir="false" afterPath="$PROJECT_DIR$/mybatis-native-demo/src/main/java/com/example/nativedemo/controller/DemoController.java" afterDir="false" />
<change beforePath="$PROJECT_DIR$/mybatis-native-demo/src/main/resources/application.yml" beforeDir="false" afterPath="$PROJECT_DIR$/mybatis-native-demo/src/main/resources/application.yml" afterDir="false" />
</list>
<option name="SHOW_DIALOG" value="false" />
<option name="HIGHLIGHT_CONFLICTS" value="true" />
<option name="HIGHLIGHT_NON_ACTIVE_CHANGELIST" value="false" />
<option name="LAST_RESOLUTION" value="IGNORE" />
</component>
<component name="Git.Settings">
<option name="RECENT_GIT_ROOT_PATH" value="$PROJECT_DIR$/PaddleDetection" />
</component>
<component name="ProjectColorInfo"><![CDATA[{
"associatedIndex": 6
}]]></component>
<component name="ProjectId" id="2iMJrgq53ymq2nZqvxw1eeEjVNv" />
<component name="ProjectViewState">
<option name="hideEmptyMiddlePackages" value="true" />
<option name="showLibraryContents" value="true" />
</component>
<component name="PropertiesComponent"><![CDATA[{
"keyToString": {
"RunOnceActivity.ShowReadmeOnStart": "true",
"git-widget-placeholder": "sub-count",
"nodejs_package_manager_path": "npm",
"vue.rearranger.settings.migration": "true"
}
}]]></component>
<component name="RdControllerToolWindowsLayoutState" isNewUi="true">
<layout>
<window_info id="Bookmarks" show_stripe_button="false" side_tool="true" />
<window_info id="Merge Requests" show_stripe_button="false" />
<window_info id="Commit_Guest" show_stripe_button="false" />
<window_info id="Pull Requests" show_stripe_button="false" />
<window_info id="Learn" show_stripe_button="false" />
<window_info active="true" content_ui="combo" id="Project" order="0" visible="true" weight="0.33008274" />
<window_info id="Commit" order="1" weight="0.25" />
<window_info id="Structure" order="2" side_tool="true" weight="0.25" />
<window_info anchor="bottom" id="Database Changes" show_stripe_button="false" />
<window_info anchor="bottom" id="TypeScript" show_stripe_button="false" />
<window_info anchor="bottom" id="TODO" show_stripe_button="false" />
<window_info anchor="bottom" id="File Transfer" show_stripe_button="false" />
<window_info anchor="bottom" id="Version Control" order="0" />
<window_info anchor="bottom" id="Problems" order="1" />
<window_info anchor="bottom" id="Problems View" order="2" />
<window_info anchor="bottom" id="Terminal" order="3" />
<window_info anchor="bottom" id="Services" order="4" />
<window_info anchor="bottom" id="Python Packages" order="5" weight="0.1" />
<window_info anchor="bottom" id="Python Console" order="6" weight="0.1" />
<window_info anchor="right" id="Endpoints" show_stripe_button="false" />
<window_info anchor="right" id="Coverage" show_stripe_button="false" side_tool="true" />
<window_info anchor="right" id="SciView" show_stripe_button="false" />
<window_info anchor="right" content_ui="combo" id="Notifications" order="0" weight="0.25" />
<window_info anchor="right" id="AIAssistant" order="1" weight="0.25" />
<window_info anchor="right" id="Database" order="2" weight="0.25" />
<window_info anchor="right" id="Gradle" order="3" weight="0.25" />
<window_info anchor="right" id="Maven" order="4" weight="0.25" />
<window_info anchor="right" id="Plots" order="5" weight="0.1" />
<window_info anchor="right" id="Translation.Wordbook" order="5" show_stripe_button="false" side_tool="true" />
</layout>
</component>
<component name="SharedIndexes">
<attachedChunks>
<set>
<option value="bundled-js-predefined-1d06a55b98c1-74d2a5396914-JavaScript-PY-241.14494.241" />
<option value="bundled-python-sdk-0509580d9d50-28c9f5db9ffe-com.jetbrains.pycharm.pro.sharedIndexes.bundled-PY-241.14494.241" />
</set>
</attachedChunks>
</component>
<component name="SpellCheckerSettings" RuntimeDictionaries="0" Folders="0" CustomDictionaries="0" DefaultDictionary="应用程序级" UseSingleDictionary="true" transferred="true" />
<component name="TaskManager">
<task active="true" id="Default" summary="默认任务">
<changelist id="55d14196-ee61-43d0-9830-f39d82d19e5d" name="更改" comment="" />
<created>1719294713876</created>
<option name="number" value="Default" />
<option name="presentableId" value="Default" />
<updated>1719294713876</updated>
<workItem from="1719294715268" duration="22000" />
</task>
<servers />
</component>
<component name="TypeScriptGeneratedFilesManager">
<option name="version" value="3" />
</component>
</project> | .idea/workspace.xml/0 | {
"file_path": ".idea/workspace.xml",
"repo_id": ".idea",
"token_count": 2623
} | 0 |
cff-version: 1.2.0
date-released: 2024-03
message: "If you use this software, please cite it as below."
authors:
- family-names: "Zheng"
given-names: "Yaowei"
- family-names: "Zhang"
given-names: "Richong"
- family-names: "Zhang"
given-names: "Junhao"
- family-names: "Ye"
given-names: "Yanhan"
- family-names: "Luo"
given-names: "Zheyan"
- family-names: "Feng"
given-names: "Zhangchi"
- family-names: "Ma"
given-names: "Yongqiang"
title: "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models"
url: "https://arxiv.org/abs/2403.13372"
preferred-citation:
type: conference-paper
conference:
name: "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)"
authors:
- family-names: "Zheng"
given-names: "Yaowei"
- family-names: "Zhang"
given-names: "Richong"
- family-names: "Zhang"
given-names: "Junhao"
- family-names: "Ye"
given-names: "Yanhan"
- family-names: "Luo"
given-names: "Zheyan"
- family-names: "Feng"
given-names: "Zhangchi"
- family-names: "Ma"
given-names: "Yongqiang"
title: "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models"
url: "https://arxiv.org/abs/2403.13372"
year: 2024
publisher: "Association for Computational Linguistics"
address: "Bangkok, Thailand"
| LLaMA-Factory/CITATION.cff/0 | {
"file_path": "LLaMA-Factory/CITATION.cff",
"repo_id": "LLaMA-Factory",
"token_count": 546
} | 1 |
[dataset_info.json](dataset_info.json) contains all the available datasets. If you want to use a custom dataset, please **make sure** to add a *dataset description* in the `dataset_info.json` file, and use it by setting the `dataset: dataset_name` configuration.
Currently we support datasets in the **alpaca** and **sharegpt** format.
```json
"数据集名称": {
"hf_hub_url": "Hugging Face 的数据集仓库地址(若指定,则忽略 script_url 和 file_name)",
"ms_hub_url": "ModelScope 的数据集仓库地址(若指定,则忽略 script_url 和 file_name)",
"script_url": "包含数据加载脚本的本地文件夹名称(若指定,则忽略 file_name)",
"file_name": "该目录下数据集文件夹或文件的名称(若上述参数未指定,则此项必需)",
"formatting": "数据集格式(可选,默认:alpaca,可以为 alpaca 或 sharegpt)",
"ranking": "是否为偏好数据集(可选,默认:False)",
"subset": "数据集子集的名称(可选,默认:None)",
"folder": "Hugging Face 仓库的文件夹名称(可选,默认:None)",
"num_samples": "该数据集中用于训练的样本数量。(可选,默认:None)",
"columns(可选)": {
"prompt": "数据集代表提示词的表头名称(默认:instruction)",
"query": "数据集代表请求的表头名称(默认:input)",
"response": "数据集代表回答的表头名称(默认:output)",
"history": "数据集代表历史对话的表头名称(默认:None)",
"messages": "数据集代表消息列表的表头名称(默认:conversations)",
"system": "数据集代表系统提示的表头名称(默认:None)",
"tools": "数据集代表工具描述的表头名称(默认:None)",
"images": "数据集代表图像输入的表头名称(默认:None)",
"chosen": "数据集代表更优回答的表头名称(默认:None)",
"rejected": "数据集代表更差回答的表头名称(默认:None)",
"kto_tag": "数据集代表 KTO 标签的表头名称(默认:None)"
},
"tags(可选,用于 sharegpt 格式)": {
"role_tag": "消息中代表发送者身份的键名(默认:from)",
"content_tag": "消息中代表文本内容的键名(默认:value)",
"user_tag": "消息中代表用户的 role_tag(默认:human)",
"assistant_tag": "消息中代表助手的 role_tag(默认:gpt)",
"observation_tag": "消息中代表工具返回结果的 role_tag(默认:observation)",
"function_tag": "消息中代表工具调用的 role_tag(默认:function_call)",
"system_tag": "消息中代表系统提示的 role_tag(默认:system,会覆盖 system column)"
}
}
```
## Alpaca Format
### Supervised Fine-Tuning Dataset
- [Example dataset](alpaca_zh_demo.json)
In supervised fine-tuning, the content of the `instruction` column is concatenated with the content of the `input` column and used as the human instruction, i.e. the human instruction is `instruction\ninput`. The content of the `output` column is the model response.
If specified, the content of the `system` column will be used as the system prompt.
The `history` column is a list of string pairs representing the instruction and response of each turn in the conversation history. Note that in supervised fine-tuning, the responses in the history **will also be used for model learning**.
```json
[
  {
    "instruction": "human instruction (required)",
    "input": "human input (optional)",
    "output": "model response (required)",
    "system": "system prompt (optional)",
    "history": [
      ["human instruction in the first round (optional)", "model response in the first round (optional)"],
      ["human instruction in the second round (optional)", "model response in the second round (optional)"]
    ]
  }
]
```
For the data in the above format, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
  "file_name": "data.json",
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output",
    "system": "system",
    "history": "history"
  }
}
```
### Pre-training Dataset
- [Example dataset](c4_demo.json)
In pre-training, only the content in the `text` column will be used for model learning.
```json
[
  {"text": "document"},
  {"text": "document"}
]
```
For the data in the above format, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
  "file_name": "data.json",
  "columns": {
    "prompt": "text"
  }
}
```
### Preference Dataset
Preference datasets are used for reward model training, DPO training and ORPO training.
They require a better response in the `chosen` column and a worse response in the `rejected` column.
```json
[
  {
    "instruction": "human instruction (required)",
    "input": "human input (optional)",
    "chosen": "chosen answer (required)",
    "rejected": "rejected answer (required)"
  }
]
```
For the data in the above format, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
  "file_name": "data.json",
  "ranking": true,
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "chosen": "chosen",
    "rejected": "rejected"
  }
}
```
### KTO Dataset
- [Example dataset](kto_en_demo.json)
KTO datasets require an extra `kto_tag` column containing the boolean human feedback.
```json
[
  {
    "instruction": "human instruction (required)",
    "input": "human input (optional)",
    "output": "model response (required)",
    "kto_tag": "human feedback [true/false] (required)"
  }
]
```
For the data in the above format, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
  "file_name": "data.json",
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output",
    "kto_tag": "kto_tag"
  }
}
```
### Multimodal Dataset
- [Example dataset](mllm_demo.json)
Multimodal datasets require an extra `images` column containing the paths to the input images. Currently we only support a single image per example.
```json
[
  {
    "instruction": "human instruction (required)",
    "input": "human input (optional)",
    "output": "model response (required)",
    "images": [
      "image path (required)"
    ]
  }
]
```
For the data in the above format, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
  "file_name": "data.json",
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output",
    "images": "images"
  }
}
```
## Sharegpt Format
### Supervised Fine-Tuning Dataset
- [Example dataset](glaive_toolcall_zh_demo.json)
Compared to the alpaca format, the sharegpt format supports **more role types**, such as human, gpt, observation and function. They are presented as a list of objects in the `conversations` column.
Note that human and observation must appear at odd positions, while gpt and function must appear at even positions.
```json
[
  {
    "conversations": [
      {
        "from": "human",
        "value": "human instruction"
      },
      {
        "from": "function_call",
        "value": "tool arguments"
      },
      {
        "from": "observation",
        "value": "tool result"
      },
      {
        "from": "gpt",
        "value": "model response"
      }
    ],
    "system": "system prompt (optional)",
    "tools": "tool description (optional)"
  }
]
```
For the data in the above format, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
  "file_name": "data.json",
  "formatting": "sharegpt",
  "columns": {
    "messages": "conversations",
    "system": "system",
    "tools": "tools"
  }
}
```
### Preference Dataset
- [Example dataset](dpo_zh_demo.json)
Preference datasets in the sharegpt format likewise require a better message in the `chosen` column and a worse message in the `rejected` column.
```json
[
  {
    "conversations": [
      {
        "from": "human",
        "value": "human instruction"
      },
      {
        "from": "gpt",
        "value": "model response"
      },
      {
        "from": "human",
        "value": "human instruction"
      }
    ],
    "chosen": {
      "from": "gpt",
      "value": "chosen answer"
    },
    "rejected": {
      "from": "gpt",
      "value": "rejected answer"
    }
  }
]
```
For the data in the above format, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
  "file_name": "data.json",
  "formatting": "sharegpt",
  "ranking": true,
  "columns": {
    "messages": "conversations",
    "chosen": "chosen",
    "rejected": "rejected"
  }
}
```
### OpenAI Format
The openai format is simply a special case of the sharegpt format, in which the first message may be a system prompt.
```json
[
  {
    "messages": [
      {
        "role": "system",
        "content": "system prompt (optional)"
      },
      {
        "role": "user",
        "content": "human instruction"
      },
      {
        "role": "assistant",
        "content": "model response"
      }
    ]
  }
]
```
For the data in the above format, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
  "file_name": "data.json",
  "formatting": "sharegpt",
  "columns": {
    "messages": "messages"
  },
  "tags": {
    "role_tag": "role",
    "content_tag": "content",
    "user_tag": "user",
    "assistant_tag": "assistant",
    "system_tag": "system"
  }
}
```
KTO datasets and multimodal datasets in the sharegpt format are similar to those in the alpaca format.
Pre-training datasets are **not** supported in the sharegpt format.
| LLaMA-Factory/data/README_zh.md/0 | {
"file_path": "LLaMA-Factory/data/README_zh.md",
"repo_id": "LLaMA-Factory",
"token_count": 5991
} | 2 |
We provide diverse examples of fine-tuning LLMs.
Make sure to execute these commands in the `LLaMA-Factory` directory.
## Table of Contents
- [LoRA Fine-Tuning](#lora-fine-tuning)
- [QLoRA Fine-Tuning](#qlora-fine-tuning)
- [Full-Parameter Fine-Tuning](#full-parameter-fine-tuning)
- [Merging LoRA Adapters and Quantization](#merging-lora-adapters-and-quantization)
- [Inferring LoRA Fine-Tuned Models](#inferring-lora-fine-tuned-models)
- [Extras](#extras)
Use `CUDA_VISIBLE_DEVICES` (GPU) or `ASCEND_RT_VISIBLE_DEVICES` (NPU) to choose computing devices.
## Examples
### LoRA Fine-Tuning
#### (Continuous) Pre-Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml
```
#### Supervised Fine-Tuning
```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
#### Multimodal Supervised Fine-Tuning
```bash
llamafactory-cli train examples/train_lora/llava1_5_lora_sft.yaml
```
#### Reward Modeling
```bash
llamafactory-cli train examples/train_lora/llama3_lora_reward.yaml
```
#### PPO Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_ppo.yaml
```
#### DPO/ORPO/SimPO Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_dpo.yaml
```
#### KTO Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_kto.yaml
```
#### Preprocess Dataset
It is useful for large datasets. Use `tokenized_path` in the config to load the preprocessed dataset.
```bash
llamafactory-cli train examples/train_lora/llama3_preprocess.yaml
```
#### Evaluating on MMLU/CMMLU/C-Eval Benchmarks
```bash
llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml
```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_lora/llama3_lora_predict.yaml
```
#### Supervised Fine-Tuning on Multiple Nodes
```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
#### Supervised Fine-Tuning with DeepSpeed ZeRO-3 (Weight Sharding)
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
```
### QLoRA Fine-Tuning
#### Supervised Fine-Tuning with 4/8-bit Bitsandbytes/HQQ/EETQ Quantization (Recommended)
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml
```
#### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml
```
#### Supervised Fine-Tuning with 4-bit AWQ Quantization
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml
```
#### Supervised Fine-Tuning with 2-bit AQLM Quantization
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
```
### Full-Parameter Fine-Tuning
#### Supervised Fine-Tuning on Single Node
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
```
#### Supervised Fine-Tuning on Multiple Nodes
```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_full/llama3_full_predict.yaml
```
### Merging LoRA Adapters and Quantization
#### Merge LoRA Adapters
Note: DO NOT use a quantized model or `quantization_bit` when merging LoRA adapters.
```bash
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```
#### Quantizing Model using AutoGPTQ
```bash
llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
```
### Inferring LoRA Fine-Tuned Models
#### Use CLI
```bash
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
```
#### Use Web UI
```bash
llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
```
#### Launch OpenAI-style API
```bash
llamafactory-cli api examples/inference/llama3_lora_sft.yaml
```
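The launched API is OpenAI-compatible, so it can be queried with the `openai` Python client. Below is a minimal sketch; the port (8000) is the server default, the model id is a placeholder accepted by the local server, and `api_key="EMPTY"` assumes no `API_KEY` is set on the server side:
```python
from openai import OpenAI
# `llamafactory-cli api` listens on port 8000 by default (configurable via API_PORT).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder id returned by the local /v1/models endpoint
    messages=[{"role": "user", "content": "Hello, who are you?"}],
)
print(response.choices[0].message.content)
```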
### Extras
#### Full-Parameter Fine-Tuning using GaLore
```bash
llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
```
#### Full-Parameter Fine-Tuning using BAdam
```bash
llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
```
#### LoRA+ Fine-Tuning
```bash
llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml
```
#### PiSSA Fine-Tuning
```bash
llamafactory-cli train examples/extras/pissa/llama3_lora_sft.yaml
```
#### Mixture-of-Depths Fine-Tuning
```bash
llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml
```
#### LLaMA-Pro Fine-Tuning
```bash
bash examples/extras/llama_pro/expand.sh
llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml
```
#### FSDP+QLoRA Fine-Tuning
```bash
bash examples/extras/fsdp_qlora/train.sh
```
| LLaMA-Factory/examples/README.md/0 | {
"file_path": "LLaMA-Factory/examples/README.md",
"repo_id": "LLaMA-Factory",
"token_count": 2042
} | 3 |
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: sft
do_train: true
finetuning_type: full
mixture_of_depths: convert
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b-mod/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
optim: paged_adamw_8bit
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
pure_bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
| LLaMA-Factory/examples/extras/mod/llama3_full_sft.yaml/0 | {
"file_path": "LLaMA-Factory/examples/extras/mod/llama3_full_sft.yaml",
"repo_id": "LLaMA-Factory",
"token_count": 303
} | 4 |
# Copyright 2024 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from contextlib import asynccontextmanager
from typing import Optional
from typing_extensions import Annotated
from ..chat import ChatModel
from ..extras.misc import torch_gc
from ..extras.packages import is_fastapi_available, is_starlette_available, is_uvicorn_available
from .chat import (
create_chat_completion_response,
create_score_evaluation_response,
create_stream_chat_completion_response,
)
from .protocol import (
ChatCompletionRequest,
ChatCompletionResponse,
ModelCard,
ModelList,
ScoreEvaluationRequest,
ScoreEvaluationResponse,
)
if is_fastapi_available():
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi.security.http import HTTPAuthorizationCredentials, HTTPBearer
if is_starlette_available():
from sse_starlette import EventSourceResponse
if is_uvicorn_available():
import uvicorn
@asynccontextmanager
async def lifespan(app: "FastAPI"): # collects GPU memory
yield
torch_gc()
def create_app(chat_model: "ChatModel") -> "FastAPI":
app = FastAPI(lifespan=lifespan)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
api_key = os.environ.get("API_KEY")
security = HTTPBearer(auto_error=False)
async def verify_api_key(auth: Annotated[Optional[HTTPAuthorizationCredentials], Depends(security)]):
if api_key and (auth is None or auth.credentials != api_key):
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key.")
@app.get(
"/v1/models",
response_model=ModelList,
status_code=status.HTTP_200_OK,
dependencies=[Depends(verify_api_key)],
)
async def list_models():
model_card = ModelCard(id="gpt-3.5-turbo")
return ModelList(data=[model_card])
@app.post(
"/v1/chat/completions",
response_model=ChatCompletionResponse,
status_code=status.HTTP_200_OK,
dependencies=[Depends(verify_api_key)],
)
async def create_chat_completion(request: ChatCompletionRequest):
if not chat_model.engine.can_generate:
raise HTTPException(status_code=status.HTTP_405_METHOD_NOT_ALLOWED, detail="Not allowed")
if request.stream:
generate = create_stream_chat_completion_response(request, chat_model)
return EventSourceResponse(generate, media_type="text/event-stream")
else:
return await create_chat_completion_response(request, chat_model)
@app.post(
"/v1/score/evaluation",
response_model=ScoreEvaluationResponse,
status_code=status.HTTP_200_OK,
dependencies=[Depends(verify_api_key)],
)
async def create_score_evaluation(request: ScoreEvaluationRequest):
if chat_model.engine.can_generate:
raise HTTPException(status_code=status.HTTP_405_METHOD_NOT_ALLOWED, detail="Not allowed")
return await create_score_evaluation_response(request, chat_model)
return app
def run_api() -> None:
chat_model = ChatModel()
app = create_app(chat_model)
api_host = os.environ.get("API_HOST", "0.0.0.0")
api_port = int(os.environ.get("API_PORT", "8000"))
print("Visit http://localhost:{}/docs for API document.".format(api_port))
uvicorn.run(app, host=api_host, port=api_port)
| LLaMA-Factory/src/llamafactory/api/app.py/0 | {
"file_path": "LLaMA-Factory/src/llamafactory/api/app.py",
"repo_id": "LLaMA-Factory",
"token_count": 1520
} | 5 |
# Copyright 2024 HuggingFace Inc. and the LlamaFactory team.
#
# This code is inspired by the HuggingFace's transformers library.
# https://github.com/huggingface/transformers/blob/v4.40.0/src/transformers/commands/env.py
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import platform
import accelerate
import datasets
import peft
import torch
import transformers
import trl
from transformers.utils import is_torch_cuda_available, is_torch_npu_available
VERSION = "0.8.3.dev0"
def print_env() -> None:
info = {
"`llamafactory` version": VERSION,
"Platform": platform.platform(),
"Python version": platform.python_version(),
"PyTorch version": torch.__version__,
"Transformers version": transformers.__version__,
"Datasets version": datasets.__version__,
"Accelerate version": accelerate.__version__,
"PEFT version": peft.__version__,
"TRL version": trl.__version__,
}
if is_torch_cuda_available():
info["PyTorch version"] += " (GPU)"
info["GPU type"] = torch.cuda.get_device_name()
if is_torch_npu_available():
info["PyTorch version"] += " (NPU)"
info["NPU type"] = torch.npu.get_device_name()
info["CANN version"] = torch.version.cann
try:
import deepspeed # type: ignore
info["DeepSpeed version"] = deepspeed.__version__
except Exception:
pass
try:
import bitsandbytes
info["Bitsandbytes version"] = bitsandbytes.__version__
except Exception:
pass
try:
import vllm
info["vLLM version"] = vllm.__version__
except Exception:
pass
print("\n" + "\n".join(["- {}: {}".format(key, value) for key, value in info.items()]) + "\n")
| LLaMA-Factory/src/llamafactory/extras/env.py/0 | {
"file_path": "LLaMA-Factory/src/llamafactory/extras/env.py",
"repo_id": "LLaMA-Factory",
"token_count": 832
} | 6 |
# Copyright 2024 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional
@dataclass
class GeneratingArguments:
r"""
Arguments pertaining to specify the decoding parameters.
"""
do_sample: bool = field(
default=True,
metadata={"help": "Whether or not to use sampling, use greedy decoding otherwise."},
)
temperature: float = field(
default=0.95,
metadata={"help": "The value used to modulate the next token probabilities."},
)
top_p: float = field(
default=0.7,
metadata={
"help": "The smallest set of most probable tokens with probabilities that add up to top_p or higher are kept."
},
)
top_k: int = field(
default=50,
metadata={"help": "The number of highest probability vocabulary tokens to keep for top-k filtering."},
)
num_beams: int = field(
default=1,
metadata={"help": "Number of beams for beam search. 1 means no beam search."},
)
max_length: int = field(
default=1024,
metadata={"help": "The maximum length the generated tokens can have. It can be overridden by max_new_tokens."},
)
max_new_tokens: int = field(
default=1024,
metadata={"help": "The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt."},
)
repetition_penalty: float = field(
default=1.0,
metadata={"help": "The parameter for repetition penalty. 1.0 means no penalty."},
)
length_penalty: float = field(
default=1.0,
metadata={"help": "Exponential penalty to the length that is used with beam-based generation."},
)
default_system: Optional[str] = field(
default=None,
metadata={"help": "Default system message to use in chat completion."},
)
def to_dict(self) -> Dict[str, Any]:
args = asdict(self)
if args.get("max_new_tokens", -1) > 0:
args.pop("max_length", None)
else:
args.pop("max_new_tokens", None)
return args
| LLaMA-Factory/src/llamafactory/hparams/generating_args.py/0 | {
"file_path": "LLaMA-Factory/src/llamafactory/hparams/generating_args.py",
"repo_id": "LLaMA-Factory",
"token_count": 968
} | 7 |
# Copyright 2024 HuggingFace Inc. and the LlamaFactory team.
#
# This code is inspired by the HuggingFace's Transformers and Optimum library.
# https://github.com/huggingface/transformers/blob/v4.41.0/src/transformers/utils/quantization_config.py
# https://github.com/huggingface/optimum/blob/v1.20.0/optimum/gptq/data.py
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import random
from enum import Enum, unique
from typing import TYPE_CHECKING, Any, Dict, List
import torch
from datasets import load_dataset
from transformers import BitsAndBytesConfig, EetqConfig, GPTQConfig, HqqConfig
from transformers.integrations import is_deepspeed_zero3_enabled
from transformers.modeling_utils import is_fsdp_enabled
from transformers.utils.versions import require_version
from ...extras.constants import FILEEXT2TYPE
from ...extras.logging import get_logger
from ...extras.misc import get_current_device
if TYPE_CHECKING:
from transformers import PretrainedConfig, PreTrainedTokenizer
from ...hparams import ModelArguments
logger = get_logger(__name__)
@unique
class QuantizationMethod(str, Enum):
r"""
Borrowed from `transformers.utils.quantization_config.QuantizationMethod`.
"""
BITS_AND_BYTES = "bitsandbytes"
GPTQ = "gptq"
AWQ = "awq"
AQLM = "aqlm"
QUANTO = "quanto"
EETQ = "eetq"
HQQ = "hqq"
def _get_quantization_dataset(tokenizer: "PreTrainedTokenizer", model_args: "ModelArguments") -> List[Dict[str, Any]]:
r"""
Prepares the tokenized dataset to perform AutoGPTQ. Do not use tensor output for JSON serialization.
"""
if os.path.isfile(model_args.export_quantization_dataset):
data_path = FILEEXT2TYPE.get(model_args.export_quantization_dataset.split(".")[-1], None)
data_files = model_args.export_quantization_dataset
else:
data_path = model_args.export_quantization_dataset
data_files = None
dataset = load_dataset(
path=data_path,
data_files=data_files,
split="train",
cache_dir=model_args.cache_dir,
token=model_args.hf_hub_token,
)
samples = []
maxlen = model_args.export_quantization_maxlen
for _ in range(model_args.export_quantization_nsamples):
n_try = 0
while True:
if n_try > 100:
                raise ValueError("Cannot find a satisfying example, consider decreasing `export_quantization_maxlen`.")
sample_idx = random.randint(0, len(dataset) - 1)
sample: Dict[str, "torch.Tensor"] = tokenizer(dataset[sample_idx]["text"], return_tensors="pt")
n_try += 1
if sample["input_ids"].size(1) > maxlen:
break # TODO: fix large maxlen
word_idx = random.randint(0, sample["input_ids"].size(1) - maxlen - 1)
input_ids = sample["input_ids"][:, word_idx : word_idx + maxlen]
attention_mask = sample["attention_mask"][:, word_idx : word_idx + maxlen]
samples.append({"input_ids": input_ids.tolist(), "attention_mask": attention_mask.tolist()})
return samples
def configure_quantization(
config: "PretrainedConfig",
tokenizer: "PreTrainedTokenizer",
model_args: "ModelArguments",
init_kwargs: Dict[str, Any],
) -> None:
r"""
Priority: PTQ-quantized (train/infer) > AutoGPTQ (export) > On-the-fly quantization (train/infer)
"""
if getattr(config, "quantization_config", None): # ptq
if model_args.quantization_bit is not None:
logger.warning("`quantization_bit` will not affect on the PTQ-quantized models.")
if is_deepspeed_zero3_enabled() or is_fsdp_enabled():
raise ValueError("DeepSpeed ZeRO-3 or FSDP is incompatible with PTQ-quantized models.")
quantization_config: Dict[str, Any] = getattr(config, "quantization_config", None)
quant_method = quantization_config.get("quant_method", "")
if quant_method == QuantizationMethod.GPTQ:
require_version("auto_gptq>=0.5.0", "To fix: pip install auto_gptq>=0.5.0")
quantization_config.pop("disable_exllama", None) # remove deprecated args
quantization_config["use_exllama"] = False # disable exllama
if quant_method == QuantizationMethod.AWQ:
require_version("autoawq", "To fix: pip install autoawq")
if quant_method == QuantizationMethod.AQLM:
require_version("transformers>=4.39.0", "To fix: pip install transformers>=4.39.0")
require_version("aqlm>=1.1.0", "To fix: pip install aqlm[gpu]>=1.1.0")
quantization_config["bits"] = 2
quant_bits = quantization_config.get("bits", "?")
logger.info("Loading {}-bit {}-quantized model.".format(quant_bits, quant_method.upper()))
elif model_args.export_quantization_bit is not None: # auto-gptq
if model_args.export_quantization_bit not in [8, 4, 3, 2]:
raise ValueError("AutoGPTQ only accepts 2/3/4/8-bit quantization.")
require_version("optimum>=1.17.0", "To fix: pip install optimum>=1.17.0")
require_version("auto_gptq>=0.5.0", "To fix: pip install auto_gptq>=0.5.0")
from accelerate.utils import get_max_memory
if getattr(config, "model_type", None) == "chatglm":
raise ValueError("ChatGLM model is not supported yet.")
init_kwargs["quantization_config"] = GPTQConfig(
bits=model_args.export_quantization_bit,
dataset=_get_quantization_dataset(tokenizer, model_args),
)
init_kwargs["device_map"] = "auto"
init_kwargs["max_memory"] = get_max_memory()
logger.info("Quantizing model to {} bit with AutoGPTQ.".format(model_args.export_quantization_bit))
elif model_args.quantization_bit is not None: # on-the-fly
if model_args.quantization_method == QuantizationMethod.BITS_AND_BYTES.value:
if model_args.quantization_bit == 8:
require_version("bitsandbytes>=0.37.0", "To fix: pip install bitsandbytes>=0.37.0")
init_kwargs["quantization_config"] = BitsAndBytesConfig(load_in_8bit=True)
elif model_args.quantization_bit == 4:
require_version("bitsandbytes>=0.39.0", "To fix: pip install bitsandbytes>=0.39.0")
init_kwargs["quantization_config"] = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=model_args.compute_dtype,
bnb_4bit_use_double_quant=model_args.double_quantization,
bnb_4bit_quant_type=model_args.quantization_type,
bnb_4bit_quant_storage=model_args.compute_dtype, # crucial for fsdp+qlora
)
else:
raise ValueError("Bitsandbytes only accepts 4-bit or 8-bit quantization.")
# Do not assign device map if:
# 1. deepspeed zero3 or fsdp (train)
# 2. auto quantization device map (inference)
if is_deepspeed_zero3_enabled() or is_fsdp_enabled() or model_args.quantization_device_map == "auto":
if model_args.quantization_bit != 4:
raise ValueError("Only 4-bit quantized model can use fsdp+qlora or auto device map.")
require_version("bitsandbytes>=0.43.0", "To fix: pip install bitsandbytes>=0.43.0")
else:
init_kwargs["device_map"] = {"": get_current_device()} # change auto device map for inference
logger.info("Quantizing model to {} bit with bitsandbytes.".format(model_args.quantization_bit))
elif model_args.quantization_method == QuantizationMethod.HQQ.value:
if model_args.quantization_bit not in [8, 6, 5, 4, 3, 2, 1]:
raise ValueError("HQQ only accepts 1/2/3/4/5/6/8-bit quantization.")
if is_deepspeed_zero3_enabled() or is_fsdp_enabled():
raise ValueError("HQQ quantization is incompatible with DeepSpeed ZeRO-3 or FSDP.")
require_version("hqq", "To fix: pip install hqq")
init_kwargs["quantization_config"] = HqqConfig(
nbits=model_args.quantization_bit, quant_zero=False, quant_scale=False, axis=0
) # use ATEN kernel (axis=0) for performance
logger.info("Quantizing model to {} bit with HQQ.".format(model_args.quantization_bit))
elif model_args.quantization_method == QuantizationMethod.EETQ.value:
if model_args.quantization_bit != 8:
raise ValueError("EETQ only accepts 8-bit quantization.")
if is_deepspeed_zero3_enabled() or is_fsdp_enabled():
raise ValueError("EETQ quantization is incompatible with DeepSpeed ZeRO-3 or FSDP.")
require_version("eetq", "To fix: pip install eetq")
init_kwargs["quantization_config"] = EetqConfig()
logger.info("Quantizing model to {} bit with EETQ.".format(model_args.quantization_bit))
| LLaMA-Factory/src/llamafactory/model/model_utils/quantization.py/0 | {
"file_path": "LLaMA-Factory/src/llamafactory/model/model_utils/quantization.py",
"repo_id": "LLaMA-Factory",
"token_count": 3910
} | 8 |
# Copyright 2024 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING, Dict
from ...extras.packages import is_gradio_available
from .chatbot import create_chat_box
if is_gradio_available():
import gradio as gr
if TYPE_CHECKING:
from gradio.components import Component
from ..engine import Engine
def create_infer_tab(engine: "Engine") -> Dict[str, "Component"]:
input_elems = engine.manager.get_base_elems()
elem_dict = dict()
with gr.Row():
infer_backend = gr.Dropdown(choices=["huggingface", "vllm"], value="huggingface")
infer_dtype = gr.Dropdown(choices=["auto", "float16", "bfloat16", "float32"], value="auto")
with gr.Row():
load_btn = gr.Button()
unload_btn = gr.Button()
info_box = gr.Textbox(show_label=False, interactive=False)
input_elems.update({infer_backend, infer_dtype})
elem_dict.update(
dict(
infer_backend=infer_backend,
infer_dtype=infer_dtype,
load_btn=load_btn,
unload_btn=unload_btn,
info_box=info_box,
)
)
chatbot, messages, chat_elems = create_chat_box(engine, visible=False)
elem_dict.update(chat_elems)
load_btn.click(engine.chatter.load_model, input_elems, [info_box]).then(
lambda: gr.Column(visible=engine.chatter.loaded), outputs=[chat_elems["chat_box"]]
)
unload_btn.click(engine.chatter.unload_model, input_elems, [info_box]).then(
lambda: ([], []), outputs=[chatbot, messages]
).then(lambda: gr.Column(visible=engine.chatter.loaded), outputs=[chat_elems["chat_box"]])
engine.manager.get_elem_by_id("top.visual_inputs").change(
lambda enabled: gr.Column(visible=enabled),
[engine.manager.get_elem_by_id("top.visual_inputs")],
[chat_elems["image_box"]],
)
return elem_dict
| LLaMA-Factory/src/llamafactory/webui/components/infer.py/0 | {
"file_path": "LLaMA-Factory/src/llamafactory/webui/components/infer.py",
"repo_id": "LLaMA-Factory",
"token_count": 922
} | 9 |
# Copyright 2024 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from transformers.utils import is_flash_attn_2_available, is_torch_sdpa_available
from llamafactory.hparams import get_infer_args
from llamafactory.model import load_model, load_tokenizer
TINY_LLAMA = os.environ.get("TINY_LLAMA", "llamafactory/tiny-random-Llama-3")
INFER_ARGS = {
"model_name_or_path": TINY_LLAMA,
"template": "llama3",
}
def test_attention():
attention_available = ["disabled"]
if is_torch_sdpa_available():
attention_available.append("sdpa")
if is_flash_attn_2_available():
attention_available.append("fa2")
llama_attention_classes = {
"disabled": "LlamaAttention",
"sdpa": "LlamaSdpaAttention",
"fa2": "LlamaFlashAttention2",
}
for requested_attention in attention_available:
model_args, _, finetuning_args, _ = get_infer_args({"flash_attn": requested_attention, **INFER_ARGS})
tokenizer_module = load_tokenizer(model_args)
model = load_model(tokenizer_module["tokenizer"], model_args, finetuning_args)
for module in model.modules():
if "Attention" in module.__class__.__name__:
assert module.__class__.__name__ == llama_attention_classes[requested_attention]
| LLaMA-Factory/tests/model/model_utils/test_attention.py/0 | {
"file_path": "LLaMA-Factory/tests/model/model_utils/test_attention.py",
"repo_id": "LLaMA-Factory",
"token_count": 662
} | 10 |
#!/bin/bash
set -e
clang-format $@
| PaddleDetection/.travis/codestyle/clang_format.hook/0 | {
"file_path": "PaddleDetection/.travis/codestyle/clang_format.hook",
"repo_id": "PaddleDetection",
"token_count": 18
} | 11 |
# Cascade R-CNN: High Quality Object Detection and Instance Segmentation
## Model Zoo
| 骨架网络 | 网络类型 | 每张GPU图片个数 | 学习率策略 |推理时间(fps) | Box AP | Mask AP | 下载 | 配置文件 |
| :------------------- | :------------- | :-----: | :-----: | :------------: | :-----: | :-----: | :-----------------------------------------------------: | :-----: |
| ResNet50-FPN | Cascade Faster | 1 | 1x | ---- | 41.1 | - | [下载链接](https://paddledet.bj.bcebos.com/models/cascade_rcnn_r50_fpn_1x_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.yml) |
| ResNet50-FPN | Cascade Mask | 1 | 1x | ---- | 41.8 | 36.3 | [下载链接](https://paddledet.bj.bcebos.com/models/cascade_mask_rcnn_r50_fpn_1x_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.yml) |
| ResNet50-vd-SSLDv2-FPN | Cascade Faster | 1 | 1x | ---- | 44.4 | - | [下载链接](https://paddledet.bj.bcebos.com/models/cascade_rcnn_r50_vd_fpn_ssld_1x_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn/cascade_rcnn_r50_vd_fpn_ssld_1x_coco.yml) |
| ResNet50-vd-SSLDv2-FPN | Cascade Faster | 1 | 2x | ---- | 45.0 | - | [下载链接](https://paddledet.bj.bcebos.com/models/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.yml) |
| ResNet50-vd-SSLDv2-FPN | Cascade Mask | 1 | 1x | ---- | 44.9 | 39.1 | [下载链接](https://paddledet.bj.bcebos.com/models/cascade_mask_rcnn_r50_vd_fpn_ssld_1x_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn/cascade_mask_rcnn_r50_vd_fpn_ssld_1x_coco.yml) |
| ResNet50-vd-SSLDv2-FPN | Cascade Mask | 1 | 2x | ---- | 45.7 | 39.7 | [下载链接](https://paddledet.bj.bcebos.com/models/cascade_mask_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn/cascade_mask_rcnn_r50_vd_fpn_ssld_2x_coco.yml) |
## Citations
```
@article{Cai_2019,
title={Cascade R-CNN: High Quality Object Detection and Instance Segmentation},
ISSN={1939-3539},
url={http://dx.doi.org/10.1109/tpami.2019.2956516},
DOI={10.1109/tpami.2019.2956516},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
publisher={Institute of Electrical and Electronics Engineers (IEEE)},
author={Cai, Zhaowei and Vasconcelos, Nuno},
year={2019},
pages={1–1}
}
```
| PaddleDetection/configs/cascade_rcnn/README.md/0 | {
"file_path": "PaddleDetection/configs/cascade_rcnn/README.md",
"repo_id": "PaddleDetection",
"token_count": 1548
} | 12 |
worker_num: 4
TrainReader:
inputs_def:
image_shape: [3, 512, 512]
sample_transforms:
- Decode: {}
- FlipWarpAffine: {keep_res: False, input_h: 512, input_w: 512, use_random: True}
- CenterRandColor: {}
- Lighting: {eigval: [0.2141788, 0.01817699, 0.00341571], eigvec: [[-0.58752847, -0.69563484, 0.41340352], [-0.5832747, 0.00994535, -0.81221408], [-0.56089297, 0.71832671, 0.41158938]]}
- NormalizeImage: {mean: [0.40789655, 0.44719303, 0.47026116], std: [0.2886383 , 0.27408165, 0.27809834], is_scale: False}
- Permute: {}
- Gt2CenterNetTarget: {down_ratio: 4, max_objs: 128}
batch_size: 16
shuffle: True
drop_last: True
use_shared_memory: True
EvalReader:
sample_transforms:
- Decode: {}
- WarpAffine: {keep_res: True, input_h: 512, input_w: 512}
- NormalizeImage: {mean: [0.40789655, 0.44719303, 0.47026116], std: [0.2886383 , 0.27408165, 0.27809834]}
- Permute: {}
batch_size: 1
TestReader:
inputs_def:
image_shape: [3, 512, 512]
sample_transforms:
- Decode: {}
- WarpAffine: {keep_res: True, input_h: 512, input_w: 512}
- NormalizeImage: {mean: [0.40789655, 0.44719303, 0.47026116], std: [0.2886383 , 0.27408165, 0.27809834], is_scale: True}
- Permute: {}
batch_size: 1
| PaddleDetection/configs/centernet/_base_/centernet_reader.yml/0 | {
"file_path": "PaddleDetection/configs/centernet/_base_/centernet_reader.yml",
"repo_id": "PaddleDetection",
"token_count": 588
} | 13 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'../yolox/_base_/yolox_cspdarknet.yml',
'../yolox/_base_/yolox_reader.yml'
]
depth_mult: 0.33
width_mult: 0.50
log_iter: 100
snapshot_epoch: 5
weights: output/yolox_convnext_s_36e_coco/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/convnext_tiny_22k_224.pdparams
YOLOX:
backbone: ConvNeXt
neck: YOLOCSPPAN
head: YOLOXHead
size_stride: 32
size_range: [15, 25] # multi-scale range [480*480 ~ 800*800]
ConvNeXt:
arch: 'tiny'
drop_path_rate: 0.4
layer_scale_init_value: 1.0
return_idx: [1, 2, 3]
TrainReader:
batch_size: 8
mosaic_epoch: 30
YOLOXHead:
l1_epoch: 30
nms:
name: MultiClassNMS
nms_top_k: 10000
keep_top_k: 1000
score_threshold: 0.001
nms_threshold: 0.65
epoch: 36
LearningRate:
base_lr: 0.0002
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [36]
use_warmup: false
OptimizerBuilder:
regularizer: false
optimizer:
type: AdamW
weight_decay: 0.0005
| PaddleDetection/configs/convnext/yolox_convnext_s_36e_coco.yml/0 | {
"file_path": "PaddleDetection/configs/convnext/yolox_convnext_s_36e_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 491
} | 14 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'_base_/deformable_optimizer_1x.yml',
'_base_/deformable_detr_r50.yml',
'_base_/deformable_detr_reader.yml',
]
weights: output/deformable_detr_r50_1x_coco/model_final
find_unused_parameters: True
| PaddleDetection/configs/deformable_detr/deformable_detr_r50_1x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/deformable_detr/deformable_detr_r50_1x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 128
} | 15 |
# 人脸检测模型
## 简介
`face_detection`中提供高效、高速的人脸检测解决方案,包括最先进的模型和经典模型。

## 模型库
#### WIDER-FACE数据集上的mAP
| 网络结构 | 输入尺寸 | 图片个数/GPU | 学习率策略 | Easy/Medium/Hard Set | 预测时延(SD855)| 模型大小(MB) | 下载 | 配置文件 |
|:------------:|:--------:|:----:|:-------:|:-------:|:---------:|:----------:|:---------:|:--------:|
| BlazeFace | 640 | 8 | 1000e | 0.885 / 0.855 / 0.731 | - | 0.472 |[下载链接](https://paddledet.bj.bcebos.com/models/blazeface_1000e.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/face_detection/blazeface_1000e.yml) |
| BlazeFace-FPN-SSH | 640 | 8 | 1000e | 0.907 / 0.883 / 0.793 | - | 0.479 |[下载链接](https://paddledet.bj.bcebos.com/models/blazeface_fpn_ssh_1000e.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/face_detection/blazeface_fpn_ssh_1000e.yml) |
**注意:**
- 我们使用多尺度评估策略得到`Easy/Medium/Hard Set`里的mAP。具体细节请参考[在WIDER-FACE数据集上评估](#在WIDER-FACE数据集上评估)。
## 快速开始
### 数据准备
我们使用[WIDER-FACE数据集](http://shuoyang1213.me/WIDERFACE/)进行训练和模型测试,官方网站提供了详细的数据介绍。
- WIDER-Face数据源:
使用如下目录结构加载`wider_face`类型的数据集:
```
dataset/wider_face/
├── wider_face_split
│ ├── wider_face_train_bbx_gt.txt
│ ├── wider_face_val_bbx_gt.txt
├── WIDER_train
│ ├── images
│ │ ├── 0--Parade
│ │ │ ├── 0_Parade_marchingband_1_100.jpg
│ │ │ ├── 0_Parade_marchingband_1_381.jpg
│ │ │ │ ...
│ │ ├── 10--People_Marching
│ │ │ ...
├── WIDER_val
│ ├── images
│ │ ├── 0--Parade
│ │ │ ├── 0_Parade_marchingband_1_1004.jpg
│ │ │ ├── 0_Parade_marchingband_1_1045.jpg
│ │ │ │ ...
│ │ ├── 10--People_Marching
│ │ │ ...
```
- 手动下载数据集:
要下载WIDER-FACE数据集,请运行以下命令:
```
cd dataset/wider_face && ./download_wider_face.sh
```
### 参数配置
基础模型的配置可以参考`configs/face_detection/_base_/blazeface.yml`;
改进模型增加FPN和SSH的neck结构,配置文件可以参考`configs/face_detection/_base_/blazeface_fpn.yml`,可以根据需求配置FPN和SSH,具体如下:
```yaml
BlazeNet:
blaze_filters: [[24, 24], [24, 24], [24, 48, 2], [48, 48], [48, 48]]
double_blaze_filters: [[48, 24, 96, 2], [96, 24, 96], [96, 24, 96],
[96, 24, 96, 2], [96, 24, 96], [96, 24, 96]]
act: hard_swish #配置backbone中BlazeBlock的激活函数,基础模型为relu,增加FPN和SSH时需使用hard_swish
BlazeNeck:
neck_type : fpn_ssh #可选only_fpn、only_ssh和fpn_ssh
in_channel: [96,96]
```
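如需确认合并 `_BASE_` 之后实际生效的 neck 配置,可以用如下示意脚本读取配置文件并打印相关字段(路径为示例,键名以实际配置文件为准):
```python
from ppdet.core.workspace import load_config
cfg = load_config('configs/face_detection/blazeface_fpn_ssh_1000e.yml')
# 确认 neck_type 是否为 fpn_ssh,以及 backbone 激活函数是否为 hard_swish
print(cfg['BlazeNeck'])
print(cfg['BlazeNet']['act'])
```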
### 训练与评估
训练流程与评估流程方法与其他算法一致,请参考[GETTING_STARTED_cn.md](../../docs/tutorials/GETTING_STARTED_cn.md)。
**注意:** 人脸检测模型目前不支持边训练边评估。
#### 在WIDER-FACE数据集上评估
- 步骤一:评估并生成结果文件:
```shell
python -u tools/eval.py -c configs/face_detection/blazeface_1000e.yml \
-o weights=output/blazeface_1000e/model_final \
multi_scale=True
```
设置`multi_scale=True`进行多尺度评估,评估完成后,将在`output/pred`中生成txt格式的测试结果。
- 步骤二:下载官方评估脚本和Ground Truth文件:
```
wget http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/eval_script/eval_tools.zip
unzip eval_tools.zip && rm -f eval_tools.zip
```
- 步骤三:开始评估
方法一:python评估:
```
git clone https://github.com/wondervictor/WiderFace-Evaluation.git
cd WiderFace-Evaluation
# 编译
python3 setup.py build_ext --inplace
# 开始评估
python3 evaluation.py -p /path/to/PaddleDetection/output/pred -g /path/to/eval_tools/ground_truth
```
方法二:MatLab评估:
```
# 在`eval_tools/wider_eval.m`中修改保存结果路径和绘制曲线的名称:
pred_dir = './pred';
legend_name = 'Paddle-BlazeFace';
`wider_eval.m` 是评估模块的主要执行程序。运行命令如下:
matlab -nodesktop -nosplash -nojvm -r "run wider_eval.m;quit;"
```
### Python脚本预测
为了支持二次开发,这里提供通过Python脚本使用Paddle Detection whl包来进行预测的示例。
```python
import cv2
import paddle
import numpy as np
from ppdet.core.workspace import load_config
from ppdet.engine import Trainer
from ppdet.metrics import get_infer_results
from ppdet.data.transform.operators import NormalizeImage, Permute
if __name__ == '__main__':
# 准备基础的参数
config_path = 'PaddleDetection/configs/face_detection/blazeface_1000e.yml'
cfg = load_config(config_path)
weight_path = 'PaddleDetection/output/blazeface_1000e.pdparams'
infer_img_path = 'PaddleDetection/demo/hrnet_demo.jpg'
cfg.weights = weight_path
bbox_thre = 0.8
paddle.set_device('gpu')
# 创建所需的类
trainer = Trainer(cfg, mode='test')
trainer.load_weights(cfg.weights)
trainer.model.eval()
normaler = NormalizeImage(mean=[123, 117, 104], std=[127.502231, 127.502231, 127.502231], is_scale=False)
permuter = Permute()
# 进行图片读取
im = cv2.imread(infer_img_path)
im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
# 准备数据字典
data_dict = {'image': im}
data_dict = normaler(data_dict)
data_dict = permuter(data_dict)
h, w, c = im.shape
data_dict['im_id'] = paddle.Tensor(np.array([[0]]))
data_dict['im_shape'] = paddle.Tensor(np.array([[h, w]], dtype=np.float32))
data_dict['scale_factor'] = paddle.Tensor(np.array([[1., 1.]], dtype=np.float32))
data_dict['image'] = paddle.Tensor(data_dict['image'].reshape((1, c, h, w)))
data_dict['curr_iter'] = paddle.Tensor(np.array([0]))
# 进行预测
outs = trainer.model(data_dict)
# 对预测的数据进行后处理得到最终的bbox信息
for key in ['im_shape', 'scale_factor', 'im_id']:
outs[key] = data_dict[key]
for key, value in outs.items():
outs[key] = value.numpy()
clsid2catid, catid2name = {0: 'face'}, {0: 0}
batch_res = get_infer_results(outs, clsid2catid)
bbox = [sub_dict for sub_dict in batch_res['bbox'] if sub_dict['score'] > bbox_thre]
print(bbox)
```
## Citations
```
@article{bazarevsky2019blazeface,
title={BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs},
author={Valentin Bazarevsky and Yury Kartynnik and Andrey Vakunov and Karthik Raveendran and Matthias Grundmann},
year={2019},
eprint={1907.05047},
    archivePrefix={arXiv}
}
```
| PaddleDetection/configs/face_detection/README.md/0 | {
"file_path": "PaddleDetection/configs/face_detection/README.md",
"repo_id": "PaddleDetection",
"token_count": 3644
} | 16 |
epoch: 12
LearningRate:
base_lr: 0.0001
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [8, 11]
- !LinearWarmup
start_factor: 0.1
steps: 1000
OptimizerBuilder:
clip_grad_by_norm: 1.0
optimizer:
type: AdamW
weight_decay: 0.05
param_groups:
- params: ['absolute_pos_embed', 'relative_position_bias_table', 'norm']
weight_decay: 0.0
| PaddleDetection/configs/faster_rcnn/_base_/optimizer_swin_1x.yml/0 | {
"file_path": "PaddleDetection/configs/faster_rcnn/_base_/optimizer_swin_1x.yml",
"repo_id": "PaddleDetection",
"token_count": 179
} | 17 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'_base_/gfl_r50_fpn.yml',
'_base_/optimizer_1x.yml',
'_base_/gfl_reader.yml',
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet101_vd_pretrained.pdparams
weights: output/gfl_r101vd_fpn_mstrain_2x_coco/model_final
find_unused_parameters: True
use_ema: true
ema_decay: 0.9998
ResNet:
depth: 101
variant: d
norm_type: bn
freeze_at: 0
return_idx: [1,2,3]
num_stages: 4
epoch: 24
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [16, 22]
- !LinearWarmup
start_factor: 0.001
steps: 500
TrainReader:
sample_transforms:
- Decode: {}
- RandomResize: {target_size: [[480, 1333], [512, 1333], [544, 1333], [576, 1333], [608, 1333], [640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True}
- RandomFlip: {prob: 0.5}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
- Gt2GFLTarget:
downsample_ratios: [8, 16, 32, 64, 128]
grid_cell_scale: 8
| PaddleDetection/configs/gfl/gfl_r101vd_fpn_mstrain_2x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/gfl/gfl_r101vd_fpn_mstrain_2x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 558
} | 18 |
_BASE_: [
'mask_rcnn_r50_fpn_1x_coco.yml',
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNeXt101_vd_64x4d_pretrained.pdparams
weights: output/mask_rcnn_x101_vd_64x4d_fpn_1x_coco/model_final
ResNet:
# for ResNeXt: groups, base_width, base_channels
depth: 101
variant: d
groups: 64
base_width: 4
norm_type: bn
freeze_at: 0
return_idx: [0,1,2,3]
num_stages: 4
epoch: 12
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [8, 11]
- !LinearWarmup
start_factor: 0.1
steps: 1000
| PaddleDetection/configs/mask_rcnn/mask_rcnn_x101_vd_64x4d_fpn_1x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/mask_rcnn/mask_rcnn_x101_vd_64x4d_fpn_1x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 275
} | 19 |
input_height: &input_height 800
input_width: &input_width 1440
input_size: &input_size [*input_height, *input_width]
worker_num: 4
TrainReader:
sample_transforms:
- Decode: {}
- Mosaic:
prob: 1.0
input_dim: *input_size
degrees: [-10, 10]
scale: [0.1, 2.0]
shear: [-2, 2]
translate: [-0.1, 0.1]
enable_mixup: True
mixup_prob: 1.0
mixup_scale: [0.5, 1.5]
- AugmentHSV: {is_bgr: False, hgain: 5, sgain: 30, vgain: 30}
- PadResize: {target_size: *input_size}
- RandomFlip: {}
batch_transforms:
- Permute: {}
batch_size: 6
shuffle: True
drop_last: True
collate_batch: False
mosaic_epoch: 20
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *input_size, keep_ratio: True}
- Pad: {size: *input_size, fill_value: [114., 114., 114.]}
- Permute: {}
batch_size: 8
TestReader:
inputs_def:
image_shape: [3, 800, 1440]
sample_transforms:
- Decode: {}
- Resize: {target_size: *input_size, keep_ratio: True}
- Pad: {size: *input_size, fill_value: [114., 114., 114.]}
- Permute: {}
batch_size: 1
# add MOTReader for MOT evaluation and inference, note batch_size should be 1 in MOT
EvalMOTReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *input_size, keep_ratio: True}
- Pad: {size: *input_size, fill_value: [114., 114., 114.]}
- Permute: {}
batch_size: 1
TestMOTReader:
inputs_def:
image_shape: [3, 800, 1440]
sample_transforms:
- Decode: {}
- Resize: {target_size: *input_size, keep_ratio: True}
- Pad: {size: *input_size, fill_value: [114., 114., 114.]}
- Permute: {}
batch_size: 1
| PaddleDetection/configs/mot/bytetrack/_base_/yolox_mot_reader_800x1440.yml/0 | {
"file_path": "PaddleDetection/configs/mot/bytetrack/_base_/yolox_mot_reader_800x1440.yml",
"repo_id": "PaddleDetection",
"token_count": 768
} | 20 |
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/crowdhuman_centertrack.pdparams
architecture: CenterTrack
for_mot: True
mot_metric: True
### model
CenterTrack:
detector: CenterNet
plugin_head: CenterTrackHead
tracker: CenterTracker
### CenterTrack.detector
CenterNet:
backbone: DLA
neck: CenterNetDLAFPN
head: CenterNetHead
post_process: CenterNetPostProcess
for_mot: True # Note
DLA:
depth: 34
pre_img: True # Note
pre_hm: True # Note
CenterNetDLAFPN:
down_ratio: 4
last_level: 5
out_channel: 0
dcn_v2: True
CenterNetHead:
head_planes: 256
prior_bias: -4.6 # Note
regress_ltrb: False
size_loss: 'L1'
loss_weight: {'heatmap': 1.0, 'size': 0.1, 'offset': 1.0}
CenterNetPostProcess:
max_per_img: 100 # top-K
regress_ltrb: False
### CenterTrack.plugin_head
CenterTrackHead:
head_planes: 256
task: tracking
loss_weight: {'tracking': 1.0, 'ltrb_amodal': 0.1}
add_ltrb_amodal: True
### CenterTrack.tracker
CenterTracker:
min_box_area: -1
vertical_ratio: -1
track_thresh: 0.4
pre_thresh: 0.5
| PaddleDetection/configs/mot/centertrack/_base_/centertrack_dla34.yml/0 | {
"file_path": "PaddleDetection/configs/mot/centertrack/_base_/centertrack_dla34.yml",
"repo_id": "PaddleDetection",
"token_count": 430
} | 21 |
简体中文 | [English](README.md)
# DeepSORT的检测器
## 简介
[DeepSORT](https://arxiv.org/abs/1812.00442)(Deep Cosine Metric Learning SORT) 由检测器和ReID模型串联组合而成,此处提供了几个常用检测器的配置作为参考。由于训练数据集、输入尺度、训练epoch数、NMS阈值设置等的不同均会导致模型精度和性能的差异,请自行根据需求进行适配。
## 模型库
### 在MOT17-half val数据集上的检测结果
| 骨架网络 | 网络类型 | 输入尺度 | 学习率策略 |推理时间(fps) | Box AP | 下载 | 配置文件 |
| :-------------- | :------------- | :--------: | :---------: | :-----------: | :-----: | :------: | :-----: |
| DarkNet-53 | YOLOv3 | 608X608 | 40e | ---- | 42.7 | [下载链接](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolov3_darknet53_40e_608x608_mot17half.pdparams) | [配置文件](./yolov3_darknet53_40e_608x608_mot17half.yml) |
| ResNet50-vd | PPYOLOv2 | 640x640 | 365e | ---- | 46.8 | [下载链接](https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyolov2_r50vd_dcn_365e_640x640_mot17half.pdparams) | [配置文件](./ppyolov2_r50vd_dcn_365e_640x640_mot17half.yml) |
| CSPResNet | PPYOLOe | 640x640 | 36e | ---- | 52.9 | [下载链接](https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyoloe_crn_l_36e_640x640_mot17half.pdparams) | [配置文件](./ppyoloe_crn_l_36e_640x640_mot17half.yml) |
**注意:**
- 以上模型均可采用**MOT17-half train**数据集训练,数据集可以从[此链接](https://bj.bcebos.com/v1/paddledet/data/mot/MOT17.zip)下载。
- **MOT17-half train**是MOT17的train序列(共7个)每个视频的前一半帧的图片和标注组成的数据集,而为了验证精度可以都用**MOT17-half val**数据集去评估,它是每个视频的后一半帧组成的,数据集可以从[此链接](https://paddledet.bj.bcebos.com/data/mot/mot17half/annotations.zip)下载,并解压放在`dataset/mot/MOT17/images/`文件夹下。
- 此处的YOLOv3与`configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml`使用相同的pedestrian数据集训练,该数据集暂未开放。
- 行人跟踪请使用行人检测器结合行人ReID模型。车辆跟踪请使用车辆检测器结合车辆ReID模型。
- 用于DeepSORT跟踪时需要高质量的检出框,因此这些模型的NMS阈值等后处理设置会与纯检测任务的设置不同。
## 快速开始
通过如下命令一键式启动训练和评估
```bash
job_name=ppyoloe_crn_l_36e_640x640_mot17half
config=configs/mot/deepsort/detector/${job_name}.yml
log_dir=log_dir/${job_name}
# 1. training
python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp
# 2. evaluation
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c ${config} -o weights=https://paddledet.bj.bcebos.com/models/mot/${job_name}.pdparams
```
| PaddleDetection/configs/mot/deepsort/detector/README_cn.md/0 | {
"file_path": "PaddleDetection/configs/mot/deepsort/detector/README_cn.md",
"repo_id": "PaddleDetection",
"token_count": 1844
} | 22 |
worker_num: 4
TrainReader:
inputs_def:
image_shape: [3, 608, 1088]
sample_transforms:
- Decode: {}
- RGBReverse: {}
- AugmentHSV: {}
- LetterBoxResize: {target_size: [608, 1088]}
- MOTRandomAffine: {reject_outside: False}
- RandomFlip: {}
- BboxXYXY2XYWH: {}
- NormalizeBox: {}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1]}
- RGBReverse: {}
- Permute: {}
batch_transforms:
- Gt2FairMOTTarget: {}
batch_size: 6
shuffle: True
drop_last: True
use_shared_memory: True
EvalMOTReader:
sample_transforms:
- Decode: {}
- LetterBoxResize: {target_size: [608, 1088]}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_size: 1
TestMOTReader:
inputs_def:
image_shape: [3, 608, 1088]
sample_transforms:
- Decode: {}
- LetterBoxResize: {target_size: [608, 1088]}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_size: 1
| PaddleDetection/configs/mot/fairmot/_base_/fairmot_reader_1088x608.yml/0 | {
"file_path": "PaddleDetection/configs/mot/fairmot/_base_/fairmot_reader_1088x608.yml",
"repo_id": "PaddleDetection",
"token_count": 454
} | 23 |
_BASE_: [
'../fairmot/fairmot_dla34_30e_1088x608.yml',
'../../datasets/mcmot.yml'
]
metric: MCMOT
num_classes: 4
# for MCMOT training
TrainDataset:
!MCMOTDataSet
dataset_dir: dataset/mot
image_lists: ['visdrone_mcmot_vehicle.train']
data_fields: ['image', 'gt_bbox', 'gt_class', 'gt_ide']
label_list: label_list.txt
EvalMOTDataset:
!MOTImageFolder
dataset_dir: dataset/mot
data_root: visdrone_mcmot_vehicle/images/val
keep_ori_im: False # set True if save visualization images or video, or used in DeepSORT
anno_path: dataset/mot/visdrone_mcmot_vehicle/label_list.txt
# for MCMOT video inference
TestMOTDataset:
!MOTImageFolder
dataset_dir: dataset/mot
keep_ori_im: True # set True if save visualization images or video
anno_path: dataset/mot/visdrone_mcmot_vehicle/label_list.txt
pretrain_weights: https://paddledet.bj.bcebos.com/models/centernet_dla34_140e_coco.pdparams
FairMOT:
detector: CenterNet
reid: FairMOTEmbeddingHead
loss: FairMOTLoss
tracker: JDETracker # multi-class tracker
CenterNetHead:
regress_ltrb: False
CenterNetPostProcess:
regress_ltrb: False
max_per_img: 200
JDETracker:
min_box_area: 0
vertical_ratio: 0 # no need to filter bboxes according to w/h
use_byte: True
match_thres: 0.8
conf_thres: 0.4
low_conf_thres: 0.2
weights: output/mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker/model_final
epoch: 30
LearningRate:
base_lr: 0.0005
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [10, 20]
use_warmup: False
OptimizerBuilder:
optimizer:
type: Adam
regularizer: NULL
| PaddleDetection/configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml/0 | {
"file_path": "PaddleDetection/configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml",
"repo_id": "PaddleDetection",
"token_count": 677
} | 24 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import os.path as osp
import numpy as np
import argparse
def mkdirs(d):
if not osp.exists(d):
os.makedirs(d)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='BDD100K to MOT format')
parser.add_argument(
"--mot_data", default='./bdd100k')
parser.add_argument("--phase", default='train')
args = parser.parse_args()
MOT_data = args.mot_data
phase = args.phase
seq_root = osp.join(MOT_data, 'bdd100kmot_vehicle', 'images', phase)
label_root = osp.join(MOT_data, 'bdd100kmot_vehicle', 'labels_with_ids',
phase)
mkdirs(label_root)
seqs = [s for s in os.listdir(seq_root)]
tid_curr = 0
tid_last = -1
os.system(f'rm -r {MOT_data}/bdd100kmot_vehicle/labels_with_ids')
for seq in seqs:
print('seq => ', seq)
seq_info = open(osp.join(seq_root, seq, 'seqinfo.ini')).read()
seq_width = int(seq_info[seq_info.find('imWidth=') + 8:seq_info.find(
'\nimHeight')])
seq_height = int(seq_info[seq_info.find('imHeight=') + 9:seq_info.find(
'\nimExt')])
gt_txt = osp.join(seq_root, seq, 'gt', 'gt.txt')
gt = np.loadtxt(gt_txt, dtype=np.float64, delimiter=',')
seq_label_root = osp.join(label_root, seq, 'img1')
mkdirs(seq_label_root)
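        # Each row of gt.txt follows the MOT layout:
        # frame_id, track_id, x, y, w, h, mark, label, _  (x, y is the top-left corner).
        # Below, each box is converted to normalized center format (cx/W, cy/H, w/W, h/H)
        # and every object identity is remapped to a globally unique track id.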
for fid, tid, x, y, w, h, mark, label, _ in gt:
fid = int(fid)
tid = int(tid)
if not tid == tid_last:
tid_curr += 1
tid_last = tid
x += w / 2
y += h / 2
label_fpath = osp.join(seq_label_root,
seq + '-' + '{:07d}.txt'.format(fid))
label_str = '0 {:d} {:.6f} {:.6f} {:.6f} {:.6f}\n'.format(
tid_curr, x / seq_width, y / seq_height, w / seq_width,
h / seq_height)
with open(label_fpath, 'a') as f:
f.write(label_str)
| PaddleDetection/configs/mot/vehicle/tools/bdd100kmot/gen_labels_MOT.py/0 | {
"file_path": "PaddleDetection/configs/mot/vehicle/tools/bdd100kmot/gen_labels_MOT.py",
"repo_id": "PaddleDetection",
"token_count": 1209
} | 25 |
_BASE_: [
'../../../datasets/coco_detection.yml',
'../../../runtime.yml',
'../_base_/picodet_esnet.yml',
'../_base_/optimizer_300e.yml',
'../_base_/picodet_416_reader.yml',
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/MobileNetV3_large_x1_0_ssld_pretrained.pdparams
weights: output/picodet_mobilenetv3_large_1x_416_coco/model_final
find_unused_parameters: True
use_ema: true
cycle_epoch: 40
snapshot_epoch: 10
epoch: 180
PicoDet:
backbone: MobileNetV3
neck: CSPPAN
head: PicoHead
MobileNetV3:
model_name: large
scale: 1.0
with_extra_blocks: false
extra_block_filters: []
feature_maps: [7, 13, 16]
TrainReader:
batch_size: 56
LearningRate:
base_lr: 0.3
schedulers:
- !CosineDecay
max_epochs: 300
- !LinearWarmup
start_factor: 0.1
steps: 300
| PaddleDetection/configs/picodet/legacy_model/more_config/picodet_mobilenetv3_large_1x_416_coco.yml/0 | {
"file_path": "PaddleDetection/configs/picodet/legacy_model/more_config/picodet_mobilenetv3_large_1x_416_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 365
} | 26 |
[English](README.md) | 简体中文
# 特色垂类检测模型
我们提供了针对不同场景的基于PaddlePaddle的检测模型,用户可以下载模型进行使用。
| 任务 | 算法 | 精度(Box AP) | 下载 | 配置文件 |
|:---------------------|:---------:|:------:| :---------------------------------------------------------------------------------: | :------:|
| 行人检测 | YOLOv3 | 51.8 | [下载链接](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml) |
## 行人检测(Pedestrian Detection)
行人检测的主要应用有智能监控。在监控场景中,大多是从公共区域的监控摄像头视角拍摄行人,获取图像后再进行行人检测。
### 1. 模型结构
Backbone为Darknet53的YOLOv3。
### 2. 训练参数配置
PaddleDetection提供了使用COCO数据集对YOLOv3进行训练的参数配置文件[yolov3_darknet53_270e_coco.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml),与之相比,在进行行人检测的模型训练时,我们对以下参数进行了修改:
* num_classes: 1
* dataset_dir: dataset/pedestrian
### 3. 精度指标
模型在我们针对监控场景的内部数据上精度指标为:
IOU=.5时的AP为 0.792。
IOU=.5-.95时的AP为 0.518。
### 4. 预测
用户可以使用我们训练好的模型进行行人检测:
```
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml \
-o weights=https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams \
--infer_dir configs/pphuman/pedestrian_yolov3/demo \
--draw_threshold 0.3 \
--output_dir configs/pphuman/pedestrian_yolov3/demo/output
```
预测结果示例:


| PaddleDetection/configs/pphuman/pedestrian_yolov3/README_cn.md/0 | {
"file_path": "PaddleDetection/configs/pphuman/pedestrian_yolov3/README_cn.md",
"repo_id": "PaddleDetection",
"token_count": 1313
} | 27 |
epoch: 405
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
- 243
- 324
- !LinearWarmup
start_factor: 0.
steps: 4000
OptimizerBuilder:
clip_grad_by_norm: 35.
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
| PaddleDetection/configs/ppyolo/_base_/optimizer_1x.yml/0 | {
"file_path": "PaddleDetection/configs/ppyolo/_base_/optimizer_1x.yml",
"repo_id": "PaddleDetection",
"token_count": 151
} | 28 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'./_base_/ppyolo_r50vd_dcn.yml',
'./_base_/optimizer_1x.yml',
'./_base_/ppyolo_reader.yml',
]
snapshot_epoch: 16
weights: output/ppyolo_r50vd_dcn_1x_coco/model_final
| PaddleDetection/configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 122
} | 29 |
_BASE_: [
'./_base_/exdark_detection.yml',
'../../runtime.yml',
'../_base_/optimizer_80e.yml',
'../_base_/ppyoloe_plus_crn.yml',
'../_base_/ppyoloe_plus_reader.yml',
]
log_iter: 100
snapshot_epoch: 5
weights: output/ppyoloe_plus_crn_m_80e_coco_pretrained_exdark/model_final
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/ppyoloe_plus_crn_m_80e_coco.pdparams
depth_mult: 0.67
width_mult: 0.75
| PaddleDetection/configs/ppyoloe/application/ppyoloe_plus_crn_m_80e_coco_pretrained_exdark.yml/0 | {
"file_path": "PaddleDetection/configs/ppyoloe/application/ppyoloe_plus_crn_m_80e_coco_pretrained_exdark.yml",
"repo_id": "PaddleDetection",
"token_count": 200
} | 30 |
_BASE_: [
'../../datasets/objects365_detection.yml',
'../../runtime.yml',
'../_base_/optimizer_60e.yml',
'../_base_/ppyoloe_plus_crn.yml',
'../_base_/ppyoloe_plus_reader.yml',
]
log_iter: 100
snapshot_epoch: 5
weights: output/ppyoloe_plus_crn_s_60e_objects365/model_final
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/CSPResNetb_s_pretrained.pdparams
CSPResNet:
use_alpha: False
PPYOLOEHead:
static_assigner_epoch: 20
depth_mult: 0.33
width_mult: 0.50
| PaddleDetection/configs/ppyoloe/objects365/ppyoloe_plus_crn_s_60e_objects365.yml/0 | {
"file_path": "PaddleDetection/configs/ppyoloe/objects365/ppyoloe_plus_crn_s_60e_objects365.yml",
"repo_id": "PaddleDetection",
"token_count": 223
} | 31 |
# PP-YOLOE
## 模型库
### VOC数据集模型库
| 模型 | Epoch | GPU个数 | 每GPU图片个数 | 骨干网络 | 输入尺寸 | Box AP<sup>0.5 | Params(M) | FLOPs(G) | V100 FP32(FPS) | V100 TensorRT FP16(FPS) | 模型下载 | 配置文件 |
|:---------------:|:-----:|:-----------:|:-----------:|:---------:|:----------:|:--------------:|:---------:|:---------:|:-------------:|:-----------------------:| :-------: |:--------:|
| PP-YOLOE+_s | 30 | 8 | 8 | cspresnet-s | 640 | 86.7 | 7.93 | 17.36 | 208.3 | 333.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_30e_voc.pdparams) | [config](./ppyoloe_plus_crn_s_30e_voc.yml) |
| PP-YOLOE+_l | 30 | 8 | 8 | cspresnet-l | 640 | 89.0 | 52.20 | 110.07 | 78.1 | 149.2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_l_30e_voc.pdparams) | [config](./ppyoloe_plus_crn_l_30e_voc.yml) |
| PaddleDetection/configs/ppyoloe/voc/README_cn.md/0 | {
"file_path": "PaddleDetection/configs/ppyoloe/voc/README_cn.md",
"repo_id": "PaddleDetection",
"token_count": 544
} | 32 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'../faster_rcnn/_base_/optimizer_1x.yml',
'../faster_rcnn/_base_/faster_rcnn_r50_fpn.yml',
'../faster_rcnn/_base_/faster_fpn_reader.yml',
]
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/Res2Net50_26w_4s_pretrained.pdparams
weights: output/faster_rcnn_res2net50_vb_26w_4s_fpn_1x_coco/model_final
FasterRCNN:
backbone: Res2Net
neck: FPN
rpn_head: RPNHead
bbox_head: BBoxHead
# post process
bbox_post_process: BBoxPostProcess
Res2Net:
# index 0 stands for res2
depth: 50
width: 26
scales: 4
norm_type: bn
freeze_at: 0
return_idx: [0,1,2,3]
num_stages: 4
variant: b
TrainReader:
batch_size: 2
| PaddleDetection/configs/res2net/faster_rcnn_res2net50_vb_26w_4s_fpn_1x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/res2net/faster_rcnn_res2net50_vb_26w_4s_fpn_1x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 332
} | 33 |
worker_num: 4
image_height: &image_height 1024
image_width: &image_width 1024
image_size: &image_size [*image_height, *image_width]
TrainReader:
sample_transforms:
- Decode: {}
- Poly2Array: {}
- RandomRFlip: {}
- RandomRRotate: {angle_mode: 'value', angle: [0, 90, 180, -90]}
- RandomRRotate: {angle_mode: 'value', angle: [30, 60], rotate_prob: 0.5}
- RResize: {target_size: *image_size, keep_ratio: True, interp: 2}
- Poly2RBox: {filter_threshold: 2, filter_mode: 'edge', rbox_type: 'oc'}
batch_transforms:
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
- PadRGT: {}
- PadBatch: {pad_to_stride: 32}
batch_size: 4
shuffle: true
drop_last: true
use_shared_memory: true
collate_batch: true
EvalReader:
sample_transforms:
- Decode: {}
- Poly2Array: {}
- RResize: {target_size: *image_size, keep_ratio: True, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 2
collate_batch: false
TestReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: *image_size, keep_ratio: True, interp: 2}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 2
| PaddleDetection/configs/rotate/fcosr/_base_/fcosr_reader.yml/0 | {
"file_path": "PaddleDetection/configs/rotate/fcosr/_base_/fcosr_reader.yml",
"repo_id": "PaddleDetection",
"token_count": 644
} | 34 |
# DETRs Beat YOLOs on Real-time Object Detection
## 最新动态
- 发布RT-DETR-R50和RT-DETR-R101的代码和预训练模型
- 发布RT-DETR-L和RT-DETR-X的代码和预训练模型
- 发布RT-DETR-R50-m模型(scale模型的范例)
- 发布RT-DETR-R34模型
- 发布RT-DETR-R18模型
- 发布RT-DETR-Swin和RT-DETR-FocalNet模型
- 发布RTDETR Obj365预训练模型
## 简介
<!-- We propose a **R**eal-**T**ime **DE**tection **TR**ansformer (RT-DETR), the first real-time end-to-end object detector to our best knowledge. Specifically, we design an efficient hybrid encoder to efficiently process multi-scale features by decoupling the intra-scale interaction and cross-scale fusion, and propose IoU-aware query selection to improve the initialization of object queries. In addition, our proposed detector supports flexibly adjustment of the inference speed by using different decoder layers without the need for retraining, which facilitates the practical application of real-time object detectors. Our RT-DETR-L achieves 53.0% AP on COCO val2017 and 114 FPS on T4 GPU, while RT-DETR-X achieves 54.8% AP and 74 FPS, outperforming all YOLO detectors of the same scale in both speed and accuracy. Furthermore, our RT-DETR-R50 achieves 53.1% AP and 108 FPS, outperforming DINO-Deformable-DETR-R50 by 2.2% AP in accuracy and by about 21 times in FPS. -->
RT-DETR是第一个实时端到端目标检测器。具体而言,我们设计了一个高效的混合编码器,通过解耦尺度内交互和跨尺度融合来高效处理多尺度特征,并提出了IoU感知的查询选择机制,以优化解码器查询的初始化。此外,RT-DETR支持通过使用不同的解码器层来灵活调整推理速度,而不需要重新训练,这有助于实时目标检测器的实际应用。RT-DETR-L在COCO val2017上实现了53.0%的AP,在T4 GPU上实现了114FPS,RT-DETR-X实现了54.8%的AP和74FPS,在速度和精度方面都优于相同规模的所有YOLO检测器。RT-DETR-R50实现了53.1%的AP和108FPS,RT-DETR-R101实现了54.3%的AP和74FPS,在精度上超过了全部使用相同骨干网络的DETR检测器。
若要了解更多细节,请参考我们的论文[paper](https://arxiv.org/abs/2304.08069).
<div align="center">
<img src="https://github.com/PaddlePaddle/PaddleDetection/assets/17582080/196b0a10-d2e8-401c-9132-54b9126e0a33" width=500 />
</div>
## 基础模型
| Model | Epoch | Backbone | Input shape | $AP^{val}$ | $AP^{val}_{50}$| Params(M) | FLOPs(G) | T4 TensorRT FP16(FPS) | Pretrained Model | config |
|:--------------:|:-----:|:----------:| :-------:|:--------------------------:|:---------------------------:|:---------:|:--------:| :---------------------: |:------------------------------------------------------------------------------------:|:-------------------------------------------:|
| RT-DETR-R18 | 6x | ResNet-18 | 640 | 46.5 | 63.8 | 20 | 60 | 217 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r18vd_dec3_6x_coco.pdparams) | [config](./rtdetr_r18vd_6x_coco.yml)
| RT-DETR-R34 | 6x | ResNet-34 | 640 | 48.9 | 66.8 | 31 | 92 | 161 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r34vd_dec4_6x_coco.pdparams) | [config](./rtdetr_r34vd_6x_coco.yml)
| RT-DETR-R50-m | 6x | ResNet-50 | 640 | 51.3 | 69.6 | 36 | 100 | 145 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r50vd_m_6x_coco.pdparams) | [config](./rtdetr_r50vd_m_6x_coco.yml)
| RT-DETR-R50 | 6x | ResNet-50 | 640 | 53.1 | 71.3 | 42 | 136 | 108 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r50vd_6x_coco.pdparams) | [config](./rtdetr_r50vd_6x_coco.yml)
| RT-DETR-R101 | 6x | ResNet-101 | 640 | 54.3 | 72.7 | 76 | 259 | 74 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r101vd_6x_coco.pdparams) | [config](./rtdetr_r101vd_6x_coco.yml)
| RT-DETR-L | 6x | HGNetv2 | 640 | 53.0 | 71.6 | 32 | 110 | 114 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_hgnetv2_l_6x_coco.pdparams) | [config](rtdetr_hgnetv2_l_6x_coco.yml)
| RT-DETR-X | 6x | HGNetv2 | 640 | 54.8 | 73.1 | 67 | 234 | 74 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_hgnetv2_x_6x_coco.pdparams) | [config](rtdetr_hgnetv2_x_6x_coco.yml)
## 高精度模型
| Model | Epoch | backbone | input shape | $AP^{val}$ | $AP^{val}_{50}$ | Pretrained Model | config |
|:-----:|:-----:|:---------:| :---------:|:-----------:|:---------------:|:----------------:|:------:|
| RT-DETR-Swin | 3x | Swin_L_384 | 640 | 56.2 | 73.5 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_swin_L_384_3x_coco.pdparams) | [config](./rtdetr_swin_L_384_3x_coco.yml)
| RT-DETR-FocalNet | 3x | FocalNet_L_384 | 640 | 56.9 | 74.3 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_focalnet_L_384_3x_coco.pdparams) | [config](./rtdetr_focalnet_L_384_3x_coco.yml)
## Objects365预训练模型
| Model | Epoch | Dataset | Input shape | $AP^{val}$ | $AP^{val}_{50}$ | T4 TensorRT FP16(FPS) | Weight | Logs
|:---:|:---:|:---:| :---:|:---:|:---:|:---:|:---:|:---:|
RT-DETR-R18 | 1x | Objects365 | 640 | 22.9 | 31.2 | - | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r18vd_1x_objects365.pdparams) | [log](https://github.com/lyuwenyu/RT-DETR/issues/8)
RT-DETR-R18 | 5x | COCO + Objects365 | 640 | 49.2 | 66.6 | 217 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r18vd_5x_coco_objects365.pdparams) | [log](https://github.com/lyuwenyu/RT-DETR/issues/8)
RT-DETR-R50 | 1x | Objects365 | 640 | 35.1 | 46.2 | - | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r50vd_1x_objects365.pdparams) | [log](https://github.com/lyuwenyu/RT-DETR/issues/8)
RT-DETR-R50 | 2x | COCO + Objects365 | 640 | 55.3 | 73.4 | 108 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r50vd_2x_coco_objects365.pdparams) | [log](https://github.com/lyuwenyu/RT-DETR/issues/8)
RT-DETR-R101 | 1x | Objects365 | 640 | 36.8 | 48.3 | - | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r101vd_1x_objects365.pdparams) | [log](https://github.com/lyuwenyu/RT-DETR/issues/8)
RT-DETR-R101 | 2x | COCO + Objects365 | 640 | 56.2 | 74.5 | 74 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetr_r101vd_2x_coco_objects365.pdparams) | [log](https://github.com/lyuwenyu/RT-DETR/issues/8)
**Notes:**
- `COCO + Objects365` 代表使用Objects365预训练权重,在COCO上finetune的结果
**注意事项:**
- RT-DETR 基础模型均使用4个GPU训练。
- RT-DETR 在COCO train2017上训练,并在val2017上评估。
- 高精度模型RT-DETR-Swin和RT-DETR-FocalNet使用8个GPU训练,显存需求较高。
## 快速开始
<details open>
<summary>依赖包:</summary>
- PaddlePaddle >= 2.4.1
</details>
<details>
<summary>安装</summary>
- [安装指导文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md)
</details>
<details>
<summary>训练&评估</summary>
- 单卡GPU上训练:
```shell
# training on single-GPU
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/rtdetr/rtdetr_r50vd_6x_coco.yml --eval
```
- 多卡GPU上训练:
```shell
# training on multi-GPU
export CUDA_VISIBLE_DEVICES=0,1,2,3
python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/rtdetr/rtdetr_r50vd_6x_coco.yml --fleet --eval
```
- 评估:
```shell
python tools/eval.py -c configs/rtdetr/rtdetr_r50vd_6x_coco.yml \
-o weights=https://bj.bcebos.com/v1/paddledet/models/rtdetr_r50vd_6x_coco.pdparams
```
- 测试:
```shell
python tools/infer.py -c configs/rtdetr/rtdetr_r50vd_6x_coco.yml \
-o weights=https://bj.bcebos.com/v1/paddledet/models/rtdetr_r50vd_6x_coco.pdparams \
--infer_img=./demo/000000570688.jpg
```
详情请参考[快速开始文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md).
</details>
## 部署
<details open>
<summary>1. 导出模型 </summary>
```shell
cd PaddleDetection
python tools/export_model.py -c configs/rtdetr/rtdetr_r50vd_6x_coco.yml \
-o weights=https://bj.bcebos.com/v1/paddledet/models/rtdetr_r50vd_6x_coco.pdparams trt=True \
--output_dir=output_inference
```
</details>
<details>
<summary>2. 转换模型至ONNX </summary>
- 安装[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) 和 ONNX
```shell
pip install onnx==1.13.0
pip install paddle2onnx==1.0.5
```
- 转换模型:
```shell
paddle2onnx --model_dir=./output_inference/rtdetr_r50vd_6x_coco/ \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--opset_version 16 \
--save_file rtdetr_r50vd_6x_coco.onnx
```
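(可选)转换完成后,可以用下面这段仅作示意的Python片段检查ONNX模型能否正常前向。这里只喂入随机数据做连通性检查,输入名与形状从模型自身读取;真实推理的预处理请以导出的`infer_cfg.yml`为准:
```python
# 仅作示意:用 onnxruntime 做一次随机数据前向,验证导出的 ONNX 模型可用
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('rtdetr_r50vd_6x_coco.onnx',
                            providers=['CPUExecutionProvider'])
feed = {}
for inp in sess.get_inputs():
    # 动态维度(字符串/None/-1)用 1 或 640 占位,静态维度保持不变
    shape = [d if isinstance(d, int) and d > 0 else (640 if i >= 2 else 1)
             for i, d in enumerate(inp.shape)]
    feed[inp.name] = np.random.rand(*shape).astype(np.float32)
outputs = sess.run(None, feed)
for out, meta in zip(outputs, sess.get_outputs()):
    print(meta.name, out.shape)
```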
</details>
<details>
<summary>3. 转换成TensorRT(可选) </summary>
- 确保TensorRT的版本>=8.5.1
- TRT推理可以参考[RT-DETR](https://github.com/lyuwenyu/RT-DETR)的部分代码或者其他网络资源
```shell
trtexec --onnx=./rtdetr_r50vd_6x_coco.onnx \
--workspace=4096 \
--shapes=image:1x3x640x640 \
--saveEngine=rtdetr_r50vd_6x_coco.trt \
--avgRuns=100 \
--fp16
```
</details>
## 量化压缩
详细步骤请参考:[RT-DETR自动化量化压缩](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/deploy/auto_compression#rt-detr)
| 模型 | Base mAP | ACT量化mAP | TRT-FP32 | TRT-FP16 | TRT-INT8 | 配置文件 | 量化模型 |
| :---------------- | :------- | :--------: | :------: | :------: | :--------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
| RT-DETR-R50 | 53.1 | 53.0 | 32.05ms | 9.12ms | **6.96ms** | [config](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/example/auto_compression/detection/configs/rtdetr_r50vd_qat_dis.yaml) | [Model](https://bj.bcebos.com/v1/paddle-slim-models/act/rtdetr_r50vd_6x_coco_quant.tar) |
| RT-DETR-R101 | 54.3 | 54.1 | 54.13ms | 12.68ms | **9.20ms** | [config](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/example/auto_compression/detection/configs/rtdetr_r101vd_qat_dis.yaml) | [Model](https://bj.bcebos.com/v1/paddle-slim-models/act/rtdetr_r101vd_6x_coco_quant.tar) |
| RT-DETR-HGNetv2-L | 53.0 | 52.9 | 26.16ms | 8.54ms | **6.65ms** | [config](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/example/auto_compression/detection/configs/rtdetr_hgnetv2_l_qat_dis.yaml) | [Model](https://bj.bcebos.com/v1/paddle-slim-models/act/rtdetr_hgnetv2_l_6x_coco_quant.tar) |
| RT-DETR-HGNetv2-X | 54.8 | 54.6 | 49.22ms | 12.50ms | **9.24ms** | [config](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/example/auto_compression/detection/configs/rtdetr_hgnetv2_x_qat_dis.yaml) | [Model](https://bj.bcebos.com/v1/paddle-slim-models/act/rtdetr_hgnetv2_x_6x_coco_quant.tar) |
- 上表测试环境:Tesla T4,TensorRT 8.6.0,CUDA 11.7,batch_size=1。
- 也可直接参考:[PaddleSlim自动化压缩示例](https://github.com/PaddlePaddle/PaddleSlim/tree/develop/example/auto_compression/detection)
## 其他
<details>
<summary>1. 参数量和计算量统计 </summary>
可以使用以下代码片段实现参数量和计算量的统计
```python
import paddle
from ppdet.core.workspace import load_config, merge_config
from ppdet.core.workspace import create
cfg_path = './configs/rtdetr/rtdetr_r50vd_6x_coco.yml'
cfg = load_config(cfg_path)
model = create(cfg.architecture)
blob = {
'image': paddle.randn([1, 3, 640, 640]),
'im_shape': paddle.to_tensor([[640], [640]]),
'scale_factor': paddle.to_tensor([[1.], [1.]])
}
paddle.flops(model, None, blob, custom_ops=None, print_detail=False)
```
</details>
<details open>
<summary>2. YOLOs端到端速度测速 </summary>
- 可以参考[RT-DETR](https://github.com/lyuwenyu/RT-DETR) benchmark部分或者其他网络资源
</details>
## 引用RT-DETR
如果需要在你的研究中使用RT-DETR,请通过以下方式引用我们的论文:
```
@misc{lv2023detrs,
title={DETRs Beat YOLOs on Real-time Object Detection},
author={Wenyu Lv and Shangliang Xu and Yian Zhao and Guanzhong Wang and Jinman Wei and Cheng Cui and Yuning Du and Qingqing Dang and Yi Liu},
year={2023},
eprint={2304.08069},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| PaddleDetection/configs/rtdetr/README.md/0 | {
"file_path": "PaddleDetection/configs/rtdetr/README.md",
"repo_id": "PaddleDetection",
"token_count": 6245
} | 35 |
_BASE_: [
'../../faster_rcnn/faster_rcnn_r50_fpn_2x_coco.yml',
]
log_iter: 50
snapshot_epoch: 2
weights: output/faster_rcnn_r50_fpn_2x_coco_sup005/model_final
TrainDataset:
!COCODataSet
image_dir: train2017
anno_path: semi_annotations/instances_train2017.1@5.json
dataset_dir: dataset/coco
data_fields: ['image', 'gt_bbox', 'gt_class']
worker_num: 2
TrainReader:
sample_transforms:
- Decode: {}
- RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True}
- RandomFlip: {}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 2
shuffle: true
drop_last: true
collate_batch: false
epoch: 24
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [16, 22]
- !LinearWarmup
start_factor: 0.1
epochs: 1
| PaddleDetection/configs/semi_det/baseline/faster_rcnn_r50_fpn_2x_coco_sup005.yml/0 | {
"file_path": "PaddleDetection/configs/semi_det/baseline/faster_rcnn_r50_fpn_2x_coco_sup005.yml",
"repo_id": "PaddleDetection",
"token_count": 459
} | 36 |
# teacher and slim config
_BASE_: [
'../../ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml',
]
depth_mult: 1.0
width_mult: 1.0
for_distill: True
architecture: PPYOLOE
PPYOLOE:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_l_80e_coco.pdparams
find_unused_parameters: True
worker_num: 4
TrainReader:
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomCrop: {}
- RandomFlip: {}
batch_transforms:
- BatchRandomResize: {target_size: [640], random_size: True, random_interp: True, keep_ratio: False}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], norm_type: none}
- Permute: {}
- PadGT: {}
batch_size: 8
shuffle: True
drop_last: True
use_shared_memory: True
collate_batch: True
slim: Distill
slim_method: PPYOLOEDistill
distill_loss: DistillPPYOLOELoss
DistillPPYOLOELoss: # L -> M
loss_weight: {'logits': 4.0, 'feat': 1.0}
logits_distill: True
logits_loss_weight: {'class': 1.0, 'iou': 2.5, 'dfl': 0.5}
logits_ld_distill: True
logits_ld_params: {'weight': 20000, 'T': 10}
feat_distill: True
feat_distiller: 'fgd' # ['cwd', 'fgd', 'pkd', 'mgd', 'mimic']
feat_distill_place: 'neck_feats'
teacher_width_mult: 1.0 # L
student_width_mult: 0.75 # M
feat_out_channels: [768, 384, 192] # The actual channel will multiply width_mult
| PaddleDetection/configs/slim/distill/ppyoloe_plus_distill_l_distill_m.yml/0 | {
"file_path": "PaddleDetection/configs/slim/distill/ppyoloe_plus_distill_l_distill_m.yml",
"repo_id": "PaddleDetection",
"token_count": 648
} | 37 |
pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/dygraph/ssd_mobilenet_v1_300_120e_voc.pdparams
slim: QAT
QAT:
quant_config: {
'weight_quantize_type': 'channel_wise_abs_max', 'activation_quantize_type': 'moving_average_abs_max',
'weight_bits': 8, 'activation_bits': 8, 'dtype': 'int8', 'window_size': 10000, 'moving_rate': 0.9,
'quantizable_layer_type': ['Conv2D', 'Linear']}
print_model: True
| PaddleDetection/configs/slim/quant/ssd_mobilenet_v1_qat.yml/0 | {
"file_path": "PaddleDetection/configs/slim/quant/ssd_mobilenet_v1_qat.yml",
"repo_id": "PaddleDetection",
"token_count": 176
} | 38 |
简体中文 | [English](README.md)
# SNIPER: Efficient Multi-Scale Training
## 模型库
| 有无sniper | GPU个数 | 每张GPU图片个数 | 骨架网络 | 数据集 | 学习率策略 | Box AP | 模型下载 | 配置文件 |
| :---------------- | :-------------------: | :------------------: | :-----: | :-----: | :------------: | :-----: | :-----------------------------------------------------: | :-----: |
| w/o sniper | 4 | 1 | ResNet-r50-FPN | [VisDrone](https://github.com/VisDrone/VisDrone-Dataset) | 1x | 23.3 | [下载链接](https://bj.bcebos.com/v1/paddledet/models/faster_rcnn_r50_fpn_1x_visdrone.pdparams ) | [配置文件](./faster_rcnn_r50_fpn_1x_visdrone.yml) |
| w sniper | 4 | 1 | ResNet-r50-FPN | [VisDrone](https://github.com/VisDrone/VisDrone-Dataset) | 1x | 29.7 | [下载链接](https://bj.bcebos.com/v1/paddledet/models/faster_rcnn_r50_fpn_1x_sniper_visdrone.pdparams) | [配置文件](./faster_rcnn_r50_fpn_1x_sniper_visdrone.yml) |
### 注意
- 我们使用的是`VisDrone`数据集, 并且检测其中的9类,包括 `person, bicycle, car, van, truck, tricycle, awning-tricycle, bus, motor`.
- 暂时不支持导出和预测部署(deploy)。
## 使用说明
### 1. 训练
a. 可选:统计数据集信息,获得数据缩放尺度、有效框范围、chip尺度和步长等参数,修改configs/datasets/sniper_coco_detection.yml中对应参数
```bash
python tools/sniper_params_stats.py FasterRCNN annotations/instances_train2017.json
```
b. 可选:训练检测器,生成负样本
```bash
python -m paddle.distributed.launch --log_dir=./sniper/ --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/sniper/faster_rcnn_r50_fpn_1x_sniper_visdrone.yml --save_proposals --proposals_path=./proposals.json &>sniper.log 2>&1 &
```
c. 训练模型
```bash
python -m paddle.distributed.launch --log_dir=./sniper/ --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/sniper/faster_rcnn_r50_fpn_1x_sniper_visdrone.yml --eval &>sniper.log 2>&1 &
```
### 2. 评估
使用单GPU通过如下命令一键式评估模型在VisDrone验证集上的效果
```bash
# 使用训练保存的checkpoint
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/sniper/faster_rcnn_r50_fpn_1x_sniper_visdrone.yml -o weights=output/faster_rcnn_r50_fpn_1x_sniper_visdrone/model_final
```
### 3. 推理
使用单GPU通过如下命令一键式推理图像,通过`--infer_img`指定图像路径,或通过`--infer_dir`指定目录并推理目录下所有图像
```bash
# 推理单张图像
CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/sniper/faster_rcnn_r50_fpn_1x_sniper_visdrone.yml -o weights=output/faster_rcnn_r50_fpn_1x_sniper_visdrone/model_final --infer_img=demo/P0861__1.0__1154___824.png
# 推理目录下所有图像
CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/sniper/faster_rcnn_r50_fpn_1x_sniper_visdrone.yml -o weights=output/faster_rcnn_r50_fpn_1x_sniper_visdrone/model_final --infer_dir=demo
```
## Citations
```
@misc{1805.09300,
Author = {Bharat Singh and Mahyar Najibi and Larry S. Davis},
Title = {SNIPER: Efficient Multi-Scale Training},
Year = {2018},
Eprint = {arXiv:1805.09300},
}
@ARTICLE{9573394,
author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={Detection and Tracking Meet Drones Challenge},
year={2021},
volume={},
number={},
pages={1-1},
doi={10.1109/TPAMI.2021.3119563}}
```
| PaddleDetection/configs/sniper/README_cn.md/0 | {
"file_path": "PaddleDetection/configs/sniper/README_cn.md",
"repo_id": "PaddleDetection",
"token_count": 1880
} | 39 |
_BASE_: [
'../datasets/coco_instance.yml',
'../runtime.yml',
'_base_/solov2_r50_fpn.yml',
'_base_/optimizer_1x.yml',
'_base_/solov2_reader.yml',
]
weights: output/solov2_r50_fpn_3x_coco/model_final
epoch: 36
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [24, 33]
- !LinearWarmup
start_factor: 0.
steps: 1000
TrainReader:
sample_transforms:
- Decode: {}
- Poly2Mask: {}
- RandomResize: {interp: 1,
target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]],
keep_ratio: True}
- RandomFlip: {}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
- Gt2Solov2Target: {num_grids: [40, 36, 24, 16, 12],
scale_ranges: [[1, 96], [48, 192], [96, 384], [192, 768], [384, 2048]],
coord_sigma: 0.2}
batch_size: 2
shuffle: true
drop_last: true
| PaddleDetection/configs/solov2/solov2_r50_fpn_3x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/solov2/solov2_r50_fpn_3x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 539
} | 40 |
epoch: 70
LearningRate:
base_lr: 0.05
schedulers:
- !PiecewiseDecay
milestones: [48, 60]
gamma: [0.1, 0.1]
use_warmup: false
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
| PaddleDetection/configs/ssd/_base_/optimizer_70e.yml/0 | {
"file_path": "PaddleDetection/configs/ssd/_base_/optimizer_70e.yml",
"repo_id": "PaddleDetection",
"token_count": 122
} | 41 |
_BASE_: [
'../datasets/coco_detection.yml',
'../runtime.yml',
'./_base_/faster_rcnn_reader.yml',
'./_base_/optimizer_base_1x.yml'
]
weights: output/cascade_rcnn_vit_base_hrfpn_cae_1x_coco/model_final
# runtime
log_iter: 100
snapshot_epoch: 1
find_unused_parameters: True
use_gpu: true
norm_type: sync_bn
# reader
worker_num: 2
TrainReader:
batch_size: 1
# model
architecture: CascadeRCNN
CascadeRCNN:
backbone: VisionTransformer
neck: HRFPN
rpn_head: RPNHead
bbox_head: CascadeHead
# post process
bbox_post_process: BBoxPostProcess
VisionTransformer:
patch_size: 16
embed_dim: 768
depth: 12
num_heads: 12
mlp_ratio: 4
qkv_bias: True
drop_rate: 0.0
drop_path_rate: 0.2
init_values: 0.1
final_norm: False
use_rel_pos_bias: False
use_sincos_pos_emb: True
epsilon: 0.000001 # 1e-6
out_indices: [3, 5, 7, 11]
with_fpn: True
pretrained: https://bj.bcebos.com/v1/paddledet/models/pretrained/vit_base_cae_pretrained.pdparams
HRFPN:
out_channel: 256
use_bias: True
RPNHead:
anchor_generator:
aspect_ratios: [0.5, 1.0, 2.0]
anchor_sizes: [[32], [64], [128], [256], [512]]
strides: [4, 8, 16, 32, 64]
rpn_target_assign:
batch_size_per_im: 256
fg_fraction: 0.5
negative_overlap: 0.3
positive_overlap: 0.7
use_random: True
train_proposal:
min_size: 0.0
nms_thresh: 0.7
pre_nms_top_n: 2000
post_nms_top_n: 2000
topk_after_collect: True
test_proposal:
min_size: 0.0
nms_thresh: 0.7
pre_nms_top_n: 1000
post_nms_top_n: 1000
loss_rpn_bbox: SmoothL1Loss
SmoothL1Loss:
beta: 0.1111111111111111
CascadeHead:
head: CascadeXConvNormHead
roi_extractor:
resolution: 7
sampling_ratio: 0
aligned: True
bbox_assigner: BBoxAssigner
bbox_loss: GIoULoss
num_cascade_stages: 3
reg_class_agnostic: False
stage_loss_weights: [1, 0.5, 0.25]
loss_normalize_pos: True
add_gt_as_proposals: [True, True, True]
BBoxAssigner:
batch_size_per_im: 512
bg_thresh: 0.5
fg_thresh: 0.5
fg_fraction: 0.25
cascade_iou: [0.5, 0.6, 0.7]
use_random: True
CascadeXConvNormHead:
norm_type: bn
GIoULoss:
loss_weight: 10.
reduction: 'none'
eps: 0.000001
BBoxPostProcess:
decode:
name: RCNNBox
prior_box_var: [30.0, 30.0, 15.0, 15.0]
nms:
name: MultiClassNMS
keep_top_k: 100
score_threshold: 0.05
nms_threshold: 0.5
| PaddleDetection/configs/vitdet/cascade_rcnn_vit_base_hrfpn_cae_1x_coco.yml/0 | {
"file_path": "PaddleDetection/configs/vitdet/cascade_rcnn_vit_base_hrfpn_cae_1x_coco.yml",
"repo_id": "PaddleDetection",
"token_count": 1133
} | 42 |
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/MobileNetV3_large_x1_0_ssld_pretrained.pdparams
norm_type: sync_bn
YOLOv3:
backbone: MobileNetV3
neck: YOLOv3FPN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
MobileNetV3:
model_name: large
scale: 1.
with_extra_blocks: false
extra_block_filters: []
feature_maps: [7, 13, 16]
# use default config
# YOLOv3FPN:
YOLOv3Head:
anchors: [[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45], [59, 119],
[116, 90], [156, 198], [373, 326]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.005
downsample_ratio: 32
clip_bbox: true
nms:
name: MultiClassNMS
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.45
nms_top_k: 1000
| PaddleDetection/configs/yolov3/_base_/yolov3_mobilenet_v3_large.yml/0 | {
"file_path": "PaddleDetection/configs/yolov3/_base_/yolov3_mobilenet_v3_large.yml",
"repo_id": "PaddleDetection",
"token_count": 460
} | 43 |
metric: COCO
num_classes: 80
# Dataset configuration
TrainDataset:
!COCODataSet
image_dir: train2017
anno_path: annotations/instances_train2017.json
dataset_dir: dataset/coco/
EvalDataset:
!COCODataSet
image_dir: val2017
anno_path: annotations/instances_val2017.json
dataset_dir: dataset/coco/
worker_num: 6
eval_height: &eval_height 416
eval_width: &eval_width 416
eval_size: &eval_size [*eval_height, *eval_width]
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {interp: 2, target_size: *eval_size, keep_ratio: False}
- NormalizeImage: {mean: [0, 0, 0], std: [1, 1, 1], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 8
shuffle: false
| PaddleDetection/deploy/auto_compression/configs/picodet_reader.yml/0 | {
"file_path": "PaddleDetection/deploy/auto_compression/configs/picodet_reader.yml",
"repo_id": "PaddleDetection",
"token_count": 302
} | 44 |
# C++端预测部署
## 各环境编译部署教程
- [Linux 编译部署](docs/linux_build.md)
- [Windows编译部署(使用Visual Studio 2019)](docs/windows_vs2019_build.md)
- [NV Jetson编译部署](docs/Jetson_build.md)
## C++部署总览
[1.说明](#1说明)
[2.主要目录和文件](#2主要目录和文件)
### 1.说明
本目录为用户提供一个跨平台的`C++`部署方案,让用户通过`PaddleDetection`训练的模型导出后,即可基于本项目快速运行,也可以快速集成代码结合到自己的项目实际应用中去。
主要设计的目标包括以下四点:
- 跨平台,支持在 `Windows` 和 `Linux` 完成编译、二次开发集成和部署运行
- 可扩展性,支持用户针对新模型开发自己特殊的数据预处理等逻辑
- 高性能,除了`PaddlePaddle`自身带来的性能优势,我们还针对图像检测的特点对关键步骤进行了性能优化
- 支持各种不同检测模型结构,包括`Yolov3`/`Faster_RCNN`/`SSD`等
### 2.主要目录和文件
```bash
deploy/cpp
|
├── src
│ ├── main.cc # 集成代码示例, 程序入口
│ ├── object_detector.cc # 模型加载和预测主要逻辑封装类实现
│ └── preprocess_op.cc # 预处理相关主要逻辑封装实现
|
├── include
│ ├── config_parser.h # 导出模型配置yaml文件解析
│ ├── object_detector.h # 模型加载和预测主要逻辑封装类
│ └── preprocess_op.h # 预处理相关主要逻辑类封装
|
├── docs
│ ├── linux_build.md # Linux 编译指南
│ └── windows_vs2019_build.md # Windows VS2019编译指南
│
├── build.sh # 编译命令脚本
│
├── CMakeList.txt # cmake编译入口文件
|
├── CMakeSettings.json # Visual Studio 2019 CMake项目编译设置
│
└── cmake # 依赖的外部项目cmake(目前仅有yaml-cpp)
```
| PaddleDetection/deploy/cpp/README.md/0 | {
"file_path": "PaddleDetection/deploy/cpp/README.md",
"repo_id": "PaddleDetection",
"token_count": 1201
} | 45 |
# 是否使用GPU(即是否使用 CUDA)
WITH_GPU=OFF
# 是否使用MKL or openblas,TX2需要设置为OFF
WITH_MKL=ON
# 是否集成 TensorRT(仅WITH_GPU=ON 有效)
WITH_TENSORRT=OFF
# paddle 预测库lib名称,由于不同平台不同版本预测库lib名称不同,请查看所下载的预测库中`paddle_inference/lib/`文件夹下`lib`的名称
PADDLE_LIB_NAME=libpaddle_inference
# TensorRT 的include路径
TENSORRT_INC_DIR=/path/to/tensorrt/include
# TensorRT 的lib路径
TENSORRT_LIB_DIR=/path/to/tensorrt/lib
# Paddle 预测库路径
PADDLE_DIR=/path/to/paddle_inference
# CUDA 的 lib 路径
CUDA_LIB=/path/to/cuda/lib
# CUDNN 的 lib 路径
CUDNN_LIB=/path/to/cudnn/lib
# 是否开启关键点模型预测功能
WITH_KEYPOINT=OFF
# 是否开启跟踪模型预测功能
WITH_MOT=OFF
MACHINE_TYPE=`uname -m`
echo "MACHINE_TYPE: "${MACHINE_TYPE}
if [ "$MACHINE_TYPE" = "x86_64" ]
then
echo "set OPENCV_DIR for x86_64"
# linux系统通过以下命令下载预编译的opencv
mkdir -p $(pwd)/deps && cd $(pwd)/deps
wget -c https://bj.bcebos.com/v1/paddledet/data/opencv-3.4.7.tar.gz
tar -xvf opencv-3.4.7.tar.gz
cd opencv-3.4.7
OPENCV_INSTALL_PATH=./opencv3
rm -rf build
mkdir build
cd build
cmake .. \
-DCMAKE_INSTALL_PREFIX=${OPENCV_INSTALL_PATH} \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_SHARED_LIBS=OFF \
-DWITH_IPP=OFF \
-DBUILD_IPP_IW=OFF \
-DWITH_LAPACK=OFF \
-DWITH_EIGEN=OFF \
-DCMAKE_INSTALL_LIBDIR=lib64 \
-DWITH_ZLIB=ON \
-DBUILD_ZLIB=ON \
-DWITH_JPEG=ON \
-DBUILD_JPEG=ON \
-DWITH_PNG=ON \
-DBUILD_PNG=ON \
-DWITH_TIFF=ON \
-DBUILD_TIFF=ON
make -j
make install
cd ../../../
# set OPENCV_DIR
OPENCV_DIR=$(pwd)/deps/opencv-3.4.7/build/opencv3
elif [ "$MACHINE_TYPE" = "aarch64" ]
then
echo "set OPENCV_DIR for aarch64"
# TX2平台通过以下命令下载预编译的opencv
mkdir -p $(pwd)/deps && cd $(pwd)/deps
wget -c https://bj.bcebos.com/v1/paddledet/data/TX2_JetPack4.3_opencv_3.4.6_gcc7.5.0.tar.gz
tar -xvf TX2_JetPack4.3_opencv_3.4.6_gcc7.5.0.tar.gz && cd ..
# set OPENCV_DIR
OPENCV_DIR=$(pwd)/deps/TX2_JetPack4.3_opencv_3.4.6_gcc7.5.0/
else
echo "Please set OPENCV_DIR manually"
fi
echo "OPENCV_DIR: "$OPENCV_DIR
# 以下无需改动
rm -rf build
mkdir -p build
cd build
cmake .. \
-DWITH_GPU=${WITH_GPU} \
-DWITH_MKL=${WITH_MKL} \
-DWITH_TENSORRT=${WITH_TENSORRT} \
-DTENSORRT_LIB_DIR=${TENSORRT_LIB_DIR} \
-DTENSORRT_INC_DIR=${TENSORRT_INC_DIR} \
-DPADDLE_DIR=${PADDLE_DIR} \
-DWITH_STATIC_LIB=${WITH_STATIC_LIB} \
-DCUDA_LIB=${CUDA_LIB} \
-DCUDNN_LIB=${CUDNN_LIB} \
-DOPENCV_DIR=${OPENCV_DIR} \
-DPADDLE_LIB_NAME=${PADDLE_LIB_NAME} \
-DWITH_KEYPOINT=${WITH_KEYPOINT} \
-DWITH_MOT=${WITH_MOT}
make
echo "make finished!"
| PaddleDetection/deploy/cpp/scripts/build.sh/0 | {
"file_path": "PaddleDetection/deploy/cpp/scripts/build.sh",
"repo_id": "PaddleDetection",
"token_count": 1592
} | 46 |
import sys
import requests
import cv2
import random
import time
import numpy as np
import cupy as cp
import tensorrt as trt
from PIL import Image
from collections import OrderedDict, namedtuple
from pathlib import Path
def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleup=True, stride=32):
# Resize and pad image while meeting stride-multiple constraints
shape = im.shape[:2] # current shape [height, width]
if isinstance(new_shape, int):
new_shape = (new_shape, new_shape)
# Scale ratio (new / old)
r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
if not scaleup: # only scale down, do not scale up (for better val mAP)
r = min(r, 1.0)
# Compute padding
new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
if auto: # minimum rectangle
dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding
dw /= 2 # divide padding into 2 sides
dh /= 2
if shape[::-1] != new_unpad: # resize
im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
return im, r, (dw, dh)
names = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush']
colors = {name: [random.randint(0, 255) for _ in range(3)] for i, name in enumerate(names)}
url = 'https://oneflow-static.oss-cn-beijing.aliyuncs.com/tripleMu/image1.jpg'
file = requests.get(url)
img = cv2.imdecode(np.frombuffer(file.content, np.uint8), 1)
w = Path(sys.argv[1])
assert w.exists() and w.suffix in ('.engine', '.plan'), 'Wrong engine path'
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(1, 3, 1, 1)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(1, 3, 1, 1)
mean = cp.asarray(mean)
std = cp.asarray(std)
# Infer TensorRT Engine
Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, namespace="")
with open(w, 'rb') as f, trt.Runtime(logger) as runtime:
model = runtime.deserialize_cuda_engine(f.read())
bindings = OrderedDict()
fp16 = False # default updated below
for index in range(model.num_bindings):
name = model.get_binding_name(index)
dtype = trt.nptype(model.get_binding_dtype(index))
shape = tuple(model.get_binding_shape(index))
data = cp.empty(shape, dtype=cp.dtype(dtype))
bindings[name] = Binding(name, dtype, shape, data, int(data.data.ptr))
if model.binding_is_input(index) and dtype == np.float16:
fp16 = True
binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items())
context = model.create_execution_context()
image = img.copy()
image, ratio, dwdh = letterbox(image, auto=False)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image_copy = image.copy()
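# HWC -> NCHW: transpose, add batch dim; then normalize on GPU with CuPy (scale to [0, 1], subtract ImageNet mean, divide by std)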
image = image.transpose((2, 0, 1))
image = np.expand_dims(image, 0)
image = np.ascontiguousarray(image)
im = cp.asarray(image)
im = im.astype(cp.float32)
im /= 255
im -= mean
im /= std
# warmup for 10 times
for _ in range(10):
tmp = cp.random.randn(1, 3, 640, 640).astype(cp.float32)
binding_addrs['image'] = int(tmp.data.ptr)
context.execute_v2(list(binding_addrs.values()))
start = time.perf_counter()
binding_addrs['image'] = int(im.data.ptr)
context.execute_v2(list(binding_addrs.values()))
print(f'Cost {(time.perf_counter() - start) * 1000}ms')
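# Read back the engine outputs by binding name: num_dets holds the number of
# valid detections, det_boxes/det_scores/det_classes hold the kept results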
nums = bindings['num_dets'].data
boxes = bindings['det_boxes'].data
scores = bindings['det_scores'].data
classes = bindings['det_classes'].data
num = int(nums[0][0])
box_img = boxes[0, :num].round().astype(cp.int32)
score_img = scores[0, :num]
clss_img = classes[0, :num]
for i, (box, score, clss) in enumerate(zip(box_img, score_img, clss_img)):
name = names[int(clss)]
color = colors[name]
cv2.rectangle(image_copy, box[:2].tolist(), box[2:].tolist(), color, 2)
cv2.putText(image_copy, name, (int(box[0]), int(box[1]) - 2), cv2.FONT_HERSHEY_SIMPLEX,
0.75, [225, 255, 255], thickness=2)
cv2.imshow('Result', cv2.cvtColor(image_copy, cv2.COLOR_RGB2BGR))
cv2.waitKey(0)
| PaddleDetection/deploy/end2end_ppyoloe/cupy-python.py/0 | {
"file_path": "PaddleDetection/deploy/end2end_ppyoloe/cupy-python.py",
"repo_id": "PaddleDetection",
"token_count": 2158
} | 47 |
[English](README.md) | 简体中文
# PaddleDetection CPU-GPU C++部署示例
本目录下提供`infer.cc`,可快速完成PPYOLOE等模型在CPU/GPU,以及GPU上通过Paddle-TensorRT加速部署的示例。
## 1. 说明
PaddleDetection支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上快速部署PaddleDetection模型。FastDeploy目前支持的模型系列,包括但不限于`PPYOLOE`, `PicoDet`, `PaddleYOLOX`, `PPYOLO`, `FasterRCNN`,`SSD`,`PaddleYOLOv5`,`PaddleYOLOv6`,`PaddleYOLOv7`,`RTMDet`,`CascadeRCNN`,`PSSDet`,`RetinaNet`,`PPYOLOESOD`,`FCOS`,`TTFNet`,`TOOD`,`GFL`所有类名的构造函数和预测函数在参数上完全一致。所有模型的调用,只需要参考PPYOLOE的示例,即可快速调用。
## 2. 部署环境准备
在部署前,需确认软硬件环境,同时下载预编译部署库,参考[FastDeploy安装文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)安装FastDeploy预编译库。
## 3. 部署模型准备
在部署前,请准备好您所需要运行的推理模型,你可以选择使用[预导出的推理模型](../README.md)或者[自行导出PaddleDetection部署模型](../README.md)。
## 4. 运行部署示例
以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本1.0.4以上(x.x.x>=1.0.4)
### 4.1 目标检测示例
```bash
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-gpu-x.x.x.tgz
tar xvf fastdeploy-linux-x64-gpu-x.x.x.tgz
# 下载部署示例代码
git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection/deploy/fastdeploy/cpu-gpu/cpp
# 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
# git checkout develop
# 编译部署示例
mkdir build && cd build
mv ../fastdeploy-linux-x64-gpu-x.x.x .
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-gpu-x.x.x
make -j
# 下载PPYOLOE模型文件和测试图片
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
# 运行部署示例
# CPU推理
./infer_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 0
# GPU推理
./infer_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 1
# GPU上Paddle-TensorRT推理(注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
./infer_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 2
```
运行完成可视化结果如下图所示
<div align="center">
<img src="https://user-images.githubusercontent.com/19339784/184326520-7075e907-10ed-4fad-93f8-52d0e35d4964.jpg", width=480px, height=320px />
</div>
### 4.2 关键点检测示例
```bash
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-gpu-x.x.x.tgz
tar xvf fastdeploy-linux-x64-gpu-x.x.x.tgz
# 下载部署示例代码
git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection/deploy/fastdeploy/cpu-gpu/cpp
# 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到develop分支
# git checkout develop
# 编译部署示例
mkdir build && cd build
mv ../fastdeploy-linux-x64-gpu-x.x.x .
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-gpu-x.x.x
make -j
# 下载PP-TinyPose模型文件和测试图片
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
tar -xvf PP_TinyPose_256x192_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/hrnet_demo.jpg
# 运行部署示例
# CPU推理
./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 0
# GPU推理
./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 1
# GPU上Paddle-TensorRT推理(注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 2
```
运行完成可视化结果如下图所示
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/196386764-dd51ad56-c410-4c54-9580-643f282f5a83.jpeg", width=359px, height=423px />
</div>
关于如何进行多人关键点检测,请参考[PPTinyPose Pipeline示例](./det_keypoint_unite/)
- 注意,以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考: [如何在Windows中使用FastDeploy C++ SDK](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows.md)
- 关于如何通过FastDeploy使用更多不同的推理后端,以及如何使用不同的硬件,请参考文档:[如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
## 5. PaddleDetection C++接口
FastDeploy目前支持的模型系列,包括但不限于`PPYOLOE`, `PicoDet`, `PaddleYOLOX`, `PPYOLO`, `FasterRCNN`,`SSD`,`PaddleYOLOv5`,`PaddleYOLOv6`,`PaddleYOLOv7`,`RTMDet`,`CascadeRCNN`,`PSSDet`,`RetinaNet`,`PPYOLOESOD`,`FCOS`,`TTFNet`,`TOOD`,`GFL`所有类名的构造函数和预测函数在参数上完全一致。所有模型的调用,只需要参考PPYOLOE的示例,即可快速调用。
### 5.1 目标检测及实例分割模型
```c++
fastdeploy::vision::detection::PicoDet(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::SOLOv2(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::PPYOLOE(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::PPYOLO(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::YOLOv3(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::PaddleYOLOX(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::FasterRCNN(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::MaskRCNN(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::SSD(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::PaddleYOLOv5(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::PaddleYOLOv6(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::PaddleYOLOv7(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::PaddleYOLOv8(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::CascadeRCNN(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::PSSDet(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::RetinaNet(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::PPYOLOESOD(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::FCOS(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::TOOD(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
fastdeploy::vision::detection::GFL(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
```
### 5.2 关键点检测模型
```C++
fastdeploy::vision::keypointdetection::PPTinyPose(const string& model_file, const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), const ModelFormat& model_format = ModelFormat::PADDLE);
```
PaddleDetection模型加载和初始化,其中model_file, params_file为导出的Paddle部署模型格式, config_file为PaddleDetection同时导出的部署配置yaml文件
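若使用Python接口,加载与预测流程与上述C++接口一一对应。下面给出一段仅作参考的Python示意(以PPYOLOE为例,模型与图片路径为示例假设,完整用法请以[Python部署](../python)目录下的官方示例为准):
```python
# 仅作示意:FastDeploy Python 接口加载 PPYOLOE 部署模型并预测单张图片
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu()  # 仅用 CPU 推理时可去掉此行

model = fd.vision.detection.PPYOLOE(
    "ppyoloe_crn_l_300e_coco/model.pdmodel",
    "ppyoloe_crn_l_300e_coco/model.pdiparams",
    "ppyoloe_crn_l_300e_coco/infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("000000014439.jpg")
result = model.predict(im)
vis_im = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
```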
## 6. 更多指南
- [PaddleDetection C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1detection.html)
- [FastDeploy部署PaddleDetection模型概览](../../)
- [Python部署](../python)
## 7. 常见问题
- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Intel GPU(独立显卡/集成显卡)的使用](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [编译CPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
- [编译GPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
- [编译Jetson部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md)
| PaddleDetection/deploy/fastdeploy/cpu-gpu/cpp/README.md/0 | {
"file_path": "PaddleDetection/deploy/fastdeploy/cpu-gpu/cpp/README.md",
"repo_id": "PaddleDetection",
"token_count": 5134
} | 48 |
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "fastdeploy/vision.h"
#include "fastdeploy/pipeline.h"
#ifdef WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif
void KunlunXinInfer(const std::string& det_model_dir,
const std::string& tinypose_model_dir,
const std::string& image_file) {
auto option = fastdeploy::RuntimeOption();
option.UseKunlunXin();
auto det_model_file = det_model_dir + sep + "model.pdmodel";
auto det_params_file = det_model_dir + sep + "model.pdiparams";
auto det_config_file = det_model_dir + sep + "infer_cfg.yml";
auto det_model = fastdeploy::vision::detection::PicoDet(
det_model_file, det_params_file, det_config_file, option);
if (!det_model.Initialized()) {
std::cerr << "Detection Model Failed to initialize." << std::endl;
return;
}
auto tinypose_model_file = tinypose_model_dir + sep + "model.pdmodel";
auto tinypose_params_file = tinypose_model_dir + sep + "model.pdiparams";
auto tinypose_config_file = tinypose_model_dir + sep + "infer_cfg.yml";
auto tinypose_model = fastdeploy::vision::keypointdetection::PPTinyPose(
tinypose_model_file, tinypose_params_file, tinypose_config_file, option);
if (!tinypose_model.Initialized()) {
std::cerr << "TinyPose Model Failed to initialize." << std::endl;
return;
}
auto im = cv::imread(image_file);
fastdeploy::vision::KeyPointDetectionResult res;
auto pipeline =
fastdeploy::pipeline::PPTinyPose(
&det_model, &tinypose_model);
pipeline.detection_model_score_threshold = 0.5;
if (!pipeline.Predict(&im, &res)) {
std::cerr << "TinyPose Prediction Failed." << std::endl;
return;
} else {
std::cout << "TinyPose Prediction Done!" << std::endl;
}
std::cout << res.Str() << std::endl;
auto vis_im =
fastdeploy::vision::VisKeypointDetection(im, res, 0.2);
cv::imwrite("vis_result.jpg", vis_im);
std::cout << "TinyPose visualized result saved in ./vis_result.jpg"
<< std::endl;
}
int main(int argc, char* argv[]) {
if (argc < 4) {
std::cout << "Usage: infer_demo path/to/detection_model_dir "
"path/to/pptinypose_model_dir path/to/image, "
"e.g ./infer_model ./picodet_model_dir ./pptinypose_model_dir "
"./test.jpeg 0"
<< std::endl;
return -1;
}
KunlunXinInfer(argv[1], argv[2], argv[3]);
return 0;
}
| PaddleDetection/deploy/fastdeploy/kunlunxin/cpp/det_keypoint_unite/det_keypoint_unite_infer.cc/0 | {
"file_path": "PaddleDetection/deploy/fastdeploy/kunlunxin/cpp/det_keypoint_unite/det_keypoint_unite_infer.cc",
"repo_id": "PaddleDetection",
"token_count": 1158
} | 49 |
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include <ctime>
#include <memory>
#include <string>
#include <utility>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "paddle_api.h" // NOLINT
#include "include/config_parser.h"
#include "include/keypoint_postprocess.h"
#include "include/preprocess_op.h"
using namespace paddle::lite_api; // NOLINT
namespace PaddleDetection {
// Object KeyPoint Result
struct KeyPointResult {
// Keypoints: shape(N x 3); N: number of Joints; 3: x,y,conf
std::vector<float> keypoints;
int num_joints = -1;
};
// Visualization KeyPoint Result
cv::Mat VisualizeKptsResult(const cv::Mat& img,
const std::vector<KeyPointResult>& results,
const std::vector<int>& colormap,
float threshold = 0.2);
class KeyPointDetector {
public:
explicit KeyPointDetector(const std::string& model_dir,
int cpu_threads = 1,
const int batch_size = 1,
bool use_dark = true) {
config_.load_config(model_dir);
threshold_ = config_.draw_threshold_;
use_dark_ = use_dark;
preprocessor_.Init(config_.preprocess_info_);
printf("before keypoint detector\n");
LoadModel(model_dir, cpu_threads);
printf("create keypoint detector\n");
}
// Load Paddle inference model
void LoadModel(std::string model_file, int num_threads);
// Run predictor
void Predict(const std::vector<cv::Mat> imgs,
std::vector<std::vector<float>>& center,
std::vector<std::vector<float>>& scale,
const int warmup = 0,
const int repeats = 1,
std::vector<KeyPointResult>* result = nullptr,
std::vector<double>* times = nullptr);
// Get Model Label list
const std::vector<std::string>& GetLabelList() const {
return config_.label_list_;
}
bool use_dark(){return this->use_dark_;}
inline float get_threshold() {return threshold_;};
private:
// Preprocess image and copy data to input buffer
void Preprocess(const cv::Mat& image_mat);
// Postprocess result
void Postprocess(std::vector<float>& output,
std::vector<int64_t>& output_shape,
std::vector<int64_t>& idxout,
std::vector<int64_t>& idx_shape,
std::vector<KeyPointResult>* result,
std::vector<std::vector<float>>& center,
std::vector<std::vector<float>>& scale);
std::shared_ptr<PaddlePredictor> predictor_;
Preprocessor preprocessor_;
ImageBlob inputs_;
std::vector<float> output_data_;
std::vector<int64_t> idx_data_;
float threshold_;
ConfigPaser config_;
bool use_dark_;
};
} // namespace PaddleDetection
| PaddleDetection/deploy/lite/include/keypoint_detector.h/0 | {
"file_path": "PaddleDetection/deploy/lite/include/keypoint_detector.h",
"repo_id": "PaddleDetection",
"token_count": 1395
} | 50 |
visual: True
warmup_frame: 50
VIDEO_ACTION:
model_dir: https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip
batch_size: 1
frame_len: 8
sample_freq: 7
short_size: 340
target_size: 320
enable: True
| PaddleDetection/deploy/pipeline/config/examples/infer_cfg_fight_recognition.yml/0 | {
"file_path": "PaddleDetection/deploy/pipeline/config/examples/infer_cfg_fight_recognition.yml",
"repo_id": "PaddleDetection",
"token_count": 95
} | 51 |
[English](PPHuman_QUICK_STARTED_en.md) | 简体中文
# PP-Human快速开始
## 目录
- [环境准备](#环境准备)
- [模型下载](#模型下载)
- [配置文件说明](#配置文件说明)
- [预测部署](#预测部署)
- [在线视频流](#在线视频流)
- [Jetson部署说明](#Jetson部署说明)
- [参数说明](#参数说明)
- [方案介绍](#方案介绍)
- [行人检测](#行人检测)
- [行人跟踪](#行人跟踪)
- [跨镜行人跟踪](#跨镜行人跟踪)
- [属性识别](#属性识别)
- [行为识别](#行为识别)
## 环境准备
环境要求: PaddleDetection版本 >= release/2.4 或 develop版本
PaddlePaddle和PaddleDetection安装
```
# PaddlePaddle CUDA10.1
python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html
# PaddlePaddle CPU
python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
# 克隆PaddleDetection仓库
cd <path/to/clone/PaddleDetection>
git clone https://github.com/PaddlePaddle/PaddleDetection.git
# 安装其他依赖
cd PaddleDetection
pip install -r requirements.txt
```
1. 详细安装文档参考[文档](../../../../docs/tutorials/INSTALL_cn.md)
2. 如果需要TensorRT推理加速(测速方式),请安装带`TensorRT版本Paddle`。您可以从[Paddle安装包](https://paddleinference.paddlepaddle.org.cn/v2.2/user_guides/download_lib.html#python)下载安装,或者按照[指导文档](https://www.paddlepaddle.org.cn/inference/master/optimize/paddle_trt.html)使用docker或自编译方式准备Paddle环境。
## 模型下载
PP-Human提供了目标检测、属性识别、行为识别、ReID预训练模型,以实现不同使用场景,用户可以直接下载使用
| 任务 | 端到端速度(ms)| 模型方案 | 模型体积 |
| :---------: | :-------: | :------: |:------: |
| 行人检测(高精度) | 25.1ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| 行人检测(轻量级) | 16.2ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| 行人检测(超轻量级) | 10ms(Jetson AGX) | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/pphuman/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.tar.gz) | 17M |
| 行人跟踪(高精度) | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M |
| 行人跟踪(轻量级) | 21.0ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M |
| 行人跟踪(超轻量级) | 13.2ms(Jetson AGX) | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/pphuman/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.tar.gz) | 17M |
| 跨镜跟踪(REID) | 单人1.5ms | [REID](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) | REID:92M |
| 属性识别(高精度) | 单人8.5ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | 目标检测:182M<br>属性识别:86M |
| 属性识别(轻量级) | 单人7.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | 目标检测:182M<br>属性识别:86M |
| 摔倒识别 | 单人10ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) <br> [关键点检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) <br> [基于关键点行为识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | 多目标跟踪:182M<br>关键点检测:101M<br>基于关键点行为识别:21.8M |
| 闯入识别 | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 多目标跟踪:182M |
| 打架识别 | 19.7ms | [视频分类](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) | 90M |
| 抽烟识别 | 单人15.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[基于人体id的目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | 目标检测:182M<br>基于人体id的目标检测:27M |
| 打电话识别 | 单人6.0ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[基于人体id的图像分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | 目标检测:182M<br>基于人体id的图像分类:45M |
下载模型后,解压至`./output_inference`文件夹。
在配置文件中,模型路径默认为模型的下载路径,如果用户不修改,则在推理时会自动下载对应的模型。
**注意:**
- 模型精度为融合数据集结果,数据集包含开源数据集和企业数据集
- ReID模型精度为Market1501数据集测试结果
- 预测速度为T4下,开启TensorRT FP16的效果, 模型预测速度包含数据预处理、模型预测、后处理全流程
## 配置文件说明
PP-Human相关配置位于```deploy/pipeline/config/infer_cfg_pphuman.yml```中,存放模型路径,该配置文件中包含了目前PP-Human支持的所有功能。如果想要查看某个单一功能的配置,请参见```deploy/pipeline/config/examples/```中相关配置。此外,配置文件中的内容可以通过```-o```命令行参数修改,如修改属性的模型目录,则可通过```-o ATTR.model_dir="DIR_PATH"```进行设置。
功能及任务类型对应表单如下:
| 输入类型 | 功能 | 任务类型 | 配置项 |
|-------|-------|----------|-----|
| 图片 | 属性识别 | 目标检测 属性识别 | DET ATTR |
| 单镜头视频 | 属性识别 | 多目标跟踪 属性识别 | MOT ATTR |
| 单镜头视频 | 行为识别 | 多目标跟踪 关键点检测 摔倒识别 | MOT KPT SKELETON_ACTION |
例如基于视频输入的属性识别,任务类型包含多目标跟踪和属性识别,具体配置如下:
```
crop_thresh: 0.5
attr_thresh: 0.5
visual: True
MOT:
model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip
tracker_config: deploy/pipeline/config/tracker_config.yml
batch_size: 1
enable: True
ATTR:
model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip
batch_size: 8
enable: True
```
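As a concrete command, the `-o` override mentioned above can be used like this (a sketch; the directory assumes the lightweight attribute archive was extracted locally):
```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    -o ATTR.enable=True ATTR.model_dir=output_inference/PPLCNet_x1_0_person_attribute_945_infer/ \
    --video_file=test_video.mp4 --device=gpu
```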
**Notes:**
- To enable a feature, set the corresponding `enable` option in the config file to True.
## Inference and Deployment
1. Use the default config, a config file from examples, or modify `infer_cfg_pphuman.yml` directly:
```
# Example: pedestrian detection. Specify the config file and a test image; image input enables the detection model by default
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu
# Example: pedestrian attribute recognition, directly using the config from examples
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml --video_file=test_video.mp4 --device=gpu
```
2. Enable features or change model paths from the command line:
```
# Example: pedestrian tracking. Specify the config file, model path and test video; model paths given on the command line take precedence over the config file
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o MOT.enable=True MOT.model_dir=ppyoloe_infer/ --video_file=test_video.mp4 --device=gpu
# Example: action recognition, taking falling detection as an example; enable the SKELETON_ACTION model on the command line
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o SKELETON_ACTION.enable=True --video_file=test_video.mp4 --device=gpu
```
### Online video streams
Online video stream decoding is based on OpenCV's capture function and supports the rtsp and rtmp formats.
- rtsp pull
Use the --rtsp RTSP [RTSP ...] parameter to specify one or more rtsp streams, separated by spaces (alternatively, replace the address after video_file with the rtsp stream address). For example:
```
# Example: pedestrian attribute recognition, single video stream
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE] --device=gpu
# Example: pedestrian attribute recognition, multiple video streams
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE1] rtsp://[YOUR_RTSP_SITE2] --device=gpu
```
- rtsp push of the result video
Use --pushurl rtsp:[IP] to push the prediction result to an IP address. On a PC you can open the network stream with the [VLC player](https://vlc.onl/) at `rtsp:[IP]/videoname`, where `videoname` is the name of the predicted video file; if the source is a local camera, `videoname` defaults to `output`.
```
# Example: pedestrian attribute recognition, single video stream; the playback address here is rtsp://[YOUR_SERVER_IP]:8554/test_video
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml --video_file=test_video.mp4 --device=gpu --pushurl rtsp://[YOUR_SERVER_IP]:8554
```
Notes:
1. The rtsp push service is based on [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server); start this service before using the push feature. Usage is simple; on Linux, for example: 1) download the release package for your platform; 2) unpack it and run `./rtsp-simple-server` on the command line; once the service is up, it can receive video streams.
2. If the model cannot keep up, the rtsp push stream will stutter noticeably. In that case, use the ppyoloe_s or ppyoloe-plus-tiny tracking model, i.e. replace mot_ppyoloe_l_36e_pipeline.zip with mot_ppyoloe_s_36e_pipeline.zip in the tracking model config.
### Jetson deployment notes
Since the compute power of the Jetson platform is much lower than that of servers, we suggest:
1. Use a lightweight model. We recently released the lightweight [PP-YOLOE-Plus Tiny model](../../../../configs/pphuman/README.md), which achieves real-time tracking of 4 video streams at 20fps on Jetson AGX.
2. To gain further speed, enable frame skipping for tracking; 2 or 3 is recommended: `skip_frame_num: 3`. This feature is disabled by default.
These changes can be made directly in the config file (recommended) or on the command line (the fields are long, so this is not recommended), as sketched below.
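A minimal config sketch combining both suggestions (the `model_dir` below assumes the ultra-lightweight archive from the table above was extracted into `./output_inference`):
```
MOT:
  model_dir: output_inference/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman/
  tracker_config: deploy/pipeline/config/tracker_config.yml
  batch_size: 1
  skip_frame_num: 3
  enable: True
```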
The speed of the PP-YOLOE-Plus Tiny model on the AGX platform with different features enabled is as follows (with 3 people tracked; taking attribute recognition as an example, the total latency is tracking 13.3 + 5.2×3 ≈ 29ms):
| Feature | Latency per frame (ms) | Frame rate (fps) |
|:----------|:----------|:----------|
| Tracking | 13 | 77 |
| Attribute recognition | 29 | 34 |
| Falling detection | 64.5 | 15.5 |
| Smoking detection | 68.8 | 14.5 |
| Phone-call detection | 22.5 | 44.5 |
| Fight detection | 3.98 | 251 |
### Parameters
| Parameter | Required | Description |
|-------|-------|----------|
| --config | Yes | Path to the config file |
| -o | Option | Override the corresponding settings in the config file |
| --image_file | Option | Image to predict |
| --image_dir | Option | Directory of images to predict |
| --video_file | Option | Video to predict, or an rtsp stream address (using the --rtsp parameter is recommended) |
| --rtsp | Option | rtsp video stream address(es); one or more streams can be given at the same time |
| --camera_id | Option | ID of the camera used for prediction, default -1 (meaning no camera is used; can be set to 0 - (number of cameras - 1)); press `q` in the visualization window during prediction to quit and save the result to output/output.mp4 |
| --device | Option | Device to run on, one of `CPU/GPU/XPU`, default `CPU` |
| --pushurl | Option | Address to push the result video to, starting with rtsp://; this option takes precedence over saving the result video locally, and when set, the local result video is no longer saved |
| --output_dir | Option | Root directory for visualized results, default output/ |
| --run_mode | Option | When using GPU, default paddle; options: paddle/trt_fp32/trt_fp16/trt_int8 |
| --enable_mkldnn | Option | Whether to enable MKLDNN acceleration for CPU prediction, default False |
| --cpu_threads | Option | Number of CPU threads, default 1 |
| --trt_calib_mode | Option | Whether TensorRT uses calibration, default False. Set to True when using TensorRT int8; set to False when using a PaddleSlim quantized model |
| --do_entrance_counting | Option | Whether to count entrance/exit traffic, default False |
| --draw_center_traj | Option | Whether to draw tracking trajectories, default False |
| --region_type | Option | 'horizontal' (default) or 'vertical': direction for traffic counting; 'custom': define a break-in region |
| --region_polygon | Option | Vertex coordinates of the break-in region polygon; no default |
| --do_break_in_counting | Option | Whether to perform region break-in counting |
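Putting several of these parameters together, a typical entrance-counting run looks like this (a sketch; the flag values are illustrative):
```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    -o MOT.enable=True \
    --video_file=test_video.mp4 --device=gpu \
    --do_entrance_counting --draw_center_traj --region_type=horizontal
```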
## Solution Overview
The overall PP-Human v2 pipeline is shown below:
<div width="1000" align="center">
  <img src="../../../../docs/images/pphumanv2.png"/>
</div>
### Pedestrian detection
- Uses PP-YOLOE L as the object detection model
- For details, see [PP-YOLOE](../../../../configs/ppyoloe/) and the [detection and tracking doc](pphuman_mot.md)
### Pedestrian tracking
- Uses the SDE scheme for pedestrian tracking
- Detection model: PP-YOLOE L (high accuracy) or S (lightweight)
- Tracking module: OC-SORT
- For details, see [OC-SORT](../../../../configs/mot/ocsort) and the [detection and tracking doc](pphuman_mot.md)
### Multi-target multi-camera tracking
- Uses PP-YOLOE + OC-SORT to obtain single-camera multi-object tracks
- Uses ReID (the StrongBaseline network) to extract features from the detections of each frame
- Matches track features across cameras to obtain cross-camera tracking results
- For details, see [multi-target multi-camera tracking](pphuman_mtmct.md)
### Attribute recognition
- Uses PP-YOLOE + OC-SORT to track human bodies
- Uses PP-HGNet and PP-LCNet (multi-label classification models) to recognize attributes, mainly including age, gender, hat, glasses, top and bottom clothing style, backpack, etc.
- For details, see [attribute recognition](pphuman_attribute.md)
### Action recognition
- Provides four action recognition approaches:
- 1. Skeleton-based action recognition, e.g. falling detection
- 2. Image-classification-based action recognition, e.g. phone-call detection
- 3. Detection-based action recognition, e.g. smoking detection
- 4. Video-classification-based action recognition, e.g. fight detection
- For details, see [action recognition](pphuman_action.md)
| PaddleDetection/deploy/pipeline/docs/tutorials/PPHuman_QUICK_STARTED.md/0 | {
"file_path": "PaddleDetection/deploy/pipeline/docs/tutorials/PPHuman_QUICK_STARTED.md",
"repo_id": "PaddleDetection",
"token_count": 9221
} | 52 |
[English](ppvehicle_mot_en.md) | Simplified Chinese
# PP-Vehicle Vehicle Tracking Module
【Introduction】
Vehicle detection and tracking are widely used in traffic monitoring, autonomous driving and other fields. PP-Vehicle integrates a detection and tracking module that serves as the basis for tasks such as license plate detection and vehicle attribute recognition. We provide pretrained models that users can download and use directly.
【Model download】
| Task | Algorithm | Accuracy | Latency (ms) | Download |
|:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: |
| Vehicle detection/tracking | PP-YOLOE-l | mAP: 63.9 <br> MOTA: 50.1 | Detection: 25.1ms <br> Tracking: 31.8ms | [link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) |
| Vehicle detection/tracking | PP-YOLOE-s | mAP: 61.3 <br> MOTA: 46.8 | Detection: 16.2ms <br> Tracking: 21.0ms | [link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_ppvehicle.zip) |
1. The detection/tracking model is trained on the PPVehicle dataset, which integrates BDD100K-MOT and UA-DETRAC by merging `car, truck, bus, van` from BDD100K-MOT and `car, bus, van` from UA-DETRAC into a single class `vehicle(1)`. Detection mAP is measured on the PPVehicle validation set, and tracking MOTA on the BDD100K-MOT validation set (`car, truck, bus, van` merged into one class `vehicle`). For training details, see [ppvehicle](../../../../configs/ppvehicle).
2. Latency is measured on a T4 GPU with TensorRT FP16, and covers the whole pipeline of data preprocessing, model inference and postprocessing.
## Usage
【Config description】
The detection/tracking-related parameters in the config file are:
```
DET:
  model_dir: output_inference/mot_ppyoloe_l_36e_ppvehicle/ # path of the vehicle detection model
  batch_size: 1 # batch size for inference
MOT:
  model_dir: output_inference/mot_ppyoloe_l_36e_ppvehicle/ # path of the vehicle tracking model
  tracker_config: deploy/pipeline/config/tracker_config.yml
  batch_size: 1 # batch size for inference; must be 1 for tracking
  skip_frame_num: -1 # number of frames skipped between predictions; -1 means no skipping, at most 3 is recommended
  enable: False # whether this feature is enabled; must be set to True before tracking
```
【Usage】
1. Download the models from the links in the table above and extract them to ```./output_inference```, then update the model paths in the config file. By default the models are downloaded automatically, so no change is needed.
2. Image input runs a pure detection task; start it with:
```python
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                                   --image_file=test_image.jpg \
                                                   --device=gpu
```
3. Video input runs a tracking task. First set `enable=True` under the MOT config in infer_cfg_ppvehicle.yml. If you want to speed up detection and tracking by skipping frames, set `skip_frame_num: 2` (at most 3 is recommended):
```
MOT:
 model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip
 tracker_config: deploy/pipeline/config/tracker_config.yml
 batch_size: 1
 skip_frame_num: 2
 enable: True
```
```python
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                                   --video_file=test_video.mp4 \
                                                   --device=gpu
```
4. To change the model path, there are two ways:
 - Method 1: configure different model paths in ```./deploy/pipeline/config/infer_cfg_ppvehicle.yml```; the detection and tracking models correspond to the `DET` and `MOT` fields respectively, so just change the path under the relevant field.
 - Method 2: append `-o MOT.model_dir=[YOUR_DETMODEL_PATH]` after the --config option on the command line.
```python
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                                   --video_file=test_video.mp4 \
                                                   --device=gpu \
                                                   --region_type=horizontal \
                                                   --do_entrance_counting \
                                                   --draw_center_traj \
                                                   -o MOT.model_dir=ppyoloe/
```
**Notes:**
 - `--do_entrance_counting` controls whether entrance/exit traffic is counted; default False.
 - `--draw_center_traj` controls whether tracking trajectories are drawn; default False. Test videos for trajectory drawing should preferably come from a static camera.
 - `--region_type` is the region used for traffic counting. With `--do_entrance_counting` it can be `horizontal` (default) or `vertical`, meaning the horizontal centerline of the frame acts as the entrance: when the center of the same object's box is on different sides of the centerline in two adjacent seconds, the count increases by one.
5. Region break-in detection and counting
First set `enable=True` under the MOT config in infer_cfg_ppvehicle.yml, then run:
```python
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
                                                   --video_file=test_video.mp4 \
                                                   --device=gpu \
                                                   --draw_center_traj \
                                                   --do_break_in_counting \
                                                   --region_type=custom \
                                                   --region_polygon 200 200 400 200 300 400 100 400
```
**Notes:**
 - Test videos for region break-in must be shot by a static camera, without shaking or movement.
 - `--do_break_in_counting` controls whether break-in counting is performed; default False.
 - `--region_type` is the region used for counting. With `--do_break_in_counting` only `custom` is allowed (and is the default), meaning a user-defined region acts as the entrance: when the midpoint of the bottom edge of the same object's box moves from outside to inside the region within two adjacent seconds, the count increases by one.
 - `--region_polygon` is the sequence of vertex coordinates of the user-defined polygon; every two values form one point (x, y), connected **clockwise** into a **closed region**. At least 3 points, i.e. 6 integers, are required; the default is `[]`, so users must set the coordinates themselves. For a quadrilateral region, the order is `top-left, top-right, bottom-right, bottom-left`. You can run [this script](../../tools/get_video_info.py) to get the resolution and frame count of your video, visualize a custom polygon, and adjust it as needed.
The visualization for a custom polygon region can be run as follows:
```python
python get_video_info.py --video_file=demo.mp4 --region_polygon 200 200 400 200 300 400 100 400
```
A quick trick for picking the region: grab any frame as an image, open it in a paint tool, hover over the desired points to read off their coordinates, round them, pass them as the region_polygon argument of this visualization script, then rerun and fine-tune the coordinates until satisfied.
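If you prefer to preview the polygon without the helper script, a small OpenCV sketch like the one below draws it on the first frame (this snippet is not part of the repo; the file names are placeholders):
```python
import cv2
import numpy as np
# Same coordinate order as --region_polygon: pairs of (x, y), clockwise.
points = [200, 200, 400, 200, 300, 400, 100, 400]
polygon = np.array(points, dtype=np.int32).reshape(-1, 2)
cap = cv2.VideoCapture('test_video.mp4')
ok, frame = cap.read()
assert ok, 'failed to read the first frame'
cv2.polylines(frame, [polygon], isClosed=True, color=(0, 0, 255), thickness=2)
cv2.imwrite('region_preview.jpg', frame)
```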
【Demo】
<div width="600" align="center">
  <img src="../images/mot_vehicle.gif"/>
</div>
## Solution
【Pipeline and features】
1. Object detection / multi-object tracking obtains the vehicle boxes from image/video input. The detection model is PP-YOLOE; see [PP-YOLOE](../../../../configs/ppyoloe) and [ppvehicle](../../../../configs/ppvehicle) for details.
2. The multi-object tracking solution is [OC-SORT](https://arxiv.org/pdf/2203.14360.pdf), with PP-YOLOE replacing the original YOLOX as the detector and OCSORTTracker as the tracker; see [OC-SORT](../../../../configs/mot/ocsort) for details.
## Reference
```
@article{cao2022observation,
title={Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking},
author={Cao, Jinkun and Weng, Xinshuo and Khirodkar, Rawal and Pang, Jiangmiao and Kitani, Kris},
journal={arXiv preprint arXiv:2203.14360},
year={2022}
}
```
| PaddleDetection/deploy/pipeline/docs/tutorials/ppvehicle_mot.md/0 | {
"file_path": "PaddleDetection/deploy/pipeline/docs/tutorials/ppvehicle_mot.md",
"repo_id": "PaddleDetection",
"token_count": 5298
} | 53 |
English | [简体中文](README_cn.md)
# Real-time Multi-Object Tracking system PP-Tracking
PP-Tracking is the first open source real-time Multi-Object Tracking system, and it is based on PaddlePaddle deep learning framework. It has rich models, wide application and high efficiency deployment.
PP-Tracking supports two paradigms: single camera tracking (MOT) and multi-camera tracking (MTMCT). Aiming at the difficulties and pain points of actual business, PP-Tracking provides various MOT functions and applications such as pedestrian tracking, vehicle tracking, multi-class tracking, small object tracking, traffic statistics and multi-camera tracking. The deployment method supports API and GUI visual interface, and the deployment language supports Python and C++, The deployment platform environment supports Linux, NVIDIA Jetson, etc.
<div width="1000" align="center">
<img src="../../docs/images/pptracking_en.png"/>
</div>
<div width="1000" align="center">
<img src="https://user-images.githubusercontent.com/22989727/205546999-f847183d-73e5-4abe-9896-ce6a245efc79.gif"/>
<br>
Video source: VisDrone and BDD100K datasets</div>
</div>
## 1. Quick Start
### AI Studio public project case
PP-Tracking provides an AI Studio public project case. Please refer to this [tutorial](https://aistudio.baidu.com/aistudio/projectdetail/3022582).
### Python prediction and deployment
PP-Tracking supports Python prediction and deployment. Please refer to this [doc](python/README.md).
### C++ prediction and deployment
PP-Tracking supports C++ prediction and deployment. Please refer to this [doc](cpp/README.md).
### GUI prediction and deployment
PP-Tracking supports GUI prediction and deployment. Please refer to this [doc](https://github.com/yangyudong2020/PP-Tracking_GUi).
## 2. Model Zoo
PP-Tracking supports two paradigms: single camera tracking (MOT) and multi-camera tracking (MTMCT).
- Single-camera tracking supports the **FairMOT** and **DeepSORT** models; multi-camera tracking supports only **DeepSORT**.
- The applications of single-camera tracking include pedestrian tracking, vehicle tracking, multi-class tracking, small-object tracking and traffic counting. The models are mainly optimized on top of FairMOT to achieve real-time tracking, and PP-Tracking provides pretrained models for different application scenarios.
- In DeepSORT (including the DeepSORT used in multi-camera tracking), the selected detectors are PaddleDetection's self-developed high-performance detector [PP-YOLOv2](../../configs/ppyolo/) and lightweight detector [PP-PicoDet](../../configs/picodet/), and the selected ReID model is PaddleClas's self-developed ultra-lightweight backbone [PP-LCNet](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/models/PP-LCNet.md)
PP-Tracking provides multi-scenario pretrained models and exported models for deployment:
| Scene | Dataset | MOTA | Speed(FPS) | config | model weights | inference model |
| :---------: |:--------------- | :-------: | :------: | :------:|:-----: | :------------: |
| pedestrian | MOT17 | 65.3 | 23.9 | [config](../../configs/mot/fairmot/fairmot_hrnetv2_w18_dlafpn_30e_576x320.yml) | [download](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320.pdparams) | [download](https://bj.bcebos.com/v1/paddledet/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320.tar) |
| pedestrian(small objects) | VisDrone-pedestrian | 40.5| 8.35 | [config](../../configs/mot/pedestrian/fairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_pedestrian.yml) | [download](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_pedestrian.pdparams) | [download](https://bj.bcebos.com/v1/paddledet/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_pedestrian.tar) |
| vehicle | BDD100k-vehicle | 32.6 | 24.3 | [config](../../configs/mot/vehicle/fairmot_hrnetv2_w18_dlafpn_30e_576x320_bdd100kmot_vehicle.yml) | [download](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320_bdd100kmot_vehicle.pdparams)| [download](https://bj.bcebos.com/v1/paddledet/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320_bdd100kmot_vehicle.tar) |
| vehicle (small objects) | VisDrone-vehicle | 39.8 | 22.8 | [config](../../configs/mot/vehicle/fairmot_hrnetv2_w18_dlafpn_30e_576x320_visdrone_vehicle.yml) | [download](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320_visdrone_vehicle.pdparams) | [download](https://bj.bcebos.com/v1/paddledet/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320_visdrone_vehicle.tar) |
| multi-class | BDD100k | - | 12.5 | [config](../../configs/mot/mcfairmot/mcfairmot_hrnetv2_w18_dlafpn_30e_576x320_bdd100k_mcmot.yml) | [download](https://paddledet.bj.bcebos.com/models/mot/mcfairmot_hrnetv2_w18_dlafpn_30e_576x320_bdd100k_mcmot.pdparams) | [download](https://bj.bcebos.com/v1/paddledet/models/mot/mcfairmot_hrnetv2_w18_dlafpn_30e_576x320_bdd100k_mcmot.tar) |
| multi-class(small objects) | VisDrone | 20.4 | 6.74 | [config](../../configs/mot/mcfairmot/mcfairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone.yml) | [download](https://paddledet.bj.bcebos.com/models/mot/mcfairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone.pdparams) | [download](https://bj.bcebos.com/v1/paddledet/models/mot/mcfairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone.tar) |
**Note:**
1. The models were benchmarked on an **NVIDIA Jetson Xavier NX**; speed is measured with **TensorRT FP16** in an environment of CUDA 10.2, JetPack 4.5.1 and TensorRT 7.1.
2. `model weights` are the weights saved directly after PaddleDetection training. For more tracking model weights, please refer to the [model zoo](../../configs/mot/README.md#模型库); you can also train with the corresponding model config file to obtain the weights.
3. `inference model` is the exported model containing only forward parameters, since only these are required when deploying the PP-Tracking project. It can be downloaded according to the [model zoo](../../configs/mot/README.md#模型库), or you can train with the corresponding model config file and then export the weights. The exported files are `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info` and `model.pdmodel`, four files in total, generally packaged in tar format.
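As a sketch, an inference model can be exported from trained weights with the repo's export tool (the config and weight paths below are illustrative, taken from the pedestrian row of the table):
```
python tools/export_model.py -c configs/mot/fairmot/fairmot_hrnetv2_w18_dlafpn_30e_576x320.yml \
       -o weights=https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320.pdparams
```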
## Citations
```
@ARTICLE{9573394,
author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={Detection and Tracking Meet Drones Challenge},
year={2021},
volume={},
number={},
pages={1-1},
doi={10.1109/TPAMI.2021.3119563}
}
@InProceedings{bdd100k,
author = {Yu, Fisher and Chen, Haofeng and Wang, Xin and Xian, Wenqi and Chen,
Yingying and Liu, Fangchen and Madhavan, Vashisht and Darrell, Trevor},
title = {BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
@article{zhang2020fair,
title={FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking},
author={Zhang, Yifu and Wang, Chunyu and Wang, Xinggang and Zeng, Wenjun and Liu, Wenyu},
journal={arXiv preprint arXiv:2004.01888},
year={2020}
}
@inproceedings{Wojke2018deep,
title={Deep Cosine Metric Learning for Person Re-identification},
author={Wojke, Nicolai and Bewley, Alex},
booktitle={2018 IEEE Winter Conference on Applications of Computer Vision (WACV)},
year={2018},
pages={748--756},
organization={IEEE},
doi={10.1109/WACV.2018.00087}
}
```
| PaddleDetection/deploy/pptracking/README.md/0 | {
"file_path": "PaddleDetection/deploy/pptracking/README.md",
"repo_id": "PaddleDetection",
"token_count": 2879
} | 54 |
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include <algorithm>
#include <ctime>
#include <numeric>
#include <string>
#include <utility>
#include <vector>
#include "include/tracker.h"
namespace PaddleDetection {
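// Axis-aligned bounding box in pixel coordinates.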
struct Rect {
float left;
float top;
float right;
float bottom;
};
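// A single tracked object in one frame: track id, detection score, box,
// and class id (-1 when tracking a single class).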
struct MOTTrack {
int ids;
float score;
Rect rects;
int class_id = -1;
};
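// All tracks produced for a single frame.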
typedef std::vector<MOTTrack> MOTResult;
} // namespace PaddleDetection
| PaddleDetection/deploy/pptracking/cpp/include/utils.h/0 | {
"file_path": "PaddleDetection/deploy/pptracking/cpp/include/utils.h",
"repo_id": "PaddleDetection",
"token_count": 318
} | 55 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This code is based on https://github.com/nwojke/deep_sort/blob/master/deep_sort/track.py
"""
import datetime
__all__ = ['TrackState', 'Track']
class TrackState(object):
"""
Enumeration type for the single target track state. Newly created tracks are
classified as `tentative` until enough evidence has been collected. Then,
the track state is changed to `confirmed`. Tracks that are no longer alive
are classified as `deleted` to mark them for removal from the set of active
tracks.
"""
Tentative = 1
Confirmed = 2
Deleted = 3
class Track(object):
"""
A single target track with state space `(x, y, a, h)` and associated
velocities, where `(x, y)` is the center of the bounding box, `a` is the
aspect ratio and `h` is the height.
Args:
mean (ndarray): Mean vector of the initial state distribution.
covariance (ndarray): Covariance matrix of the initial state distribution.
track_id (int): A unique track identifier.
n_init (int): Number of consecutive detections before the track is confirmed.
The track state is set to `Deleted` if a miss occurs within the first
`n_init` frames.
max_age (int): The maximum number of consecutive misses before the track
state is set to `Deleted`.
cls_id (int): The category id of the tracked box.
score (float): The confidence score of the tracked box.
feature (Optional[ndarray]): Feature vector of the detection this track
originates from. If not None, this feature is added to the `features` cache.
Attributes:
hits (int): Total number of measurement updates.
        age (int): Total number of frames since first occurrence.
time_since_update (int): Total number of frames since last measurement
update.
state (TrackState): The current track state.
features (List[ndarray]): A cache of features. On each measurement update,
the associated feature vector is added to this list.
"""
def __init__(self,
mean,
covariance,
track_id,
n_init,
max_age,
cls_id,
score,
feature=None):
self.mean = mean
self.covariance = covariance
self.track_id = track_id
self.hits = 1
self.age = 1
self.time_since_update = 0
self.cls_id = cls_id
self.score = score
self.start_time = datetime.datetime.now()
self.state = TrackState.Tentative
self.features = []
self.feat = feature
if feature is not None:
self.features.append(feature)
self._n_init = n_init
self._max_age = max_age
def to_tlwh(self):
"""Get position in format `(top left x, top left y, width, height)`."""
ret = self.mean[:4].copy()
ret[2] *= ret[3]
ret[:2] -= ret[2:] / 2
return ret
def to_tlbr(self):
"""Get position in bounding box format `(min x, miny, max x, max y)`."""
ret = self.to_tlwh()
ret[2:] = ret[:2] + ret[2:]
return ret
def predict(self, kalman_filter):
"""
Propagate the state distribution to the current time step using a Kalman
filter prediction step.
"""
self.mean, self.covariance = kalman_filter.predict(self.mean,
self.covariance)
self.age += 1
self.time_since_update += 1
def update(self, kalman_filter, detection):
"""
Perform Kalman filter measurement update step and update the associated
detection feature cache.
"""
self.mean, self.covariance = kalman_filter.update(self.mean,
self.covariance,
detection.to_xyah())
self.features.append(detection.feature)
self.feat = detection.feature
self.cls_id = detection.cls_id
self.score = detection.score
self.hits += 1
self.time_since_update = 0
if self.state == TrackState.Tentative and self.hits >= self._n_init:
self.state = TrackState.Confirmed
def mark_missed(self):
"""Mark this track as missed (no association at the current time step).
"""
if self.state == TrackState.Tentative:
self.state = TrackState.Deleted
elif self.time_since_update > self._max_age:
self.state = TrackState.Deleted
def is_tentative(self):
"""Returns True if this track is tentative (unconfirmed)."""
return self.state == TrackState.Tentative
def is_confirmed(self):
"""Returns True if this track is confirmed."""
return self.state == TrackState.Confirmed
def is_deleted(self):
"""Returns True if this track is dead and should be deleted."""
return self.state == TrackState.Deleted
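# Typical lifecycle (a sketch; `kf` is a KalmanFilter and `det` a Detection from
# this package, both created by the caller):
#   track = Track(mean, covariance, track_id=1, n_init=3, max_age=30,
#                 cls_id=0, score=0.9, feature=det_feature)
#   track.predict(kf)      # once per frame
#   track.update(kf, det)  # when a detection is matched to this track
#   track.mark_missed()    # when no detection is matched this frame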
| PaddleDetection/deploy/pptracking/python/mot/tracker/base_sde_tracker.py/0 | {
"file_path": "PaddleDetection/deploy/pptracking/python/mot/tracker/base_sde_tracker.py",
"repo_id": "PaddleDetection",
"token_count": 2350
} | 56 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import paddle
import paddle.nn as nn
from scipy.special import softmax
from scipy.interpolate import InterpolatedUnivariateSpline
def line_iou(pred, target, img_w, length=15, aligned=True):
'''
Calculate the line iou value between predictions and targets
Args:
pred: lane predictions, shape: (num_pred, 72)
target: ground truth, shape: (num_target, 72)
img_w: image width
length: extended radius
aligned: True for iou loss calculation, False for pair-wise ious in assign
'''
px1 = pred - length
px2 = pred + length
tx1 = target - length
tx2 = target + length
if aligned:
invalid_mask = target
ovr = paddle.minimum(px2, tx2) - paddle.maximum(px1, tx1)
union = paddle.maximum(px2, tx2) - paddle.minimum(px1, tx1)
else:
num_pred = pred.shape[0]
invalid_mask = target.tile([num_pred, 1, 1])
ovr = (paddle.minimum(px2[:, None, :], tx2[None, ...]) - paddle.maximum(
px1[:, None, :], tx1[None, ...]))
union = (paddle.maximum(px2[:, None, :], tx2[None, ...]) -
paddle.minimum(px1[:, None, :], tx1[None, ...]))
invalid_masks = (invalid_mask < 0) | (invalid_mask >= img_w)
ovr[invalid_masks] = 0.
union[invalid_masks] = 0.
iou = ovr.sum(axis=-1) / (union.sum(axis=-1) + 1e-9)
return iou
class Lane:
def __init__(self, points=None, invalid_value=-2., metadata=None):
super(Lane, self).__init__()
self.curr_iter = 0
self.points = points
self.invalid_value = invalid_value
self.function = InterpolatedUnivariateSpline(
points[:, 1], points[:, 0], k=min(3, len(points) - 1))
self.min_y = points[:, 1].min() - 0.01
self.max_y = points[:, 1].max() + 0.01
self.metadata = metadata or {}
def __repr__(self):
return '[Lane]\n' + str(self.points) + '\n[/Lane]'
def __call__(self, lane_ys):
lane_xs = self.function(lane_ys)
lane_xs[(lane_ys < self.min_y) | (lane_ys > self.max_y
)] = self.invalid_value
return lane_xs
def to_array(self, sample_y_range, img_w, img_h):
self.sample_y = range(sample_y_range[0], sample_y_range[1],
sample_y_range[2])
sample_y = self.sample_y
img_w, img_h = img_w, img_h
ys = np.array(sample_y) / float(img_h)
xs = self(ys)
valid_mask = (xs >= 0) & (xs < 1)
lane_xs = xs[valid_mask] * img_w
lane_ys = ys[valid_mask] * img_h
lane = np.concatenate(
(lane_xs.reshape(-1, 1), lane_ys.reshape(-1, 1)), axis=1)
return lane
def __iter__(self):
return self
def __next__(self):
if self.curr_iter < len(self.points):
self.curr_iter += 1
return self.points[self.curr_iter - 1]
self.curr_iter = 0
raise StopIteration
class CLRNetPostProcess(object):
"""
Args:
input_shape (int): network input image size
ori_shape (int): ori image shape of before padding
scale_factor (float): scale factor of ori image
enable_mkldnn (bool): whether to open MKLDNN
"""
def __init__(self, img_w, ori_img_h, cut_height, conf_threshold, nms_thres,
max_lanes, num_points):
self.img_w = img_w
self.conf_threshold = conf_threshold
self.nms_thres = nms_thres
self.max_lanes = max_lanes
self.num_points = num_points
self.n_strips = num_points - 1
self.n_offsets = num_points
self.ori_img_h = ori_img_h
self.cut_height = cut_height
self.prior_ys = paddle.linspace(
start=1, stop=0, num=self.n_offsets).astype('float64')
def predictions_to_pred(self, predictions):
"""
Convert predictions to internal Lane structure for evaluation.
"""
lanes = []
for lane in predictions:
lane_xs = lane[6:].clone()
start = min(
max(0, int(round(lane[2].item() * self.n_strips))),
self.n_strips)
length = int(round(lane[5].item()))
end = start + length - 1
end = min(end, len(self.prior_ys) - 1)
if start > 0:
mask = ((lane_xs[:start] >= 0.) &
(lane_xs[:start] <= 1.)).cpu().detach().numpy()[::-1]
                mask = ~((mask.cumprod()[::-1]).astype(bool))
lane_xs[:start][mask] = -2
if end < len(self.prior_ys) - 1:
lane_xs[end + 1:] = -2
lane_ys = self.prior_ys[lane_xs >= 0].clone()
lane_xs = lane_xs[lane_xs >= 0]
lane_xs = lane_xs.flip(axis=0).astype('float64')
lane_ys = lane_ys.flip(axis=0)
lane_ys = (lane_ys *
(self.ori_img_h - self.cut_height) + self.cut_height
) / self.ori_img_h
if len(lane_xs) <= 1:
continue
points = paddle.stack(
x=(lane_xs.reshape([-1, 1]), lane_ys.reshape([-1, 1])),
axis=1).squeeze(axis=2)
lane = Lane(
points=points.cpu().numpy(),
metadata={
'start_x': lane[3],
'start_y': lane[2],
'conf': lane[1]
})
lanes.append(lane)
return lanes
def lane_nms(self, predictions, scores, nms_overlap_thresh, top_k):
"""
NMS for lane detection.
        predictions: paddle.Tensor [num_lanes, 77], each row [conf, y, x, length, 72 offsets]
scores: paddle.Tensor [num_lanes]
nms_overlap_thresh: float
top_k: int
"""
# sort by scores to get idx
idx = scores.argsort(descending=True)
keep = []
        candidates = predictions.clone()
        candidates = candidates.index_select(idx)
        while len(candidates) > 0:
            keep.append(idx[0])
            if len(keep) >= top_k or len(candidates) == 1:
                break
            ious = []
            for i in range(1, len(candidates)):
                ious.append(1 - line_iou(
                    candidates[i].unsqueeze(0),
                    candidates[0].unsqueeze(0),
                    img_w=self.img_w,
                    length=15))
            ious = paddle.to_tensor(ious)
            mask = ious <= nms_overlap_thresh
            id = paddle.where(mask == False)[0]
            if id.shape[0] == 0:
                break
            candidates = candidates[1:].index_select(id)
            idx = idx[1:].index_select(id)
keep = paddle.stack(keep)
return keep
def get_lanes(self, output, as_lanes=True):
"""
Convert model output to lanes.
"""
softmax = nn.Softmax(axis=1)
decoded = []
for predictions in output:
if len(predictions) == 0:
decoded.append([])
continue
threshold = self.conf_threshold
scores = softmax(predictions[:, :2])[:, 1]
keep_inds = scores >= threshold
predictions = predictions[keep_inds]
scores = scores[keep_inds]
if predictions.shape[0] == 0:
decoded.append([])
continue
nms_predictions = predictions.detach().clone()
nms_predictions = paddle.concat(
x=[nms_predictions[..., :4], nms_predictions[..., 5:]], axis=-1)
nms_predictions[..., 4] = nms_predictions[..., 4] * self.n_strips
nms_predictions[..., 5:] = nms_predictions[..., 5:] * (
self.img_w - 1)
keep = self.lane_nms(
nms_predictions[..., 5:],
scores,
nms_overlap_thresh=self.nms_thres,
top_k=self.max_lanes)
predictions = predictions.index_select(keep)
if predictions.shape[0] == 0:
decoded.append([])
continue
predictions[:, 5] = paddle.round(predictions[:, 5] * self.n_strips)
if as_lanes:
pred = self.predictions_to_pred(predictions)
else:
pred = predictions
decoded.append(pred)
return decoded
def __call__(self, lanes_list):
lanes = self.get_lanes(lanes_list)
return lanes
| PaddleDetection/deploy/python/clrnet_postprocess.py/0 | {
"file_path": "PaddleDetection/deploy/python/clrnet_postprocess.py",
"repo_id": "PaddleDetection",
"token_count": 4604
} | 57 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import division
import os
import cv2
import math
import numpy as np
import PIL
from PIL import Image, ImageDraw, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def imagedraw_textsize_c(draw, text):
if int(PIL.__version__.split('.')[0]) < 10:
tw, th = draw.textsize(text)
else:
left, top, right, bottom = draw.textbbox((0, 0), text)
tw, th = right - left, bottom - top
return tw, th
def visualize_box_mask(im, results, labels, threshold=0.5):
"""
Args:
im (str/np.ndarray): path of image/np.ndarray read by cv2
results (dict): include 'boxes': np.ndarray: shape:[N,6], N: number of box,
            matrix element: [class, score, x_min, y_min, x_max, y_max]
MaskRCNN's results include 'masks': np.ndarray:
shape:[N, im_h, im_w]
labels (list): labels:['class1', ..., 'classn']
threshold (float): Threshold of score.
Returns:
im (PIL.Image.Image): visualized image
"""
if isinstance(im, str):
im = Image.open(im).convert('RGB')
elif isinstance(im, np.ndarray):
im = Image.fromarray(im)
if 'masks' in results and 'boxes' in results and len(results['boxes']) > 0:
im = draw_mask(
im, results['boxes'], results['masks'], labels, threshold=threshold)
if 'boxes' in results and len(results['boxes']) > 0:
im = draw_box(im, results['boxes'], labels, threshold=threshold)
if 'segm' in results:
im = draw_segm(
im,
results['segm'],
results['label'],
results['score'],
labels,
threshold=threshold)
return im
def get_color_map_list(num_classes):
"""
Args:
num_classes (int): number of class
Returns:
color_map (list): RGB color list
"""
color_map = num_classes * [0, 0, 0]
for i in range(0, num_classes):
j = 0
lab = i
while lab:
color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j))
color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j))
color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j))
j += 1
lab >>= 3
color_map = [color_map[i:i + 3] for i in range(0, len(color_map), 3)]
return color_map
def draw_mask(im, np_boxes, np_masks, labels, threshold=0.5):
"""
Args:
im (PIL.Image.Image): PIL image
np_boxes (np.ndarray): shape:[N,6], N: number of box,
            matrix element: [class, score, x_min, y_min, x_max, y_max]
np_masks (np.ndarray): shape:[N, im_h, im_w]
labels (list): labels:['class1', ..., 'classn']
threshold (float): threshold of mask
Returns:
im (PIL.Image.Image): visualized image
"""
color_list = get_color_map_list(len(labels))
w_ratio = 0.4
alpha = 0.7
im = np.array(im).astype('float32')
clsid2color = {}
expect_boxes = (np_boxes[:, 1] > threshold) & (np_boxes[:, 0] > -1)
np_boxes = np_boxes[expect_boxes, :]
np_masks = np_masks[expect_boxes, :, :]
im_h, im_w = im.shape[:2]
np_masks = np_masks[:, :im_h, :im_w]
for i in range(len(np_masks)):
clsid, score = int(np_boxes[i][0]), np_boxes[i][1]
mask = np_masks[i]
if clsid not in clsid2color:
clsid2color[clsid] = color_list[clsid]
color_mask = clsid2color[clsid]
for c in range(3):
color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio * 255
idx = np.nonzero(mask)
color_mask = np.array(color_mask)
im[idx[0], idx[1], :] *= 1.0 - alpha
im[idx[0], idx[1], :] += alpha * color_mask
return Image.fromarray(im.astype('uint8'))
def draw_box(im, np_boxes, labels, threshold=0.5):
"""
Args:
im (PIL.Image.Image): PIL image
np_boxes (np.ndarray): shape:[N,6], N: number of box,
            matrix element: [class, score, x_min, y_min, x_max, y_max]
labels (list): labels:['class1', ..., 'classn']
threshold (float): threshold of box
Returns:
im (PIL.Image.Image): visualized image
"""
draw_thickness = min(im.size) // 320
draw = ImageDraw.Draw(im)
clsid2color = {}
color_list = get_color_map_list(len(labels))
expect_boxes = (np_boxes[:, 1] > threshold) & (np_boxes[:, 0] > -1)
np_boxes = np_boxes[expect_boxes, :]
for dt in np_boxes:
clsid, bbox, score = int(dt[0]), dt[2:], dt[1]
if clsid not in clsid2color:
clsid2color[clsid] = color_list[clsid]
color = tuple(clsid2color[clsid])
if len(bbox) == 4:
xmin, ymin, xmax, ymax = bbox
print('class_id:{:d}, confidence:{:.4f}, left_top:[{:.2f},{:.2f}],'
'right_bottom:[{:.2f},{:.2f}]'.format(
int(clsid), score, xmin, ymin, xmax, ymax))
# draw bbox
draw.line(
[(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin),
(xmin, ymin)],
width=draw_thickness,
fill=color)
elif len(bbox) == 8:
x1, y1, x2, y2, x3, y3, x4, y4 = bbox
draw.line(
[(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x1, y1)],
width=2,
fill=color)
xmin = min(x1, x2, x3, x4)
ymin = min(y1, y2, y3, y4)
# draw label
text = "{} {:.4f}".format(labels[clsid], score)
tw, th = imagedraw_textsize_c(draw, text)
draw.rectangle(
[(xmin + 1, ymin - th), (xmin + tw + 1, ymin)], fill=color)
draw.text((xmin + 1, ymin - th), text, fill=(255, 255, 255))
return im
def draw_segm(im,
np_segms,
np_label,
np_score,
labels,
threshold=0.5,
alpha=0.7):
"""
Draw segmentation on image
"""
mask_color_id = 0
w_ratio = .4
color_list = get_color_map_list(len(labels))
im = np.array(im).astype('float32')
clsid2color = {}
np_segms = np_segms.astype(np.uint8)
for i in range(np_segms.shape[0]):
mask, score, clsid = np_segms[i], np_score[i], np_label[i]
if score < threshold:
continue
if clsid not in clsid2color:
clsid2color[clsid] = color_list[clsid]
color_mask = clsid2color[clsid]
for c in range(3):
color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio * 255
idx = np.nonzero(mask)
color_mask = np.array(color_mask)
idx0 = np.minimum(idx[0], im.shape[0] - 1)
idx1 = np.minimum(idx[1], im.shape[1] - 1)
im[idx0, idx1, :] *= 1.0 - alpha
im[idx0, idx1, :] += alpha * color_mask
sum_x = np.sum(mask, axis=0)
x = np.where(sum_x > 0.5)[0]
sum_y = np.sum(mask, axis=1)
y = np.where(sum_y > 0.5)[0]
x0, x1, y0, y1 = x[0], x[-1], y[0], y[-1]
cv2.rectangle(im, (x0, y0), (x1, y1),
tuple(color_mask.astype('int32').tolist()), 1)
bbox_text = '%s %.2f' % (labels[clsid], score)
t_size = cv2.getTextSize(bbox_text, 0, 0.3, thickness=1)[0]
cv2.rectangle(im, (x0, y0), (x0 + t_size[0], y0 - t_size[1] - 3),
tuple(color_mask.astype('int32').tolist()), -1)
cv2.putText(
im,
bbox_text, (x0, y0 - 2),
cv2.FONT_HERSHEY_SIMPLEX,
0.3, (0, 0, 0),
1,
lineType=cv2.LINE_AA)
return Image.fromarray(im.astype('uint8'))
def get_color(idx):
idx = idx * 3
color = ((37 * idx) % 255, (17 * idx) % 255, (29 * idx) % 255)
return color
def visualize_pose(imgfile,
results,
visual_thresh=0.6,
save_name='pose.jpg',
save_dir='output',
returnimg=False,
ids=None):
try:
import matplotlib.pyplot as plt
import matplotlib
plt.switch_backend('agg')
except Exception as e:
        print('Matplotlib not found, please install matplotlib. '
              'For example: `pip install matplotlib`.')
raise e
skeletons, scores = results['keypoint']
skeletons = np.array(skeletons)
kpt_nums = 17
if len(skeletons) > 0:
kpt_nums = skeletons.shape[1]
if kpt_nums == 17: #plot coco keypoint
EDGES = [(0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 8),
(7, 9), (8, 10), (5, 11), (6, 12), (11, 13), (12, 14),
(13, 15), (14, 16), (11, 12)]
else: #plot mpii keypoint
EDGES = [(0, 1), (1, 2), (3, 4), (4, 5), (2, 6), (3, 6), (6, 7), (7, 8),
(8, 9), (10, 11), (11, 12), (13, 14), (14, 15), (8, 12),
(8, 13)]
NUM_EDGES = len(EDGES)
colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0], [0, 255, 0], \
[0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255], \
[170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]]
cmap = matplotlib.cm.get_cmap('hsv')
plt.figure()
img = cv2.imread(imgfile) if type(imgfile) == str else imgfile
color_set = results['colors'] if 'colors' in results else None
if 'bbox' in results and ids is None:
bboxs = results['bbox']
for j, rect in enumerate(bboxs):
xmin, ymin, xmax, ymax = rect
color = colors[0] if color_set is None else colors[color_set[j] %
len(colors)]
cv2.rectangle(img, (xmin, ymin), (xmax, ymax), color, 1)
canvas = img.copy()
for i in range(kpt_nums):
for j in range(len(skeletons)):
if skeletons[j][i, 2] < visual_thresh:
continue
if ids is None:
color = colors[i] if color_set is None else colors[color_set[j]
%
len(colors)]
else:
color = get_color(ids[j])
cv2.circle(
canvas,
tuple(skeletons[j][i, 0:2].astype('int32')),
2,
color,
thickness=-1)
to_plot = cv2.addWeighted(img, 0.3, canvas, 0.7, 0)
fig = matplotlib.pyplot.gcf()
stickwidth = 2
for i in range(NUM_EDGES):
for j in range(len(skeletons)):
edge = EDGES[i]
if skeletons[j][edge[0], 2] < visual_thresh or skeletons[j][edge[
1], 2] < visual_thresh:
continue
cur_canvas = canvas.copy()
X = [skeletons[j][edge[0], 1], skeletons[j][edge[1], 1]]
Y = [skeletons[j][edge[0], 0], skeletons[j][edge[1], 0]]
mX = np.mean(X)
mY = np.mean(Y)
length = ((X[0] - X[1])**2 + (Y[0] - Y[1])**2)**0.5
angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1]))
polygon = cv2.ellipse2Poly((int(mY), int(mX)),
(int(length / 2), stickwidth),
int(angle), 0, 360, 1)
if ids is None:
color = colors[i] if color_set is None else colors[color_set[j]
%
len(colors)]
else:
color = get_color(ids[j])
cv2.fillConvexPoly(cur_canvas, polygon, color)
canvas = cv2.addWeighted(canvas, 0.4, cur_canvas, 0.6, 0)
if returnimg:
return canvas
save_name = os.path.join(
save_dir, os.path.splitext(os.path.basename(imgfile))[0] + '_vis.jpg')
plt.imsave(save_name, canvas[:, :, ::-1])
print("keypoint visualize image saved to: " + save_name)
plt.close()
def visualize_attr(im, results, boxes=None, is_mtmct=False):
if isinstance(im, str):
im = Image.open(im)
im = np.ascontiguousarray(np.copy(im))
im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
else:
im = np.ascontiguousarray(np.copy(im))
im_h, im_w = im.shape[:2]
text_scale = max(0.5, im.shape[0] / 3000.)
text_thickness = 1
line_inter = im.shape[0] / 40.
for i, res in enumerate(results):
if boxes is None:
text_w = 3
text_h = 1
elif is_mtmct:
box = boxes[i] # multi camera, bbox shape is x,y, w,h
text_w = int(box[0]) + 3
text_h = int(box[1])
else:
box = boxes[i] # single camera, bbox shape is 0, 0, x,y, w,h
text_w = int(box[2]) + 3
text_h = int(box[3])
for text in res:
text_h += int(line_inter)
text_loc = (text_w, text_h)
cv2.putText(
im,
text,
text_loc,
cv2.FONT_ITALIC,
text_scale, (0, 255, 255),
thickness=text_thickness)
return im
def visualize_action(im,
mot_boxes,
action_visual_collector=None,
action_text="",
video_action_score=None,
video_action_text=""):
im = cv2.imread(im) if isinstance(im, str) else im
im_h, im_w = im.shape[:2]
text_scale = max(1, im.shape[1] / 400.)
text_thickness = 2
if action_visual_collector:
id_action_dict = {}
for collector, action_type in zip(action_visual_collector, action_text):
id_detected = collector.get_visualize_ids()
for pid in id_detected:
id_action_dict[pid] = id_action_dict.get(pid, [])
id_action_dict[pid].append(action_type)
for mot_box in mot_boxes:
# mot_box is a format with [mot_id, class, score, xmin, ymin, w, h]
if mot_box[0] in id_action_dict:
text_position = (int(mot_box[3] + mot_box[5] * 0.75),
int(mot_box[4] - 10))
display_text = ', '.join(id_action_dict[mot_box[0]])
cv2.putText(im, display_text, text_position,
cv2.FONT_HERSHEY_PLAIN, text_scale, (0, 0, 255), 2)
if video_action_score:
cv2.putText(
im,
video_action_text + ': %.2f' % video_action_score,
(int(im_w / 2), int(15 * text_scale) + 5),
cv2.FONT_ITALIC,
text_scale, (0, 0, 255),
thickness=text_thickness)
return im
def visualize_vehicleplate(im, results, boxes=None):
if isinstance(im, str):
im = Image.open(im)
im = np.ascontiguousarray(np.copy(im))
im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
else:
im = np.ascontiguousarray(np.copy(im))
im_h, im_w = im.shape[:2]
text_scale = max(1.0, im.shape[0] / 400.)
text_thickness = 2
line_inter = im.shape[0] / 40.
for i, res in enumerate(results):
if boxes is None:
text_w = 3
text_h = 1
else:
box = boxes[i]
text = res
if text == "":
continue
text_w = int(box[2])
text_h = int(box[5] + box[3])
text_loc = (text_w, text_h)
cv2.putText(
im,
"LP: " + text,
text_loc,
cv2.FONT_ITALIC,
text_scale, (0, 255, 255),
thickness=text_thickness)
return im
def draw_press_box_lanes(im, np_boxes, labels, threshold=0.5):
"""
Args:
im (PIL.Image.Image): PIL image
np_boxes (np.ndarray): shape:[N,6], N: number of box,
            matrix element: [class, score, x_min, y_min, x_max, y_max]
labels (list): labels:['class1', ..., 'classn']
threshold (float): threshold of box
Returns:
im (PIL.Image.Image): visualized image
"""
if isinstance(im, str):
im = Image.open(im).convert('RGB')
elif isinstance(im, np.ndarray):
im = Image.fromarray(im)
draw_thickness = min(im.size) // 320
draw = ImageDraw.Draw(im)
clsid2color = {}
color_list = get_color_map_list(len(labels))
if np_boxes.shape[1] == 7:
np_boxes = np_boxes[:, 1:]
expect_boxes = (np_boxes[:, 1] > threshold) & (np_boxes[:, 0] > -1)
np_boxes = np_boxes[expect_boxes, :]
for dt in np_boxes:
clsid, bbox, score = int(dt[0]), dt[2:], dt[1]
if clsid not in clsid2color:
clsid2color[clsid] = color_list[clsid]
color = tuple(clsid2color[clsid])
if len(bbox) == 4:
xmin, ymin, xmax, ymax = bbox
# draw bbox
draw.line(
[(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin),
(xmin, ymin)],
width=draw_thickness,
fill=(0, 0, 255))
elif len(bbox) == 8:
x1, y1, x2, y2, x3, y3, x4, y4 = bbox
draw.line(
[(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x1, y1)],
width=2,
fill=color)
xmin = min(x1, x2, x3, x4)
ymin = min(y1, y2, y3, y4)
# draw label
text = "{}".format(labels[clsid])
tw, th = imagedraw_textsize_c(draw, text)
draw.rectangle(
[(xmin + 1, ymax - th), (xmin + tw + 1, ymax)], fill=color)
draw.text((xmin + 1, ymax - th), text, fill=(0, 0, 255))
return im
def visualize_vehiclepress(im, results, threshold=0.5):
results = np.array(results)
labels = ['violation']
im = draw_press_box_lanes(im, results, labels, threshold=threshold)
return im
def visualize_lane(im, lanes):
if isinstance(im, str):
im = Image.open(im).convert('RGB')
elif isinstance(im, np.ndarray):
im = Image.fromarray(im)
draw_thickness = min(im.size) // 320
draw = ImageDraw.Draw(im)
if len(lanes) > 0:
for lane in lanes:
draw.line(
[(lane[0], lane[1]), (lane[2], lane[3])],
width=draw_thickness,
fill=(0, 0, 255))
return im
def visualize_vehicle_retrograde(im, mot_res, vehicle_retrograde_res):
if isinstance(im, str):
im = Image.open(im).convert('RGB')
elif isinstance(im, np.ndarray):
im = Image.fromarray(im)
draw_thickness = min(im.size) // 320
draw = ImageDraw.Draw(im)
lane = vehicle_retrograde_res['fence_line']
if lane is not None:
draw.line(
[(lane[0], lane[1]), (lane[2], lane[3])],
width=draw_thickness,
fill=(0, 0, 0))
mot_id = vehicle_retrograde_res['output']
if mot_id is None or len(mot_id) == 0:
return im
if mot_res is None:
return im
np_boxes = mot_res['boxes']
if np_boxes is not None:
for dt in np_boxes:
if dt[0] not in mot_id:
continue
bbox = dt[3:]
if len(bbox) == 4:
xmin, ymin, xmax, ymax = bbox
# draw bbox
draw.line(
[(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin),
(xmin, ymin)],
width=draw_thickness,
fill=(0, 255, 0))
# draw label
text = "retrograde"
tw, th = imagedraw_textsize_c(draw, text)
draw.rectangle(
[(xmax + 1, ymin - th), (xmax + tw + 1, ymin)],
fill=(0, 255, 0))
draw.text((xmax + 1, ymin - th), text, fill=(0, 255, 0))
return im
COLORS = [
(255, 0, 0),
(0, 255, 0),
(0, 0, 255),
(255, 255, 0),
(255, 0, 255),
(0, 255, 255),
(128, 255, 0),
(255, 128, 0),
(128, 0, 255),
(255, 0, 128),
(0, 128, 255),
(0, 255, 128),
(128, 255, 255),
(255, 128, 255),
(255, 255, 128),
(60, 180, 0),
(180, 60, 0),
(0, 60, 180),
(0, 180, 60),
(60, 0, 180),
(180, 0, 60),
(255, 0, 0),
(0, 255, 0),
(0, 0, 255),
(255, 255, 0),
(255, 0, 255),
(0, 255, 255),
(128, 255, 0),
(255, 128, 0),
(128, 0, 255),
]
def imshow_lanes(img, lanes, show=False, out_file=None, width=4):
lanes_xys = []
for _, lane in enumerate(lanes):
xys = []
for x, y in lane:
if x <= 0 or y <= 0:
continue
x, y = int(x), int(y)
xys.append((x, y))
lanes_xys.append(xys)
lanes_xys.sort(key=lambda xys: xys[0][0] if len(xys) > 0 else 0)
for idx, xys in enumerate(lanes_xys):
for i in range(1, len(xys)):
cv2.line(img, xys[i - 1], xys[i], COLORS[idx], thickness=width)
if show:
cv2.imshow('view', img)
cv2.waitKey(0)
if out_file:
if not os.path.exists(os.path.dirname(out_file)):
os.makedirs(os.path.dirname(out_file))
cv2.imwrite(out_file, img) | PaddleDetection/deploy/python/visualize.py/0 | {
"file_path": "PaddleDetection/deploy/python/visualize.py",
"repo_id": "PaddleDetection",
"token_count": 11921
} | 58 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import glob
import base64
import argparse
from paddle_serving_client import Client
from paddle_serving_client.proto import general_model_config_pb2 as m_config
import google.protobuf.text_format
parser = argparse.ArgumentParser(description="args for paddleserving")
parser.add_argument(
"--serving_client", type=str, help="the directory of serving_client")
parser.add_argument("--image_dir", type=str)
parser.add_argument("--image_file", type=str)
parser.add_argument("--http_port", type=int, default=9997)
parser.add_argument(
"--threshold", type=float, default=0.5, help="Threshold of score.")
args = parser.parse_args()
def get_test_images(infer_dir, infer_img):
"""
Get image path list in TEST mode
"""
assert infer_img is not None or infer_dir is not None, \
"--image_file or --image_dir should be set"
assert infer_img is None or os.path.isfile(infer_img), \
"{} is not a file".format(infer_img)
assert infer_dir is None or os.path.isdir(infer_dir), \
"{} is not a directory".format(infer_dir)
# infer_img has a higher priority
if infer_img and os.path.isfile(infer_img):
return [infer_img]
images = set()
infer_dir = os.path.abspath(infer_dir)
assert os.path.isdir(infer_dir), \
"infer_dir {} is not a directory".format(infer_dir)
exts = ['jpg', 'jpeg', 'png', 'bmp']
exts += [ext.upper() for ext in exts]
for ext in exts:
images.update(glob.glob('{}/*.{}'.format(infer_dir, ext)))
images = list(images)
assert len(images) > 0, "no image found in {}".format(infer_dir)
print("Found {} inference images in total.".format(len(images)))
return images
def postprocess(fetch_dict, fetch_vars, draw_threshold=0.5):
result = []
if "conv2d_441.tmp_1" in fetch_dict:
heatmap = fetch_dict["conv2d_441.tmp_1"]
print(heatmap)
result.append(heatmap)
else:
bboxes = fetch_dict[fetch_vars[0]]
for bbox in bboxes:
if bbox[0] > -1 and bbox[1] > draw_threshold:
print(f"{int(bbox[0])} {bbox[1]} "
f"{bbox[2]} {bbox[3]} {bbox[4]} {bbox[5]}")
result.append(f"{int(bbox[0])} {bbox[1]} "
f"{bbox[2]} {bbox[3]} {bbox[4]} {bbox[5]}")
return result
def get_model_vars(client_config_dir):
# read original serving_client_conf.prototxt
client_config_file = os.path.join(client_config_dir,
"serving_client_conf.prototxt")
with open(client_config_file, 'r') as f:
model_var = google.protobuf.text_format.Merge(
str(f.read()), m_config.GeneralModelConfig())
# modify feed_var to run core/general-server/op/
[model_var.feed_var.pop() for _ in range(len(model_var.feed_var))]
feed_var = m_config.FeedVar()
feed_var.name = "input"
feed_var.alias_name = "input"
feed_var.is_lod_tensor = False
feed_var.feed_type = 20
feed_var.shape.extend([1])
model_var.feed_var.extend([feed_var])
with open(
os.path.join(client_config_dir, "serving_client_conf_cpp.prototxt"),
"w") as f:
f.write(str(model_var))
# get feed_vars/fetch_vars
feed_vars = [var.name for var in model_var.feed_var]
fetch_vars = [var.name for var in model_var.fetch_var]
return feed_vars, fetch_vars
if __name__ == '__main__':
url = f"127.0.0.1:{args.http_port}"
logid = 10000
img_list = get_test_images(args.image_dir, args.image_file)
feed_vars, fetch_vars = get_model_vars(args.serving_client)
client = Client()
client.load_client_config(
os.path.join(args.serving_client, "serving_client_conf_cpp.prototxt"))
client.connect([url])
for img_file in img_list:
with open(img_file, 'rb') as file:
image_data = file.read()
image = base64.b64encode(image_data).decode('utf8')
fetch_dict = client.predict(
feed={feed_vars[0]: image}, fetch=fetch_vars)
result = postprocess(fetch_dict, fetch_vars, args.threshold)
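# Example invocation (a sketch; paths and port are illustrative):
#   python serving_client.py --serving_client ./serving_client \
#       --image_file demo.jpg --http_port 9997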
| PaddleDetection/deploy/serving/cpp/serving_client.py/0 | {
"file_path": "PaddleDetection/deploy/serving/cpp/serving_client.py",
"repo_id": "PaddleDetection",
"token_count": 1983
} | 59 |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import os
import pathlib
import re
import sys
import cv2
import math
from PIL import Image
import numpy as np
def resize_norm_img(img, image_shape, padding=True):
imgC, imgH, imgW = image_shape
img = cv2.resize(img, (imgW, imgH), interpolation=cv2.INTER_LINEAR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = np.transpose(img, [2, 0, 1]) / 255
img = np.expand_dims(img, 0)
img_mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
img_std = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))
img -= img_mean
img /= img_std
return img.astype(np.float32)
def create_header_file(name, tensor_name, tensor_data, output_path):
"""
This function generates a header file containing the data from the numpy array provided.
"""
file_path = pathlib.Path(f"{output_path}/" + name).resolve()
# Create header file with npy_data as a C array
raw_path = file_path.with_suffix(".h").resolve()
with open(raw_path, "a") as header_file:
header_file.write(
"\n" + f"const size_t {tensor_name}_len = {tensor_data.size};\n" +
f'__attribute__((section(".data.tvm"), aligned(16))) float {tensor_name}[] = '
)
header_file.write("{")
for i in np.ndindex(tensor_data.shape):
header_file.write(f"{tensor_data[i]}, ")
header_file.write("};\n\n")
def create_headers(image_name):
"""
This function generates C header files for the input and output arrays required to run inferences
"""
img_path = os.path.join("./", f"{image_name}")
# Resize image to 32x320
img = cv2.imread(img_path)
img = resize_norm_img(img, [3, 32, 320])
img_data = img.astype("float32")
    # Add the batch dimension, as we are expecting 4-dimensional input: NCHW.
img_data = np.expand_dims(img_data, axis=0)
if os.path.exists("./include/inputs.h"):
os.remove("./include/inputs.h")
if os.path.exists("./include/outputs.h"):
os.remove("./include/outputs.h")
# Create input header file
create_header_file("inputs", "input", img_data, "./include")
# Create output header file
output_data = np.zeros([8500], np.float32)
create_header_file(
"outputs",
"output0",
output_data,
"./include", )
output_data = np.zeros([170000], np.float32)
create_header_file(
"outputs",
"output1",
output_data,
"./include", )
if __name__ == "__main__":
create_headers(sys.argv[1])
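# Usage (a sketch; the image path is illustrative):
#   python convert_image.py path/to/image.jpg
# This writes ./include/inputs.h and ./include/outputs.h for the demo build.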
| PaddleDetection/deploy/third_engine/demo_avh/convert_image.py/0 | {
"file_path": "PaddleDetection/deploy/third_engine/demo_avh/convert_image.py",
"repo_id": "PaddleDetection",
"token_count": 1294
} | 60 |
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <sstream>
// for setprecision
#include <chrono>
#include <iomanip>
#include "keypoint_detector.h"
namespace PaddleDetection {
// Visualize keypoint detection results
cv::Mat VisualizeKptsResult(const cv::Mat& img,
const std::vector<KeyPointResult>& results,
const std::vector<int>& colormap,
float threshold) {
const int edge[][2] = {{0, 1},
{0, 2},
{1, 3},
{2, 4},
{3, 5},
{4, 6},
{5, 7},
{6, 8},
{7, 9},
{8, 10},
{5, 11},
{6, 12},
{11, 13},
{12, 14},
{13, 15},
{14, 16},
{11, 12}};
cv::Mat vis_img = img.clone();
for (int batchid = 0; batchid < results.size(); batchid++) {
for (int i = 0; i < results[batchid].num_joints; i++) {
if (results[batchid].keypoints[i * 3] > threshold) {
int x_coord = int(results[batchid].keypoints[i * 3 + 1]);
int y_coord = int(results[batchid].keypoints[i * 3 + 2]);
cv::circle(vis_img,
cv::Point2d(x_coord, y_coord),
1,
cv::Scalar(0, 0, 255),
2);
}
}
for (int i = 0; i < results[batchid].num_joints; i++) {
if (results[batchid].keypoints[edge[i][0] * 3] > threshold &&
results[batchid].keypoints[edge[i][1] * 3] > threshold) {
int x_start = int(results[batchid].keypoints[edge[i][0] * 3 + 1]);
int y_start = int(results[batchid].keypoints[edge[i][0] * 3 + 2]);
int x_end = int(results[batchid].keypoints[edge[i][1] * 3 + 1]);
int y_end = int(results[batchid].keypoints[edge[i][1] * 3 + 2]);
cv::line(vis_img,
cv::Point2d(x_start, y_start),
cv::Point2d(x_end, y_end),
colormap[i],
1);
}
}
}
return vis_img;
}
void KeyPointDetector::Postprocess(std::vector<float>& output,
std::vector<int>& output_shape,
std::vector<int>& idxout,
std::vector<int>& idx_shape,
std::vector<KeyPointResult>* result,
std::vector<std::vector<float>>& center_bs,
std::vector<std::vector<float>>& scale_bs) {
std::vector<float> preds(output_shape[1] * 3, 0);
for (int batchid = 0; batchid < output_shape[0]; batchid++) {
get_final_preds(output,
output_shape,
idxout,
idx_shape,
center_bs[batchid],
scale_bs[batchid],
preds,
batchid,
this->use_dark());
KeyPointResult result_item;
result_item.num_joints = output_shape[1];
result_item.keypoints.clear();
for (int i = 0; i < output_shape[1]; i++) {
result_item.keypoints.emplace_back(preds[i * 3]);
result_item.keypoints.emplace_back(preds[i * 3 + 1]);
result_item.keypoints.emplace_back(preds[i * 3 + 2]);
}
result->push_back(result_item);
}
}
void KeyPointDetector::Predict(const std::vector<cv::Mat> imgs,
std::vector<std::vector<float>>& center_bs,
std::vector<std::vector<float>>& scale_bs,
std::vector<KeyPointResult>* result) {
int batch_size = imgs.size();
KeyPointDet_interpreter->resizeTensor(input_tensor,
{batch_size, 3, in_h, in_w});
KeyPointDet_interpreter->resizeSession(KeyPointDet_session);
auto insize = 3 * in_h * in_w;
// Preprocess image
cv::Mat resized_im;
for (int bs_idx = 0; bs_idx < batch_size; bs_idx++) {
cv::Mat im = imgs.at(bs_idx);
cv::resize(im, resized_im, cv::Size(in_w, in_h));
std::shared_ptr<MNN::CV::ImageProcess> pretreat(
MNN::CV::ImageProcess::create(
MNN::CV::BGR, MNN::CV::RGB, mean_vals, 3, norm_vals, 3));
pretreat->convert(
resized_im.data, in_w, in_h, resized_im.step[0], input_tensor);
}
// Run predictor
auto inference_start = std::chrono::steady_clock::now();
KeyPointDet_interpreter->runSession(KeyPointDet_session);
// Get output tensor
auto out_tensor = KeyPointDet_interpreter->getSessionOutput(
KeyPointDet_session, "conv2d_441.tmp_1");
auto nchwoutTensor = new Tensor(out_tensor, Tensor::CAFFE);
out_tensor->copyToHostTensor(nchwoutTensor);
auto output_shape = nchwoutTensor->shape();
// Calculate output length
int output_size = 1;
for (int j = 0; j < output_shape.size(); ++j) {
output_size *= output_shape[j];
}
output_data_.resize(output_size);
std::copy_n(nchwoutTensor->host<float>(), output_size, output_data_.data());
delete nchwoutTensor;
auto idx_tensor = KeyPointDet_interpreter->getSessionOutput(
KeyPointDet_session, "argmax_0.tmp_0");
auto idxhostTensor = new Tensor(idx_tensor, Tensor::CAFFE);
idx_tensor->copyToHostTensor(idxhostTensor);
auto idx_shape = idxhostTensor->shape();
// Calculate output length
output_size = 1;
for (int j = 0; j < idx_shape.size(); ++j) {
output_size *= idx_shape[j];
}
idx_data_.resize(output_size);
std::copy_n(idxhostTensor->host<int>(), output_size, idx_data_.data());
delete idxhostTensor;
auto inference_end = std::chrono::steady_clock::now();
std::chrono::duration<double> elapsed = inference_end - inference_start;
printf("keypoint inference time: %f s\n", elapsed.count());
// Postprocessing result
Postprocess(output_data_,
output_shape,
idx_data_,
idx_shape,
result,
center_bs,
scale_bs);
}
} // namespace PaddleDetection
| PaddleDetection/deploy/third_engine/demo_mnn_kpts/keypoint_detector.cpp/0 | {
"file_path": "PaddleDetection/deploy/third_engine/demo_mnn_kpts/keypoint_detector.cpp",
"repo_id": "PaddleDetection",
"token_count": 3408
} | 61 |
# PicoDet ONNX Runtime Demo
This folder provides a demo of deploying PicoDet and running inference on images with [ONNX Runtime](https://onnxruntime.ai/docs/).
## Install ONNX Runtime
This demo is based on ONNX Runtime 1.10.0, which can be installed directly with:
```shell
pip install onnxruntime
```
For detailed installation steps, see [Install ONNX Runtime](https://onnxruntime.ai/docs/install/).
## Inference images
- Prepare the test model: following the "Export and Convert Model" steps in [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet), export the model with post-processing included (`-o export.benchmark=False`) and generate the simplified ONNX model to be tested (it can also be downloaded directly from the links below). Then create an ```onnx_file``` folder in this directory and place the exported ONNX model in it.
- Prepare the test images: put the images to be tested in the ```./imgs``` folder; this demo already provides two test images.
- Run directly in this directory:
```shell
python infer_demo.py --modelpath ./onnx_file/picodet_s_320_lcnet_postprocessed.onnx
```
All images in the ```./imgs``` folder will be detected, and the results will be saved in the ```./results``` folder.
- Results:
<div align="center">
<img src="../../../docs/images/bus.jpg" height="300px" ><img src="../../../docs/images/dog.jpg" height="300px" >
</div>
## Model Download
| Model | Input size | ONNX (w/ post-processing) |
| :-------- | :--------: | :---------------------: |
| PicoDet-XS | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_lcnet_postprocessed.onnx) |
| PicoDet-XS | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_lcnet_postprocessed.onnx) |
| PicoDet-S | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_lcnet_postprocessed.onnx) |
| PicoDet-S | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_lcnet_postprocessed.onnx) |
| PicoDet-M | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_lcnet_postprocessed.onnx) |
| PicoDet-M | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_lcnet_postprocessed.onnx) |
| PicoDet-L | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_lcnet_postprocessed.onnx) |
| PicoDet-L | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_lcnet_postprocessed.onnx) |
| PicoDet-L | 640*640 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_lcnet_postprocessed.onnx) |
| PaddleDetection/deploy/third_engine/demo_onnxruntime/README.md/0 | {
"file_path": "PaddleDetection/deploy/third_engine/demo_onnxruntime/README.md",
"repo_id": "PaddleDetection",
"token_count": 1391
} | 62 |
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <sstream>
// for setprecision
#include <chrono>
#include <iomanip>
#include "keypoint_detector.h"
namespace PaddleDetection {
// Visualization of MaskDetector results
cv::Mat VisualizeKptsResult(const cv::Mat& img,
const std::vector<KeyPointResult>& results,
const std::vector<int>& colormap,
float threshold) {
const int edge[][2] = {{0, 1},
{0, 2},
{1, 3},
{2, 4},
{3, 5},
{4, 6},
{5, 7},
{6, 8},
{7, 9},
{8, 10},
{5, 11},
{6, 12},
{11, 13},
{12, 14},
{13, 15},
{14, 16},
{11, 12}};
cv::Mat vis_img = img.clone();
for (int batchid = 0; batchid < results.size(); batchid++) {
for (int i = 0; i < results[batchid].num_joints; i++) {
if (results[batchid].keypoints[i * 3] > threshold) {
int x_coord = int(results[batchid].keypoints[i * 3 + 1]);
int y_coord = int(results[batchid].keypoints[i * 3 + 2]);
cv::circle(vis_img,
cv::Point2d(x_coord, y_coord),
1,
cv::Scalar(0, 0, 255),
2);
}
}
for (int i = 0; i < results[batchid].num_joints; i++) {
if (results[batchid].keypoints[edge[i][0] * 3] > threshold &&
results[batchid].keypoints[edge[i][1] * 3] > threshold) {
int x_start = int(results[batchid].keypoints[edge[i][0] * 3 + 1]);
int y_start = int(results[batchid].keypoints[edge[i][0] * 3 + 2]);
int x_end = int(results[batchid].keypoints[edge[i][1] * 3 + 1]);
int y_end = int(results[batchid].keypoints[edge[i][1] * 3 + 2]);
cv::line(vis_img,
cv::Point2d(x_start, y_start),
cv::Point2d(x_end, y_end),
colormap[i],
1);
}
}
}
return vis_img;
}
void KeyPointDetector::Postprocess(std::vector<float>& output,
std::vector<uint64_t>& output_shape,
std::vector<float>& idxout,
std::vector<uint64_t>& idx_shape,
std::vector<KeyPointResult>* result,
std::vector<std::vector<float>>& center_bs,
std::vector<std::vector<float>>& scale_bs) {
std::vector<float> preds(output_shape[1] * 3, 0);
for (int batchid = 0; batchid < output_shape[0]; batchid++) {
get_final_preds(output,
output_shape,
idxout,
idx_shape,
center_bs[batchid],
scale_bs[batchid],
preds,
batchid,
this->use_dark());
KeyPointResult result_item;
result_item.num_joints = output_shape[1];
result_item.keypoints.clear();
for (int i = 0; i < output_shape[1]; i++) {
result_item.keypoints.emplace_back(preds[i * 3]);
result_item.keypoints.emplace_back(preds[i * 3 + 1]);
result_item.keypoints.emplace_back(preds[i * 3 + 2]);
}
result->push_back(result_item);
}
}
void KeyPointDetector::Predict(const std::vector<cv::Mat> imgs,
std::vector<std::vector<float>>& center_bs,
std::vector<std::vector<float>>& scale_bs,
std::vector<KeyPointResult>* result) {
int batch_size = imgs.size();
auto insize = 3 * in_h * in_w;
InferenceEngine::Blob::Ptr input_blob = infer_request_.GetBlob(input_name_);
// Preprocess image
InferenceEngine::MemoryBlob::Ptr mblob =
InferenceEngine::as<InferenceEngine::MemoryBlob>(input_blob);
if (!mblob) {
THROW_IE_EXCEPTION
<< "We expect blob to be inherited from MemoryBlob in matU8ToBlob, "
<< "but by fact we were not able to cast inputBlob to MemoryBlob";
}
auto mblobHolder = mblob->wmap();
float* blob_data = mblobHolder.as<float*>();
cv::Mat resized_im;
for (int bs_idx = 0; bs_idx < batch_size; bs_idx++) {
cv::Mat im = imgs.at(bs_idx);
cv::resize(im, resized_im, cv::Size(in_w, in_h));
for (size_t c = 0; c < 3; c++) {
for (size_t h = 0; h < in_h; h++) {
for (size_t w = 0; w < in_w; w++) {
blob_data[c * in_w * in_h + h * in_w + w] =
(float)resized_im.at<cv::Vec3b>(h, w)[c];
}
}
}
}
// Run predictor
auto inference_start = std::chrono::steady_clock::now();
// do inference
infer_request_.Infer();
InferenceEngine::Blob::Ptr output_blob =
infer_request_.GetBlob("conv2d_441.tmp_1");
auto output_shape = output_blob->getTensorDesc().getDims();
InferenceEngine::MemoryBlob::Ptr moutput =
InferenceEngine::as<InferenceEngine::MemoryBlob>(output_blob);
if (moutput) {
// locked memory holder should be alive all time while access to its
// buffer happens
auto minputHolder = moutput->rmap();
auto data = minputHolder.as<const InferenceEngine::PrecisionTrait<
InferenceEngine::Precision::FP32>::value_type*>();
// Calculate output length
int output_size = 1;
for (int j = 0; j < output_shape.size(); ++j) {
output_size *= output_shape[j];
}
output_data_.resize(output_size);
std::copy_n(data, output_size, output_data_.data());
}
InferenceEngine::Blob::Ptr output_blob2 =
infer_request_.GetBlob("argmax_0.tmp_0");
auto idx_shape = output_blob2->getTensorDesc().getDims();
InferenceEngine::MemoryBlob::Ptr moutput2 =
InferenceEngine::as<InferenceEngine::MemoryBlob>(output_blob2);
if (moutput2) {
// locked memory holder should be alive all time while access to its
// buffer happens
auto minputHolder = moutput2->rmap();
// Original I64 precision was converted to I32
auto data = minputHolder.as<const InferenceEngine::PrecisionTrait<
InferenceEngine::Precision::FP32>::value_type*>();
// Calculate output length
int output_size = 1;
for (int j = 0; j < idx_shape.size(); ++j) {
output_size *= idx_shape[j];
}
idx_data_.resize(output_size);
std::copy_n(data, output_size, idx_data_.data());
}
auto inference_end = std::chrono::steady_clock::now();
std::chrono::duration<double> elapsed = inference_end - inference_start;
printf("keypoint inference time: %f s\n", elapsed.count());
// Postprocessing result
Postprocess(output_data_,
output_shape,
idx_data_,
idx_shape,
result,
center_bs,
scale_bs);
}
} // namespace PaddleDetection
| PaddleDetection/deploy/third_engine/demo_openvino_kpts/keypoint_detector.cpp/0 | {
"file_path": "PaddleDetection/deploy/third_engine/demo_openvino_kpts/keypoint_detector.cpp",
"repo_id": "PaddleDetection",
"token_count": 3732
} | 63 |
# How to Create Model Algorithm
In order to make better use of PaddleDetection, we will introduce the main model technical details and application of PaddleDetection in this document
## Directory
- [How to Create Model Algorithm](#how-to-create-model-algorithm)
- [Directory](#directory)
- [1. Introduction](#1-introduction)
- [2. Create Model](#2-create-model)
- [2.1 Create Model Structure](#21-create-model-structure)
- [2.1.1 Create Backbone](#211-create-backbone)
- [2.1.2 Create Neck](#212-create-neck)
- [2.1.3 Create Head](#213-create-head)
- [2.1.4 Create Loss](#214-create-loss)
- [2.1.5 Create Post-processing Module](#215-create-post-processing-module)
- [2.1.6 Create Architecture](#216-create-architecture)
- [2.2 Create Configuration File](#22-create-configuration-file)
- [2.2.1 Network Structure Configuration File](#221-network-structure-configuration-file)
- [2.2.2 Optimizer configuration file](#222-optimizer-configuration-file)
- [2.2.3 Reader Configuration File](#223-reader-configuration-file)
### 1. Introduction
Each model in PaddleDetection corresponds to a folder. In the case of YOLOv3, models in the YOLOv3 family correspond to the `configs/yolov3` folder, and the general configuration file of YOLOv3-DarkNet is `configs/yolov3/yolov3_darknet53_270e_coco.yml`.
```
_BASE_: [
'../datasets/coco_detection.yml', # Dataset configuration file shared by all models
'../runtime.yml', # Runtime configuration
'_base_/optimizer_270e.yml', # Optimizer related configuration
'_base_/yolov3_darknet53.yml', # yolov3 Network structure configuration file
'_base_/yolov3_reader.yml', # yolov3 Reader module configuration
]
# The relevant configuration defined here can override the configuration of the same name in the above file
snapshot_epoch: 5
weights: output/yolov3_darknet53_270e_coco/model_final
```
As you can see, apart from the shared dataset configuration and the runtime configuration, the modules in the configuration file are clearly divided into optimizer, network structure, and reader modules. PaddleDetection supports rich optimizers, learning rate schedules, preprocessing operators, etc., so most of the time you don't need to write optimizer- or reader-related code; just configure them in the configuration file. Therefore, the main task when adding a new model is to build the network structure.
In `ppdet/modeling/`, all of the PaddleDetection network structures are defined and combined in the form of components. The main components of the network structure are as follows:
```
ppdet/modeling/
├── architectures
│ ├── faster_rcnn.py # Faster Rcnn model
│ ├── ssd.py # SSD model
│ ├── yolo.py # YOLOv3 model
│ │ ...
├── heads # detection head module
│ ├── xxx_head.py # define various detection heads
│ ├── roi_extractor.py # detection of region of interest extraction
├── backbones # backbone network module
│ ├── resnet.py # ResNet network
│ ├── mobilenet.py # MobileNet network
│ │ ...
├── losses # loss function module
│ ├── xxx_loss.py # define and register various loss functions
├── necks # feature fusion module
│ ├── xxx_fpn.py # define various FPN modules
├── proposal_generator # anchor & proposal generate and match modules
│ ├── anchor_generator.py # anchor generate modules
│ ├── proposal_generator.py # proposal generate modules
│ ├── target.py # anchor & proposal Matching function
│ ├── target_layer.py # anchor & proposal Matching function
├── tests # unit test module
│ ├── test_xxx.py # the operator and module structure in the network are unit tested
├── ops.py # encapsulates common detection components/operators implemented with PaddlePaddle
├── layers.py # encapsulates and registers common detection components/operators
├── bbox_utils.py # encapsulates box-related functions
├── post_process.py # encapsulates and registers post-processing modules
├── shape_spec.py # defines the class modules use to describe their output shape
```

### 2. Create Model
Next, the modeling process is described in detail by taking the single-stage detector YOLOv3 as an example, so that you can quickly build a new model according to this idea.
#### 2.1 Create Model Structure
##### 2.1.1 Create Backbone
All existing backbone network code in PaddleDetection is placed under the `ppdet/modeling/backbones` directory, so we create `darknet.py` as follows:
```python
import paddle.nn as nn
from ppdet.core.workspace import register, serializable
@register
@serializable
class DarkNet(nn.Layer):
__shared__ = ['norm_type']
def __init__(self,
depth=53,
return_idx=[2, 3, 4],
norm_type='bn',
norm_decay=0.):
super(DarkNet, self).__init__()
# Omit the content
def forward(self, inputs):
        # processing logic omitted
pass
@property
def out_shape(self):
# Omit the content
pass
```
Then add a reference to `backbones/__init__.py`:
```python
from . import darknet
from .darknet import *
```
**A few notes:**
- To flexibly configure networks in the YAML configuration file, all backbones need to be registered in `ppdet.core.workspace` as shown in the example above. In addition, `serializable` makes the backbone support serialization;
- All backbones need to inherit the `paddle.nn.Layer` class and implement the forward function. In addition, the `out_shape` property must be implemented to define the channel information of the output feature maps. For details, please refer to the source code.
- `__shared__` enables global sharing of configuration parameters; these parameters can be shared by all registered modules, such as backbone, neck, head, and loss.
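For reference, the omitted `out_shape` property is typically built from the `ShapeSpec` class listed in `shape_spec.py` above; a simplified sketch (assuming the backbone records its output channels in a `self._out_channels` list, which is an illustrative name) looks like:
```python
from ..shape_spec import ShapeSpec

@property
def out_shape(self):
    # one ShapeSpec per returned feature map, describing its channel count
    return [ShapeSpec(channels=c) for c in self._out_channels]
```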
##### 2.1.2 Create Neck
The feature fusion module is placed under the `ppdet/modeling/necks` directory and we create the following `yolo_fpn.py`:
``` python
import paddle.nn as nn
from ppdet.core.workspace import register, serializable
@register
@serializable
class YOLOv3FPN(nn.Layer):
__shared__ = ['norm_type']
def __init__(self,
in_channels=[256, 512, 1024],
norm_type='bn'):
super(YOLOv3FPN, self).__init__()
# Omit the content
def forward(self, blocks):
# Omit the content
pass
@classmethod
def from_config(cls, cfg, input_shape):
# Omit the content
pass
@property
def out_shape(self):
# Omit the content
pass
```
Then add a reference to `necks/__init__.py`:
```python
from . import yolo_fpn
from .yolo_fpn import *
```
**A few notes:**
- The neck module needs to be registered with `register` and can be serialized with `serializable`.
- The neck module needs to inherit the `paddle.nn.Layer` class and implement the forward function. In addition, the `out_shape` attribute needs to be implemented to define the channel information of the output feature map, and the class function `from_config` needs to be implemented to deduce the input channel in the configuration file and initialize `YOLOv3FPN`.
- The neck module can use `__shared__` to implement global sharing of configuration parameters.
##### 2.1.3 Create Head
The head module is all stored in the `ppdet/modeling/heads` directory, where we create `yolo_head.py` as follows
``` python
import paddle.nn as nn
from ppdet.core.workspace import register
@register
class YOLOv3Head(nn.Layer):
__shared__ = ['num_classes']
__inject__ = ['loss']
def __init__(self,
anchors=[[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45],[59, 119],
[116, 90], [156, 198], [373, 326]],
anchor_masks=[[6, 7, 8], [3, 4, 5], [0, 1, 2]],
num_classes=80,
loss='YOLOv3Loss',
iou_aware=False,
iou_aware_factor=0.4):
super(YOLOv3Head, self).__init__()
# Omit the content
def forward(self, feats, targets=None):
# Omit the content
pass
```
Then add a reference to `heads/__init__.py`:
```python
from . import yolo_head
from .yolo_head import *
```
**A few notes:**
- The head module needs to register with `register`.
- The head module needs to inherit the `paddle.nn.Layer` class and implement the forward function.
- `__inject__` imports modules that are already encapsulated in the global dictionary, such as the loss.
##### 2.1.4 Create Loss
The loss modules are all stored under `ppdet/modeling/losses` directory, where we created `yolo_loss.py`
```python
import paddle.nn as nn
from ppdet.core.workspace import register
@register
class YOLOv3Loss(nn.Layer):
__inject__ = ['iou_loss', 'iou_aware_loss']
__shared__ = ['num_classes']
def __init__(self,
num_classes=80,
ignore_thresh=0.7,
label_smooth=False,
downsample=[32, 16, 8],
scale_x_y=1.,
iou_loss=None,
iou_aware_loss=None):
super(YOLOv3Loss, self).__init__()
# Omit the content
def forward(self, inputs, targets, anchors):
# Omit the content
pass
```
Then add a reference to `losses/__init__.py`:
```python
from . import yolo_loss
from .yolo_loss import *
```
**A few notes:**
- The loss module needs to register with `register`.
- The loss module needs to inherit the `paddle.nn.Layer` class and implement the forward function.
- `__inject__` makes modules that have been encapsulated in the global dictionary available. Some parameters can be globally shared with the `__shared__` configuration.
##### 2.1.5 Create Post-processing Module
The post-processing module is defined in `ppdet/modeling/post_process.py`, where the `BBoxPostProcess` class is defined for post-processing operations, as follows:
``` python
from ppdet.core.workspace import register
@register
class BBoxPostProcess(object):
__shared__ = ['num_classes']
__inject__ = ['decode', 'nms']
def __init__(self, num_classes=80, decode=None, nms=None):
# Omit the content
pass
def __call__(self, head_out, rois, im_shape, scale_factor):
# Omit the content
pass
```
**A few notes:**
- Post-processing modules need to be registered with `register`
- `__inject__` imports modules encapsulated in the global dictionary, such as decode and NMS. Decode and NMS are defined in `ppdet/modeling/layers.py`.
##### 2.1.6 Create Architecture
All architecture code is placed in the `ppdet/modeling/architectures` directory; `meta_arch.py` defines the `BaseArch` class, whose code is as follows:
``` python
import paddle.nn as nn
from ppdet.core.workspace import register
@register
class BaseArch(nn.Layer):
def __init__(self):
super(BaseArch, self).__init__()
def forward(self, inputs):
self.inputs = inputs
self.model_arch()
if self.training:
out = self.get_loss()
else:
out = self.get_pred()
return out
def model_arch(self, ):
pass
def get_loss(self, ):
raise NotImplementedError("Should implement get_loss method!")
def get_pred(self, ):
raise NotImplementedError("Should implement get_pred method!")
```
Every architecture needs to inherit from the `BaseArch` class; for example, `YOLOv3` is defined in `yolo.py` as follows:
``` python
@register
class YOLOv3(BaseArch):
__category__ = 'architecture'
__inject__ = ['post_process']
def __init__(self,
backbone='DarkNet',
neck='YOLOv3FPN',
yolo_head='YOLOv3Head',
post_process='BBoxPostProcess'):
super(YOLOv3, self).__init__()
self.backbone = backbone
self.neck = neck
self.yolo_head = yolo_head
self.post_process = post_process
@classmethod
def from_config(cls, cfg, *args, **kwargs):
# Omit the content
pass
def get_loss(self):
# Omit the content
pass
def get_pred(self):
# Omit the content
pass
```
**A few notes:**
- Every architecture needs to be registered with `register`
- When constructing a complete network, `__category__ = 'architecture'` must be set to represent a complete object detection model;
- Backbone, neck, YOLO head, post-processing and other detection components are passed into the architecture to form the final network. Modularizing detection like this improves the reusability of detection models; multiple models can be obtained by combining different detection components.
- The `from_config` class function implements the automatic configuration of channels when modules are combined, as the sketch below illustrates.
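Taking `YOLOv3.from_config` as an example, the omitted body essentially creates each sub-module in order and feeds the `out_shape` of the previous module into the next one. A simplified sketch (argument handling trimmed; see `ppdet/modeling/architectures/yolo.py` for the exact code):
```python
from ppdet.core.workspace import create

@classmethod
def from_config(cls, cfg, *args, **kwargs):
    # build sub-modules in order, feeding each one's out_shape into the next
    backbone = create(cfg['backbone'])
    neck = create(cfg['neck'], input_shape=backbone.out_shape)
    yolo_head = create(cfg['yolo_head'], input_shape=neck.out_shape)
    # the returned dict is passed to __init__ as keyword arguments
    return {'backbone': backbone, 'neck': neck, 'yolo_head': yolo_head}
```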
#### 2.2 Create Configuration File
##### 2.2.1 Network Structure Configuration File
The configuration of the YOLOv3 network structure is defined in the `configs/yolov3/_base_/` folder. For example, `yolov3_darknet53.yml` defines the network structure of YOLOv3-DarkNet as follows:
```
architecture: YOLOv3
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/DarkNet53_pretrained.pdparams
norm_type: sync_bn
YOLOv3:
backbone: DarkNet
neck: YOLOv3FPN
yolo_head: YOLOv3Head
post_process: BBoxPostProcess
DarkNet:
depth: 53
return_idx: [2, 3, 4]
# use default config
# YOLOv3FPN:
YOLOv3Head:
anchors: [[10, 13], [16, 30], [33, 23],
[30, 61], [62, 45], [59, 119],
[116, 90], [156, 198], [373, 326]]
anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
loss: YOLOv3Loss
YOLOv3Loss:
ignore_thresh: 0.7
downsample: [32, 16, 8]
label_smooth: false
BBoxPostProcess:
decode:
name: YOLOBox
conf_thresh: 0.005
downsample_ratio: 32
clip_bbox: true
nms:
name: MultiClassNMS
keep_top_k: 100
score_threshold: 0.01
nms_threshold: 0.45
nms_top_k: 1000
```
In the configuration file, `architecture` specifies the network architecture, `pretrain_weights` specifies the URL or path of the pretrained model, and `norm_type` is shared as a global parameter. The model definition proceeds top-down in the file, corresponding to the model components in the previous section. For some model components, if the default parameters are used, you do not need to configure them, such as `YOLOv3FPN` above. By changing the related configuration, we can easily compose another model; for example, `configs/yolov3/_base_/yolov3_mobilenet_v1.yml` switches the backbone from DarkNet to MobileNet.
##### 2.2.2 Optimizer configuration file
The optimizer profile defines the optimizer used by the model and the learning rate scheduling strategy. Currently, a variety of optimizers and learning rate strategies have been integrated in PaddleDetection, as described in the code `ppdet/optimizer.py`. For example, the optimizer configuration file for yolov3 is defined in `configs/yolov3/_base_/optimizer_270e.yml` as follows:
```
epoch: 270
LearningRate:
base_lr: 0.001
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones:
# epoch number
- 216
- 243
- !LinearWarmup
start_factor: 0.
steps: 4000
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005
type: L2
```
**A few notes:**
- `OptimizerBuilder.optimizer` specifies the type and parameters of the optimizer. For the optimizers currently supported, please refer to the [PaddlePaddle official documentation](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/optimizer/Overview_cn.html)
- `LearningRate.schedulers` sets the combination of different learning rate adjustment strategies. Paddle supports a variety of learning rate adjustment strategies; for details, see the [PaddlePaddle official documentation](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/optimizer/Overview_cn.html). Note that the learning rate strategies from Paddle need a simple wrapper, which can be found in the source code `ppdet/optimizer.py`.
##### 2.2.3 Reader Configuration File
For Reader configuration, see [Reader configuration documentation](./READER_en.md#5.Configuration-and-Operation).
> After reading this document, you should have some experience with model construction and configuration in PaddleDetection, and reading the source code will deepen that understanding. If you have other questions or suggestions about the model techniques, please open an issue; we welcome your feedback.
| PaddleDetection/docs/advanced_tutorials/MODEL_TECHNICAL_en.md/0 | {
"file_path": "PaddleDetection/docs/advanced_tutorials/MODEL_TECHNICAL_en.md",
"repo_id": "PaddleDetection",
"token_count": 5807
} | 64 |
简体中文 | [English](./pphuman_attribute_en.md)
# Customized Pedestrian Attribute Recognition
## Data Preparation
### Data Format
The annotation follows the PA100K attribute format, with 26 attribute positions in total.
The names, positions, and number of classes of these 26 attributes are listed in the table below.
| Attribute | index | length |
|:----------|:----------|:----------|
| 'Hat','Glasses' | [0, 1] | 2 |
| 'ShortSleeve','LongSleeve','UpperStride','UpperLogo','UpperPlaid','UpperSplice' | [2, 3, 4, 5, 6, 7] | 6 |
| 'LowerStripe','LowerPattern','LongCoat','Trousers','Shorts','Skirt&Dress' | [8, 9, 10, 11, 12, 13] | 6 |
| 'boots' | [14, ] | 1 |
| 'HandBag','ShoulderBag','Backpack','HoldObjectsInFront' | [15, 16, 17, 18] | 4 |
| 'AgeOver60', 'Age18-60', 'AgeLess18' | [19, 20, 21] | 3 |
| 'Female' | [22, ] | 1 |
| 'Front','Side','Back' | [23, 24, 25] | 3 |
Example:
[0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
In the first group, positions [0, 1] take the values [0, 1], meaning 'no hat' and 'has glasses'.
In the second group, position [22, ] takes the value [0, ], meaning the gender attribute is 'male'; otherwise it is 'female'.
In the third group, positions [23, 24, 25] take the values [0, 1, 0], meaning the orientation attribute is 'side'.
The other groups follow the same pattern.
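To make the encoding concrete, here is a minimal, hypothetical decoding sketch (not part of the pipeline code) that turns one 26-element vector into readable labels, using the positions from the table above:
```
import numpy as np

def decode_attributes(res, threshold=0.5):
    labels = []
    labels.append('Hat' if res[0] > threshold else 'No hat')
    labels.append('Glasses' if res[1] > threshold else 'No glasses')
    # N-choose-1 group: pick the position with the highest score
    # (note: the postprocess code later in this document lists the age labels
    # in the opposite order; follow whichever matches your training labels)
    age_list = ['AgeOver60', 'Age18-60', 'AgeLess18']
    labels.append(age_list[int(np.argmax(res[19:22]))])
    labels.append('Female' if res[22] > threshold else 'Male')
    direct_list = ['Front', 'Side', 'Back']
    labels.append(direct_list[int(np.argmax(res[23:26]))])
    return labels

vec = [0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0]
print(decode_attributes(vec))  # ['No hat', 'Glasses', 'Age18-60', 'Male', 'Side']
```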
### Data Annotation
Once the meaning of the attribute annotation format above is understood, data annotation can begin. The essence is: each single-person image gets an annotation array of length 26, whose entries correspond one-to-one to the attribute values at the 26 positions.
Example:
For an original image,
1) Use detection boxes to annotate the position of every person in the image.
2) Each detection box (i.e., each person) carries a 26-element attribute array, where every element is 0 or 1, corresponding to the 26 attributes above. For example, if the person is 'Female', the value at position [22] is 1; if the person satisfies 'Age18-60', positions [19, 20, 21] take the values [0, 1, 0]; if 'AgeOver60', they take [1, 0, 0].
After annotation, crop each person into a single-person image using its detection box and associate the cropped image with its 26-element attribute annotation. You can also crop the single-person images first and then annotate them; the result is the same.
## Model Training
Once the data is annotated, it can be used to train the model and complete the optimization of the customized model.
Two main steps need to be done: 1) organize the images and annotations into the training format; 2) modify the configuration file and start training.
### Training Data Format
The training data consists of the training images and a training list train.txt, whose locations are specified in the training configuration; an example layout is shown below:
```
Attribute/
|-- data           training image folder
| |-- 00001.jpg
| |-- 00002.jpg
| `-- 0000x.jpg
`-- train.txt      training data list
```
Each line of train.txt contains the name of one training image (its path relative to the root) plus the 26 annotation values.
Each line therefore corresponds to one person image and its annotation, in the format:
```
00001.jpg 0,0,1,0,....
```
Note: 1) the image path and the annotation values are separated by a Tab character [\t]; 2) the annotation values are separated by commas [,]. This format must be followed exactly, otherwise parsing will fail.
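As a sanity check, a small hypothetical helper (not part of PaddleClas) can verify the Tab/comma convention:
```
def check_train_list(path, num_attrs=26):
    with open(path) as f:
        for idx, line in enumerate(f, 1):
            parts = line.rstrip('\n').split('\t')   # image and labels are Tab-separated
            assert len(parts) == 2, f'line {idx}: expected exactly one Tab separator'
            labels = parts[1].split(',')            # labels are comma-separated
            assert len(labels) == num_attrs, f'line {idx}: expected {num_attrs} labels'
            assert all(v in ('0', '1') for v in labels), f'line {idx}: labels must be 0/1'

check_train_list('dataset/pa100k/train_list.txt')
```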
### Modify the Configuration to Start Training
First run the following command to download the training code (for environment issues, please refer to [Install_PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/installation/install_paddleclas_en.md)):
```shell
git clone https://github.com/PaddlePaddle/PaddleClas
```
In the configuration file `PaddleClas/blob/develop/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml`, the items to modify are as follows:
```
DataLoader:
Train:
dataset:
name: MultiLabelDataset
image_root: "dataset/pa100k/" #指定训练图片所在根路径
cls_label_path: "dataset/pa100k/train_list.txt" #指定训练列表文件位置
label_ratio: True
transform_ops:
Eval:
dataset:
name: MultiLabelDataset
image_root: "dataset/pa100k/" #指定评估图片所在根路径
cls_label_path: "dataset/pa100k/val_list.txt" #指定评估列表文件位置
label_ratio: True
transform_ops:
```
Note:
1. The full path of an image is image_root plus the relative image path recorded in train.txt.
2. If you change the number of attributes, you also need to modify the number of attribute classes in the configuration:
```
# model architecture
Arch:
name: "PPLCNet_x1_0"
pretrained: True
use_ssld: True
  class_num: 26  # number of attribute classes
```
Then run the following command to start training.
```
# multi-GPU training
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml
# single-GPU training
python3 tools/train.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml
```
After training finishes, run the following command to evaluate performance:
```
# multi-GPU evaluation
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/eval.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
-o Global.pretrained_model=./output/PPLCNet_x1_0/best_model
# single-GPU evaluation
python3 tools/eval.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
-o Global.pretrained_model=./output/PPLCNet_x1_0/best_model
```
### Model Export
Use the following command to export the trained model as an inference model.
```
python3 tools/export_model.py \
-c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
-o Global.pretrained_model=output/PPLCNet_x1_0/best_model \
-o Global.save_inference_dir=deploy/models/PPLCNet_x1_0_person_attribute_infer
```
After exporting the model, download the [infer_cfg.yml](https://bj.bcebos.com/v1/paddledet/models/pipeline/infer_cfg.yml) file and place it into the exported model folder `PPLCNet_x1_0_person_attribute_infer`.
To use it, set the `model_dir` entry to the new model path in the PP-Human configuration file `./deploy/pipeline/config/infer_cfg_pphuman.yml`, and turn the function on with `enable: True`.
```
ATTR:
  model_dir: [YOUR_DEPLOY_MODEL_DIR]/PPLCNet_x1_0_person_attribute_infer/  # path of the newly exported model
  enable: True  # enable the function
```
Then it can be used in the pipeline. At this point, the customized attribute recognition task is complete.
## Adding or Removing Attributes
The annotation and training process above takes 26 attributes as an example.
If you need to increase or decrease the number of attributes, you need to:
1) add or remove the corresponding attribute category information during annotation;
2) modify the number and names of the attributes used in train.txt accordingly;
3) modify the training configuration, e.g. the number of attributes in the ``PaddleClas/blob/develop/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml`` file; see the `Modify the Configuration to Start Training` section above for details.
Example of adding an attribute:
1. When annotating data, append the new attribute values after the 26 existing positions;
2. Also append the new attribute values to the annotation values in the train.txt file;
3. Note that the mapping between an attribute type and its position in the annotation value list of train.txt must stay fixed. For example, positions [19, 20, 21] represent age, so all images must use positions [19, 20, 21] for age, and so on.
<div width="500" align="center">
<img src="../../images/add_attribute.png"/>
</div>
Removing attributes works in the same way.
For example, if the age attribute is not needed, the values at positions [19, 20, 21] can be removed: simply delete the 19th-21st values from the 26 annotated values in train.txt, and these 3 attribute values no longer need to be annotated when labeling data.
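For instance, a one-off hypothetical script along these lines strips the age columns from an existing list:
```
with open('train_list.txt') as src, open('train_list_no_age.txt', 'w') as dst:
    for line in src:
        name, labels = line.rstrip('\n').split('\t')
        vals = labels.split(',')
        del vals[19:22]  # drop the three age positions [19, 20, 21]
        dst.write(name + '\t' + ','.join(vals) + '\n')
```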
## Modify the Post-Processing Code
After the attribute definitions are modified, the post-processing part of the pipeline also needs corresponding changes, which mainly affect the displayed labels when visualizing results.
The relevant code is in the `postprocess` function of `deploy/pipeline/pphuman/attr_infer.py`.
The implementation is explained below:
```
# entry point of the post-processing
def postprocess(self, inputs, result):
# postprocess output of predictor
im_results = result['output']
        # 1) define the actual meaning of each attribute group; counts and positions map one-to-one to the bits of the output
labels = self.pred_config.labels
age_list = ['AgeLess18', 'Age18-60', 'AgeOver60']
direct_list = ['Front', 'Side', 'Back']
bag_list = ['HandBag', 'ShoulderBag', 'Backpack']
upper_list = ['UpperStride', 'UpperLogo', 'UpperPlaid', 'UpperSplice']
lower_list = [
'LowerStripe', 'LowerPattern', 'LongCoat', 'Trousers', 'Shorts',
'Skirt&Dress'
]
        # 2) some attributes need thresholds that differ clearly from the common value, so they are set separately
glasses_threshold = 0.3
hold_threshold = 0.6
batch_res = []
for res in im_results:
res = res.tolist()
label_res = []
# gender
            # 3) single-position attribute: compare the score at that position with the threshold to get the binary result
gender = 'Female' if res[22] > self.threshold else 'Male'
label_res.append(gender)
# age
            # 4) multi-position attribute (choose one of N): pick the attribute with the highest score
age = age_list[np.argmax(res[19:22])]
label_res.append(age)
# direction
direction = direct_list[np.argmax(res[23:])]
label_res.append(direction)
# glasses
glasses = 'Glasses: '
if res[1] > glasses_threshold:
glasses += 'True'
else:
glasses += 'False'
label_res.append(glasses)
# hat
hat = 'Hat: '
if res[0] > self.threshold:
hat += 'True'
else:
hat += 'False'
label_res.append(hat)
# hold obj
hold_obj = 'HoldObjectsInFront: '
if res[18] > hold_threshold:
hold_obj += 'True'
else:
hold_obj += 'False'
label_res.append(hold_obj)
# bag
bag = bag_list[np.argmax(res[15:18])]
bag_score = res[15 + np.argmax(res[15:18])]
bag_label = bag if bag_score > self.threshold else 'No bag'
label_res.append(bag_label)
# upper
            # 5) attributes of the same kind split into two sub-groups (here style and pattern); each sub-group is selected independently, effectively two different attributes.
upper_label = 'Upper:'
sleeve = 'LongSleeve' if res[3] > res[2] else 'ShortSleeve'
upper_label += ' {}'.format(sleeve)
upper_res = res[4:8]
if np.max(upper_res) > self.threshold:
upper_label += ' {}'.format(upper_list[np.argmax(upper_res)])
label_res.append(upper_label)
# lower
lower_res = res[8:14]
lower_label = 'Lower: '
has_lower = False
for i, l in enumerate(lower_res):
if l > self.threshold:
lower_label += ' {}'.format(lower_list[i])
has_lower = True
if not has_lower:
lower_label += ' {}'.format(lower_list[np.argmax(lower_res)])
label_res.append(lower_label)
# shoe
shoe = 'Boots' if res[14] > self.threshold else 'No boots'
label_res.append(shoe)
batch_res.append(label_res)
result = {'output': batch_res}
return result
```
| PaddleDetection/docs/advanced_tutorials/customization/pphuman_attribute.md/0 | {
"file_path": "PaddleDetection/docs/advanced_tutorials/customization/pphuman_attribute.md",
"repo_id": "PaddleDetection",
"token_count": 6881
} | 65 |
English | [简体中文](DistributedTraining_cn.md)
## 1. Usage
### 1.1 Single-machine
* Take PP-YOLOE-s as an example: after preparing the data locally, use the `paddle.distributed.launch` or `fleetrun` interface to start the training task. Below is an example of the run script.
```bash
fleetrun \
--selected_gpu 0,1,2,3,4,5,6,7 \
tools/train.py -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml \
--eval &>logs.txt 2>&1 &
```
### 1.2 Multi-machine
* Compared with single-machine training, when training on multiple machines you only need to add the `--ips` parameter, which is the IP list of the machines participating in distributed training, separated by commas. Below is an example.
```shell
ip_list="10.127.6.17,10.127.5.142,10.127.45.13,10.127.44.151"
fleetrun \
--ips=${ip_list} \
--selected_gpu 0,1,2,3,4,5,6,7 \
tools/train.py -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml \
--eval &>logs.txt 2>&1 &
```
**Note:**
* The IP addresses of the different machines need to be separated by commas and can be viewed through `ifconfig` or `ipconfig`.
* Password-free SSH login needs to be set up between the machines, and they must be able to ping each other directly; otherwise the communication cannot be completed.
* The code, data, and run commands or scripts must be consistent across all machines, and the training command or script must be started on every machine. The first device of the first machine in `ip_list` is trainer0, and so on.
* The starting port of different machines may be different. It is recommended to set the same starting port on every machine before launching the multi-machine task. The command is `export FLAGS_START_PORT=17000`, and a port value in the range `10000~20000` is recommended.
## 2. Performance
* We conducted model training on 3x8 V100 GPUs. Accuracy, training time, and multi-machine acceleration ratio of different models are shown below.
| Model | Dataset | Configuration | 8 GPU training time / Accuracy | 3x8 GPU training time / Accuracy | Acceleration ratio |
|:---------:|:--------:|:--------:|:--------:|:--------:|:------:|
| PP-YOLOE-s | Objects365 | [ppyoloe_crn_s_300e_coco.yml](../../configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml) | 301h/- | 162h/17.7% | **1.85** |
| PP-YOLOE-l | Objects365 | [ppyoloe_crn_l_300e_coco.yml](../../configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml) | 401h/- | 178h/30.3% | **2.25** |
* We conducted model training on 4x8 V100 GPUs. Accuracy, training time, and multi-machine acceleration ratio of different models are shown below.
| Model | Dataset | Configuration | 8 GPU training time / Accuracy | 4x8 GPU training time / Accuracy | Acceleration ratio |
|:---------:|:--------:|:--------:|:--------:|:--------:|:------:|
| PP-YOLOE-s | COCO | [ppyoloe_crn_s_300e_coco.yml](../../configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml) | 39h/42.7% | 13h/42.1% | **3.0** |
| PP-YOLOE-m | Objects365 | [ppyoloe_crn_m_300e_coco.yml](../../configs/ppyoloe/ppyoloe_crn_m_300e_coco.yml) | 337h/- | 112h/24.6% | **3.0** |
| PP-YOLOE-x | Objects365 | [ppyoloe_crn_x_300e_coco.yml](../../configs/ppyoloe/ppyoloe_crn_x_300e_coco.yml) | 464h/- | 125h/32.1% | **3.4** |
* **Note**
* When the number of GPUs used for training is too large, the accuracy will drop slightly (about 1%). In this case, you can try to warm up the training process or add some training epochs to reduce the loss.
* The configuration files here are provided based on COCO datasets. If you need to train on other datasets, you need to modify the dataset path.
* For multi-machine training of the `PP-YOLOE` series, the per-card batch size is set to 8 and the learning rate is the same as in single-machine training.
| PaddleDetection/docs/tutorials/DistributedTraining_en.md/0 | {
"file_path": "PaddleDetection/docs/tutorials/DistributedTraining_en.md",
"repo_id": "PaddleDetection",
"token_count": 1305
} | 66 |
# Multi Scale Test Configuration
Tags: Configuration
---
```yaml
##################################### Multi scale test configuration #####################################
EvalReader:
sample_transforms:
- Decode: {}
- MultiscaleTestResize: {origin_target_size: [800, 1333], target_size: [700 , 900]}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
TestReader:
sample_transforms:
- Decode: {}
- MultiscaleTestResize: {origin_target_size: [800, 1333], target_size: [700 , 900]}
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
- Permute: {}
```
---
Multi Scale Test is a TTA (Test Time Augmentation) method that can improve object detection performance.
The input image is scaled into different sizes, the model generates predictions (bboxes) at each scale, and finally all the predictions are combined into the final prediction. (Here **NMS** is used to aggregate the predictions.)
## _MultiscaleTestResize_ option
`MultiscaleTestResize` option is used to enable multi scale test prediction.
`origin_target_size: [800, 1333]` means the input image is first resized so that its short edge is 800 pixels, with the longest edge capped at 1333
The `target_size: [700 , 900]` property specifies the additional test scales.
It can be plugged into the evaluation or test (inference) process by adding a `MultiscaleTestResize` entry to `EvalReader.sample_transforms` or `TestReader.sample_transforms`
---
### Note
Currently only CascadeRCNN, FasterRCNN and MaskRCNN support multi scale testing, and the batch size must be 1.
"file_path": "PaddleDetection/docs/tutorials/config_annotation/multi_scale_test_config.md",
"repo_id": "PaddleDetection",
"token_count": 503
} | 67 |
# Logging
This document talks about how to track metrics and visualize model performance during training. The library currently supports [VisualDL](https://www.paddlepaddle.org.cn/documentation/docs/en/guides/03_VisualDL/visualdl_usage_en.html) and [Weights & Biases](https://docs.wandb.ai).
## VisualDL
Logging to VisualDL is supported only in Python >= 3.5. To install VisualDL:
```
pip install visualdl
```
PaddleDetection uses a callback to log the training metrics at the end of every step and metrics from the validation step at the end of every epoch. To use VisualDL for visualization, add the `--use_vdl` flag to the training command and `--vdl_log_dir <logs>` to set the directory which stores the records.
For example
```
python tools/train.py -c config.yml --use_vdl --vdl_log_dir ./logs
```
Another possible way to do this is to add the aforementioned flags to the `config.yml` file.
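For example (assuming the YAML keys are the flag names without the leading dashes, mirroring the W&B section below):
```
use_vdl: True
vdl_log_dir: ./logs
```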
## Weights & Biases
W&B is a MLOps tool that can be used for experiment tracking, dataset/model versioning, visualizing results and collaborating with colleagues. A W&B logger is integrated directly into PaddleDetection and to use it, first you need to install the wandb sdk and login to your wandb account.
```
pip install wandb
wandb login
```
To use wandb to log metrics while training, add the `--use_wandb` flag to the training command; any other arguments for the W&B logger can be provided like this -
```
python tools/train.py -c config.yml --use_wandb -o wandb-project=MyDetector wandb-entity=MyTeam wandb-save_dir=./logs
```
The arguments to the W&B logger must be proceeded by `-o` and each invidiual argument must contain the prefix "wandb-".
If this is too tedious, an alternative way is to add the arguments to the `config.yml` file under the `wandb` header. For example
```
use_wandb: True
wandb:
project: MyProject
entity: MyTeam
save_dir: ./logs
```
| PaddleDetection/docs/tutorials/logging_en.md/0 | {
"file_path": "PaddleDetection/docs/tutorials/logging_en.md",
"repo_id": "PaddleDetection",
"token_count": 564
} | 68 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
SIZE_UNIT = ['K', 'M', 'G', 'T']
SHM_QUERY_CMD = 'df -h'
SHM_KEY = 'shm'
SHM_DEFAULT_MOUNT = '/dev/shm'
# [ shared memory size check ]
# In detection models, image/target data occupies a lot of memory and
# will occupy lots of shared memory in a multi-process DataLoader. We use
# the following code to get the shared memory size and perform a size check,
# disabling shared memory use if the shared memory size is not enough.
# Shared memory getting process as follows:
# 1. use `df -h` get all mount info
# 2. pick up spaces whose mount info contains 'shm'
# 3. if 'shm' space number is only 1, return its size
# 4. if there are multiple 'shm' spaces, try to find the default mount
#    directory '/dev/shm' in Linux-like systems, otherwise return the
#    biggest space size.
def _parse_size_in_M(size_str):
if size_str[-1] == 'B':
num, unit = size_str[:-2], size_str[-2]
else:
num, unit = size_str[:-1], size_str[-1]
assert unit in SIZE_UNIT, \
"unknown shm size unit {}".format(unit)
return float(num) * \
(1024 ** (SIZE_UNIT.index(unit) - 1))
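# Examples: _parse_size_in_M('512M') -> 512.0, _parse_size_in_M('64G') -> 65536.0;
# a trailing 'B' (e.g. '64GB') is stripped before the unit lookup.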
def _get_shared_memory_size_in_M():
try:
df_infos = os.popen(SHM_QUERY_CMD).readlines()
except:
return None
else:
shm_infos = []
for df_info in df_infos:
info = df_info.strip()
if info.find(SHM_KEY) >= 0:
shm_infos.append(info.split())
if len(shm_infos) == 0:
return None
elif len(shm_infos) == 1:
return _parse_size_in_M(shm_infos[0][3])
else:
default_mount_infos = [
si for si in shm_infos if si[-1] == SHM_DEFAULT_MOUNT
]
if default_mount_infos:
return _parse_size_in_M(default_mount_infos[0][3])
else:
return max([_parse_size_in_M(si[3]) for si in shm_infos])
| PaddleDetection/ppdet/data/shm_utils.py/0 | {
"file_path": "PaddleDetection/ppdet/data/shm_utils.py",
"repo_id": "PaddleDetection",
"token_count": 1030
} | 69 |
import numpy as np
import imgaug.augmenters as iaa
from .operators import BaseOperator, register_op
from ppdet.utils.logger import setup_logger
from ppdet.data.culane_utils import linestrings_to_lanes, transform_annotation
logger = setup_logger(__name__)
__all__ = [
"CULaneTrainProcess", "CULaneDataProcess", "HorizontalFlip",
"ChannelShuffle", "CULaneAffine", "CULaneResize", "OneOfBlur",
"MultiplyAndAddToBrightness", "AddToHueAndSaturation"
]
def trainTransforms(img_h, img_w):
transforms = [{
'name': 'Resize',
'parameters': dict(size=dict(
height=img_h, width=img_w)),
'p': 1.0
}, {
'name': 'HorizontalFlip',
'parameters': dict(p=1.0),
'p': 0.5
}, {
'name': 'ChannelShuffle',
'parameters': dict(p=1.0),
'p': 0.1
}, {
'name': 'MultiplyAndAddToBrightness',
'parameters': dict(
mul=(0.85, 1.15), add=(-10, 10)),
'p': 0.6
}, {
'name': 'AddToHueAndSaturation',
'parameters': dict(value=(-10, 10)),
'p': 0.7
}, {
'name': 'OneOf',
'transforms': [
dict(
name='MotionBlur', parameters=dict(k=(3, 5))), dict(
name='MedianBlur', parameters=dict(k=(3, 5)))
],
'p': 0.2
}, {
'name': 'Affine',
'parameters': dict(
translate_percent=dict(
x=(-0.1, 0.1), y=(-0.1, 0.1)),
rotate=(-10, 10),
scale=(0.8, 1.2)),
'p': 0.7
}, {
'name': 'Resize',
'parameters': dict(size=dict(
height=img_h, width=img_w)),
'p': 1.0
}]
return transforms
@register_op
class CULaneTrainProcess(BaseOperator):
def __init__(self, img_w, img_h):
super(CULaneTrainProcess, self).__init__()
self.img_w = img_w
self.img_h = img_h
self.transforms = trainTransforms(self.img_h, self.img_w)
if self.transforms is not None:
img_transforms = []
for aug in self.transforms:
p = aug['p']
if aug['name'] != 'OneOf':
img_transforms.append(
iaa.Sometimes(
p=p,
then_list=getattr(iaa, aug['name'])(**aug[
'parameters'])))
else:
img_transforms.append(
iaa.Sometimes(
p=p,
then_list=iaa.OneOf([
getattr(iaa, aug_['name'])(**aug_['parameters'])
for aug_ in aug['transforms']
])))
else:
img_transforms = []
self.iaa_transform = iaa.Sequential(img_transforms)
def apply(self, sample, context=None):
img, line_strings, seg = self.iaa_transform(
image=sample['image'],
line_strings=sample['lanes'],
segmentation_maps=sample['mask'])
sample['image'] = img
sample['lanes'] = line_strings
sample['mask'] = seg
return sample
@register_op
class CULaneDataProcess(BaseOperator):
def __init__(self, img_w, img_h, num_points, max_lanes):
super(CULaneDataProcess, self).__init__()
self.img_w = img_w
self.img_h = img_h
self.num_points = num_points
self.n_offsets = num_points
self.n_strips = num_points - 1
self.strip_size = self.img_h / self.n_strips
self.max_lanes = max_lanes
self.offsets_ys = np.arange(self.img_h, -1, -self.strip_size)
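        # y positions of the lane strips, sampled from the image bottom (y = img_h) up to the top (y = 0)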
def apply(self, sample, context=None):
data = {}
line_strings = sample['lanes']
line_strings.clip_out_of_image_()
new_anno = {'lanes': linestrings_to_lanes(line_strings)}
for i in range(30):
try:
annos = transform_annotation(
self.img_w, self.img_h, self.max_lanes, self.n_offsets,
self.offsets_ys, self.n_strips, self.strip_size, new_anno)
label = annos['label']
lane_endpoints = annos['lane_endpoints']
break
except:
if (i + 1) == 30:
logger.critical('Transform annotation failed 30 times :(')
exit()
sample['image'] = sample['image'].astype(np.float32) / 255.
data['image'] = sample['image'].transpose(2, 0, 1)
data['lane_line'] = label
data['seg'] = sample['seg']
data['full_img_path'] = sample['full_img_path']
data['img_name'] = sample['img_name']
data['im_id'] = sample['im_id']
if 'mask' in sample.keys():
data['seg'] = sample['mask'].get_arr()
data['im_shape'] = np.array([self.img_w, self.img_h], dtype=np.float32)
data['scale_factor'] = np.array([1., 1.], dtype=np.float32)
return data
@register_op
class CULaneResize(BaseOperator):
def __init__(self, img_h, img_w, prob=0.5):
super(CULaneResize, self).__init__()
self.img_h = img_h
self.img_w = img_w
self.prob = prob
def apply(self, sample, context=None):
transform = iaa.Sometimes(self.prob,
iaa.Resize({
"height": self.img_h,
"width": self.img_w
}))
if 'mask' in sample.keys():
img, line_strings, seg = transform(
image=sample['image'],
line_strings=sample['lanes'],
segmentation_maps=sample['mask'])
sample['image'] = img
sample['lanes'] = line_strings
sample['mask'] = seg
else:
img, line_strings = transform(
image=sample['image'].copy().astype(np.uint8),
line_strings=sample['lanes'])
sample['image'] = img
sample['lanes'] = line_strings
return sample
@register_op
class HorizontalFlip(BaseOperator):
def __init__(self, prob=0.5):
super(HorizontalFlip, self).__init__()
self.prob = prob
def apply(self, sample, context=None):
transform = iaa.Sometimes(self.prob, iaa.HorizontalFlip(1.0))
if 'mask' in sample.keys():
img, line_strings, seg = transform(
image=sample['image'],
line_strings=sample['lanes'],
segmentation_maps=sample['mask'])
sample['image'] = img
sample['lanes'] = line_strings
sample['mask'] = seg
else:
img, line_strings = transform(
image=sample['image'], line_strings=sample['lanes'])
sample['image'] = img
sample['lanes'] = line_strings
return sample
@register_op
class ChannelShuffle(BaseOperator):
def __init__(self, prob=0.1):
super(ChannelShuffle, self).__init__()
self.prob = prob
def apply(self, sample, context=None):
transform = iaa.Sometimes(self.prob, iaa.ChannelShuffle(1.0))
if 'mask' in sample.keys():
img, line_strings, seg = transform(
image=sample['image'],
line_strings=sample['lanes'],
segmentation_maps=sample['mask'])
sample['image'] = img
sample['lanes'] = line_strings
sample['mask'] = seg
else:
img, line_strings = transform(
image=sample['image'], line_strings=sample['lanes'])
sample['image'] = img
sample['lanes'] = line_strings
return sample
@register_op
class MultiplyAndAddToBrightness(BaseOperator):
def __init__(self, mul=(0.85, 1.15), add=(-10, 10), prob=0.5):
super(MultiplyAndAddToBrightness, self).__init__()
self.mul = tuple(mul)
self.add = tuple(add)
self.prob = prob
def apply(self, sample, context=None):
transform = iaa.Sometimes(
self.prob,
iaa.MultiplyAndAddToBrightness(
mul=self.mul, add=self.add))
if 'mask' in sample.keys():
img, line_strings, seg = transform(
image=sample['image'],
line_strings=sample['lanes'],
segmentation_maps=sample['mask'])
sample['image'] = img
sample['lanes'] = line_strings
sample['mask'] = seg
else:
img, line_strings = transform(
image=sample['image'], line_strings=sample['lanes'])
sample['image'] = img
sample['lanes'] = line_strings
return sample
@register_op
class AddToHueAndSaturation(BaseOperator):
def __init__(self, value=(-10, 10), prob=0.5):
super(AddToHueAndSaturation, self).__init__()
self.value = tuple(value)
self.prob = prob
def apply(self, sample, context=None):
transform = iaa.Sometimes(
self.prob, iaa.AddToHueAndSaturation(value=self.value))
if 'mask' in sample.keys():
img, line_strings, seg = transform(
image=sample['image'],
line_strings=sample['lanes'],
segmentation_maps=sample['mask'])
sample['image'] = img
sample['lanes'] = line_strings
sample['mask'] = seg
else:
img, line_strings = transform(
image=sample['image'], line_strings=sample['lanes'])
sample['image'] = img
sample['lanes'] = line_strings
return sample
@register_op
class OneOfBlur(BaseOperator):
def __init__(self, MotionBlur_k=(3, 5), MedianBlur_k=(3, 5), prob=0.5):
super(OneOfBlur, self).__init__()
self.MotionBlur_k = tuple(MotionBlur_k)
self.MedianBlur_k = tuple(MedianBlur_k)
self.prob = prob
def apply(self, sample, context=None):
transform = iaa.Sometimes(
self.prob,
iaa.OneOf([
iaa.MotionBlur(k=self.MotionBlur_k),
iaa.MedianBlur(k=self.MedianBlur_k)
]))
if 'mask' in sample.keys():
img, line_strings, seg = transform(
image=sample['image'],
line_strings=sample['lanes'],
segmentation_maps=sample['mask'])
sample['image'] = img
sample['lanes'] = line_strings
sample['mask'] = seg
else:
img, line_strings = transform(
image=sample['image'], line_strings=sample['lanes'])
sample['image'] = img
sample['lanes'] = line_strings
return sample
@register_op
class CULaneAffine(BaseOperator):
def __init__(self,
translate_percent_x=(-0.1, 0.1),
translate_percent_y=(-0.1, 0.1),
rotate=(3, 5),
scale=(0.8, 1.2),
prob=0.5):
super(CULaneAffine, self).__init__()
self.translate_percent = {
'x': tuple(translate_percent_x),
'y': tuple(translate_percent_y)
}
self.rotate = tuple(rotate)
self.scale = tuple(scale)
self.prob = prob
def apply(self, sample, context=None):
transform = iaa.Sometimes(
self.prob,
iaa.Affine(
translate_percent=self.translate_percent,
rotate=self.rotate,
scale=self.scale))
if 'mask' in sample.keys():
img, line_strings, seg = transform(
image=sample['image'],
line_strings=sample['lanes'],
segmentation_maps=sample['mask'])
sample['image'] = img
sample['lanes'] = line_strings
sample['mask'] = seg
else:
img, line_strings = transform(
image=sample['image'], line_strings=sample['lanes'])
sample['image'] = img
sample['lanes'] = line_strings
return sample
| PaddleDetection/ppdet/data/transform/culane_operators.py/0 | {
"file_path": "PaddleDetection/ppdet/data/transform/culane_operators.py",
"repo_id": "PaddleDetection",
"token_count": 6404
} | 70 |
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import copy
import time
import typing
import numpy as np
import paddle
import paddle.nn as nn
import paddle.distributed as dist
from paddle.distributed import fleet
from ppdet.optimizer import ModelEMA, SimpleModelEMA
from ppdet.core.workspace import create
from ppdet.utils.checkpoint import load_weight, load_pretrain_weight, save_model
import ppdet.utils.stats as stats
from ppdet.utils import profiler
from ppdet.modeling.ssod.utils import align_weak_strong_shape
from .trainer import Trainer
from ppdet.utils.logger import setup_logger
from paddle.static import InputSpec
from ppdet.engine.export_utils import _dump_infer_config, _prune_input_spec
MOT_ARCH = ['JDE', 'FairMOT', 'DeepSORT', 'ByteTrack', 'CenterTrack']
logger = setup_logger('ppdet.engine')
__all__ = ['Trainer_DenseTeacher', 'Trainer_ARSL', 'Trainer_Semi_RTDETR']
class Trainer_DenseTeacher(Trainer):
def __init__(self, cfg, mode='train'):
self.cfg = cfg
assert mode.lower() in ['train', 'eval', 'test'], \
"mode should be 'train', 'eval' or 'test'"
self.mode = mode.lower()
self.optimizer = None
self.is_loaded_weights = False
self.use_amp = self.cfg.get('amp', False)
self.amp_level = self.cfg.get('amp_level', 'O1')
self.custom_white_list = self.cfg.get('custom_white_list', None)
self.custom_black_list = self.cfg.get('custom_black_list', None)
# build data loader
capital_mode = self.mode.capitalize()
self.dataset = self.cfg['{}Dataset'.format(capital_mode)] = create(
'{}Dataset'.format(capital_mode))()
if self.mode == 'train':
self.dataset_unlabel = self.cfg['UnsupTrainDataset'] = create(
'UnsupTrainDataset')
self.loader = create('SemiTrainReader')(
self.dataset, self.dataset_unlabel, cfg.worker_num)
# build model
if 'model' not in self.cfg:
self.model = create(cfg.architecture)
else:
self.model = self.cfg.model
self.is_loaded_weights = True
        # EvalDataset is built with BatchSampler to evaluate on a single device
# TODO: multi-device evaluate
if self.mode == 'eval':
self._eval_batch_sampler = paddle.io.BatchSampler(
self.dataset, batch_size=self.cfg.EvalReader['batch_size'])
            # If metric is VOC, collate_batch needs to be set to False.
if cfg.metric == 'VOC':
cfg['EvalReader']['collate_batch'] = False
self.loader = create('EvalReader')(self.dataset, cfg.worker_num,
self._eval_batch_sampler)
        # TestDataset is built after the user sets images; skip loader creation here
# build optimizer in train mode
if self.mode == 'train':
steps_per_epoch = len(self.loader)
if steps_per_epoch < 1:
logger.warning(
"Samples in dataset are less than batch_size, please set smaller batch_size in TrainReader."
)
self.lr = create('LearningRate')(steps_per_epoch)
self.optimizer = create('OptimizerBuilder')(self.lr, self.model)
# Unstructured pruner is only enabled in the train mode.
if self.cfg.get('unstructured_prune'):
self.pruner = create('UnstructuredPruner')(self.model,
steps_per_epoch)
if self.use_amp and self.amp_level == 'O2':
self.model, self.optimizer = paddle.amp.decorate(
models=self.model,
optimizers=self.optimizer,
level=self.amp_level)
self.use_ema = ('use_ema' in cfg and cfg['use_ema'])
if self.use_ema:
ema_decay = self.cfg.get('ema_decay', 0.9998)
ema_decay_type = self.cfg.get('ema_decay_type', 'threshold')
cycle_epoch = self.cfg.get('cycle_epoch', -1)
ema_black_list = self.cfg.get('ema_black_list', None)
self.ema = ModelEMA(
self.model,
decay=ema_decay,
ema_decay_type=ema_decay_type,
cycle_epoch=cycle_epoch,
ema_black_list=ema_black_list)
self.ema_start_iters = self.cfg.get('ema_start_iters', 0)
# simple_ema for SSOD
self.use_simple_ema = ('use_simple_ema' in cfg and
cfg['use_simple_ema'])
if self.use_simple_ema:
self.use_ema = True
ema_decay = self.cfg.get('ema_decay', 0.9996)
self.ema = SimpleModelEMA(self.model, decay=ema_decay)
self.ema_start_iters = self.cfg.get('ema_start_iters', 0)
self._nranks = dist.get_world_size()
self._local_rank = dist.get_rank()
self.status = {}
self.start_epoch = 0
self.end_epoch = 0 if 'epoch' not in cfg else cfg.epoch
# initial default callbacks
self._init_callbacks()
# initial default metrics
self._init_metrics()
self._reset_metrics()
def load_weights(self, weights):
if self.is_loaded_weights:
return
self.start_epoch = 0
load_pretrain_weight(self.model, weights)
load_pretrain_weight(self.ema.model, weights)
logger.info("Load weights {} to start training for teacher and student".
format(weights))
def resume_weights(self, weights, exchange=True):
# support Distill resume weights
if hasattr(self.model, 'student_model'):
self.start_epoch = load_weight(self.model.student_model, weights,
self.optimizer, exchange)
else:
self.start_epoch = load_weight(self.model, weights, self.optimizer,
self.ema
if self.use_ema else None, exchange)
logger.debug("Resume weights of epoch {}".format(self.start_epoch))
def train(self, validate=False):
self.semi_start_iters = self.cfg.get('semi_start_iters', 5000)
Init_mark = False
if validate:
self.cfg['EvalDataset'] = self.cfg.EvalDataset = create(
"EvalDataset")()
sync_bn = (getattr(self.cfg, 'norm_type', None) == 'sync_bn' and
self.cfg.use_gpu and self._nranks > 1)
if sync_bn:
self.model = paddle.nn.SyncBatchNorm.convert_sync_batchnorm(
self.model)
if self.cfg.get('fleet', False):
self.model = fleet.distributed_model(self.model)
self.optimizer = fleet.distributed_optimizer(self.optimizer)
elif self._nranks > 1:
find_unused_parameters = self.cfg[
'find_unused_parameters'] if 'find_unused_parameters' in self.cfg else False
self.model = paddle.DataParallel(
self.model, find_unused_parameters=find_unused_parameters)
self.ema.model = paddle.DataParallel(
self.ema.model, find_unused_parameters=find_unused_parameters)
self.status.update({
'epoch_id': self.start_epoch,
'step_id': 0,
'steps_per_epoch': len(self.loader),
'exchange_save_model': True,
})
# Note: exchange_save_model
        # in DenseTeacher SSOD the teacher model usually performs better, so exchange it when saving pdparams
self.status['batch_time'] = stats.SmoothedValue(
self.cfg.log_iter, fmt='{avg:.4f}')
self.status['data_time'] = stats.SmoothedValue(
self.cfg.log_iter, fmt='{avg:.4f}')
self.status['training_staus'] = stats.TrainingStats(self.cfg.log_iter)
profiler_options = self.cfg.get('profiler_options', None)
self._compose_callback.on_train_begin(self.status)
train_cfg = self.cfg.DenseTeacher['train_cfg']
concat_sup_data = train_cfg.get('concat_sup_data', True)
for param in self.ema.model.parameters():
param.stop_gradient = True
for epoch_id in range(self.start_epoch, self.cfg.epoch):
self.status['mode'] = 'train'
self.status['epoch_id'] = epoch_id
self._compose_callback.on_epoch_begin(self.status)
self.loader.dataset_label.set_epoch(epoch_id)
self.loader.dataset_unlabel.set_epoch(epoch_id)
iter_tic = time.time()
loss_dict = {
'loss': paddle.to_tensor([0]),
'loss_sup_sum': paddle.to_tensor([0]),
'loss_unsup_sum': paddle.to_tensor([0]),
'fg_sum': paddle.to_tensor([0]),
}
if self._nranks > 1:
for k in self.model._layers.get_loss_keys():
loss_dict.update({k: paddle.to_tensor([0.])})
for k in self.model._layers.get_loss_keys():
loss_dict.update({'distill_' + k: paddle.to_tensor([0.])})
else:
for k in self.model.get_loss_keys():
loss_dict.update({k: paddle.to_tensor([0.])})
for k in self.model.get_loss_keys():
loss_dict.update({'distill_' + k: paddle.to_tensor([0.])})
            # Note: `for step_id, data in enumerate(self.loader)` is avoided here due to an enumerate bug
for step_id in range(len(self.loader)):
data = next(self.loader)
self.model.train()
self.ema.model.eval()
data_sup_w, data_sup_s, data_unsup_w, data_unsup_s = data
self.status['data_time'].update(time.time() - iter_tic)
self.status['step_id'] = step_id
profiler.add_profiler_step(profiler_options)
self._compose_callback.on_step_begin(self.status)
if data_sup_w['image'].shape != data_sup_s['image'].shape:
data_sup_w, data_sup_s = align_weak_strong_shape(data_sup_w,
data_sup_s)
data_sup_w['epoch_id'] = epoch_id
data_sup_s['epoch_id'] = epoch_id
if concat_sup_data:
for k, v in data_sup_s.items():
if k in ['epoch_id']:
continue
data_sup_s[k] = paddle.concat([v, data_sup_w[k]])
loss_dict_sup = self.model(data_sup_s)
else:
loss_dict_sup_w = self.model(data_sup_w)
loss_dict_sup = self.model(data_sup_s)
for k, v in loss_dict_sup_w.items():
loss_dict_sup[k] = (loss_dict_sup[k] + v) * 0.5
losses_sup = loss_dict_sup['loss'] * train_cfg['sup_weight']
losses_sup.backward()
losses = losses_sup.detach()
loss_dict.update(loss_dict_sup)
loss_dict.update({'loss_sup_sum': loss_dict['loss']})
curr_iter = len(self.loader) * epoch_id + step_id
st_iter = self.semi_start_iters
if curr_iter == st_iter:
logger.info("***" * 30)
logger.info('Semi starting ...')
logger.info("***" * 30)
if curr_iter > st_iter:
unsup_weight = train_cfg['unsup_weight']
if train_cfg['suppress'] == 'linear':
tar_iter = st_iter * 2
if curr_iter <= tar_iter:
unsup_weight *= (curr_iter - st_iter) / st_iter
elif train_cfg['suppress'] == 'exp':
tar_iter = st_iter + 2000
if curr_iter <= tar_iter:
scale = np.exp((curr_iter - tar_iter) / 1000)
unsup_weight *= scale
elif train_cfg['suppress'] == 'step':
tar_iter = st_iter * 2
if curr_iter <= tar_iter:
unsup_weight *= 0.25
else:
raise ValueError
if data_unsup_w['image'].shape != data_unsup_s[
'image'].shape:
data_unsup_w, data_unsup_s = align_weak_strong_shape(
data_unsup_w, data_unsup_s)
data_unsup_w['epoch_id'] = epoch_id
data_unsup_s['epoch_id'] = epoch_id
data_unsup_s['get_data'] = True
student_preds = self.model(data_unsup_s)
with paddle.no_grad():
data_unsup_w['is_teacher'] = True
teacher_preds = self.ema.model(data_unsup_w)
train_cfg['curr_iter'] = curr_iter
train_cfg['st_iter'] = st_iter
if self._nranks > 1:
loss_dict_unsup = self.model._layers.get_ssod_loss(
student_preds, teacher_preds, train_cfg)
else:
loss_dict_unsup = self.model.get_ssod_loss(
student_preds, teacher_preds, train_cfg)
fg_num = loss_dict_unsup["fg_sum"]
del loss_dict_unsup["fg_sum"]
distill_weights = train_cfg['loss_weight']
loss_dict_unsup = {
k: v * distill_weights[k]
for k, v in loss_dict_unsup.items()
}
losses_unsup = sum([
metrics_value
for metrics_value in loss_dict_unsup.values()
]) * unsup_weight
losses_unsup.backward()
loss_dict.update(loss_dict_unsup)
loss_dict.update({'loss_unsup_sum': losses_unsup})
losses += losses_unsup.detach()
loss_dict.update({"fg_sum": fg_num})
loss_dict['loss'] = losses
self.optimizer.step()
curr_lr = self.optimizer.get_lr()
self.lr.step()
self.optimizer.clear_grad()
self.status['learning_rate'] = curr_lr
if self._nranks < 2 or self._local_rank == 0:
self.status['training_staus'].update(loss_dict)
self.status['batch_time'].update(time.time() - iter_tic)
self._compose_callback.on_step_end(self.status)
# Note: ema_start_iters
if self.use_ema and curr_iter == self.ema_start_iters:
logger.info("***" * 30)
logger.info('EMA starting ...')
logger.info("***" * 30)
self.ema.update(self.model, decay=0)
elif self.use_ema and curr_iter > self.ema_start_iters:
self.ema.update(self.model)
iter_tic = time.time()
is_snapshot = (self._nranks < 2 or self._local_rank == 0) \
and ((epoch_id + 1) % self.cfg.snapshot_epoch == 0 or epoch_id == self.end_epoch - 1)
if is_snapshot and self.use_ema:
# apply ema weight on model
weight = copy.deepcopy(self.ema.model.state_dict())
for k, v in weight.items():
if paddle.is_floating_point(v):
weight[k].stop_gradient = True
self.status['weight'] = weight
self._compose_callback.on_epoch_end(self.status)
if validate and is_snapshot:
if not hasattr(self, '_eval_loader'):
# build evaluation dataset and loader
self._eval_dataset = self.cfg.EvalDataset
self._eval_batch_sampler = \
paddle.io.BatchSampler(
self._eval_dataset,
batch_size=self.cfg.EvalReader['batch_size'])
                    # If metric is VOC, collate_batch needs to be set to False.
if self.cfg.metric == 'VOC':
self.cfg['EvalReader']['collate_batch'] = False
self._eval_loader = create('EvalReader')(
self._eval_dataset,
self.cfg.worker_num,
batch_sampler=self._eval_batch_sampler)
# if validation in training is enabled, metrics should be re-init
# Init_mark makes sure this code will only execute once
if validate and Init_mark == False:
Init_mark = True
self._init_metrics(validate=validate)
self._reset_metrics()
with paddle.no_grad():
self.status['save_best_model'] = True
self._eval_with_loader(self._eval_loader)
if is_snapshot and self.use_ema:
self.status.pop('weight')
self._compose_callback.on_train_end(self.status)
def evaluate(self):
# get distributed model
if self.cfg.get('fleet', False):
self.model = fleet.distributed_model(self.model)
self.optimizer = fleet.distributed_optimizer(self.optimizer)
elif self._nranks > 1:
find_unused_parameters = self.cfg[
'find_unused_parameters'] if 'find_unused_parameters' in self.cfg else False
self.model = paddle.DataParallel(
self.model, find_unused_parameters=find_unused_parameters)
with paddle.no_grad():
self._eval_with_loader(self.loader)
def _eval_with_loader(self, loader):
sample_num = 0
tic = time.time()
self._compose_callback.on_epoch_begin(self.status)
self.status['mode'] = 'eval'
test_cfg = self.cfg.DenseTeacher['test_cfg']
if test_cfg['inference_on'] == 'teacher':
logger.info("***** teacher model evaluating *****")
eval_model = self.ema.model
else:
logger.info("***** student model evaluating *****")
eval_model = self.model
eval_model.eval()
if self.cfg.get('print_flops', False):
flops_loader = create('{}Reader'.format(self.mode.capitalize()))(
self.dataset, self.cfg.worker_num, self._eval_batch_sampler)
self._flops(flops_loader)
for step_id, data in enumerate(loader):
self.status['step_id'] = step_id
self._compose_callback.on_step_begin(self.status)
# forward
if self.use_amp:
with paddle.amp.auto_cast(
enable=self.cfg.use_gpu or self.cfg.use_mlu,
custom_white_list=self.custom_white_list,
custom_black_list=self.custom_black_list,
level=self.amp_level):
outs = eval_model(data)
else:
outs = eval_model(data)
# update metrics
for metric in self._metrics:
metric.update(data, outs)
# multi-scale inputs: all inputs have same im_id
if isinstance(data, typing.Sequence):
sample_num += data[0]['im_id'].numpy().shape[0]
else:
sample_num += data['im_id'].numpy().shape[0]
self._compose_callback.on_step_end(self.status)
self.status['sample_num'] = sample_num
self.status['cost_time'] = time.time() - tic
# accumulate metric to log out
for metric in self._metrics:
metric.accumulate()
metric.log()
self._compose_callback.on_epoch_end(self.status)
self._reset_metrics()
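# --- Illustrative sketch (not part of the original repository file) ---
# Trainer_DenseTeacher.train above ramps the unsupervised loss weight in after
# `semi_start_iters` using one of three "suppress" schedules (linear / exp / step).
# The helper below is a hedged, standalone restatement of that ramp for readability
# only; the trainers do not call it, and the name `_unsup_weight_at` is made up.
def _unsup_weight_at(curr_iter, st_iter, base_weight, suppress='linear'):
    weight = base_weight
    if suppress == 'linear':
        # linear ramp from 0 to base_weight over [st_iter, 2 * st_iter]
        if curr_iter <= st_iter * 2:
            weight *= (curr_iter - st_iter) / st_iter
    elif suppress == 'exp':
        # exponential ramp over [st_iter, st_iter + 2000]
        if curr_iter <= st_iter + 2000:
            weight *= np.exp((curr_iter - (st_iter + 2000)) / 1000)
    elif suppress == 'step':
        # quarter weight until 2 * st_iter, full weight afterwards
        if curr_iter <= st_iter * 2:
            weight *= 0.25
    else:
        raise ValueError('unknown suppress type: {}'.format(suppress))
    return weight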
class Trainer_ARSL(Trainer):
def __init__(self, cfg, mode='train'):
self.cfg = cfg
assert mode.lower() in ['train', 'eval', 'test'], \
"mode should be 'train', 'eval' or 'test'"
self.mode = mode.lower()
self.optimizer = None
self.is_loaded_weights = False
capital_mode = self.mode.capitalize()
self.use_ema = False
self.dataset = self.cfg['{}Dataset'.format(capital_mode)] = create(
'{}Dataset'.format(capital_mode))()
if self.mode == 'train':
self.dataset_unlabel = self.cfg['UnsupTrainDataset'] = create(
'UnsupTrainDataset')
self.loader = create('SemiTrainReader')(
self.dataset, self.dataset_unlabel, cfg.worker_num)
# build model
if 'model' not in self.cfg:
self.student_model = create(cfg.architecture)
self.teacher_model = create(cfg.architecture)
self.model = EnsembleTSModel(self.teacher_model, self.student_model)
else:
self.model = self.cfg.model
self.is_loaded_weights = True
# save path for burn-in model
self.base_path = cfg.get('weights')
self.base_path = os.path.dirname(self.base_path)
        # EvalDataset is built with BatchSampler to evaluate on a single device
# TODO: multi-device evaluate
if self.mode == 'eval':
self._eval_batch_sampler = paddle.io.BatchSampler(
self.dataset, batch_size=self.cfg.EvalReader['batch_size'])
self.loader = create('{}Reader'.format(self.mode.capitalize()))(
self.dataset, cfg.worker_num, self._eval_batch_sampler)
        # TestDataset is built after the user sets images; skip loader creation here
self.start_epoch = 0
self.end_epoch = 0 if 'epoch' not in cfg else cfg.epoch
self.epoch_iter = self.cfg.epoch_iter # set fixed iter in each epoch to control checkpoint
# build optimizer in train mode
if self.mode == 'train':
steps_per_epoch = self.epoch_iter
self.lr = create('LearningRate')(steps_per_epoch)
self.optimizer = create('OptimizerBuilder')(self.lr,
self.model.modelStudent)
self._nranks = dist.get_world_size()
self._local_rank = dist.get_rank()
self.status = {}
# initial default callbacks
self._init_callbacks()
# initial default metrics
self._init_metrics()
self._reset_metrics()
self.iter = 0
def resume_weights(self, weights):
# support Distill resume weights
if hasattr(self.model, 'student_model'):
self.start_epoch = load_weight(self.model.student_model, weights,
self.optimizer)
else:
self.start_epoch = load_weight(self.model, weights, self.optimizer)
logger.debug("Resume weights of epoch {}".format(self.start_epoch))
def train(self, validate=False):
assert self.mode == 'train', "Model not in 'train' mode"
Init_mark = False
# if validation in training is enabled, metrics should be re-init
if validate:
self._init_metrics(validate=validate)
self._reset_metrics()
if self.cfg.get('fleet', False):
self.model.modelStudent = fleet.distributed_model(
self.model.modelStudent)
self.optimizer = fleet.distributed_optimizer(self.optimizer)
elif self._nranks > 1:
find_unused_parameters = self.cfg[
'find_unused_parameters'] if 'find_unused_parameters' in self.cfg else False
self.model.modelStudent = paddle.DataParallel(
self.model.modelStudent,
find_unused_parameters=find_unused_parameters)
# set fixed iter in each epoch to control checkpoint
self.status.update({
'epoch_id': self.start_epoch,
'step_id': 0,
'steps_per_epoch': self.epoch_iter
})
        print('Len of DataLoader: {}'.format(len(self.loader)))
self.status['batch_time'] = stats.SmoothedValue(
self.cfg.log_iter, fmt='{avg:.4f}')
self.status['data_time'] = stats.SmoothedValue(
self.cfg.log_iter, fmt='{avg:.4f}')
self.status['training_staus'] = stats.TrainingStats(self.cfg.log_iter)
self._compose_callback.on_train_begin(self.status)
epoch_id = self.start_epoch
self.iter = self.start_epoch * self.epoch_iter
# use iter rather than epoch to control training schedule
while self.iter < self.cfg.max_iter:
# epoch loop
self.status['mode'] = 'train'
self.status['epoch_id'] = epoch_id
self._compose_callback.on_epoch_begin(self.status)
self.loader.dataset_label.set_epoch(epoch_id)
self.loader.dataset_unlabel.set_epoch(epoch_id)
paddle.device.cuda.empty_cache() # clear GPU memory
# set model status
self.model.modelStudent.train()
self.model.modelTeacher.eval()
iter_tic = time.time()
            # iter loop in each epoch
for step_id in range(self.epoch_iter):
data = next(self.loader)
self.status['data_time'].update(time.time() - iter_tic)
self.status['step_id'] = step_id
# profiler.add_profiler_step(profiler_options)
self._compose_callback.on_step_begin(self.status)
# model forward and calculate loss
loss_dict = self.run_step_full_semisup(data)
if (step_id + 1) % self.cfg.optimize_rate == 0:
self.optimizer.step()
self.optimizer.clear_grad()
curr_lr = self.optimizer.get_lr()
self.lr.step()
# update log status
self.status['learning_rate'] = curr_lr
if self._nranks < 2 or self._local_rank == 0:
self.status['training_staus'].update(loss_dict)
self.status['batch_time'].update(time.time() - iter_tic)
self._compose_callback.on_step_end(self.status)
self.iter += 1
iter_tic = time.time()
self._compose_callback.on_epoch_end(self.status)
if validate and (self._nranks < 2 or self._local_rank == 0) \
and ((epoch_id + 1) % self.cfg.snapshot_epoch == 0 \
or epoch_id == self.end_epoch - 1):
if not hasattr(self, '_eval_loader'):
# build evaluation dataset and loader
self._eval_dataset = self.cfg.EvalDataset
self._eval_batch_sampler = \
paddle.io.BatchSampler(
self._eval_dataset,
batch_size=self.cfg.EvalReader['batch_size'])
self._eval_loader = create('EvalReader')(
self._eval_dataset,
self.cfg.worker_num,
batch_sampler=self._eval_batch_sampler)
if validate and Init_mark == False:
Init_mark = True
self._init_metrics(validate=validate)
self._reset_metrics()
with paddle.no_grad():
self.status['save_best_model'] = True
# before burn-in stage, eval student. after burn-in stage, eval teacher
if self.iter <= self.cfg.SEMISUPNET['BURN_UP_STEP']:
print("start eval student model")
self._eval_with_loader(
self._eval_loader, mode="student")
else:
print("start eval teacher model")
self._eval_with_loader(
self._eval_loader, mode="teacher")
epoch_id += 1
self._compose_callback.on_train_end(self.status)
def merge_data(self, data1, data2):
data = copy.deepcopy(data1)
for k, v in data1.items():
if type(v) is paddle.Tensor:
data[k] = paddle.concat(x=[data[k], data2[k]], axis=0)
elif type(v) is list:
data[k].extend(data2[k])
return data
def run_step_full_semisup(self, data):
label_data_k, label_data_q, unlabel_data_k, unlabel_data_q = data
data_merge = self.merge_data(label_data_k, label_data_q)
loss_sup_dict = self.model.modelStudent(data_merge, branch="supervised")
loss_dict = {}
for key in loss_sup_dict.keys():
if key[:4] == "loss":
loss_dict[key] = loss_sup_dict[key] * 1
losses_sup = paddle.add_n(list(loss_dict.values()))
# norm loss when using gradient accumulation
losses_sup = losses_sup / self.cfg.optimize_rate
losses_sup.backward()
for key in loss_sup_dict.keys():
loss_dict[key + "_pseudo"] = paddle.to_tensor([0])
loss_dict["loss_tot"] = losses_sup
"""
semi-supervised training after burn-in stage
"""
if self.iter >= self.cfg.SEMISUPNET['BURN_UP_STEP']:
# init teacher model with burn-up weight
if self.iter == self.cfg.SEMISUPNET['BURN_UP_STEP']:
print(
                    'Starting semi-supervised learning and loading the teacher model.'
)
self._update_teacher_model(keep_rate=0.00)
# save burn-in model
if dist.get_world_size() < 2 or dist.get_rank() == 0:
print('saving burn-in model.')
save_name = 'burnIn'
epoch_id = self.iter // self.epoch_iter
save_model(self.model, self.optimizer, self.base_path,
save_name, epoch_id)
# Update teacher model with EMA
elif (self.iter + 1) % self.cfg.optimize_rate == 0:
self._update_teacher_model(
keep_rate=self.cfg.SEMISUPNET['EMA_KEEP_RATE'])
            # warm-up weight for pseudo loss
pseudo_weight = self.cfg.SEMISUPNET['UNSUP_LOSS_WEIGHT']
pseudo_warmup_iter = self.cfg.SEMISUPNET['PSEUDO_WARM_UP_STEPS']
temp = self.iter - self.cfg.SEMISUPNET['BURN_UP_STEP']
if temp <= pseudo_warmup_iter:
pseudo_weight *= (temp / pseudo_warmup_iter)
# get teacher predictions on weak-augmented unlabeled data
with paddle.no_grad():
teacher_pred = self.model.modelTeacher(
unlabel_data_k, branch='semi_supervised')
# calculate unsupervised loss on strong-augmented unlabeled data
loss_unsup_dict = self.model.modelStudent(
unlabel_data_q,
branch="semi_supervised",
teacher_prediction=teacher_pred, )
for key in loss_unsup_dict.keys():
if key[-6:] == "pseudo":
loss_unsup_dict[key] = loss_unsup_dict[key] * pseudo_weight
losses_unsup = paddle.add_n(list(loss_unsup_dict.values()))
# norm loss when using gradient accumulation
losses_unsup = losses_unsup / self.cfg.optimize_rate
losses_unsup.backward()
loss_dict.update(loss_unsup_dict)
loss_dict["loss_tot"] += losses_unsup
return loss_dict
def export(self, output_dir='output_inference'):
self.model.eval()
model_name = os.path.splitext(os.path.split(self.cfg.filename)[-1])[0]
save_dir = os.path.join(output_dir, model_name)
if not os.path.exists(save_dir):
os.makedirs(save_dir)
image_shape = None
if self.cfg.architecture in MOT_ARCH:
test_reader_name = 'TestMOTReader'
else:
test_reader_name = 'TestReader'
if 'inputs_def' in self.cfg[test_reader_name]:
inputs_def = self.cfg[test_reader_name]['inputs_def']
image_shape = inputs_def.get('image_shape', None)
# set image_shape=[3, -1, -1] as default
if image_shape is None:
image_shape = [3, -1, -1]
self.model.modelTeacher.eval()
if hasattr(self.model.modelTeacher, 'deploy'):
self.model.modelTeacher.deploy = True
# Save infer cfg
_dump_infer_config(self.cfg,
os.path.join(save_dir, 'infer_cfg.yml'), image_shape,
self.model.modelTeacher)
input_spec = [{
"image": InputSpec(
shape=[None] + image_shape, name='image'),
"im_shape": InputSpec(
shape=[None, 2], name='im_shape'),
"scale_factor": InputSpec(
shape=[None, 2], name='scale_factor')
}]
if self.cfg.architecture == 'DeepSORT':
input_spec[0].update({
"crops": InputSpec(
shape=[None, 3, 192, 64], name='crops')
})
static_model = paddle.jit.to_static(
self.model.modelTeacher, input_spec=input_spec)
        # NOTE: dy2st does not prune the program, but jit.save will prune the program
        # input spec; prune the input spec here and save with the pruned input spec
pruned_input_spec = _prune_input_spec(input_spec,
static_model.forward.main_program,
static_model.forward.outputs)
# dy2st and save model
if 'slim' not in self.cfg or self.cfg['slim_type'] != 'QAT':
paddle.jit.save(
static_model,
os.path.join(save_dir, 'model'),
input_spec=pruned_input_spec)
else:
self.cfg.slim.save_quantized_model(
self.model.modelTeacher,
os.path.join(save_dir, 'model'),
input_spec=pruned_input_spec)
logger.info("Export model and saved in {}".format(save_dir))
def _eval_with_loader(self, loader, mode="teacher"):
sample_num = 0
tic = time.time()
self._compose_callback.on_epoch_begin(self.status)
self.status['mode'] = 'eval'
# self.model.eval()
self.model.modelTeacher.eval()
self.model.modelStudent.eval()
for step_id, data in enumerate(loader):
self.status['step_id'] = step_id
self._compose_callback.on_step_begin(self.status)
if mode == "teacher":
outs = self.model.modelTeacher(data)
else:
outs = self.model.modelStudent(data)
# update metrics
for metric in self._metrics:
metric.update(data, outs)
sample_num += data['im_id'].numpy().shape[0]
self._compose_callback.on_step_end(self.status)
self.status['sample_num'] = sample_num
self.status['cost_time'] = time.time() - tic
# accumulate metric to log out
for metric in self._metrics:
metric.accumulate()
metric.log()
self._compose_callback.on_epoch_end(self.status)
        # reset metric states since metrics may be computed multiple times
self._reset_metrics()
def evaluate(self):
with paddle.no_grad():
self._eval_with_loader(self.loader)
@paddle.no_grad()
def _update_teacher_model(self, keep_rate=0.996):
student_model_dict = copy.deepcopy(self.model.modelStudent.state_dict())
new_teacher_dict = dict()
for key, value in self.model.modelTeacher.state_dict().items():
if key in student_model_dict.keys():
v = student_model_dict[key] * (1 - keep_rate
) + value * keep_rate
v.stop_gradient = True
new_teacher_dict[key] = v
else:
raise Exception("{} is not found in student model".format(key))
self.model.modelTeacher.set_dict(new_teacher_dict)
class EnsembleTSModel(nn.Layer):
def __init__(self, modelTeacher, modelStudent):
super(EnsembleTSModel, self).__init__()
self.modelTeacher = modelTeacher
self.modelStudent = modelStudent
class Trainer_Semi_RTDETR(Trainer):
def __init__(self, cfg, mode='train'):
self.cfg = cfg
assert mode.lower() in ['train', 'eval', 'test'], \
"mode should be 'train', 'eval' or 'test'"
self.mode = mode.lower()
self.optimizer = None
self.is_loaded_weights = False
self.use_amp = self.cfg.get('amp', False)
self.amp_level = self.cfg.get('amp_level', 'O1')
self.custom_white_list = self.cfg.get('custom_white_list', None)
self.custom_black_list = self.cfg.get('custom_black_list', None)
# build data loader
capital_mode = self.mode.capitalize()
self.dataset = self.cfg['{}Dataset'.format(capital_mode)] = create(
'{}Dataset'.format(capital_mode))()
if self.mode == 'train':
self.dataset_unlabel = self.cfg['UnsupTrainDataset'] = create(
'UnsupTrainDataset')
self.loader = create('SemiTrainReader')(
self.dataset, self.dataset_unlabel, cfg.worker_num)
# build model
if 'model' not in self.cfg:
self.model = create(cfg.SSOD)
else:
self.model = self.cfg.model
self.is_loaded_weights = True
        # EvalDataset is built with BatchSampler to evaluate on a single device
# TODO: multi-device evaluate
if self.mode == 'eval':
self._eval_batch_sampler = paddle.io.BatchSampler(
self.dataset, batch_size=self.cfg.EvalReader['batch_size'])
            # If metric is VOC, collate_batch needs to be set to False.
if cfg.metric == 'VOC':
cfg['EvalReader']['collate_batch'] = False
self.loader = create('EvalReader')(self.dataset, cfg.worker_num,
self._eval_batch_sampler)
        # TestDataset is built after the user sets images; skip loader creation here
# build optimizer in train mode
if self.mode == 'train':
steps_per_epoch = len(self.loader)
if steps_per_epoch < 1:
logger.warning(
"Samples in dataset are less than batch_size, please set smaller batch_size in TrainReader."
)
self.lr = create('LearningRate')(steps_per_epoch)
self.optimizer = create('OptimizerBuilder')(self.lr, self.model)
# Unstructured pruner is only enabled in the train mode.
if self.cfg.get('unstructured_prune'):
self.pruner = create('UnstructuredPruner')(self.model,
steps_per_epoch)
if self.use_amp and self.amp_level == 'O2':
self.model, self.optimizer = paddle.amp.decorate(
models=self.model,
optimizers=self.optimizer,
level=self.amp_level)
self._nranks = dist.get_world_size()
self._local_rank = dist.get_rank()
self.status = {}
self.start_epoch = 0
self.start_iter = 0
self.end_epoch = 0 if 'epoch' not in cfg else cfg.epoch
# initial default callbacks
self._init_callbacks()
# initial default metrics
self._init_metrics()
self._reset_metrics()
def load_semi_weights(self, t_weights, s_weights):
if self.is_loaded_weights:
return
self.start_epoch = 0
load_pretrain_weight(self.model.teacher, t_weights)
load_pretrain_weight(self.model.student, s_weights)
logger.info("Load teacher weights {} to start training".format(
t_weights))
logger.info("Load student weights {} to start training".format(
s_weights))
def resume_weights(self, weights, exchange=True):
# support Distill resume weights
if hasattr(self.model, 'student_model'):
self.start_epoch = load_weight(self.model.student_model, weights,
self.optimizer, exchange)
else:
self.start_iter, self.start_epoch = load_weight(
self.model, weights, self.optimizer, self.ema
if self.use_ema else None, exchange)
logger.debug("Resume weights of epoch {}".format(self.start_epoch))
logger.debug("Resume weights of iter {}".format(self.start_iter))
def train(self, validate=False):
assert self.mode == 'train', "Model not in 'train' mode"
Init_mark = False
if validate:
self.cfg.EvalDataset = create("EvalDataset")()
model = self.model
sync_bn = (getattr(self.cfg, 'norm_type', None) == 'sync_bn' and
self.cfg.use_gpu and self._nranks > 1)
if sync_bn:
# self.model = paddle.nn.SyncBatchNorm.convert_sync_batchnorm(
# self.model)
model.teacher = paddle.nn.SyncBatchNorm.convert_sync_batchnorm(
model.teacher)
model.student = paddle.nn.SyncBatchNorm.convert_sync_batchnorm(
self.model.student)
if self.cfg.get('fleet', False):
# model = fleet.distributed_model(model)
model = fleet.distributed_model(model)
self.optimizer = fleet.distributed_optimizer(self.optimizer)
elif self._nranks > 1:
find_unused_parameters = self.cfg[
'find_unused_parameters'] if 'find_unused_parameters' in self.cfg else False
model = paddle.DataParallel(
model, find_unused_parameters=find_unused_parameters)
if self.cfg.get('amp', False):
scaler = amp.GradScaler(
enable=self.cfg.use_gpu or self.cfg.use_npu,
init_loss_scaling=1024)
self.status.update({
'epoch_id': self.start_epoch,
'iter_id': self.start_iter,
# 'step_id': self.start_step,
'steps_per_epoch': len(self.loader),
})
self.status['batch_time'] = stats.SmoothedValue(
self.cfg.log_iter, fmt='{avg:.4f}')
self.status['data_time'] = stats.SmoothedValue(
self.cfg.log_iter, fmt='{avg:.4f}')
self.status['training_staus'] = stats.TrainingStats(self.cfg.log_iter)
if self.cfg.get('print_flops', False):
flops_loader = create('{}Reader'.format(self.mode.capitalize()))(
self.dataset, self.cfg.worker_num)
self._flops(flops_loader)
profiler_options = self.cfg.get('profiler_options', None)
self._compose_callback.on_train_begin(self.status)
iter_id = self.start_iter
self.status['iter_id'] = iter_id
self.status['eval_interval'] = self.cfg.eval_interval
self.status['save_interval'] = self.cfg.save_interval
for epoch_id in range(self.start_epoch, self.cfg.epoch):
self.status['mode'] = 'train'
self.status['epoch_id'] = epoch_id
self._compose_callback.on_epoch_begin(self.status)
self.loader.dataset_label.set_epoch(epoch_id)
self.loader.dataset_unlabel.set_epoch(epoch_id)
iter_tic = time.time()
if self._nranks > 1:
# print(model)
model._layers.teacher.eval()
model._layers.student.train()
else:
model.teacher.eval()
model.student.train()
iter_tic = time.time()
for step_id in range(len(self.loader)):
data = next(self.loader)
data_sup_w, data_sup_s, data_unsup_w, data_unsup_s = data
data_sup_w['epoch_id'] = epoch_id
data_sup_s['epoch_id'] = epoch_id
data_unsup_w['epoch_id'] = epoch_id
data_unsup_s['epoch_id'] = epoch_id
data = [data_sup_w, data_sup_s, data_unsup_w, data_unsup_s]
iter_id += 1
self.status['data_time'].update(time.time() - iter_tic)
self.status['step_id'] = step_id
self.status['iter_id'] = iter_id
data.append(iter_id)
profiler.add_profiler_step(profiler_options)
self._compose_callback.on_step_begin(self.status)
if self.cfg.get('amp', False):
with amp.auto_cast(enable=self.cfg.use_gpu):
# model forward
if self._nranks > 1:
outputs = model._layers(data)
else:
outputs = model(data)
loss = outputs['loss']
scaled_loss = scaler.scale(loss)
scaled_loss.backward()
scaler.minimize(self.optimizer, scaled_loss)
else:
outputs = model(data)
loss = outputs['loss']
# model backward
loss.backward()
self.optimizer.step()
curr_lr = self.optimizer.get_lr()
self.lr.step()
if self.cfg.get('unstructured_prune'):
self.pruner.step()
self.optimizer.clear_grad()
# print(outputs)
# outputs=reduce_dict(outputs)
# if self.model.debug:
# check_gradient(model)
# self.check_gradient()
self.status['learning_rate'] = curr_lr
if self._nranks < 2 or self._local_rank == 0:
self.status['training_staus'].update(outputs)
self.status['batch_time'].update(time.time() - iter_tic)
if validate and (self._nranks < 2 or self._local_rank == 0) and \
((iter_id + 1) % self.cfg.eval_interval == 0):
if not hasattr(self, '_eval_loader'):
# build evaluation dataset and loader
self._eval_dataset = self.cfg.EvalDataset
self._eval_batch_sampler = \
paddle.io.BatchSampler(
self._eval_dataset,
batch_size=self.cfg.EvalReader['batch_size'])
                        # If metric is VOC, collate_batch needs to be set to False.
if self.cfg.metric == 'VOC':
self.cfg['EvalReader']['collate_batch'] = False
self._eval_loader = create('EvalReader')(
self._eval_dataset,
self.cfg.worker_num,
batch_sampler=self._eval_batch_sampler)
# if validation in training is enabled, metrics should be re-init
# Init_mark makes sure this code will only execute once
if validate and Init_mark == False:
Init_mark = True
self._init_metrics(validate=validate)
self._reset_metrics()
with paddle.no_grad():
self.status['save_best_model'] = True
self._eval_with_loader(self._eval_loader)
model._layers.student.train()
self._compose_callback.on_step_end(self.status)
iter_tic = time.time()
if self.cfg.get('unstructured_prune'):
self.pruner.update_params()
self._compose_callback.on_epoch_end(self.status)
self._compose_callback.on_train_end(self.status)
def _eval_with_loader(self, loader):
sample_num = 0
tic = time.time()
self._compose_callback.on_epoch_begin(self.status)
self.status['mode'] = 'eval'
self.model.eval()
if self.cfg.get('print_flops', False):
flops_loader = create('{}Reader'.format(self.mode.capitalize()))(
self.dataset, self.cfg.worker_num, self._eval_batch_sampler)
self._flops(flops_loader)
print("*****teacher evaluate*****")
for step_id, data in enumerate(loader):
self.status['step_id'] = step_id
self._compose_callback.on_step_begin(self.status)
# forward
outs = self.model.teacher(data)
# update metrics
for metric in self._metrics:
metric.update(data, outs)
# multi-scale inputs: all inputs have same im_id
if isinstance(data, typing.Sequence):
sample_num += data[0]['im_id'].numpy().shape[0]
else:
sample_num += data['im_id'].numpy().shape[0]
self._compose_callback.on_step_end(self.status)
self.status['sample_num'] = sample_num
self.status['cost_time'] = time.time() - tic
# accumulate metric to log out
for metric in self._metrics:
metric.accumulate()
metric.log()
self._compose_callback.on_epoch_end(self.status)
        # reset metric states since metrics may be computed multiple times
self._reset_metrics()
print("*****student evaluate*****")
for step_id, data in enumerate(loader):
self.status['step_id'] = step_id
self._compose_callback.on_step_begin(self.status)
# forward
outs = self.model.student(data)
# update metrics
for metric in self._metrics:
metric.update(data, outs)
# multi-scale inputs: all inputs have same im_id
if isinstance(data, typing.Sequence):
sample_num += data[0]['im_id'].numpy().shape[0]
else:
sample_num += data['im_id'].numpy().shape[0]
self._compose_callback.on_step_end(self.status)
self.status['sample_num'] = sample_num
self.status['cost_time'] = time.time() - tic
# accumulate metric to log out
for metric in self._metrics:
metric.accumulate()
metric.log()
        # reset metric states since metrics may be computed multiple times
self._reset_metrics()
self.status['mode'] = 'train'
def evaluate(self):
with paddle.no_grad():
self._eval_with_loader(self.loader)
| PaddleDetection/ppdet/engine/trainer_ssod.py/0 | {
"file_path": "PaddleDetection/ppdet/engine/trainer_ssod.py",
"repo_id": "PaddleDetection",
"token_count": 26540
} | 71 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import json
from collections import defaultdict, OrderedDict
import numpy as np
import paddle
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from ..modeling.keypoint_utils import oks_nms, keypoint_pck_accuracy, keypoint_auc, keypoint_epe
from scipy.io import loadmat, savemat
from ppdet.utils.logger import setup_logger
logger = setup_logger(__name__)
__all__ = [
'KeyPointTopDownCOCOEval', 'KeyPointTopDownCOCOWholeBadyHandEval',
'KeyPointTopDownMPIIEval'
]
class KeyPointTopDownCOCOEval(object):
"""refer to
https://github.com/leoxiaobin/deep-high-resolution-net.pytorch
Copyright (c) Microsoft, under the MIT License.
"""
def __init__(self,
anno_file,
num_samples,
num_joints,
output_eval,
iou_type='keypoints',
in_vis_thre=0.2,
oks_thre=0.9,
save_prediction_only=False):
super(KeyPointTopDownCOCOEval, self).__init__()
self.coco = COCO(anno_file)
self.num_samples = num_samples
self.num_joints = num_joints
self.iou_type = iou_type
self.in_vis_thre = in_vis_thre
self.oks_thre = oks_thre
self.output_eval = output_eval
self.res_file = os.path.join(output_eval, "keypoints_results.json")
self.save_prediction_only = save_prediction_only
self.reset()
def reset(self):
self.results = {
'all_preds': np.zeros(
(self.num_samples, self.num_joints, 3), dtype=np.float32),
'all_boxes': np.zeros((self.num_samples, 6)),
'image_path': []
}
self.eval_results = {}
self.idx = 0
def update(self, inputs, outputs):
kpts, _ = outputs['keypoint'][0]
num_images = inputs['image'].shape[0]
self.results['all_preds'][self.idx:self.idx + num_images, :, 0:
3] = kpts[:, :, 0:3]
self.results['all_boxes'][self.idx:self.idx + num_images, 0:2] = inputs[
'center'].numpy()[:, 0:2] if isinstance(
inputs['center'], paddle.Tensor) else inputs['center'][:, 0:2]
self.results['all_boxes'][self.idx:self.idx + num_images, 2:4] = inputs[
'scale'].numpy()[:, 0:2] if isinstance(
inputs['scale'], paddle.Tensor) else inputs['scale'][:, 0:2]
self.results['all_boxes'][self.idx:self.idx + num_images, 4] = np.prod(
inputs['scale'].numpy() * 200,
1) if isinstance(inputs['scale'], paddle.Tensor) else np.prod(
inputs['scale'] * 200, 1)
self.results['all_boxes'][
self.idx:self.idx + num_images,
5] = np.squeeze(inputs['score'].numpy()) if isinstance(
inputs['score'], paddle.Tensor) else np.squeeze(inputs['score'])
if isinstance(inputs['im_id'], paddle.Tensor):
self.results['image_path'].extend(inputs['im_id'].numpy())
else:
self.results['image_path'].extend(inputs['im_id'])
self.idx += num_images
def _write_coco_keypoint_results(self, keypoints):
data_pack = [{
'cat_id': 1,
'cls': 'person',
'ann_type': 'keypoints',
'keypoints': keypoints
}]
results = self._coco_keypoint_results_one_category_kernel(data_pack[0])
if not os.path.exists(self.output_eval):
os.makedirs(self.output_eval)
with open(self.res_file, 'w') as f:
json.dump(results, f, sort_keys=True, indent=4)
logger.info(f'The keypoint result is saved to {self.res_file}.')
try:
json.load(open(self.res_file))
except Exception:
content = []
with open(self.res_file, 'r') as f:
for line in f:
content.append(line)
content[-1] = ']'
with open(self.res_file, 'w') as f:
for c in content:
f.write(c)
def _coco_keypoint_results_one_category_kernel(self, data_pack):
cat_id = data_pack['cat_id']
keypoints = data_pack['keypoints']
cat_results = []
for img_kpts in keypoints:
if len(img_kpts) == 0:
continue
_key_points = np.array(
[img_kpts[k]['keypoints'] for k in range(len(img_kpts))])
_key_points = _key_points.reshape(_key_points.shape[0], -1)
result = [{
'image_id': img_kpts[k]['image'],
'category_id': cat_id,
'keypoints': _key_points[k].tolist(),
'score': img_kpts[k]['score'],
'center': list(img_kpts[k]['center']),
'scale': list(img_kpts[k]['scale'])
} for k in range(len(img_kpts))]
cat_results.extend(result)
return cat_results
def get_final_results(self, preds, all_boxes, img_path):
_kpts = []
for idx, kpt in enumerate(preds):
_kpts.append({
'keypoints': kpt,
'center': all_boxes[idx][0:2],
'scale': all_boxes[idx][2:4],
'area': all_boxes[idx][4],
'score': all_boxes[idx][5],
'image': int(img_path[idx])
})
# image x person x (keypoints)
kpts = defaultdict(list)
for kpt in _kpts:
kpts[kpt['image']].append(kpt)
# rescoring and oks nms
num_joints = preds.shape[1]
in_vis_thre = self.in_vis_thre
oks_thre = self.oks_thre
oks_nmsed_kpts = []
for img in kpts.keys():
img_kpts = kpts[img]
for n_p in img_kpts:
box_score = n_p['score']
kpt_score = 0
valid_num = 0
for n_jt in range(0, num_joints):
t_s = n_p['keypoints'][n_jt][2]
if t_s > in_vis_thre:
kpt_score = kpt_score + t_s
valid_num = valid_num + 1
if valid_num != 0:
kpt_score = kpt_score / valid_num
# rescoring
n_p['score'] = kpt_score * box_score
keep = oks_nms([img_kpts[i] for i in range(len(img_kpts))],
oks_thre)
if len(keep) == 0:
oks_nmsed_kpts.append(img_kpts)
else:
oks_nmsed_kpts.append([img_kpts[_keep] for _keep in keep])
self._write_coco_keypoint_results(oks_nmsed_kpts)
def accumulate(self):
self.get_final_results(self.results['all_preds'],
self.results['all_boxes'],
self.results['image_path'])
if self.save_prediction_only:
            logger.info(f'The keypoint result is saved to {self.res_file}; '
                        'the mAP will not be evaluated.')
return
coco_dt = self.coco.loadRes(self.res_file)
coco_eval = COCOeval(self.coco, coco_dt, 'keypoints')
coco_eval.params.useSegm = None
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
keypoint_stats = []
for ind in range(len(coco_eval.stats)):
keypoint_stats.append((coco_eval.stats[ind]))
self.eval_results['keypoint'] = keypoint_stats
def log(self):
if self.save_prediction_only:
return
stats_names = [
'AP', 'Ap .5', 'AP .75', 'AP (M)', 'AP (L)', 'AR', 'AR .5',
'AR .75', 'AR (M)', 'AR (L)'
]
num_values = len(stats_names)
print(' '.join(['| {}'.format(name) for name in stats_names]) + ' |')
print('|---' * (num_values + 1) + '|')
print(' '.join([
'| {:.3f}'.format(value) for value in self.eval_results['keypoint']
]) + ' |')
def get_results(self):
return self.eval_results
class KeyPointTopDownCOCOWholeBadyHandEval(object):
def __init__(self,
anno_file,
num_samples,
num_joints,
output_eval,
save_prediction_only=False):
super(KeyPointTopDownCOCOWholeBadyHandEval, self).__init__()
self.coco = COCO(anno_file)
self.num_samples = num_samples
self.num_joints = num_joints
self.output_eval = output_eval
self.res_file = os.path.join(output_eval, "keypoints_results.json")
self.save_prediction_only = save_prediction_only
self.parse_dataset()
self.reset()
def parse_dataset(self):
gt_db = []
num_joints = self.num_joints
coco = self.coco
img_ids = coco.getImgIds()
for img_id in img_ids:
ann_ids = coco.getAnnIds(imgIds=img_id, iscrowd=False)
objs = coco.loadAnns(ann_ids)
for obj in objs:
for type in ['left', 'right']:
if (obj[f'{type}hand_valid'] and
max(obj[f'{type}hand_kpts']) > 0):
joints = np.zeros((num_joints, 3), dtype=np.float32)
joints_vis = np.zeros((num_joints, 3), dtype=np.float32)
keypoints = np.array(obj[f'{type}hand_kpts'])
keypoints = keypoints.reshape(-1, 3)
joints[:, :2] = keypoints[:, :2]
joints_vis[:, :2] = np.minimum(1, keypoints[:, 2:3])
gt_db.append({
'bbox': obj[f'{type}hand_box'],
'gt_joints': joints,
'joints_vis': joints_vis,
})
self.db = gt_db
def reset(self):
self.results = {
'preds': np.zeros(
(self.num_samples, self.num_joints, 3), dtype=np.float32),
}
self.eval_results = {}
self.idx = 0
def update(self, inputs, outputs):
kpts, _ = outputs['keypoint'][0]
num_images = inputs['image'].shape[0]
self.results['preds'][self.idx:self.idx + num_images, :, 0:
3] = kpts[:, :, 0:3]
self.idx += num_images
def accumulate(self):
self.get_final_results(self.results['preds'])
if self.save_prediction_only:
            logger.info(f'The keypoint result is saved to {self.res_file}; '
                        'the mAP will not be evaluated.')
return
self.eval_results = self.evaluate(self.res_file, ('PCK', 'AUC', 'EPE'))
def get_final_results(self, preds):
kpts = []
for idx, kpt in enumerate(preds):
kpts.append({'keypoints': kpt.tolist()})
self._write_keypoint_results(kpts)
def _write_keypoint_results(self, keypoints):
if not os.path.exists(self.output_eval):
os.makedirs(self.output_eval)
with open(self.res_file, 'w') as f:
json.dump(keypoints, f, sort_keys=True, indent=4)
logger.info(f'The keypoint result is saved to {self.res_file}.')
try:
json.load(open(self.res_file))
except Exception:
content = []
with open(self.res_file, 'r') as f:
for line in f:
content.append(line)
content[-1] = ']'
with open(self.res_file, 'w') as f:
for c in content:
f.write(c)
def log(self):
if self.save_prediction_only:
return
for item, value in self.eval_results.items():
print("{} : {}".format(item, value))
def get_results(self):
return self.eval_results
def evaluate(self, res_file, metrics, pck_thr=0.2, auc_nor=30):
"""Keypoint evaluation.
Args:
            res_file (str): Json file storing the prediction results.
metrics (str | list[str]): Metric to be performed.
Options: 'PCK', 'AUC', 'EPE'.
pck_thr (float): PCK threshold, default as 0.2.
auc_nor (float): AUC normalization factor, default as 30 pixel.
Returns:
List: Evaluation results for evaluation metric.
"""
info_str = []
with open(res_file, 'r') as fin:
preds = json.load(fin)
assert len(preds) == len(self.db)
outputs = []
gts = []
masks = []
threshold_bbox = []
for pred, item in zip(preds, self.db):
outputs.append(np.array(pred['keypoints'])[:, :-1])
gts.append(np.array(item['gt_joints'])[:, :-1])
masks.append((np.array(item['joints_vis'])[:, 0]) > 0)
if 'PCK' in metrics:
bbox = np.array(item['bbox'])
bbox_thr = np.max(bbox[2:])
threshold_bbox.append(np.array([bbox_thr, bbox_thr]))
outputs = np.array(outputs)
gts = np.array(gts)
masks = np.array(masks)
threshold_bbox = np.array(threshold_bbox)
if 'PCK' in metrics:
_, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,
threshold_bbox)
info_str.append(('PCK', pck))
if 'AUC' in metrics:
info_str.append(('AUC', keypoint_auc(outputs, gts, masks, auc_nor)))
if 'EPE' in metrics:
info_str.append(('EPE', keypoint_epe(outputs, gts, masks)))
name_value = OrderedDict(info_str)
return name_value
class KeyPointTopDownMPIIEval(object):
def __init__(self,
anno_file,
num_samples,
num_joints,
output_eval,
oks_thre=0.9,
save_prediction_only=False):
super(KeyPointTopDownMPIIEval, self).__init__()
self.ann_file = anno_file
self.res_file = os.path.join(output_eval, "keypoints_results.json")
self.save_prediction_only = save_prediction_only
self.reset()
def reset(self):
self.results = []
self.eval_results = {}
self.idx = 0
def update(self, inputs, outputs):
kpts, _ = outputs['keypoint'][0]
num_images = inputs['image'].shape[0]
results = {}
results['preds'] = kpts[:, :, 0:3]
results['boxes'] = np.zeros((num_images, 6))
results['boxes'][:, 0:2] = inputs['center'].numpy()[:, 0:2]
results['boxes'][:, 2:4] = inputs['scale'].numpy()[:, 0:2]
results['boxes'][:, 4] = np.prod(inputs['scale'].numpy() * 200, 1)
results['boxes'][:, 5] = np.squeeze(inputs['score'].numpy())
results['image_path'] = inputs['image_file']
self.results.append(results)
def accumulate(self):
self._mpii_keypoint_results_save()
if self.save_prediction_only:
            logger.info(f'The keypoint result is saved to {self.res_file}; '
                        'the mAP will not be evaluated.')
return
self.eval_results = self.evaluate(self.results)
def _mpii_keypoint_results_save(self):
results = []
for res in self.results:
if len(res) == 0:
continue
result = [{
'preds': res['preds'][k].tolist(),
'boxes': res['boxes'][k].tolist(),
'image_path': res['image_path'][k],
} for k in range(len(res))]
results.extend(result)
with open(self.res_file, 'w') as f:
json.dump(results, f, sort_keys=True, indent=4)
logger.info(f'The keypoint result is saved to {self.res_file}.')
def log(self):
if self.save_prediction_only:
return
for item, value in self.eval_results.items():
print("{} : {}".format(item, value))
def get_results(self):
return self.eval_results
def evaluate(self, outputs, savepath=None):
"""Evaluate PCKh for MPII dataset. refer to
https://github.com/leoxiaobin/deep-high-resolution-net.pytorch
Copyright (c) Microsoft, under the MIT License.
Args:
outputs(list(preds, boxes)):
* preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
* boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
Returns:
dict: PCKh for each joint
"""
kpts = []
for output in outputs:
preds = output['preds']
batch_size = preds.shape[0]
for i in range(batch_size):
kpts.append({'keypoints': preds[i]})
preds = np.stack([kpt['keypoints'] for kpt in kpts])
# convert 0-based index to 1-based index,
# and get the first two dimensions.
preds = preds[..., :2] + 1.0
if savepath is not None:
pred_file = os.path.join(savepath, 'pred.mat')
savemat(pred_file, mdict={'preds': preds})
SC_BIAS = 0.6
threshold = 0.5
gt_file = os.path.join(
os.path.dirname(self.ann_file), 'mpii_gt_val.mat')
gt_dict = loadmat(gt_file)
dataset_joints = gt_dict['dataset_joints']
jnt_missing = gt_dict['jnt_missing']
pos_gt_src = gt_dict['pos_gt_src']
headboxes_src = gt_dict['headboxes_src']
pos_pred_src = np.transpose(preds, [1, 2, 0])
head = np.where(dataset_joints == 'head')[1][0]
lsho = np.where(dataset_joints == 'lsho')[1][0]
lelb = np.where(dataset_joints == 'lelb')[1][0]
lwri = np.where(dataset_joints == 'lwri')[1][0]
lhip = np.where(dataset_joints == 'lhip')[1][0]
lkne = np.where(dataset_joints == 'lkne')[1][0]
lank = np.where(dataset_joints == 'lank')[1][0]
rsho = np.where(dataset_joints == 'rsho')[1][0]
relb = np.where(dataset_joints == 'relb')[1][0]
rwri = np.where(dataset_joints == 'rwri')[1][0]
rkne = np.where(dataset_joints == 'rkne')[1][0]
rank = np.where(dataset_joints == 'rank')[1][0]
rhip = np.where(dataset_joints == 'rhip')[1][0]
jnt_visible = 1 - jnt_missing
uv_error = pos_pred_src - pos_gt_src
uv_err = np.linalg.norm(uv_error, axis=1)
headsizes = headboxes_src[1, :, :] - headboxes_src[0, :, :]
headsizes = np.linalg.norm(headsizes, axis=0)
headsizes *= SC_BIAS
scale = headsizes * np.ones((len(uv_err), 1), dtype=np.float32)
scaled_uv_err = uv_err / scale
scaled_uv_err = scaled_uv_err * jnt_visible
jnt_count = np.sum(jnt_visible, axis=1)
less_than_threshold = (scaled_uv_err <= threshold) * jnt_visible
PCKh = 100. * np.sum(less_than_threshold, axis=1) / jnt_count
# save
rng = np.arange(0, 0.5 + 0.01, 0.01)
pckAll = np.zeros((len(rng), 16), dtype=np.float32)
for r, threshold in enumerate(rng):
less_than_threshold = (scaled_uv_err <= threshold) * jnt_visible
pckAll[r, :] = 100. * np.sum(less_than_threshold,
axis=1) / jnt_count
PCKh = np.ma.array(PCKh, mask=False)
PCKh.mask[6:8] = True
jnt_count = np.ma.array(jnt_count, mask=False)
jnt_count.mask[6:8] = True
jnt_ratio = jnt_count / np.sum(jnt_count).astype(np.float64)
name_value = [ #noqa
('Head', PCKh[head]),
('Shoulder', 0.5 * (PCKh[lsho] + PCKh[rsho])),
('Elbow', 0.5 * (PCKh[lelb] + PCKh[relb])),
('Wrist', 0.5 * (PCKh[lwri] + PCKh[rwri])),
('Hip', 0.5 * (PCKh[lhip] + PCKh[rhip])),
('Knee', 0.5 * (PCKh[lkne] + PCKh[rkne])),
('Ankle', 0.5 * (PCKh[lank] + PCKh[rank])),
('PCKh', np.sum(PCKh * jnt_ratio)),
('PCKh@0.1', np.sum(pckAll[11, :] * jnt_ratio))
]
name_value = OrderedDict(name_value)
return name_value
def _sort_and_unique_bboxes(self, kpts, key='bbox_id'):
"""sort kpts and remove the repeated ones."""
kpts = sorted(kpts, key=lambda x: x[key])
num = len(kpts)
for i in range(num - 1, 0, -1):
if kpts[i][key] == kpts[i - 1][key]:
del kpts[i]
return kpts
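# --- Illustrative sketch (not part of the original repository file) ---
# KeyPointTopDownCOCOEval.get_final_results rescales each detection score by the mean
# confidence of its visible keypoints before running OKS NMS. The snippet below restates
# that rescoring on made-up numbers; all values are purely illustrative.
if __name__ == '__main__':
    kpt_conf = np.array([0.9, 0.8, 0.05, 0.7])  # per-joint confidences
    box_score, in_vis_thre = 0.6, 0.2
    visible = kpt_conf > in_vis_thre
    kpt_score = kpt_conf[visible].mean() if visible.any() else 0.0
    print('rescored score: {:.3f}'.format(kpt_score * box_score))  # mean(0.9, 0.8, 0.7) * 0.6 = 0.48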
| PaddleDetection/ppdet/metrics/keypoint_metrics.py/0 | {
"file_path": "PaddleDetection/ppdet/metrics/keypoint_metrics.py",
"repo_id": "PaddleDetection",
"token_count": 11265
} | 72 |
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from ppdet.core.workspace import register, create
from .meta_arch import BaseArch
import paddle
import paddle.nn.functional as F
__all__ = ['BlazeFace']
@register
class BlazeFace(BaseArch):
"""
BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs,
see https://arxiv.org/abs/1907.05047
Args:
backbone (nn.Layer): backbone instance
neck (nn.Layer): neck instance
blaze_head (nn.Layer): `blazeHead` instance
post_process (object): `BBoxPostProcess` instance
"""
__category__ = 'architecture'
__inject__ = ['post_process']
def __init__(self, backbone, blaze_head, neck, post_process):
super(BlazeFace, self).__init__()
self.backbone = backbone
self.neck = neck
self.blaze_head = blaze_head
self.post_process = post_process
@classmethod
def from_config(cls, cfg, *args, **kwargs):
# backbone
backbone = create(cfg['backbone'])
# fpn
kwargs = {'input_shape': backbone.out_shape}
neck = create(cfg['neck'], **kwargs)
# head
kwargs = {'input_shape': neck.out_shape}
blaze_head = create(cfg['blaze_head'], **kwargs)
return {
'backbone': backbone,
'neck': neck,
'blaze_head': blaze_head,
}
def _forward(self):
# Backbone
body_feats = self.backbone(self.inputs)
# neck
neck_feats = self.neck(body_feats)
# blaze Head
if self.training:
return self.blaze_head(neck_feats, self.inputs['image'],
self.inputs['gt_bbox'],
self.inputs['gt_class'])
else:
preds, anchors = self.blaze_head(neck_feats, self.inputs['image'])
bbox, bbox_num, nms_keep_idx = self.post_process(
preds, anchors, self.inputs['im_shape'],
self.inputs['scale_factor'])
if self.use_extra_data:
                extra_data = {}  # record the bbox output before nms, such as scores and nms_keep_idx
"""extra_data:{
'scores': predict scores,
'nms_keep_idx': bbox index before nms,
}
"""
preds_logits = preds[1] # [[1xNumBBoxNumClass]]
extra_data['scores'] = F.softmax(paddle.concat(
preds_logits, axis=1)).transpose([0, 2, 1])
extra_data['logits'] = paddle.concat(
preds_logits, axis=1).transpose([0, 2, 1])
extra_data['nms_keep_idx'] = nms_keep_idx # bbox index before nms
return bbox, bbox_num, extra_data
else:
return bbox, bbox_num
def get_loss(self, ):
return {"loss": self._forward()}
def get_pred(self):
if self.use_extra_data:
bbox_pred, bbox_num, extra_data = self._forward()
output = {
"bbox": bbox_pred,
"bbox_num": bbox_num,
"extra_data": extra_data
}
else:
bbox_pred, bbox_num = self._forward()
output = {
"bbox": bbox_pred,
"bbox_num": bbox_num,
}
return output
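# --- Illustrative sketch (not part of the original repository file) ---
# A minimal, hedged example of how an architecture such as BlazeFace is usually built
# from a yaml config through the workspace registry (the same `create` call used by the
# trainers). The config path below is an assumption and may differ between releases.
if __name__ == '__main__':
    from ppdet.core.workspace import load_config
    cfg = load_config('configs/face_detection/blazeface_1000e.yml')  # assumed path
    model = create(cfg.architecture)  # dispatches to BlazeFace.from_config
    model.eval()
    print(type(model).__name__)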
| PaddleDetection/ppdet/modeling/architectures/blazeface.py/0 | {
"file_path": "PaddleDetection/ppdet/modeling/architectures/blazeface.py",
"repo_id": "PaddleDetection",
"token_count": 1955
} | 73 |