---
license: apache-2.0
datasets:
- detection-datasets/coco
language:
- en
metrics:
- accuracy
tags:
- RyzenAI
- pose estimation
---

# MoveNet

MoveNet is an ultra-fast and accurate model that detects 17 keypoints of a body. It was released in [movenet.pytorch](https://github.com/fire717/movenet.pytorch/blob/master/README.md?plain=1).


We developed a modified version that is supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com/).



## How to use

### Installation

   Follow the [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) guide to prepare the environment for Ryzen AI.
   Then run the following script to install the prerequisites for this model:
   ```bash
   pip install -r requirements.txt
   ```


### Data Preparation (optional: for accuracy evaluation)

1. Download the COCO 2017 dataset from https://cocodataset.org/. (You need `train2017.zip`, `val2017.zip`, and the annotations.) Unzip it to `./data/` so the layout looks like this:

```
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ annotations (person_keypoints_train2017.json, person_keypoints_val2017.json, ...)
β”‚   β”œβ”€β”€ train2017   (xx.jpg, xx.jpg, ...)
β”‚   └── val2017     (xx.jpg, xx.jpg, ...)
```


2. Convert the dataset to our data format (described below).
 - Modify the paths in lines 282–287 of make_coco_data_17keypoints.py if needed.
 - Run the script to pre-process the dataset:
```bash
python make_coco_data_17keypoints.py
```
```
Our data format: JSON file
Keypoints order:['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 
    'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 
    'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 
    'right_ankle']

One item:
[{"img_name": "0.jpg",
  "keypoints": [x0,y0,z0,x1,y1,z1,...],
  # z: 0 for no label, 1 for labeled but invisible, 2 for labeled and visible
  "center": [x,y],
  "bbox":[x0,y0,x1,y1],
  "other_centers": [[x0,y0],[x1,y1],...],
  "other_keypoints": [[[x0,y0],[x1,y1],...],[[x0,y0],[x1,y1],...],...], #lenth = num_keypoints
 },
 ...
]
```
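
For reference, here is a minimal sketch of reading the generated annotations back and decoding the flat keypoint list. The file name `data/train2017.json` is an assumption; use whatever output path `make_coco_data_17keypoints.py` is configured to write:

```python
import json

# Load the pre-processed annotations (file name is an assumption;
# adjust it to the output path configured in make_coco_data_17keypoints.py).
with open("data/train2017.json", "r") as f:
    items = json.load(f)

names = ["nose", "left_eye", "right_eye", "left_ear", "right_ear",
         "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
         "left_wrist", "right_wrist", "left_hip", "right_hip",
         "left_knee", "right_knee", "left_ankle", "right_ankle"]

item = items[0]
print(item["img_name"], "center:", item["center"], "bbox:", item["bbox"])

# Keypoints are stored flat as [x0, y0, z0, x1, y1, z1, ...];
# z is 0 (no label), 1 (labeled but invisible), or 2 (labeled and visible).
kps = item["keypoints"]
for i, name in enumerate(names):
    x, y, z = kps[3 * i], kps[3 * i + 1], kps[3 * i + 2]
    if z == 2:  # keep only labeled-and-visible keypoints
        print(f"{name}: ({x}, {y})")
```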




### Test & Evaluation

 - Modify the `DATASET_PATH` in eval_onnx.py if needed.
 - Test the accuracy of the quantized model:
  ```bash
  python eval_onnx.py --ipu --provider_config Path\To\vaip_config.json
  ```
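
For context, eval_onnx.py runs the quantized model through ONNX Runtime with the Vitis AI execution provider. Below is a minimal standalone sketch of that session setup; the model file name `movenet_int8.onnx` and the 1x3x192x192 input shape are assumptions, so check eval_onnx.py for the values this repository actually uses:

```python
import numpy as np
import onnxruntime as ort

# Create a session on the IPU via the Vitis AI execution provider.
# The model file name is an assumption; vaip_config.json comes from
# the Ryzen AI installation.
session = ort.InferenceSession(
    "movenet_int8.onnx",
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": r"Path\To\vaip_config.json"}],
)

inp = session.get_inputs()[0]
# Dummy tensor standing in for a preprocessed image; the 192x192
# input size is an assumption based on the original MoveNet models.
dummy = np.random.rand(1, 3, 192, 192).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
for out in outputs:
    print(out.shape)
```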

### Performance

|Metric |Value on IPU|
| :----:  | :----: |
|Accuracy | 79.745%|


## Citation
1. [MoveNet.SinglePose model card](https://storage.googleapis.com/movenet/MoveNet.SinglePose%20Model%20Card.pdf)
2. [movenet.pytorch](https://github.com/fire717/movenet.pytorch/blob/master/README.md?plain=1)