danelcsb committed
Commit 2d70574
1 Parent(s): 74a7862

Update README.md

Files changed (1):
  1. README.md +117 -14
README.md CHANGED
Removed lines:

@@ -1,36 +1,55 @@
- tags: []
- <!-- Provide a quick summary of what the model is/does. -->
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **License:** [More Information Needed]
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
@@ -59,7 +78,15 @@ This is the model card of a 🤗 transformers model that has been pushed on the
- [More Information Needed]
@@ -71,8 +98,75 @@ Users (both direct and downstream) should be made aware of the risks, biases and
- [More Information Needed]
@@ -92,13 +186,14 @@ Use the code below to get started with the model.
- - **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- [More Information Needed]
@@ -126,7 +221,7 @@ Use the code below to get started with the model.
- [More Information Needed]
@@ -162,7 +257,7 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
- [More Information Needed]
@@ -174,7 +269,15 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
- [More Information Needed]

The updated README.md follows:
 
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: keypoint-detection
---

# Model Card for Model ID

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/ZuIwMdomy2_6aJ_JTE1Yd.png)

<!-- Provide a quick summary of what the model is/does. -->

ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation, and ViTPose+: Vision Transformer Foundation Model for Generic Body Pose Estimation. ViTPose obtains 81.1 AP on the MS COCO Keypoint test-dev set.

## Model Details

Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision transformers as backbones to extract features for a given person instance and a lightweight decoder for pose estimation. It can be scaled up from 100M to 1B parameters by taking advantage of the scalable model capacity and high parallelism of transformers, setting a new Pareto front between throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, pre-training and finetuning strategy, as well as dealing with multiple pose tasks. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our basic ViTPose model outperforms representative methods on the challenging MS COCO Keypoint Detection benchmark, while the largest model sets a new state-of-the-art, i.e., 80.9 AP on the MS COCO test-dev set. The code and models are available at https://github.com/ViTAE-Transformer/ViTPose.

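As a rough illustration of this backbone-plus-lightweight-decoder idea (a sketch only, not the exact ViTPose implementation in either the original repository or 🤗 Transformers), the snippet below decodes a grid of ViT patch tokens into one heatmap per keypoint using two deconvolution blocks; the embedding size, channel widths, and the 17-keypoint COCO setup are illustrative assumptions.

```python
import torch
from torch import nn


class LightweightPoseDecoder(nn.Module):
    """Toy decoder head: ViT patch tokens -> per-keypoint heatmaps."""

    def __init__(self, embed_dim: int = 768, num_keypoints: int = 17):
        super().__init__()
        # Two deconvolution blocks upsample the coarse token grid 4x,
        # then a 1x1 convolution predicts one heatmap per keypoint.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(embed_dim, 256, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 256, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(256, num_keypoints, kernel_size=1)

    def forward(self, tokens: torch.Tensor, grid_hw: tuple) -> torch.Tensor:
        # tokens: (batch, num_patches, embed_dim) from a plain, non-hierarchical ViT
        batch, _, channels = tokens.shape
        height, width = grid_hw
        feature_map = tokens.transpose(1, 2).reshape(batch, channels, height, width)
        return self.head(self.deconv(feature_map))


# A 256x192 person crop with 16x16 patches gives a 16x12 token grid (dummy tokens here).
dummy_tokens = torch.randn(1, 16 * 12, 768)
heatmaps = LightweightPoseDecoder()(dummy_tokens, grid_hw=(16, 12))
print(heatmaps.shape)  # torch.Size([1, 17, 64, 48])
```
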
### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** Sangbum Choi and Niels Rogge
- **Funded by [optional]:** ARC FL-170100117 and IH-180100002.
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** apache-2.0
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/ViTAE-Transformer/ViTPose
- **Paper [optional]:** https://arxiv.org/pdf/2204.12484
- **Demo [optional]:** [More Information Needed]

## Uses
 
<!-- This section is meant to convey both technical and sociotechnical limitations. -->

In this paper, we propose a simple yet effective vision transformer baseline for pose estimation, i.e., ViTPose. Despite no elaborate designs in structure, ViTPose obtains SOTA performance on the MS COCO dataset. However, the potential of ViTPose is not fully explored with more advanced technologies, such as complex decoders or FPN structures, which may further improve the performance. Besides, although ViTPose demonstrates exciting properties such as simplicity, scalability, flexibility, and transferability, more research efforts could be made, e.g., exploring prompt-based tuning to demonstrate the flexibility of ViTPose further. In addition, we believe ViTPose can also be applied to other pose estimation datasets, e.g., animal pose estimation [47, 9, 45] and face keypoint detection [21, 6]. We leave these as future work.

### Recommendations
 
Use the code below to get started with the model.

```python
import numpy as np
import requests
import torch
from PIL import Image

from transformers import (
    RTDetrForObjectDetection,
    RTDetrImageProcessor,
    VitPoseConfig,
    VitPoseForPoseEstimation,
    VitPoseImageProcessor,
)

url = "http://images.cocodataset.org/val2017/000000000139.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Stage 1. Run an object detector to find person boxes (this detector can be swapped out).
person_image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
person_model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
inputs = person_image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = person_model(**inputs)

results = person_image_processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([(image.height, image.width)]), threshold=0.3
)


def pascal_voc_to_coco(bboxes: np.ndarray) -> np.ndarray:
    """
    Converts bounding boxes from the Pascal VOC format to the COCO format.

    In other words, converts from (top_left_x, top_left_y, bottom_right_x, bottom_right_y)
    format to (top_left_x, top_left_y, width, height).

    Args:
        bboxes (`np.ndarray` of shape `(batch_size, 4)`):
            Bounding boxes in Pascal VOC format.

    Returns:
        `np.ndarray` of shape `(batch_size, 4)` in COCO format.
    """
    bboxes[:, 2] = bboxes[:, 2] - bboxes[:, 0]
    bboxes[:, 3] = bboxes[:, 3] - bboxes[:, 1]

    return bboxes


# Keep only person detections (the person class has label 0 in the COCO label set).
boxes = results[0]["boxes"][results[0]["labels"] == 0]
boxes = [pascal_voc_to_coco(boxes.cpu().numpy())]

# Stage 2. Run ViTPose on the detected person boxes.
config = VitPoseConfig()
image_processor = VitPoseImageProcessor.from_pretrained("nielsr/vitpose-base-simple")
model = VitPoseForPoseEstimation.from_pretrained("nielsr/vitpose-base-simple")

pixel_values = image_processor(image, boxes=boxes, return_tensors="pt").pixel_values

with torch.no_grad():
    outputs = model(pixel_values)

pose_results = image_processor.post_process_pose_estimation(outputs, boxes=boxes)[0]

for pose_result in pose_results:
    print(pose_result)
```

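Continuing from the example above, the short sketch below draws the predicted keypoints onto the input image. The field names `keypoints` and `scores`, and the 0.3 confidence cut-off, are assumptions for illustration; adjust them if your installed version of 🤗 Transformers returns the pose results in a different layout.

```python
from PIL import ImageDraw

# Draw each sufficiently confident keypoint of each detected person as a small dot.
annotated = image.copy()
draw = ImageDraw.Draw(annotated)

for person in pose_results:
    keypoints = person["keypoints"]  # assumed: (num_keypoints, 2) pixel coordinates
    scores = person["scores"]        # assumed: per-keypoint confidence scores

    for (x, y), score in zip(keypoints.tolist(), scores.tolist()):
        if score < 0.3:  # arbitrary confidence threshold for this sketch
            continue
        draw.ellipse([x - 3, y - 3, x + 3, y + 3], fill="red")

annotated.save("pose_result.png")
```
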
## Training Details

### Training Data

#### Training Hyperparameters

- **Training regime:** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/Gj6gGcIGO3J5HD2MAB_4C.png)
  <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/rsCmn48SAvhi8xwJhX8h5.png)

## Evaluation

### Results

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/FcHVFdUmCuT2m0wzB8QSS.png)

#### Summary
 
#### Hardware

The models are trained on 8 A100 GPUs based on the mmpose codebase [11].

#### Software
 
**BibTeX:**

@misc{xu2022vitposesimplevisiontransformer,
  title={ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation},
  author={Yufei Xu and Jing Zhang and Qiming Zhang and Dacheng Tao},
  year={2022},
  eprint={2204.12484},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2204.12484},
}

**APA:**