Commit 6c0ccc0

liqikai9 and ly015 authored

[Doc] Refine some docs (#1620)

* fix typo
* minor modification to changelog.md
* fix lint
* remove legacy files
* remove .gitkeep
* remove legacy files

Co-authored-by: ly015 <[email protected]>

1 parent d5d8a91 · commit 6c0ccc0

14 files changed: +382 −36 lines

README.md

Lines changed: 1 addition & 2 deletions

@@ -93,7 +93,7 @@ cd mmpose
 mim install -e .
 ```

-Please refer to [install.md](https://mmpose.readthedocs.io/en/1.x/installation.html) for more detailed installation and dataset preparation.
+Please refer to [installation.md](https://mmpose.readthedocs.io/en/1.x/installation.html) for more detailed installation and dataset preparation.

 ## Getting Started

@@ -147,7 +147,6 @@ A summary can be found in the [Model Zoo](https://mmpose.readthedocs.io/en/1.x/m
 - [x] [UDP](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#udp-cvpr-2020) (CVPR'2020)
 - [x] [Albumentations](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#albumentations-information-2020) (Information'2020)
 - [x] [SoftWingloss](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#softwingloss-tip-2021) (TIP'2021)
-- [x] [SmoothNet](/configs/_base_/filters/smoothnet_h36m.md) (arXiv'2021)
 - [x] [RLE](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#rle-iccv-2021) (ICCV'2021)

 </details>

demo/docs/2d_face_demo.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ python demo/topdown_face_demo.py \
     [--kpt-thr ${KPT_SCORE_THR}]
 ```

-The pre-trained face keypoint estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/face.html).
+The pre-trained face keypoint estimation model can be found in [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/face_2d_keypoint.html).
 Take [aflw model](https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth) as an example:

 ```shell

demo/docs/2d_hand_demo.md

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ python demo/topdown_demo_with_mmdet.py \

 ```

-The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/hand.html).
+The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/hand_2d_keypoint.html).
 Take [onehand10k model](https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth) as an example:

 ```shell

demo/docs/2d_human_pose_demo.md

Lines changed: 16 additions & 6 deletions

@@ -16,7 +16,9 @@ python demo/image_demo.py \
     [--draw_heatmap]
 ```

-The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/body.html).
+If you use a heatmap-based model and set the argument `--draw-heatmap`, the predicted heatmap will be visualized together with the keypoints.
+
+The pre-trained human pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/body_2d_keypoint.html).
 Take [coco model](https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth) as an example:

 ```shell

@@ -41,17 +43,21 @@ python demo/image_demo.py \
     --device=cpu
 ```

+Visualization result:
+
+<img src="https://user-images.githubusercontent.com/87690686/187824033-2cce0f55-034a-4127-82e2-52744178bc32.jpg" height="500px" alt><br>
+
 #### Use mmdet for human bounding box detection

 We provide a demo script to run mmdet for human detection, and mmpose for pose estimation.

-Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection).
+Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection) with version >= 3.0.

 ```shell
 python demo/topdown_demo_with_mmdet.py \
     ${MMDET_CONFIG_FILE} ${MMDET_CHECKPOINT_FILE} \
     ${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
-    --input ${IMG_OR_VIDEO_FILE} \
+    --input ${INPUT_PATH} \
     --output-root ${OUTPUT_DIR} \
     [--show --draw-heatmap --device ${GPU_ID or CPU}] \
     [--bbox-thr ${BBOX_SCORE_THR} --kpt-thr ${KPT_SCORE_THR}]

@@ -69,11 +75,15 @@ python demo/topdown_demo_with_mmdet.py \
     --output-root vis_results/
 ```

+Visualization result:
+
+<img src="https://user-images.githubusercontent.com/87690686/187824368-1f1631c3-52bf-4b45-bf9a-a70cd6551e1a.jpg" height="500px" alt><br>
+
 ### 2D Human Pose Top-Down Video Demo

-The above demo script can also take video as input, and run mmdet for human detection, and mmpose for pose estimation.
+The above demo script can also take video as input, running mmdet for human detection and mmpose for pose estimation. The difference is that for videos, `${INPUT_PATH}` can be either a local path or a **URL** link to a video file.

-Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection).
+Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection) with version >= 3.0.

 Examples:

@@ -93,5 +103,5 @@ Some tips to speed up MMPose inference:

 For top-down models, try to edit the config file. For example,

-1. set `flip_test=False` in [topdown-res50](/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_res50_8xb64-210e_coco-256x192.py#L56).
+1. set `model.test_cfg.flip_test=False` in [topdown-res50](/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_res50_8xb64-210e_coco-256x192.py#L56).
 2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/latest/model_zoo.html).
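
For reference, the `flip_test` switch from tip 1 lives in the model's test-time config. Below is a minimal sketch of the relevant fragment, assuming the usual MMPose 1.x top-down config layout (surrounding fields abridged):

```python
# Abridged top-down config fragment (MMPose 1.x style; most fields omitted).
# flip_test=True runs a second forward pass on the horizontally flipped
# image and averages the two predictions; turning it off roughly halves
# per-bbox inference time at a small accuracy cost.
model = dict(
    type='TopdownPoseEstimator',
    # ... data_preprocessor, backbone, head omitted ...
    test_cfg=dict(
        flip_test=False,  # the linked config defaults this to True
        shift_heatmap=True,
    ))
```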

demo/docs/2d_wholebody_pose_demo.md

Lines changed: 4 additions & 4 deletions

@@ -16,7 +16,7 @@ python demo/image_demo.py \
     [--draw_heatmap]
 ```

-The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/wholebody.html).
+The pre-trained wholebody pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/2d_wholebody_keypoint.html).
 Take [coco-wholebody_vipnas_res50_dark](https://download.openmmlab.com/mmpose/top_down/vipnas/vipnas_res50_wholebody_256x192_dark-67c0ce35_20211112.pth) model as an example:

 ```shell

@@ -43,13 +43,13 @@ python demo/image_demo.py \

 We provide a demo script to run mmdet for human detection, and mmpose for pose estimation.

-Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection).
+Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection) with version >= 3.0.

 ```shell
 python demo/topdown_demo_with_mmdet.py \
     ${MMDET_CONFIG_FILE} ${MMDET_CHECKPOINT_FILE} \
     ${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
-    --input ${IMG_OR_VIDEO_FILE} \
+    --input ${INPUT_PATH} \
     --output-root ${OUTPUT_DIR} \
     [--show --draw-heatmap --device ${GPU_ID or CPU}] \
     [--bbox-thr ${BBOX_SCORE_THR} --kpt-thr ${KPT_SCORE_THR}]

@@ -91,5 +91,5 @@ Some tips to speed up MMPose inference:

 For top-down models, try to edit the config file. For example,

-1. set `flip_test=False` in [pose_hrnet_w48_dark+](/configs/wholebody_2d_keypoint/topdown_heatmap/coco-wholebody/td-hm_hrnet-w48_dark-8xb32-210e_coco-wholebody-384x288.py#L90).
+1. set `model.test_cfg.flip_test=False` in [pose_hrnet_w48_dark+](/configs/wholebody_2d_keypoint/topdown_heatmap/coco-wholebody/td-hm_hrnet-w48_dark-8xb32-210e_coco-wholebody-384x288.py#L90).
 2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/latest/model_zoo.html).

demo/topdown_demo_with_mmdet.py

Lines changed: 2 additions & 0 deletions

@@ -161,6 +161,7 @@ def main():
    elif input_type == 'video':
        tmp_folder = tempfile.TemporaryDirectory()
        video = mmcv.VideoReader(args.input)
+       progressbar = mmengine.ProgressBar(len(video))
        video.cvt2frames(tmp_folder.name, show_progress=False)
        output_root = args.output_root
        args.output_root = tmp_folder.name

@@ -172,6 +173,7 @@ def main():
                pose_estimator,
                visualizer,
                show_interval=1)
+           progressbar.update()
        if output_root:
            mmcv.frames2video(
                tmp_folder.name,
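
The two added lines hook a textual progress bar into the per-frame loop. In isolation, `mmengine.ProgressBar` (the exact call the patch uses) works like this minimal sketch, where the sleep stands in for per-frame inference:

```python
import time

import mmengine

# Create the bar with the total task count, then call update() once per
# finished task; the bar renders incrementally to stdout.
progressbar = mmengine.ProgressBar(5)
for _ in range(5):
    time.sleep(0.2)  # stand-in for detecting + estimating one frame
    progressbar.update()
```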

docs/en/installation.md

Lines changed: 9 additions & 4 deletions

@@ -3,6 +3,7 @@
 - [Prerequisites](#prerequisites)
 - [Installation](#installation)
   - [Best Practices](#best-practices)
+  - [Verify the installation](#verify-the-installation)
   - [Customize Installation](#customize-installation)
     - [CUDA versions](#cuda-versions)
     - [Install MMEngine without MIM](#install-mmengine-without-mim)

@@ -60,15 +61,15 @@ We recommend that users follow our best practices to install MMPose. However, th
 ```shell
 pip install -U openmim
 mim install mmengine
-mim install mmcv>=2.0.0rc1
+mim install "mmcv>=2.0.0rc1"
 ```

 **Step 1.** Install MMPose.

 Case a: If you develop and run mmpose directly, install it from source:

 ```shell
-git clone https://github.com/open-mmlab/mmpose.git -d dev-1.x
+git clone https://github.com/open-mmlab/mmpose.git -b dev-1.x
 # "-b dev-1.x" means checkout to the `dev-1.x` branch.
 cd mmpose
 pip install -r requirements.txt

@@ -81,9 +82,13 @@ pip install -v -e .
 Case b: If you use mmpose as a dependency or third-party package, install it with pip:

 ```shell
-mim install mmpose>=1.0.0rc0
+mim install "mmpose>=1.0.0b0"
 ```

+### Verify the installation
+
+To verify that MMPose is installed correctly, you can run an inference demo according to this [guide](/demo/docs/2d_human_pose_demo.md).
+
 ### Customize Installation

 #### CUDA versions

@@ -135,7 +140,7 @@ thus we only need to install MMEngine, MMCV and MMPose with the following comman
 ```shell
 !pip3 install openmim
 !mim install mmengine
-!mim install mmcv>=2.0.0rc1
+!mim install "mmcv>=2.0.0rc1"
 ```

 **Step 2.** Install MMPose from the source.
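
Two asides on the fixes above. The added quotes matter because an unquoted `>` is shell redirection: `mim install mmcv>=2.0.0rc1` actually runs `mim install mmcv` with output redirected into a stray file named `=2.0.0rc1`. And as a quick complement to the new "Verify the installation" section, here is a minimal Python sanity check (the imports are the real package names; the expected versions in the comments are assumptions tied to this release line):

```python
# Post-install sanity check: the imports succeed and the versions line up.
import mmcv
import mmengine
import mmpose

print('mmengine:', mmengine.__version__)
print('mmcv:', mmcv.__version__)      # expect >= 2.0.0rc1, per the step above
print('mmpose:', mmpose.__version__)  # expect a 1.x pre-release, e.g. 1.0.0b0
```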

docs/en/migration.md

Lines changed: 3 additions & 3 deletions

@@ -261,10 +261,8 @@ Commonly used transforms are defined in `$MMPOSE/mmpose/datasets/transforms/comm

 For top-down methods, `Shift`, `Rotate` and `Resize` are implemented by `RandomBBoxTransform`. For bottom-up methods, `BottomupRandomAffine` is used.

-Most data transforms depend on `bbox_center` and `bbox_scale`, which can be obtained by `GetBBoxCenterScale`.
-
 ```{note}
-All transforms in this part will only generate the **transformation matrix** and **will not** perform the actual transformation on the input data.
+Most data transforms depend on `bbox_center` and `bbox_scale`, which can be obtained by `GetBBoxCenterScale`.
 ```

 #### ii. Transformation

@@ -579,6 +577,8 @@ def loss(self,

 MMPose 1.0 has been refactored extensively and addressed many legacy issues. Most of the code in MMPose 1.0 will not be compatible with the 0.x version.

+To help you migrate your code and models, here are some of the major changes:
+
 ### Data Transformation

 #### Translation, Rotation and Scaling
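
To make the relationship between these transforms concrete, here is a minimal sketch of a top-down training pipeline as it appears in MMPose 1.x COCO configs (values modeled on the 256x192 ResNet-50 config; treat the exact arguments as illustrative):

```python
# Sketch of a top-down data pipeline in MMPose 1.x (illustrative values;
# flip / half-body augmentations omitted for brevity).
codec = dict(
    type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)

train_pipeline = [
    dict(type='LoadImage'),
    dict(type='GetBBoxCenterScale'),   # derive bbox_center / bbox_scale
    dict(type='RandomBBoxTransform'),  # shift / rotate / resize via the bbox
    dict(type='TopdownAffine', input_size=codec['input_size']),
    dict(type='GenerateTarget', encoder=codec),
    dict(type='PackPoseInputs'),
]
```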

docs/en/notes/benchmark.md

Lines changed: 46 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,46 @@
1+
# Benchmark
2+
3+
We compare our results with some popular frameworks and official releases in terms of speed and accuracy.
4+
5+
## Comparison Rules
6+
7+
Here we compare our MMPose repo with other pose estimation toolboxes in the same data and model settings.
8+
9+
To ensure the fairness of the comparison, the comparison experiments were conducted under the same hardware environment and using the same dataset.
10+
For each model setting, we kept the same data pre-processing methods to make sure the same feature input.
11+
In addition, we also used Memcached, a distributed memory-caching system, to load the data in all the compared toolboxes.
12+
This minimizes the IO time during benchmark.
13+
14+
The time we measured is the average training time for an iteration, including data processing and model training.
15+
The training speed is measure with s/iter. The lower, the better.
16+
17+
### Results on COCO val2017 with detector having human AP of 56.4 on COCO val2017 dataset
18+
19+
We demonstrate the superiority of our MMPose framework in terms of speed and accuracy on the standard COCO keypoint detection benchmark.
20+
The mAP (the mean average precision) is used as the evaluation metric.
21+
22+
| Model | Input size | MMPose (s/iter) | HRNet (s/iter) | MMPose (mAP) | HRNet (mAP) |
23+
| :--------- | :--------: | :-------------: | :------------: | :----------: | :---------: |
24+
| resnet_50 | 256x192 | **0.28** | 0.64 | **0.718** | 0.704 |
25+
| resnet_50 | 384x288 | **0.81** | 1.24 | **0.731** | 0.722 |
26+
| resnet_101 | 256x192 | **0.36** | 0.84 | **0.726** | 0.714 |
27+
| resnet_101 | 384x288 | **0.79** | 1.53 | **0.748** | 0.736 |
28+
| resnet_152 | 256x192 | **0.49** | 1.00 | **0.735** | 0.720 |
29+
| resnet_152 | 384x288 | **0.96** | 1.65 | **0.750** | 0.743 |
30+
| hrnet_w32 | 256x192 | **0.54** | 1.31 | **0.746** | 0.744 |
31+
| hrnet_w32 | 384x288 | **0.76** | 2.00 | **0.760** | 0.758 |
32+
| hrnet_w48 | 256x192 | **0.66** | 1.55 | **0.756** | 0.751 |
33+
| hrnet_w48 | 384x288 | **1.23** | 2.20 | **0.767** | 0.763 |
34+
35+
## Hardware
36+
37+
- 8 NVIDIA Tesla V100 (32G) GPUs
38+
- Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
39+
40+
## Software Environment
41+
42+
- Python 3.7
43+
- PyTorch 1.4
44+
- CUDA 10.1
45+
- CUDNN 7.6.03
46+
- NCCL 2.4.08
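
The iteration-time measurement described above can be sketched as a simple timing loop; `data_loader` and `train_step` below are placeholders for whichever toolbox is being benchmarked:

```python
import time

def average_iter_time(data_loader, train_step, num_iters=100, warmup=10):
    """Average seconds per training iteration, data loading included.

    Warm-up iterations are discarded so one-off costs (CUDA context
    creation, cache population) do not skew the reported s/iter.
    """
    times = []
    it = iter(data_loader)
    for i in range(warmup + num_iters):
        start = time.perf_counter()
        batch = next(it)   # data processing time counts, per the rules above
        train_step(batch)  # one full forward/backward/update step
        if i >= warmup:
            times.append(time.perf_counter() - start)
    return sum(times) / len(times)
```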

docs/en/notes/changelog.md

Lines changed: 1 addition & 3 deletions

@@ -4,9 +4,7 @@

 We are excited to announce the release of MMPose 1.0.0beta.
 MMPose 1.0.0beta is the first version of MMPose 1.x, a part of the OpenMMLab 2.x projects.
-Built upon the new [training engine](https://github.com/open-mmlab/mmengine),
-MMPose 1.x unifies the interfaces of dataset, models, evaluation, and visualization with faster training and testing speed.
-It also provide a general semi-supervised object detection framework, and more strong baselines.
+It is built upon the new [training engine](https://github.com/open-mmlab/mmengine).

 **Highlights**
