Commit 72f5db7 ("first commit", 0 parents)

138 files changed: +6773, -0 lines
Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
---
name: 🔍 Typo or content fix/change
about: When you think a translated document needs a fix or change 🤔
---

## Document URL
_Please leave the URL that needs a fix. (e.g. https://pytorch.kr/hub/facebookresearch_pytorchvideo_resnet/)_
- **URL**:

## Proposed change
_(1) Which word / sentence / content should change, and (2) how?_

## Additional information
_Please share the reasoning behind your suggestion or any other useful references._
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
---
name: 📖 Translation in progress / request
about: When there is a document you want to translate, or you want to request a translation 📝
---

## Document URL
_Please leave the URL you will translate or are requesting a translation for. (e.g. https://tutorials.pytorch.kr/beginner/saving_loading_models.html)_
- **URL**:

## (Rough) expected completion date
_If you have an expected completion date, please let us know. (e.g. within a month, during December, etc.)_<br />
_(This is not a hard deadline - if it slips too far, please leave the document for other translators.)_
* Expected completion date:

## Related issue
_Each version's main issue is used to track translation requests and progress._ <br />
_(You do not need to change this unless there is a special reason.)_
* Related issue: #(number) (version)
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
---
name: 📃 Other issues
about: For any issue not covered by the templates above 📬
---

## Issue description
_Please tell us what the issue is._

## Additional information
_Please share any other references or URLs we should look at._

.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
## License agreement
_You must agree that your contribution is covered by the BSD 3-Clause License._<br />
_For details, please see the [contributing guide](https://github.com/PyTorchKorea/hub-kr/blob/master/CONTRIBUTING.md)._<br />
_If you agree, please change the `[ ]` below to `[x]`._<br />

- [ ] I have read the contributing guide and agree that this PR is covered by the BSD 3-Clause License.

## Related issue number
_Please list the issue number(s) related to this Pull Request._<br />
_Prefixing an issue or PR number with # shows its title inline. (e.g. #999 )_

- **Issue number**: #(number)

## PR type
_Change the `[ ]` in front of each matching type to `[x]`._<br />
- [ ] Fixes typos or improves a translation
- [ ] Translates an untranslated model introduction
- [ ] Syncs content from the official hub
- [ ] A contribution not covered by the types above

## PR description
_Briefly describe what this PR changes._

.gitignore

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
.DS_Store
_preview
yarn.lock

CONTRIBUTING.md

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
# Contributing

*TBD*

README.md

Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
# PyTorch Korean Model Hub

## Introduction

This repository is for the Korean translation of the model hub provided by PyTorch.\
You can find the results of the translation at [https://pytorch.kr/hub](https://pytorch.kr/hub). (It is updated **irregularly** as translation progresses.)
For updates on new models, please refer to the [official model hub repository](https://github.com/pytorch/hub).

## Building

The PyTorch hub is [part of the PyTorch Korea User Group homepage](https://pytorch.kr/hub/). \
Building it requires the [PyTorch Korea User Group homepage build environment](https://github.com/PyTorchKorea/pytorch.kr#%EB%B9%8C%EB%93%9C-%EC%A0%88%EC%B0%A8). \
For details, please see the [README.md of the PyTorchKorea/pytorch.kr repository](https://github.com/PyTorchKorea/pytorch.kr#%EB%B9%8C%EB%93%9C-%EC%A0%88%EC%B0%A8).
Once the build environment is ready, you can build and preview the site with the following command.
```sh
./preview_hub.sh
```

## Contributing

You can contribute in the following ways.

1. Fixing typos or improving translations
   * Fix typos found on the [Korean model hub site](https://pytorch.kr/hub) in the [Korean model hub repository](https://github.com/PyTorchKorea/hub-kr).
2. Translating model introductions that are not yet translated
   * Translate model introductions on the [Korean model hub site](https://pytorch.kr/hub) that have not been translated.
3. Reviewing documents translated in 2 :star:
   * Review whether the translations in [pull requests to this repository](https://github.com/PyTorchKorea/hub-kr/pulls) are appropriate. \
(We are eagerly waiting for many of you to participate. :pray:)

## Source

Translation is currently in progress based on [PyTorch v1.9 (pytorch/hub@552c779)](https://github.com/pytorch/hub/commit/552c779). \
For the latest model introductions (official, in English), please see the [PyTorch model hub site](https://pytorch.org/hub) and the [PyTorch model hub repository](https://github.com/pytorch/hub).

--
This is a project to translate [pytorch/hub@552c779](https://github.com/pytorch/hub/commit/552c779) into Korean.
For the latest version, please visit the [official PyTorch model hub repo](https://github.com/pytorch/hub).

datvuthanh_hybridnets.md

Lines changed: 100 additions & 0 deletions
@@ -0,0 +1,100 @@
---
layout: hub_detail
background-class: hub-background
body-class: hub
category: researchers
title: HybridNets
summary: HybridNets - End2End Perception Network
image: hybridnets.jpg
author: Dat Vu Thanh
tags: [vision]
github-link: https://github.com/datvuthanh/HybridNets
github-id: datvuthanh/HybridNets
featured_image_1: no-image
featured_image_2: no-image
accelerator: cuda-optional
demo-model-link: https://colab.research.google.com/drive/1Uc1ZPoPeh-lAhPQ1CloiVUsOIRAVOGWA
---
## Before You Start

Start from a **Python>=3.7** environment with **PyTorch>=1.10** installed. To install PyTorch, see [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/). To install the HybridNets dependencies:
```bash
pip install -qr https://raw.githubusercontent.com/datvuthanh/HybridNets/main/requirements.txt  # install dependencies
```
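
If you want to confirm the environment meets these requirements before proceeding, a quick check (a minimal sketch; the printed version strings are informational only):

```python
import sys
import torch

# HybridNets expects Python >= 3.7 and PyTorch >= 1.10
print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
```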

## Model Description

<img width="100%" src="https://github.com/datvuthanh/HybridNets/raw/main/images/hybridnets.jpg">

HybridNets is an end-to-end multi-task perception network. Our work focuses on traffic object detection, drivable area segmentation, and lane detection. HybridNets runs in real time on embedded systems and achieves state-of-the-art object detection and lane detection results on the BDD100K dataset.

### Results

### Traffic Object Detection

| Model | Recall (%) | mAP@0.5 (%) |
|:------------------:|:------------:|:---------------:|
| `MultiNet` | 81.3 | 60.2 |
| `DLT-Net` | 89.4 | 68.4 |
| `Faster R-CNN` | 77.2 | 55.6 |
| `YOLOv5s` | 86.8 | 77.2 |
| `YOLOP` | 89.2 | 76.5 |
| **`HybridNets`** | **92.8** | **77.3** |

<img src="https://github.com/datvuthanh/HybridNets/raw/main/images/det1.jpg" width="50%" /><img src="https://github.com/datvuthanh/HybridNets/raw/main/images/det2.jpg" width="50%" />

### Drivable Area Segmentation

| Model | Drivable mIoU (%) |
|:----------------:|:-----------------:|
| `MultiNet` | 71.6 |
| `DLT-Net` | 71.3 |
| `PSPNet` | 89.6 |
| `YOLOP` | 91.5 |
| **`HybridNets`** | **90.5** |

<img src="https://github.com/datvuthanh/HybridNets/raw/main/images/road1.jpg" width="50%" /><img src="https://github.com/datvuthanh/HybridNets/raw/main/images/road2.jpg" width="50%" />

### Lane Line Detection

| Model | Accuracy (%) | Lane Line IoU (%) |
|:----------------:|:------------:|:-----------------:|
| `Enet` | 34.12 | 14.64 |
| `SCNN` | 35.79 | 15.84 |
| `Enet-SAD` | 36.56 | 16.02 |
| `YOLOP` | 70.5 | 26.2 |
| **`HybridNets`** | **85.4** | **31.6** |

<img src="https://github.com/datvuthanh/HybridNets/raw/main/images/lane1.jpg" width="50%" /><img src="https://github.com/datvuthanh/HybridNets/raw/main/images/lane2.jpg" width="50%" />

<img width="100%" src="https://github.com/datvuthanh/HybridNets/raw/main/images/full_video.gif">


### Load From PyTorch Hub

This example loads the pretrained **HybridNets** model and passes an image through it for inference.
```python
import torch

# load the pretrained model
model = torch.hub.load('datvuthanh/hybridnets', 'hybridnets', pretrained=True)

# inference
img = torch.randn(1, 3, 640, 384)
features, regression, classification, anchors, segmentation = model(img)
```
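
To run the model on a real photograph rather than random noise, the image has to be converted to a tensor of the same shape first. Below is a minimal sketch of one way to do that, assuming `torchvision` and `PIL` are available; the file name, resize target, and ImageNet normalization constants are illustrative assumptions, so check the HybridNets repository for the project's exact preprocessing.

```python
import torch
from PIL import Image
from torchvision import transforms

# hypothetical input file; substitute any RGB road-scene image
input_image = Image.open('road_scene.jpg').convert('RGB')

# produce a (1, 3, 640, 384) batch matching the example input above
preprocess = transforms.Compose([
    transforms.Resize((640, 384)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),  # assumed ImageNet stats
])
input_batch = preprocess(input_image).unsqueeze(0)

model = torch.hub.load('datvuthanh/hybridnets', 'hybridnets', pretrained=True)
model.eval()
with torch.no_grad():
    features, regression, classification, anchors, segmentation = model(input_batch)
```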

### Citation

If you find our [paper](https://arxiv.org/abs/2203.09035) and [code](https://github.com/datvuthanh/HybridNets) useful for your research, please consider giving them a star and a citation:

```BibTeX
@misc{vu2022hybridnets,
      title={HybridNets: End-to-End Perception Network},
      author={Dat Vu and Bao Ngo and Hung Phan},
      year={2022},
      eprint={2203.09035},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

docs/template.md

Lines changed: 30 additions & 0 deletions
@@ -0,0 +1,30 @@
---
layout: hub_detail
background-class: hub-background
body-class: hub
category: researchers
<!-- Only change fields below (remove this line before submitting a PR). Take inspiration e.g. from pytorch_vision_fcn_resnet101.md -->
title: <REQUIRED: short model name>
summary: <REQUIRED: 1-2 sentences>
image: <REQUIRED: best image to represent your model>
author: <REQUIRED>
tags: <REQUIRED: [tag1, tag2, ...]. Allowed tags are vision, nlp, generative, audio, scriptable>
github-link: <REQUIRED>
github-id: <REQUIRED: top level of repo>
featured_image_1: <OPTIONAL: use no-image if not applicable>
featured_image_2: <OPTIONAL: use no-image if not applicable>
accelerator: <OPTIONAL: currently supported values: "cuda", "cuda-optional">
---
<!-- REQUIRED: provide a working script to demonstrate that the model works with torch.hub, example below -->
```python
import torch
torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
```
<!-- Walk through a small example of using your model. Ideally, less than 25 lines of code -->

<!-- REQUIRED: detailed model description below, in markdown format, feel free to add new sections as necessary -->
### Model Description


<!-- OPTIONAL: put links to reference papers -->
### References
Lines changed: 91 additions & 0 deletions
@@ -0,0 +1,91 @@
---
layout: hub_detail
background-class: hub-background
body-class: hub
title: ResNext WSL
summary: ResNext models trained with billion scale weakly-supervised data.
category: researchers
image: wsl-image.png
author: Facebook AI
tags: [vision]
github-link: https://github.com/facebookresearch/WSL-Images/blob/master/hubconf.py
github-id: facebookresearch/WSL-Images
featured_image_1: wsl-image.png
featured_image_2: no-image
accelerator: cuda-optional
order: 10
demo-model-link: https://huggingface.co/spaces/pytorch/ResNext_WSL
---

```python
import torch
model = torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl')
# or
# model = torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x16d_wsl')
# or
# model = torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x32d_wsl')
# or
# model = torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x48d_wsl')
model.eval()
```

All pre-trained models expect input images normalized in the same way,
i.e. mini-batches of 3-channel RGB images of shape `(3 x H x W)`, where `H` and `W` are expected to be at least `224`.
The images have to be loaded into a range of `[0, 1]` and then normalized using `mean = [0.485, 0.456, 0.406]`
and `std = [0.229, 0.224, 0.225]`.

Here's a sample execution.

```python
# Download an example image from the pytorch website
import urllib.request
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(url, filename)
```

```python
# sample execution (requires torchvision)
from PIL import Image
from torchvision import transforms
input_image = Image.open(filename)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)  # create a mini-batch as expected by the model

# move the input and model to GPU for speed if available
if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    model.to('cuda')

with torch.no_grad():
    output = model(input_batch)
# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
print(output[0])
# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
print(torch.nn.functional.softmax(output[0], dim=0))
```
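
The probability vector can be mapped to human-readable predictions with the ImageNet class-name list. A minimal sketch, assuming the `imagenet_classes.txt` file published in the pytorch/hub repository is available at the URL below:

```python
import urllib.request
import torch

# class-name list used across the PyTorch hub examples
# (assumed to live at this URL in the pytorch/hub repository)
url = "https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt"
urllib.request.urlretrieve(url, "imagenet_classes.txt")
with open("imagenet_classes.txt") as f:
    categories = [line.strip() for line in f]

# top-5 most likely classes, using the `output` computed above
probabilities = torch.nn.functional.softmax(output[0], dim=0)
top5_prob, top5_id = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(categories[top5_id[i]], top5_prob[i].item())
```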

### Model Description
The provided ResNeXt models are pre-trained in a weakly-supervised fashion on **940 million** public images with 1.5K hashtags matching 1000 ImageNet-1K synsets, followed by fine-tuning on the ImageNet-1K dataset. Please refer to "Exploring the Limits of Weakly Supervised Pretraining" (https://arxiv.org/abs/1805.00932), presented at ECCV 2018, for the details of model training.

We provide 4 models with different capacities.

| Model              | #Parameters | FLOPS | Top-1 Acc. | Top-5 Acc. |
| ------------------ | :---------: | :---: | :--------: | :--------: |
| ResNeXt-101 32x8d  | 88M         | 16B   | 82.2       | 96.4       |
| ResNeXt-101 32x16d | 193M        | 36B   | 84.2       | 97.2       |
| ResNeXt-101 32x32d | 466M        | 87B   | 85.1       | 97.5       |
| ResNeXt-101 32x48d | 829M        | 153B  | 85.4       | 97.6       |

Our models significantly improve accuracy on ImageNet compared to models trained from scratch. **We achieve a state-of-the-art accuracy of 85.4% on ImageNet with our ResNeXt-101 32x48d model.**

### References

- [Exploring the Limits of Weakly Supervised Pretraining](https://arxiv.org/abs/1805.00932)
