Don't Decompose Hardswish #12360


Merged 1 commit on Jul 11, 2025

Conversation

@mcr229 (Contributor) commented Jul 10, 2025

Summary:
While investigating MV3, I noticed that hardswish was getting decomposed into many small ops. This became annoying because it injected unnecessary transposes and also prevented the op from being quantized. I hadn't realized it was being decomposed. After some investigation, it looks like keeping hardswish intact can greatly improve our MV3 performance for both quantized and FP32 models. Benchmarks below.

|                | Before Hardswish Decomp | After Hardswish Decomp | Latency Reduction |
|----------------|-------------------------|------------------------|-------------------|
| Macbook (FP32) | [13.3685](https://www.internalfb.com/phabricator/paste/view/P1859573931) | [8.451](https://www.internalfb.com/phabricator/paste/view/P1859573328) | 36% |
| Macbook (QS8)  | [16.0361](https://www.internalfb.com/phabricator/paste/view/P1859609658) | [4.914](https://www.internalfb.com/phabricator/paste/view/P1859610252) | 69% |
| S24 (FP32)     | [56.885](https://www.internalfb.com/intern/paste/P1859603500) | [41.9638](https://www.internalfb.com/intern/paste/P1859603738) | 26% |
| S24 (QS8)      | [56.1718](https://www.internalfb.com/intern/paste/P1859615896) | [40.2096](https://www.internalfb.com/intern/paste/P1859615683/) | 40% |
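For reference, hardswish is defined as `hardswish(x) = x * relu6(x + 3) / 6`, which is the expression the decomposition expands into. A minimal pure-Python sketch of the fused op (illustrative only, not the ExecuTorch or XNNPACK implementation):

```python
def relu6(x: float) -> float:
    """Clamp x to the range [0, 6]."""
    return min(max(x, 0.0), 6.0)

def hardswish(x: float) -> float:
    """Hardswish as a single op: x * relu6(x + 3) / 6."""
    return x * relu6(x + 3.0) / 6.0

# Zero for x <= -3, identity for x >= 3, smooth ramp in between.
assert hardswish(-3.0) == 0.0
assert hardswish(0.0) == 0.0
assert hardswish(3.0) == 3.0
```

When decomposed, each of the three elementwise steps becomes its own graph node, which is what allows transposes to be inserted between them and breaks quantization of the pattern as a whole.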

Reviewed By: cccclai

Differential Revision: D77765129

@mcr229 mcr229 requested a review from digantdesai as a code owner July 10, 2025 20:17

pytorch-bot bot commented Jul 10, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12360

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 Cancelled Jobs, 11 Unrelated Failures

As of commit 15df3d3 with merge base bdbad3f:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Jul 10, 2025
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D77765129


This PR needs a `release notes:` label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

mcr229 added a commit to mcr229/executorch that referenced this pull request Jul 10, 2025
@mcr229 mcr229 force-pushed the export-D77765129 branch from bb232cc to 8c3095d Compare July 10, 2025 20:50
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D77765129

@mcr229 mcr229 force-pushed the export-D77765129 branch from 8c3095d to 15df3d3 Compare July 10, 2025 22:46
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D77765129

@facebook-github-bot facebook-github-bot merged commit ae15253 into pytorch:main Jul 11, 2025
85 of 99 checks passed
Labels
CLA Signed, fb-exported
4 participants