Using generic implementation for 16-bit activations and 8-bit weights for matmul in backends#16008
Open
RahulC7 wants to merge 3 commits into pytorch:main from
Commits
Commits on Nov 28, 2025
Using generic implementation for 16-bit activations and 8-bit weights for Conv2D in Backends (#16007)