This repository was archived by the owner on Jul 1, 2025. It is now read-only.

Commit dab4b56

swolchok authored and facebook-github-bot committed
Fix static runtime sigrid_hash precomputed multiplier pass
Reviewed By: jfix71, pls331, houseroad

Differential Revision: D54336561

fbshipit-source-id: 752deac027ef98ca429bd47611efbf65b1cbaf33
1 parent 8866516 commit dab4b56

File tree

1 file changed (+4, -3 lines)


torch_glow/src/ShapeInferenceEngine.cpp

Lines changed: 4 additions & 3 deletions
@@ -3437,16 +3437,17 @@ ShapeInferenceEngine::argmin(const MetaStack &variableMetas) {
  * int salt,
  * int maxValue,
  * Tensor multiplier_shift,
- * bool hashIntoInt32
+ * bool hashIntoInt32,
+ * bool? noHashNegSalt
  * ) -> Tensor
  *
  *
  */
 Expected<TensorOutput>
 ShapeInferenceEngine::sigridHashPrecompute(const MetaStack &variableMetas) {
   RETURN_ERR_IF_NOT(
-      variableMetas.size() == 5,
-      strFormat("Expected 5 inputs, got %zu", variableMetas.size()));
+      variableMetas.size() == 6,
+      strFormat("Expected 6 inputs, got %zu", variableMetas.size()));

   TensorShape shape = variableMetas[0].shape<TensorShape>();

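For context, the fix tightens the arity guard that runs before any shape work: sigrid_hash_precompute now carries six inputs because the optional noHashNegSalt flag was appended to its signature, so the check must accept 6 rather than 5. Below is a minimal standalone C++ sketch of that guard pattern; MetaStack, RETURN_ERR_IF_NOT, strFormat, and Expected<TensorOutput> from the real file are replaced with hypothetical stand-ins, so treat it as an illustration rather than the torch_glow implementation.

#include <cstdint>
#include <cstdio>
#include <optional>
#include <vector>

// Hypothetical stand-in for an operator input's metadata (the real code keeps
// shape/dtype info in MetaStack entries).
struct InputMeta {
  std::vector<int64_t> shape;
};
using MetaStack = std::vector<InputMeta>;

// Sketch of the arity-checked shape rule: with the noHashNegSalt flag appended,
// the op takes 6 inputs, and the visible part of the diff derives the output
// shape from the first input. The real function returns Expected<TensorOutput>;
// std::optional is used here for brevity.
std::optional<std::vector<int64_t>>
sigridHashPrecomputeShape(const MetaStack &variableMetas) {
  if (variableMetas.size() != 6) {
    std::fprintf(stderr, "Expected 6 inputs, got %zu\n", variableMetas.size());
    return std::nullopt;
  }
  return variableMetas[0].shape;
}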