Hello! In [`Int8DynActInt4WeightQuantizer`](https://github.com/pytorch/ao/blob/main/torchao/quantization/linear_quant_modules.py#L617), the `scales_precision` argument of `replace_linear_8da4w` is set to `self.precision` instead of `self.scales_precision`. As a result, whenever `precision` and `scales_precision` differ, the state dict returned by `_create_quantized_state_dict(model)` is inconsistent with the one returned by `quantize(model).state_dict()`.
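
For concreteness, here is a minimal sketch of the call site in `_convert_for_runtime` as I read it (surrounding code abbreviated, argument order taken from `replace_linear_8da4w`'s signature); the commented one-argument change is what I would expect the fix to look like:

```python
# Sketch of the suspected call site inside Int8DynActInt4WeightQuantizer
# (torchao/quantization/linear_quant_modules.py); surrounding code abbreviated.
def _convert_for_runtime(self, model):
    replace_linear_8da4w(
        model,
        self.groupsize,
        self.padding_allowed,
        self.precision,
        self.precision,  # bug: this is the scales_precision parameter
        # self.scales_precision,  # expected value
    )
    return model
```

If I am reading this right, any `scales_precision` passed to the constructor is silently ignored by `quantize()`; for example, constructing the quantizer with `precision=torch.float32, scales_precision=torch.bfloat16` would still produce runtime modules whose scales use `torch.float32`.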