Commit de0fa21

Fix broken link in docs (#1969)
Signed-off-by: Huang, Tai <[email protected]>

1 parent 385da7c

File tree

4 files changed (+4, -4 lines changed)

docs/source/3x/PT_MixedPrecision.md

Lines changed: 1 addition & 1 deletion

@@ -107,5 +107,5 @@ best_model = autotune(model=build_torch_model(), tune_config=custom_tune_config,

 ## Examples

-Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/pytorch\cv\mixed_precision
+Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/pytorch/cv/mixed_precision
 ) on how to quantize a model with Mixed Precision.

docs/source/3x/TF_Quant.md

Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ TensorFlow Quantization

 `neural_compressor.tensorflow` supports quantizing both TensorFlow and Keras model with or without accuracy aware tuning.

-For the detailed quantization fundamentals, please refer to the document for [Quantization](../quantization.md).
+For the detailed quantization fundamentals, please refer to the document for [Quantization](quantization.md).


 ## Get Started

docs/source/3x/TF_SQ.md

Lines changed: 1 addition & 1 deletion

@@ -50,4 +50,4 @@ best_model = autotune(

 ## Examples

-Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/tensorflow/nlp/large_language_models\quantization\ptq\smoothquant) on how to apply smooth quant to a TensorFlow model with `neural_compressor.tensorflow`.
+Users can also refer to [examples](https://github.com/intel/neural-compressor/blob/master/examples/3.x_api/tensorflow/nlp/large_language_models/quantization/ptq/smoothquant) on how to apply smooth quant to a TensorFlow model with `neural_compressor.tensorflow`.

docs/source/3x/quantization.md

Lines changed: 1 addition & 1 deletion

@@ -396,7 +396,7 @@ For supported quantization methods for `accuracy aware tuning` and the detailed

 User could refer to below chart to understand the whole tuning flow.

-<img src="../source/imgs/accuracy_aware_tuning_flow.png" width=600 height=480 alt="accuracy aware tuning working flow">
+<img src="./imgs/workflow.png" alt="accuracy aware tuning working flow">

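Most of the links fixed in this commit broke the same way: Windows-style backslashes (`\cv\mixed_precision`, `\quantization\ptq\smoothquant`) inside Markdown link URLs, which GitHub does not treat as path separators. A minimal sketch of catching that class of breakage automatically (not part of this repository; the function and regex are illustrative assumptions):

```python
import re

# Matches a Markdown inline link: [text](url). Illustrative pattern,
# not a full CommonMark parser (ignores nested brackets, titles, etc.).
LINK_RE = re.compile(r"\[([^\]]*)\]\(([^)]+)\)")

def fix_link_separators(markdown: str) -> str:
    """Replace backslashes with forward slashes inside Markdown link URLs."""
    def repl(match: re.Match) -> str:
        # Normalize only the URL part; leave the link text untouched.
        url = match.group(2).replace("\\", "/")
        return "[{}]({})".format(match.group(1), url)
    return LINK_RE.sub(repl, markdown)

broken = r"[examples](examples/3.x_api/pytorch\cv\mixed_precision)"
print(fix_link_separators(broken))
# -> [examples](examples/3.x_api/pytorch/cv/mixed_precision)
```

Running a check like this in CI over `docs/` would flag backslash-separated URLs before they reach a published page; it would not, however, catch the other fix in this commit (a wrong relative path like `../quantization.md`), which needs link resolution against the file tree.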

0 commit comments
