
Commit e24dced

9bow and holly1238 authored
fix reST code-block syntax (pytorch#1496) (pytorch#1497)
Co-authored-by: Holly Sweeney <[email protected]>
1 parent 940666a commit e24dced

File tree

6 files changed, 9 insertions(+), 6 deletions(-)


advanced_source/extend_dispatcher.rst

Lines changed: 1 addition & 0 deletions
@@ -53,6 +53,7 @@ You can choose any of keys above to prototype your customized backend.
 To create a Tensor on ``PrivateUse1`` backend, you need to set dispatch key in ``TensorImpl`` constructor.
 
 .. code-block:: cpp
+
   /* Example TensorImpl constructor */
   TensorImpl(
       Storage&& storage,

advanced_source/torch-script-parallelism.rst

Lines changed: 1 addition & 0 deletions
@@ -207,6 +207,7 @@ Let's use the profiler along with the Chrome trace export functionality to
 visualize the performance of our parallelized model:
 
 .. code-block:: python
+
   with torch.autograd.profiler.profile() as prof:
       ens(x)
   prof.export_chrome_trace('parallel.json')

advanced_source/torch_script_custom_ops.rst

Lines changed: 3 additions & 3 deletions
@@ -605,7 +605,7 @@ Along with a small ``CMakeLists.txt`` file:
 
 At this point, we should be able to build the application:
 
-.. code-block::
+.. code-block:: shell
 
   $ mkdir build
   $ cd build
@@ -645,7 +645,7 @@ At this point, we should be able to build the application:
 
 And run it without passing a model just yet:
 
-.. code-block::
+.. code-block:: shell
 
   $ ./example_app
   usage: example_app <path-to-exported-script-module>
@@ -672,7 +672,7 @@ The last line will serialize the script function into a file called
 "example.pt". If we then pass this serialized model to our C++ application, we
 can run it straight away:
 
-.. code-block::
+.. code-block:: shell
 
   $ ./example_app example.pt
   terminate called after throwing an instance of 'torch::jit::script::ErrorReport'
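All three hunks above add the missing language argument to ``code-block``. A bare ``.. code-block::`` on its own line is easy to lint for; a hypothetical checker (not part of this commit or the pytorch/tutorials repo) might look like:

```python
import re

# Flag "code-block" directives that omit the language argument, the
# pattern this commit replaces with ".. code-block:: shell".
BARE_DIRECTIVE = re.compile(r"^\s*\.\.\s+code-block::\s*$")

def bare_code_blocks(rst_text):
    """Return 1-based line numbers of code-block directives with no language."""
    return [i for i, line in enumerate(rst_text.splitlines(), start=1)
            if BARE_DIRECTIVE.match(line)]

sample = """\
At this point, we should be able to build the application:

.. code-block::

   $ mkdir build
"""

print(bare_code_blocks(sample))  # [3]
```

Running this over a docs tree would surface every directive still missing its language before Sphinx silently falls back to the default highlighter.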

beginner_source/hyperparameter_tuning_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -431,7 +431,7 @@ def main(num_samples=10, max_num_epochs=10, gpus_per_trial=2):
 ######################################################################
 # If you run the code, an example output could look like this:
 #
-# .. code-block::
+# ::
 #
 #     Number of trials: 10 (10 TERMINATED)
 #     +-----+------+------+-------------+--------------+---------+------------+--------------------+

prototype_source/fx_graph_mode_ptq_static.rst

Lines changed: 2 additions & 1 deletion
@@ -10,7 +10,8 @@ we'll have a separate tutorial to show how to make the part of the model we want
 We also have a tutorial for `FX Graph Mode Post Training Dynamic Quantization <https://pytorch.org/tutorials/prototype/fx_graph_mode_ptq_dynamic.html>`_.
 tldr; The FX Graph Mode API looks like the following:
 
-.. code:: python
+.. code:: python
+
   import torch
   from torch.quantization import get_default_qconfig
   # Note that this is temporary, we'll expose these functions to torch.quantization after official release

recipes_source/android_native_app_with_custom_op.rst

Lines changed: 1 addition & 1 deletion
@@ -704,7 +704,7 @@ If you check the android logcat:
 
 You should see logs with tag 'PyTorchNativeApp' that prints x, y, and the result of the model forward, which we print with the ``log`` function in ``NativeApp/app/src/main/cpp/pytorch_nativeapp.cpp``.
 
-.. code-block::
+::
 
   I/PyTorchNativeApp(26968): x: -0.9484 -1.1757 -0.5832 0.9144 0.8867 1.0933 -0.4004 -0.3389
   I/PyTorchNativeApp(26968): -1.0343 1.5200 -0.7625 -1.5724 -1.2073 0.4613 0.2730 -0.6789
