Commit 4eda5a0

docs: fix broken links (Lightning-AI#20590)
1 parent 9afcc58 commit 4eda5a0

File tree

30 files changed: +40 -44 lines changed

.github/CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
@@ -189,7 +189,7 @@ We welcome any useful contribution! For your convenience here's a recommended wo
 #### How can I help/contribute?
 
 All types of contributions are welcome - reporting bugs, fixing documentation, adding test cases, solving issues, and preparing bug fixes.
-To get started with code contributions, look for issues marked with the label [good first issue](https://github.com/Lightning-AI/lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) or chose something close to your domain with the label [help wanted](https://github.com/Lightning-AI/lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22). Before coding, make sure that the issue description is clear and comment on the issue so that we can assign it to you (or simply self-assign if you can).
+To get started with code contributions, look for issues marked with the label [good first issue](https://github.com/Lightning-AI/pytorch-lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) or chose something close to your domain with the label [help wanted](https://github.com/Lightning-AI/pytorch-lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22). Before coding, make sure that the issue description is clear and comment on the issue so that we can assign it to you (or simply self-assign if you can).
 
 #### Is there a recommendation for branch names?

docs/source-fabric/_templates/theme_variables.jinja

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 {%- set external_urls = {
     'github': 'https://github.com/Lightning-AI/lightning',
-    'github_issues': 'https://github.com/Lightning-AI/lightning/issues',
+    'github_issues': 'https://github.com/Lightning-AI/pytorch-lightning/issues',
     'contributing': 'https://github.com/Lightning-AI/lightning/blob/master/.github/CONTRIBUTING.md',
     'governance': 'https://lightning.ai/docs/pytorch/latest/community/governance.html',
     'docs': 'https://lightning.ai/docs/fabric/',

docs/source-fabric/links.rst

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
-.. _PyTorchJob: https://www.kubeflow.org/docs/components/training/pytorch/
+.. _PyTorchJob: https://www.kubeflow.org/docs/components/trainer/legacy-v1/user-guides/pytorch/
 .. _Kubeflow: https://www.kubeflow.org
 .. _Trainer: https://lightning.ai/docs/pytorch/stable/common/trainer.html

docs/source-pytorch/_templates/theme_variables.jinja

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 {%- set external_urls = {
     'github': 'https://github.com/Lightning-AI/lightning',
-    'github_issues': 'https://github.com/Lightning-AI/lightning/issues',
+    'github_issues': 'https://github.com/Lightning-AI/pytorch-lightning/issues',
     'contributing': 'https://github.com/Lightning-AI/lightning/blob/master/.github/CONTRIBUTING.md',
     'governance': 'https://lightning.ai/docs/pytorch/latest/community/governance.html',
     'docs': 'https://lightning.ai/docs/pytorch/latest/',

docs/source-pytorch/accelerators/accelerator_prepare.rst

Lines changed: 1 addition & 1 deletion
@@ -123,7 +123,7 @@ It is possible to perform some computation manually and log the reduced result o
 
 # When you call `self.log` only on rank 0, don't forget to add
 # `rank_zero_only=True` to avoid deadlocks on synchronization.
-# Caveat: monitoring this is unimplemented, see https://github.com/Lightning-AI/lightning/issues/15852
+# Caveat: monitoring this is unimplemented, see https://github.com/Lightning-AI/pytorch-lightning/issues/15852
 if self.trainer.is_global_zero:
     self.log("my_reduced_metric", mean, rank_zero_only=True)

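The hunk above guards a manually reduced metric so that only rank 0 logs it. A minimal standalone sketch of that guard, where `log_fn` and `is_global_zero` are hypothetical stand-ins for `self.log` and `self.trainer.is_global_zero` in a LightningModule (not the Lightning API itself):

```python
# Sketch of the rank-zero logging pattern from the hunk above.
# `log_fn` and `is_global_zero` stand in for `self.log` and
# `self.trainer.is_global_zero`; the names are illustrative only.
def log_reduced_metric(log_fn, is_global_zero, name, value):
    # Only rank 0 logs; rank_zero_only=True tells the logger not to
    # wait for the other ranks, avoiding a synchronization deadlock.
    if is_global_zero:
        log_fn(name, value, rank_zero_only=True)


# Simulate a rank-0 process with a recording logger.
logged = {}
log_reduced_metric(
    lambda name, value, rank_zero_only: logged.update({name: value}),
    True,
    "my_reduced_metric",
    0.5,
)
```

On any non-zero rank (`is_global_zero=False`), the logger is never called, which is exactly why the caveat about monitoring such a metric applies.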
docs/source-pytorch/accelerators/gpu_intermediate.rst

Lines changed: 0 additions & 4 deletions
@@ -25,10 +25,6 @@ Lightning supports multiple ways of doing distributed training.
 .. note::
     If you request multiple GPUs or nodes without setting a strategy, DDP will be automatically used.
 
-For a deeper understanding of what Lightning is doing, feel free to read this
-`guide <https://towardsdatascience.com/9-tips-for-training-lightning-fast-neural-networks-in-pytorch-8e63a502f565>`_.
-
-
 ----
 
 
docs/source-pytorch/advanced/ddp_optimizations.rst

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ On a Multi-Node Cluster, Set NCCL Parameters
 ********************************************
 
 `NCCL <https://developer.nvidia.com/nccl>`__ is the NVIDIA Collective Communications Library that is used by PyTorch to handle communication across nodes and GPUs.
-There are reported benefits in terms of speedups when adjusting NCCL parameters as seen in this `issue <https://github.com/Lightning-AI/lightning/issues/7179>`__.
+There are reported benefits in terms of speedups when adjusting NCCL parameters as seen in this `issue <https://github.com/Lightning-AI/pytorch-lightning/issues/7179>`__.
 In the issue, we see a 30% speed improvement when training the Transformer XLM-RoBERTa and a 15% improvement in training with Detectron2.
 NCCL parameters can be adjusted via environment variables.

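The last line of the hunk above says NCCL parameters are adjusted via environment variables. As a sketch, two NCCL socket knobs commonly tuned this way can be set before process-group initialization; the values below are placeholders, not recommendations:

```python
import os

# NCCL reads these at initialization, so set them before
# torch.distributed.init_process_group (or before launching workers).
# Placeholder values; tune per cluster as the linked issue describes.
os.environ["NCCL_NSOCKS_PERTHREAD"] = "4"
os.environ["NCCL_SOCKET_NTHREADS"] = "2"
```

In practice these are usually exported in the job script of each node rather than set in Python.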
docs/source-pytorch/advanced/model_parallel/deepspeed.rst

Lines changed: 1 addition & 1 deletion
@@ -319,7 +319,7 @@ Additionally, DeepSpeed supports offloading to NVMe drives for even larger model
     )
     trainer.fit(model)
 
-When offloading to NVMe you may notice that the speed is slow. There are parameters that need to be tuned based on the drives that you are using. Running the `aio_bench_perf_sweep.py <https://github.com/microsoft/DeepSpeed/blob/master/csrc/aio/py_test/aio_bench_perf_sweep.py>`__ script can help you to find optimum parameters. See the `issue <https://github.com/microsoft/DeepSpeed/issues/998>`__ for more information on how to parse the information.
+When offloading to NVMe you may notice that the speed is slow. There are parameters that need to be tuned based on the drives that you are using. Running the `aio_bench_perf_sweep.py <https://github.com/microsoft/DeepSpeed/blob/master/csrc/aio/py_test/aio_bench_perf_sweep.py>`__ script can help you to find optimum parameters. See the `issue <https://github.com/deepspeedai/DeepSpeed/issues/998>`__ for more information on how to parse the information.
 
 .. _deepspeed-activation-checkpointing:

docs/source-pytorch/data/alternatives.rst

Lines changed: 1 addition & 1 deletion
@@ -90,7 +90,7 @@ the desired GPU in your pipeline. When moving data to a specific device, you can
 WebDataset
 ^^^^^^^^^^
 
-The `WebDataset <https://webdataset.github.io/webdataset>`__ makes it easy to write I/O pipelines for large datasets.
+The `WebDataset <https://github.com/webdataset/webdataset>`__ makes it easy to write I/O pipelines for large datasets.
 Datasets can be stored locally or in the cloud. ``WebDataset`` is just an instance of a standard IterableDataset.
 The webdataset library contains a small wrapper (``WebLoader``) that adds a fluid interface to the DataLoader (and is otherwise identical).

docs/source-pytorch/data/iterables.rst

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ To choose a different mode, you can use the :class:`~lightning.pytorch.utilities
 
 Currently, the ``trainer.predict`` method only supports the ``"sequential"`` mode, while ``trainer.fit`` method does not support it.
-Support for this feature is tracked in this `issue <https://github.com/Lightning-AI/lightning/issues/16830>`__.
+Support for this feature is tracked in this `issue <https://github.com/Lightning-AI/pytorch-lightning/issues/16830>`__.
 
 Note that when using the ``"sequential"`` mode, you need to add an additional argument ``dataloader_idx`` to some specific hooks.
 Lightning will `raise an error <https://github.com/Lightning-AI/lightning/pull/16837>`__ informing you of this requirement.
