Commit 43b9ea6 (parent: ef1f96f)

Renamed all pytorch_lightning to lightning.pytorch

Signed-off-by: George Araujo <[email protected]>

File tree

9 files changed (+29 −29 lines):

Dockerfile
README.md
configs/all.yml
configs/train_default_sr.yml
main.py
models/srmodel.py
predict.py
srdata.py
train.py
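Every change in this commit is the same mechanical rename: the top-level package path `pytorch_lightning` becomes `lightning.pytorch`, both in the Python imports and in the `class_path` strings inside the YAML configs (plus the matching pip package rename in the Dockerfile). A minimal before/after sketch of the import side, using a hypothetical module rather than code from this repository:

```python
# Before (standalone package):
#   import pytorch_lightning as pl
#   from pytorch_lightning import Trainer, seed_everything
#   from pytorch_lightning.callbacks import ModelCheckpoint

# After (unified package, same public API under a new prefix):
import lightning.pytorch as pl
from lightning.pytorch import Trainer, seed_everything
from lightning.pytorch.callbacks import ModelCheckpoint


class TinyModule(pl.LightningModule):
    """Placeholder LightningModule; only the import paths matter for this commit."""


if __name__ == "__main__":
    seed_everything(42)
    # The Trainer and callback APIs are unchanged; only the package prefix differs.
    trainer = Trainer(max_epochs=1, callbacks=[ModelCheckpoint()])
```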

Dockerfile

Lines changed: 2 additions & 2 deletions

@@ -58,14 +58,14 @@ RUN APT_INSTALL="apt-get install -y --no-install-recommends" && \
     ipdb \
     ipython \
     kornia \
+    lightning \
+    "lightning[extra]" \
     matplotlib \
     numpy \
     omegaconf \
     pillow \
     piq \
     prettytable \
-    pytorch-lightning \
-    "pytorch-lightning[extra]" \
     rich \
     tensorboard \
     torch_optimizer \

README.md

Lines changed: 1 addition & 1 deletion

@@ -110,7 +110,7 @@ api_key=YOUR_API_KEY
 
 More configuration variables can be found [here](https://www.comet.ml/docs/python-sdk/advanced/#comet-configuration-variables).
 
-Most of the things that I found useful to log (metrics, codes, log, image results) are already being logged. Check [train.py](train.py) and [srmodel.py](models/srmodel.py) for more details. All these loggings are done by the [comet logger](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.loggers.comet.html) already available from pytorch lightning. An example of these experiments logged in Comet can be found [here](https://www.comet.ml/george-gca/super-resolution-experiments).
+Most of the things that I found useful to log (metrics, codes, log, image results) are already being logged. Check [train.py](train.py) and [srmodel.py](models/srmodel.py) for more details. All these loggings are done by the [comet logger](https://pytorch-lightning.readthedocs.io/en/stable/api/lightning.pytorch.loggers.comet.html) already available from pytorch lightning. An example of these experiments logged in Comet can be found [here](https://www.comet.ml/george-gca/super-resolution-experiments).
 
 ## Finished experiment Telegram notification
 
configs/all.yml

Lines changed: 3 additions & 3 deletions

@@ -66,7 +66,7 @@ trainer:
   barebones: false
   benchmark: null
   callbacks:
-  - class_path: pytorch_lightning.callbacks.ModelCheckpoint
+  - class_path: lightning.pytorch.callbacks.ModelCheckpoint
     init_args:
       dirpath: ${trainer.default_root_dir}/checkpoints
       every_n_epochs: ${trainer.check_val_every_n_epoch}
@@ -89,7 +89,7 @@ trainer:
   gradient_clip_val: null
   inference_mode: true
   logger:
-  - class_path: pytorch_lightning.loggers.CometLogger
+  - class_path: lightning.pytorch.loggers.CometLogger
     # for this to work, create the file ~/.comet.config with
     # [comet]
     # api_key = YOUR API KEY
@@ -99,7 +99,7 @@ trainer:
       offline: false
       project_name: sr-pytorch-lightning
      save_dir: ${trainer.default_root_dir}
-  - class_path: pytorch_lightning.loggers.TensorBoardLogger
+  - class_path: lightning.pytorch.loggers.TensorBoardLogger
     init_args:
       default_hp_metric: false
       log_graph: true
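The `class_path`/`init_args` entries in these configs are resolved into real objects by Lightning's CLI machinery, which is why the rename has to happen inside the YAML strings as well as in the imports. A rough sketch of what a single entry corresponds to, hand-rolled here with `importlib` purely for illustration (the real resolution is done by LightningCLI/jsonargparse, and the values below are illustrative, not copied from the full config):

```python
import importlib

# A logger entry shaped like the ones in configs/all.yml.
entry = {
    "class_path": "lightning.pytorch.loggers.TensorBoardLogger",
    "init_args": {
        "save_dir": "experiments/test",   # illustrative value
        "default_hp_metric": False,
        "log_graph": True,
    },
}

# Split "package.module.ClassName" into a module path and a class name,
# import the module, and build the object from init_args.
module_path, class_name = entry["class_path"].rsplit(".", 1)
logger_cls = getattr(importlib.import_module(module_path), class_name)
logger = logger_cls(**entry["init_args"])
```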

configs/train_default_sr.yml

Lines changed: 5 additions & 5 deletions

@@ -35,7 +35,7 @@ model:
 trainer:
   # https://lightning.ai/docs/pytorch/stable/common/trainer.html
   callbacks:
-  - class_path: pytorch_lightning.callbacks.ModelCheckpoint
+  - class_path: lightning.pytorch.callbacks.ModelCheckpoint
     init_args:
       every_n_epochs: ${trainer.check_val_every_n_epoch}
       filename: model
@@ -44,20 +44,20 @@ trainer:
       save_last: true
       save_top_k: 3
       verbose: false
-  # - class_path: pytorch_lightning.callbacks.RichModelSummary
+  # - class_path: lightning.pytorch.callbacks.RichModelSummary
   #   init_args:
   #     max_depth: -1
-  # - class_path: pytorch_lightning.callbacks.RichProgressBar
+  # - class_path: lightning.pytorch.callbacks.RichProgressBar
   check_val_every_n_epoch: 200
   default_root_dir: experiments/test
   logger:
-  - class_path: pytorch_lightning.loggers.CometLogger
+  - class_path: lightning.pytorch.loggers.CometLogger
     init_args:
       experiment_name: test
       offline: false
       project_name: sr-pytorch-lightning
       save_dir: ${trainer.default_root_dir} # without save_dir defined here, Trainer throws an assertion error
-  # - class_path: pytorch_lightning.loggers.TensorBoardLogger
+  # - class_path: lightning.pytorch.loggers.TensorBoardLogger
   #   init_args:
   #     default_hp_metric: false
   #     log_graph: true

main.py

Lines changed: 2 additions & 2 deletions

@@ -2,8 +2,8 @@
 from logging.handlers import RotatingFileHandler
 from pathlib import Path
 import numpy as np
-from pytorch_lightning.cli import LightningCLI
-from pytorch_lightning.loggers import CometLogger
+from lightning.pytorch.cli import LightningCLI
+from lightning.pytorch.loggers import CometLogger
 
 import models
 from srdata import SRData
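With the renamed imports, `main.py` drives training through `LightningCLI`, which is what consumes the `class_path`-style YAML shown above. A minimal sketch of such an entry point, simplified and not the repository's actual `main.py` (the `models.SRModel` name is assumed here for illustration):

```python
from lightning.pytorch.cli import LightningCLI

import models
from srdata import SRData


def main() -> None:
    # LightningCLI parses the command line (including a --config some.yml file),
    # instantiates the model, datamodule, trainer, callbacks, and loggers from
    # that config, and runs the requested subcommand (fit, validate, test, ...).
    LightningCLI(model_class=models.SRModel, datamodule_class=SRData)


if __name__ == "__main__":
    main()
```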

models/srmodel.py

Lines changed: 2 additions & 2 deletions

@@ -8,14 +8,14 @@
 
 import kornia.augmentation as K
 import piq
-import pytorch_lightning as pl
+import lightning.pytorch as pl
 import torch
 import torch.nn as nn
 import torch.optim as optim
 import torchvision
 import torch_optimizer as toptim
 from losses import EdgeLoss, FLIP, FLIPLoss, PencilSketchLoss
-from pytorch_lightning.loggers import CometLogger, TensorBoardLogger
+from lightning.pytorch.loggers import CometLogger, TensorBoardLogger
 from robust_loss_pytorch import AdaptiveImageLossFunction
 from torch import is_tensor
 
predict.py

Lines changed: 2 additions & 2 deletions

@@ -6,8 +6,8 @@
 
 import numpy as np
 import torch
-from pytorch_lightning import Trainer, seed_everything
-from pytorch_lightning.loggers import CometLogger, TensorBoardLogger
+from lightning.pytorch import Trainer, seed_everything
+from lightning.pytorch.loggers import CometLogger, TensorBoardLogger
 
 import models
 from srdata import SRData

srdata.py

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@
 import numpy as np
 from PIL import Image
 from PIL.Image import Image as Img
-from pytorch_lightning import LightningDataModule
+from lightning.pytorch import LightningDataModule
 from torch import Tensor
 from torch.utils.data import ConcatDataset, DataLoader, Dataset
 from torchvision.transforms import functional as TF

train.py

Lines changed: 11 additions & 11 deletions

@@ -6,10 +6,10 @@
 
 import numpy as np
 import torch
-from pytorch_lightning import Trainer, seed_everything
-from pytorch_lightning.callbacks import ModelCheckpoint
-from pytorch_lightning.callbacks import TQDMProgressBar
-from pytorch_lightning.loggers import CometLogger, TensorBoardLogger
+from lightning.pytorch import Trainer, seed_everything
+from lightning.pytorch.callbacks import ModelCheckpoint
+from lightning.pytorch.callbacks import TQDMProgressBar
+from lightning.pytorch.loggers import CometLogger, TensorBoardLogger
 
 import models
 from srdata import SRData
@@ -23,14 +23,14 @@ class ItemsProgressBar(TQDMProgressBar):
     - **sanity check progress:** the progress during the sanity check run
     - **main progress:** shows training + validation progress combined. It also accounts for
       multiple validation runs during training when
-      :paramref:`~pytorch_lightning.trainer.trainer.Trainer.val_check_interval` is used.
+      :paramref:`~lightning.pytorch.trainer.trainer.Trainer.val_check_interval` is used.
     - **validation progress:** only visible during validation;
       shows total progress over all validation datasets.
     - **test progress:** only active when testing; shows total progress over all test datasets.
       For infinite datasets, the progress bar never ends.
     If you want to customize the default ``tqdm`` progress bars used by Lightning, you can override
     specific methods of the callback class and pass your custom implementation to the
-    :class:`~pytorch_lightning.trainer.trainer.Trainer`:
+    :class:`~lightning.pytorch.trainer.trainer.Trainer`:
     Example::
         class LitProgressBar(ProgressBar):
             def init_validation_tqdm(self):
@@ -43,16 +43,16 @@ def init_validation_tqdm(self):
     refresh_rate:
         Determines at which rate (in number of batches) the progress bars get updated.
         Set it to ``0`` to disable the display. By default, the
-        :class:`~pytorch_lightning.trainer.trainer.Trainer` uses this implementation of the progress
+        :class:`~lightning.pytorch.trainer.trainer.Trainer` uses this implementation of the progress
         bar and sets the refresh rate to the value provided to the
-        :paramref:`~pytorch_lightning.trainer.trainer.Trainer.progress_bar_refresh_rate` argument in the
-        :class:`~pytorch_lightning.trainer.trainer.Trainer`.
+        :paramref:`~lightning.pytorch.trainer.trainer.Trainer.progress_bar_refresh_rate` argument in the
+        :class:`~lightning.pytorch.trainer.trainer.Trainer`.
     process_position:
         Set this to a value greater than ``0`` to offset the progress bars by this many lines.
         This is useful when you have progress bars defined elsewhere and want to show all of them
         together. This corresponds to
-        :paramref:`~pytorch_lightning.trainer.trainer.Trainer.process_position` in the
-        :class:`~pytorch_lightning.trainer.trainer.Trainer`.
+        :paramref:`~lightning.pytorch.trainer.trainer.Trainer.process_position` in the
+        :class:`~lightning.pytorch.trainer.trainer.Trainer`.
     """
 
     def __init__(self, refresh_rate: int = 1, process_position: int = 0, batch_size: int = 16):
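The docstring touched above describes overriding the progress bar and passing the custom implementation to the Trainer. Under the new package name that looks roughly like the following sketch, which mirrors the docstring's own example rather than the `ItemsProgressBar` actually defined in train.py:

```python
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import TQDMProgressBar


class LitProgressBar(TQDMProgressBar):
    def init_validation_tqdm(self):
        # Reuse the default validation bar, only changing its description text.
        bar = super().init_validation_tqdm()
        bar.set_description("running validation...")
        return bar


# Pass the customized bar to the Trainer through the callbacks list.
trainer = Trainer(callbacks=[LitProgressBar()])
```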
