Description
🐛 Bug
I am trying to load the LSUN bedroom_train data with torchvision.datasets, but it raises an error.
To Reproduce
Steps to reproduce the behavior:
```python
import torch
from torchvision import transforms
from torchvision import datasets
import os

path = os.getcwd()
train_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
bedroom = datasets.LSUN(root=path, classes=['bedroom_train'], transform=train_transform)
```
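For context, `datasets.LSUN` does not fetch the data itself: as the traceback below shows, it simply opens `root + '/' + c + '_lmdb'` with `lmdb.open()`, so `root` must already contain a `bedroom_train_lmdb` directory. A minimal pre-flight check along those lines (the `data.mdb`/`lock.mdb` names are the usual contents of an LMDB directory, an assumption rather than something `lsun.py` verifies):

```python
import os

path = os.getcwd()
lmdb_dir = os.path.join(path, 'bedroom_train_lmdb')

# datasets.LSUN(root=path, classes=['bedroom_train'], ...) will try to open
# exactly this directory with lmdb.open(); it does not create or download it.
if not os.path.isdir(lmdb_dir):
    raise FileNotFoundError(
        f'expected an LMDB directory at {lmdb_dir} '
        '(unpack bedroom_train_lmdb.zip there first)')

print(os.listdir(lmdb_dir))  # typically data.mdb and lock.mdb
```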
```
Error                                     Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 bedroom = datasets.LSUN(root = path, classes= ['bedroom_train'], transform=train_transform)

D:\Softwares\Anaconda\lib\site-packages\torchvision\datasets\lsun.py in __init__(self, root, classes, transform, target_transform)
     75             self.dbs.append(LSUNClass(
     76                 root=root + '/' + c + '_lmdb',
---> 77                 transform=transform))
     78
     79         self.indices = []

D:\Softwares\Anaconda\lib\site-packages\torchvision\datasets\lsun.py in __init__(self, root, transform, target_transform)
     17
     18         self.env = lmdb.open(root, max_readers=1, readonly=True, lock=False,
---> 19                              readahead=False, meminit=False)
     20         with self.env.begin(write=False) as txn:
     21             self.length = txn.stat()['entries']

Error: G:\DCGANs/bedroom_train_lmdb: No such file or directory
```
Expected behavior
The LSUN bedroom training dataset should be downloaded to the specified path and loaded.
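Note that torchvision's LSUN class has no download= option (unlike e.g. MNIST), so the data has to be fetched separately (for example with the download.py script from the LSUN project) and unpacked under root. A sketch of that workflow, assuming bedroom_train_lmdb.zip has already been downloaded next to the script; the file name matches the directory the traceback is looking for, and the rest is illustrative:

```python
import os
import zipfile

from torchvision import datasets, transforms

path = os.getcwd()
archive = os.path.join(path, 'bedroom_train_lmdb.zip')  # assumed to be downloaded already
lmdb_dir = os.path.join(path, 'bedroom_train_lmdb')

# Unpack the LMDB archive once; LSUN only reads the extracted directory.
if not os.path.isdir(lmdb_dir):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(path)  # expected to create path/bedroom_train_lmdb/

train_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
bedroom = datasets.LSUN(root=path, classes=['bedroom_train'], transform=train_transform)
```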
Environment
Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with:

```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): 1.6.0/0.7.0
- OS (e.g., Linux): Windows 10
- How you installed PyTorch / torchvision (conda, pip, source): conda
- Build command you used (if compiling from source):
- Python version: 3.7.7
- CUDA/cuDNN version: 10.2
- GPU models and configuration: NVIDIA GTX 1060 6GB
- Any other relevant information:
Additional context
cc @pmeier