How are the samples picked when limit_train_batches < 1.0?
#10672
Unanswered
miccio-dk asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
-
It doesn't subset the samples. It just does this internally:

```python
for batch_idx, batch in enumerate(dataloader):
    # do something
    if batch_idx == limit_train_batches:
        break
```

so how the data is sampled depends on the sampler used by your dataloader.
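To illustrate the consequence of that loop: with a shuffling sampler, the batches that survive the limit contain different samples each epoch. A minimal plain-PyTorch sketch of that behavior (the toy dataset, batch size, and limit are made up for illustration; this only mimics the loop above, it is not Lightning's actual implementation):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 100 samples, each sample is just its own index.
dataset = TensorDataset(torch.arange(100))
loader = DataLoader(dataset, batch_size=10, shuffle=True)

limit_train_batches = 2  # pretend Trainer(limit_train_batches=2)

for epoch in range(2):
    seen = []
    for batch_idx, (batch,) in enumerate(loader):
        if batch_idx == limit_train_batches:
            break
        seen.extend(batch.tolist())
    # With shuffle=True, the two batches that make the cut hold
    # different samples each epoch.
    print(f"epoch {epoch}: {sorted(seen)}")
```

With shuffle=False the first batches are the same every epoch, so the limit would then behave like a fixed subset.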
-
Hi! I'm trying to understand the behavior of limit_train_batches (and its val and test analogues). In particular, I'm curious whether the subset of batches is computed only once at the beginning of training or re-sampled at every epoch.

Say I have a dataset of N samples and I want to train my model on a subset of only M samples (it doesn't matter how this subset is chosen, as long as it stays the same across epochs). Is it sufficient to set limit_train_batches=(M // batch_size), or do I have to manually limit the size of my dataset somehow?

Thanks in advance!