As mentioned here, there are situations where the user wants more frequent feedback from the net than just once after each epoch. This is especially true with the arrival of RNNs, which are hungry for tons of data but slow to train. More frequent feedback also enables other neat things, for instance stopping early after, say, 2.5 epochs.
The solution proposed in the PR would solve the issue, but it feels a little like cheating, since the batch iterator would pretend the epoch is over when it really isn't.
I have an implementation lying around that has an on_epoch_finished callback. Unfortunately, that complicates matters, since you have to keep the train and eval loops in sync (which in turn requires adjusting the batch size for eval).
So, does anybody have another solution? I'd be happy to help out with the coding if necessary.
An on_batch_finished handler has been added since then. But it won't cover your use case of doing early stopping between epochs.
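For reference, here's a minimal sketch of what such a handler could look like, assuming nolearn's convention of invoking each callback with the net and its train history after every training batch. The class name and print interval are mine, not from the thread:

class LogEveryNBatches:
    """Hypothetical per-batch callback: emit a heartbeat every
    `every` mini-batches, giving feedback well inside an epoch."""

    def __init__(self, every=100):
        self.every = every
        self.count = 0

    def __call__(self, nn, train_history):
        # Assumption: nolearn calls on_batch_finished callbacks as
        # func(net, train_history_) after each training batch.
        self.count += 1
        if self.count % self.every == 0:
            print("processed {} batches so far".format(self.count))

# Attached via: NeuralNet(..., on_batch_finished=[LogEveryNBatches()])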
However, I think that @dirtysalt's MiniBatchIterator works well enough for your case. Sure, the output will say you iterated for one epoch when you actually didn't, but I think that can be dealt with.
I'll reproduce the MiniBatchIterator class here for the record:
class MiniBatchIterator(BatchIterator):
    def __init__(self, batch_size=128, iterations=32):
        BatchIterator.__init__(self, batch_size)
        self.iterations = iterations
        self.X = None
        self.y = None
        self.cidx = 0  # index of the next batch within the data set
        self.midx = 0  # total number of batches in the data set

    def __call__(self, X, y=None):
        # if data set is reset
        if not (self.X is X and self.y is y):
            self.cidx = 0
            n_samples = X.shape[0]
            bs = self.batch_size
            self.midx = (n_samples + bs - 1) // bs
            self.X, self.y = X, y
        return self

    def __iter__(self):
        bs = self.batch_size
        for _ in range(self.iterations):
            sl = slice(self.cidx * bs, (self.cidx + 1) * bs)
            self.cidx += 1
            # wrap up: start over from the beginning of the data set
            if self.cidx >= self.midx:
                self.cidx = 0
            Xb = self.X[sl]
            if self.y is not None:
                yb = self.y[sl]
            else:
                yb = None
            yield self.transform(Xb, yb)
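To make the workaround concrete, here's a hedged usage sketch in the style of the nolearn tutorials; the layer setup, shapes, and hyperparameters are illustrative, and X, y stand for your training data:

from lasagne import layers
from lasagne.updates import nesterov_momentum
from nolearn.lasagne import NeuralNet

net = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('output', layers.DenseLayer),
    ],
    input_shape=(None, 100),   # illustrative input dimensionality
    output_num_units=10,
    output_nonlinearity=None,
    update=nesterov_momentum,
    update_learning_rate=0.01,
    update_momentum=0.9,
    regression=True,
    # Each reported "epoch" now covers iterations * batch_size
    # = 32 * 128 = 4096 training samples, so validation and the
    # on_epoch_finished callbacks fire that often instead of once
    # per full pass over the data.
    batch_iterator_train=MiniBatchIterator(batch_size=128, iterations=32),
    max_epochs=400,  # 400 pseudo-epochs, not 400 true passes
    verbose=1,
)
net.fit(X, y)

Note that with n_samples training examples, one true epoch then corresponds to roughly n_samples / (iterations * batch_size) reported epochs, which is how the misleading epoch count in the output "can be dealt with". The eval side is untouched: batch_iterator_test stays a plain BatchIterator, so validation still does one full pass over the eval set per pseudo-epoch.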