Two questions came up when I went through the code:
In line 127 of pokeGan.py the bn1 tensor is created, but it is never actually used; the next line continues with lrelu(conv1, ...) instead. Is there a reason for that?
Why is tanh used to generate the final images (line 113)? Shouldn't the image values be between 0 and 1? Wouldn't a sigmoid activation be the better choice here?
Any answers / ideas?
I have the same question about bn1, but I think it's just left over from a copy-paste, since the generator has the same code.
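If that reading is right, the fix would just be to feed bn1 into the activation instead of conv1. A minimal numpy sketch of the suspected wiring (the names conv1/bn1 mirror the issue's description; the batch-norm and lrelu here are simplified stand-ins, not the repo's actual TensorFlow ops):

```python
import numpy as np

def lrelu(x, alpha=0.2):
    # Leaky ReLU, a common activation in DCGAN discriminators.
    return np.where(x > 0, x, alpha * x)

def batch_norm(x, eps=1e-5):
    # Simplified batch norm over the batch axis (no learned scale/shift).
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
conv1 = rng.standard_normal((8, 4))  # stand-in for the conv layer's output

bn1 = batch_norm(conv1)   # bn1 is created...
act1 = lrelu(conv1)       # ...but the next line reads conv1, so bn1 is dead code

# The presumably intended wiring:
act1_fixed = lrelu(bn1)
```

Either way the network still runs; the bug (if it is one) only means the batch-norm layer silently does nothing.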
I could be completely wrong on this, though, as I didn't get good results on the included Pokemon images, nor am I getting much better results on other data I've thrown at it. GAN networks seem to take a lot of time and data to train, so it might just need more training, or there could be problems in the code.
On tanh vs. sigmoid: my (very limited) understanding is that tanh often trains better than sigmoid, and it's the standard output activation in DCGAN-style generators.
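The key point is that tanh isn't wrong as long as the training images are rescaled to match its range. The usual DCGAN convention (assuming this repo follows it, which I haven't verified) is to scale real images into [-1, 1] before feeding them to the discriminator, and to map generated samples back to [0, 1] for display:

```python
import numpy as np

# tanh squashes generator outputs into (-1, 1), not (0, 1).
z = np.linspace(-3.0, 3.0, 7)
g_out = np.tanh(z)

# Scale training images from [0, 1] into [-1, 1] so real and
# generated samples share the same range...
img01 = np.random.default_rng(1).random((4, 4))
img_scaled = img01 * 2.0 - 1.0

# ...and map generator output back to [0, 1] when saving/plotting.
display = (g_out + 1.0) / 2.0
```

So the choice between tanh and sigmoid is really a choice of data-preprocessing convention; tanh with [-1, 1] inputs is the more common one because zero-centered activations tend to optimize better.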