Neural networks
Prizm, 2021-04-22 22:43:22

Is it possible to train a GAN without batchnorm?

I am writing my own framework for building neural networks. Feed-forward models train normally on it (often quite slowly; a convolutional classifier of hand-drawn zeros and ones at 30×30 can take up to a minute to train, but that suits me). The problems start when I try to implement a GAN. Backpropagation and the rest work correctly, but training achieves nothing.

I trained on about 300 hand-drawn circles, took random subsets of 5 to 200 samples as batches, tried every optimization method I could, waited a long time (up to 2 hours), tried different architectures, reconfigured the pooling operations, and so on, as advised, and nothing comes of it. I concluded that batchnorm is necessary, but in my implementation that is impossible, because the network processes only one sample at a time (and fixing that would mean rewriting about 4k lines of code, or starting over).

So: can a GAN really be trained without batchnorm? If not, what generator and discriminator architectures and which gradient-descent optimizers should I use, and why?
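For reference, one batch-independent alternative worth knowing about: layer normalization computes its statistics over the features of a single sample, so it fits a framework that only ever runs one value at a time. A minimal NumPy sketch (the function and variable names here are illustrative, not from the asker's code):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Layer normalization for a single sample: mean and variance are
    taken over the feature dimension, so no batch is required."""
    mean = x.mean()
    var = x.var()
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Example: normalize one flattened 30x30 activation map.
x = np.random.randn(900)
gamma = np.ones(900)   # learnable scale, trained like any other weight
beta = np.zeros(900)   # learnable shift
y = layer_norm(x, gamma, beta)
```

Because each call depends only on the one sample passed in, it drops into a per-sample pipeline without touching the rest of the code.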


1 answer
imageman, 2021-08-11
@PrizmMARgh

Yes, a GAN can be trained without normalization.
Quite a lot of GAN implementations are collected at https://github.com/eriklindernoren/PyTorch-GAN (I experimented with ESRGAN from there).
Fully connected (feed-forward) networks are practically a dead end for images: count how many weights you end up putting into the network. For pictures, try convolutional neural networks, as sketched below.
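As a point of reference (not the asker's code), here is a minimal batchnorm-free DCGAN-style generator/discriminator pair in PyTorch. LeakyReLU in the discriminator and Adam with lr=2e-4, betas=(0.5, 0.999) are the usual substitutes people reach for when normalization is dropped; the 32×32 image size and layer widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Latent vector -> 32x32 grayscale image, no normalization layers."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0),  # 1x1 -> 4x4
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),          # 4x4 -> 8x8
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),           # 8x8 -> 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1),            # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    """32x32 image -> real/fake logit; LeakyReLU instead of batchnorm."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1),    # 32x32 -> 16x16
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, 2, 1),   # 16x16 -> 8x8
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 8, 1, 0),    # 8x8 -> 1x1 logit
        )

    def forward(self, x):
        return self.net(x).view(x.size(0))
```

On the "count the weights" point: a single fully connected layer from a 30×30 image to 1024 hidden units already needs 900 × 1024 ≈ 0.9M weights, while the entire convolutional discriminator above has roughly 37k parameters.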
