Training a network to generate images - how do I build it?
Hello!
I'm trying to build a network with the tflearn library.
Purpose: generating images based on the data the network is fed during training.
Images: my own (a set of pictures).
As I understand from descriptions of such generators, the network is trained so that its output matches its input, and the topology itself looks like an hourglass: many neurons, then convolutions, narrowing down to the smallest part needed for later generation, then mirror-inverted layers widening back out.
That is, a picture is given as input, the network compresses it into a set of features, and the following layers generate an image from those features. Training comes down to making the generated image match the input as closely as possible.
Then the input half is "bitten off" from the trained network, and the generator runs on data fed directly into the narrowest part.
But I can't figure out how to build the network so that it trains against its own input data.
I would be very grateful for some simple examples. I have seen implementations that generate digits this way, but I couldn't understand how they work (too much code to wade through).
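For reference, here is a minimal sketch of such an hourglass network in tflearn, patterned on the autoencoder example bundled with the library. The image size, layer widths, and the stand-in array X are placeholder assumptions; the key point is that fit() receives the same array as both the input and the target:

import numpy as np
import tflearn

# Placeholder dimensions: 28x28 grayscale images flattened to 784 values,
# compressed down to a 64-value bottleneck code
n_pixels = 28 * 28
code_size = 64

# Stand-in data so the sketch runs; replace with your own images,
# flattened to [n_samples, n_pixels] and scaled to [0, 1]
X = np.random.rand(1000, n_pixels).astype(np.float32)

# Encoder: the narrowing half of the hourglass
net = tflearn.input_data(shape=[None, n_pixels])
encoder = tflearn.fully_connected(net, 256, activation='relu')
encoder = tflearn.fully_connected(encoder, code_size, activation='relu')

# Decoder: mirror-image layers widening back to the image size
decoder = tflearn.fully_connected(encoder, 256, activation='relu')
decoder = tflearn.fully_connected(decoder, n_pixels, activation='sigmoid')

# Reconstruction loss: mean squared error between output and original
net = tflearn.regression(decoder, optimizer='adam', learning_rate=0.001,
                         loss='mean_square', metric=None)
model = tflearn.DNN(net)

# The whole trick: the target is the input itself
model.fit(X, X, n_epoch=20, batch_size=64)

# "Biting off" the first half: a model that maps an image to its code,
# sharing the trained weights through the same session
encoding_model = tflearn.DNN(encoder, session=model.session)
code = encoding_model.predict(X[:1])

To run only the decoder half on an arbitrary code, you can feed the bottleneck tensor directly through the underlying TensorFlow session, e.g. model.session.run(decoder, feed_dict={encoder: code}); this relies on TensorFlow 1.x (which tflearn targets) accepting intermediate tensors in feed_dict.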
What you described is an autoencoder. It is far from the only architecture suited to your task; take a look at GANs as well.
There are plenty of implementations on GitHub; a condensed sketch follows below.
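To make the answer concrete, here is a condensed GAN sketch in tflearn, adapted from the GAN example shipped with the library. The noise size, layer widths, and the stand-in array X are assumptions; the idea is that a generator learns to map random noise vectors to images, while a discriminator learns to tell real images from generated ones:

import numpy as np
import tensorflow as tf
import tflearn

z_dim = 100      # size of the random noise vector (arbitrary choice)
image_dim = 784  # e.g. 28x28 images, flattened

# Stand-in data; replace with your own flattened images scaled to [0, 1]
X = np.random.rand(1000, image_dim).astype(np.float32)

def generator(x, reuse=False):
    # Maps noise vectors to fake images
    with tf.variable_scope('Generator', reuse=reuse):
        x = tflearn.fully_connected(x, 256, activation='relu')
        x = tflearn.fully_connected(x, image_dim, activation='sigmoid')
    return x

def discriminator(x, reuse=False):
    # Outputs the probability that an image is real
    with tf.variable_scope('Discriminator', reuse=reuse):
        x = tflearn.fully_connected(x, 256, activation='relu')
        x = tflearn.fully_connected(x, 1, activation='sigmoid')
    return x

gen_input = tflearn.input_data(shape=[None, z_dim], name='input_noise')
disc_input = tflearn.input_data(shape=[None, image_dim], name='disc_input')

gen_sample = generator(gen_input)
disc_real = discriminator(disc_input)
disc_fake = discriminator(gen_sample, reuse=True)

# Classic adversarial losses
disc_loss = -tf.reduce_mean(tf.log(disc_real) + tf.log(1. - disc_fake))
gen_loss = -tf.reduce_mean(tf.log(disc_fake))

# Each player updates only its own variables
gen_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Generator')
disc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Discriminator')

gen_op = tflearn.regression(gen_sample, placeholder=None, optimizer='adam',
                            loss=gen_loss, trainable_vars=gen_vars,
                            batch_size=64, name='target_gen', op_name='GEN')
disc_op = tflearn.regression(disc_real, placeholder=None, optimizer='adam',
                             loss=disc_loss, trainable_vars=disc_vars,
                             batch_size=64, name='target_disc', op_name='DISC')

gan = tflearn.DNN(gen_op)

# Feed noise to the generator and real images to the discriminator
z = np.random.uniform(-1., 1., size=[len(X), z_dim])
gan.fit(X_inputs={gen_input: z, disc_input: X}, Y_targets=None, n_epoch=100)

# After training, fresh noise turns into a new image
fake = gan.predict([np.random.uniform(-1., 1., size=[1, z_dim])])

A GAN is harder to train than an autoencoder (the two losses must stay balanced), so getting the autoencoder above working first and moving to a GAN afterwards is a reasonable path.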