Algorithms
efremovaleksey, 2012-03-20 02:26:13

Restoring a damaged image with a neural network?

[image: h1I6QDby.png]
I want to build a neural network that restores the image above, and others like it.
Each image has regular damage to its R/G/B color planes (see the callout on the right side of the image): for example, every fourth (or every second) pixel retains only its R value. The damaged image may optionally contain noise as well (training will be done on real photographs). The amount of damage is roughly 2/3, i.e. each pixel keeps only one of its three R/G/B values intact. Given an N×N block (N odd, at most 15×15), the network should output the restored value of the block's central pixel.
The input data will be pre-cleaned by subtracting an image interpolated from the surviving data (as a preliminary estimate).
So far I have settled on a multilayer ANN and on the Elman network.
If you have dealt with problems of this kind, please advise: what type of neural network is better to use here, and how should it be trained? Are genetic algorithms worth using?
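To make the setup concrete, here is a rough sketch of the damage model and the interpolation pre-clean step. The anti-diagonal channel layout and the 3×3 neighborhood averaging are my own assumptions for illustration; the real mosaic pattern and interpolation in my data may differ.

```python
import numpy as np

def damage_mosaic(img):
    # Keep exactly one of the three R/G/B values per pixel, cycling over
    # channels along the anti-diagonals (an assumed regular layout).
    h, w, _ = img.shape
    channel = np.add.outer(np.arange(h), np.arange(w)) % 3
    mask = np.stack([channel == c for c in range(3)], axis=-1)
    return np.where(mask, img, 0.0), mask

def interpolate_baseline(damaged, mask):
    # Replace each missing value with the mean of the surviving values of
    # the same channel in its 3x3 neighborhood, a crude stand-in for
    # "the image interpolated from the surviving data".
    h, w, _ = damaged.shape
    out = damaged.astype(float).copy()
    for c in range(3):
        vals = np.pad(damaged[..., c].astype(float), 1)
        m = np.pad(mask[..., c].astype(float), 1)
        num = sum(vals[i:i + h, j:j + w] for i in range(3) for j in range(3))
        den = sum(m[i:i + h, j:j + w] for i in range(3) for j in range(3))
        filled = num / np.maximum(den, 1.0)
        out[..., c] = np.where(mask[..., c], damaged[..., c], filled)
    return out
```

Subtracting such an interpolated image from the intact training photographs would then give the residuals the network is trained on.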


4 answer(s)
powder96, 2012-03-20
@powder96

You may find this article useful: habrahabr.ru/post/120473/

sermal, 2012-03-20
@sermal

Also look towards the Hopfield neural network.
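For reference, a minimal sketch of the associative-recall idea behind a Hopfield network: bipolar patterns are stored as attractors, and a corrupted copy falls back to the stored pattern. Applying this to real-valued color channels would take extra machinery; the function names below are purely illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian outer-product weights for bipolar (+1/-1) patterns,
    # with the self-connections zeroed out.
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, max_steps=20):
    # Synchronous sign updates until a fixed point (or the step budget).
    state = state.copy()
    for _ in range(max_steps):
        new = np.sign(w @ state)
        new[new == 0] = 1.0
        if np.array_equal(new, state):
            break
        state = new
    return state
```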

da0c, 2012-03-29
@da0c

The task is interesting. Here are some thoughts on a solution.
1. Neural networks. Possible, but it is not obvious how to organize the training. You could simply try it ("works / doesn't work") in MATLAB's Neural Network Toolbox; its documentation is quite good.
2. Perhaps the best solution would be fractal methods based on iterated function systems (IFS). This approach is used in fractal image compression and can give reasonably good extrapolation.

Alexander Khmelev, 2012-04-06
@akhmelev

Just a thought.
If an ANN is fundamentally important to you, then an ordinary ANN, but a tiny one with a narrow bottleneck (a small number of neurons in the hidden layer), may do.
A terribly slow algorithm:
1) Train the network with a 3×3 (or so) sliding window over a 15×15 field (as an associative memory, i.e. the window serves as both input and output).
2) After training, the network will approximate the surface, i.e. smooth out the noise; the small network size and early stopping of training are important for this.
3) Record the network's output for every window position and average the results pixel by pixel.
4) Then shift the 15×15 square and repeat the whole process. The shifts must overlap (i.e. be less than 15 pixels), otherwise seams will show.
If the field covered by one training cycle is larger than 15×15, the approximation will smear the image through excessive averaging. In theory the algorithm will work in this form, but extremely slowly.
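Steps 2 and 3 can be sketched as follows; `net` stands in for the small autoencoder trained in step 1 (here just the identity function, purely for illustration):

```python
import numpy as np

def restore_patch(patch, net, win=3):
    # Slide a win x win window over the patch, feed each window through
    # `net` (assumed to map a flat window back to a same-size window, as
    # in an autoencoder with a narrow hidden layer), and average the
    # overlapping outputs pixel by pixel.
    h, w = patch.shape
    acc = np.zeros_like(patch, dtype=float)
    cnt = np.zeros_like(patch, dtype=float)
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            window = patch[i:i + win, j:j + win].ravel()
            out = net(window).reshape(win, win)
            acc[i:i + win, j:j + win] += out
            cnt[i:i + win, j:j + win] += 1.0
    return acc / cnt
```

Averaging many overlapping reconstructions is what suppresses the per-window noise.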
P.S. In general, ordinary non-linear regression, or something like splines, is what fits here, but with two input variables.
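One way to read that P.S.: treat the surviving samples of one channel as scattered points z(x, y) and fit a regression surface in the two pixel coordinates. A numpy-only sketch; the polynomial model, degree, and names are my own choices, not the answerer's:

```python
import numpy as np

def fit_poly_surface(x, y, z, deg=2):
    # Least-squares polynomial surface z ~ sum a_ij * x^i * y^j (i + j <= deg),
    # fitted to the surviving samples of one color channel; the returned
    # callable predicts the channel value at any (x, y), including
    # positions where it was destroyed.
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return lambda xq, yq: sum(c * xq**i * yq**j for c, (i, j) in zip(coef, terms))
```

In practice one would fit such a surface locally (per block), since a single low-degree polynomial cannot follow a whole photograph.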
