Neural networks
tukal-off, 2016-03-11 20:05:58

How are multiple outputs implemented in a multilayer perceptron?

I'm trying to implement a multilayer perceptron with multiple outputs.
I've gone through a lot of articles about backpropagation. In particular, this page explains the material very accessibly, and I based my training implementation on it.
As input I feed three 5×5 matrices depicting "0", "1", and "2". The output layer has three neurons.
The learning process is as follows:

  1. Feed in the first matrix
  2. The first output neuron should fire (target [1, 0, 0])
  3. Backpropagate the error
  4. Repeat accordingly for the other two matrices
  5. Move on to the next iteration

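The training loop above can be sketched roughly as follows. This is a hypothetical minimal example, not the asker's actual code: a 25-8-3 network with sigmoid activations and squared-error backpropagation, trained on three stand-in inputs (random vectors in place of the real 5×5 digit matrices) with one-hot targets. The hidden size, learning rate, and epoch count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Three stand-in 5x5 inputs (random here), flattened to 25 values each,
# with one-hot targets [1,0,0], [0,1,0], [0,0,1]
X = rng.random((3, 25))
Y = np.eye(3)

# Weights and biases: 25 inputs -> 8 hidden -> 3 outputs (sizes assumed)
W1 = rng.normal(0, 0.5, (25, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 3));  b2 = np.zeros(3)

lr = 0.5
for epoch in range(2000):
    # Forward pass over the whole 3-example batch
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: squared-error loss, sigmoid derivative s * (1 - s)
    d_out = (out - Y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

# Each row of `out` should now peak at its own class
print(out.argmax(axis=1))
```

Note that this version updates weights from a batch of all three examples per epoch; updating after each single example (as in the question) also works but makes the order of presentation matter more.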
As a result, after only 10 iterations the output neurons produce nearly the same values for every input matrix ([0.06, 0.16, 0.96]).
The weights look like this (layer 0 is the input, layers 1 and 2 are hidden; below them are the three output neurons):
[Image: visualization of the network's weights]
Colored in ascending order from green to red.
Actually, what could I be missing?
P.S.: This is my first time asking a question, so please don't judge too harshly. If you need more information, please let me know :)


1 answer(s)
Who_R_U, 2020-03-12
@Who_R_U

I built my first fully connected networks from this article; it's the most understandable one I've found. After testing a network with 3 input neurons and 1 output, I made a network with 2 hidden layers and an input of 100 values (a 10×10 black-and-white image) to recognize digits from 0 to 9, and it all worked. I'd advise you to try it. Are you using a bias? What are your learning rate and activation function?
Article
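To illustrate the two things asked about above (the bias term and the learning rate), here is a hypothetical single-neuron sketch showing where each one enters a backprop update; the numbers are made up for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.5, -0.3, 0.8])   # inputs
w = np.array([0.1, 0.2, -0.1])   # weights
b = 0.0                          # bias: lets the neuron shift its activation threshold
target = 1.0
lr = 0.5                         # learning rate: step size of each weight update

for _ in range(200):
    y = sigmoid(x @ w + b)
    delta = (y - target) * y * (1 - y)   # error times sigmoid derivative
    w -= lr * delta * x                  # learning rate scales every update
    b -= lr * delta                      # bias gets its own gradient step

print(sigmoid(x @ w + b))  # should be close to the target of 1.0
```

Without the bias term, the neuron's decision surface is forced through the origin, which can make some targets unreachable; too large a learning rate makes the updates overshoot, and too small a one makes convergence very slow.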
