LVitA, 2018-03-07 19:27:31
Python

Why does my neural network train incorrectly?

Good day!
I'm working through the error backpropagation algorithm. Training appears to run, but when I check the network it produces an incorrect result. I can't figure out what the problem is; please help me find it.

Network code
import numpy as np
import numpy.random as r
from data import one, tow, three

# Initialize the network
def initialize_network(inputs, n_first, n_hidden, n_last):
    network = list()
    first_layer = [{
        'weights': [round(r.uniform(-0.5, 0.5), 2) for i in range(inputs + 1)]
    } for i in range(n_first)]
    hidden_layer = [{
        'weights': [round(r.uniform(-0.5, 0.5), 2) for i in range(n_first + 1)]
    } for i in range(n_hidden)]
    last_layer = [{
        'weights': [round(r.uniform(-0.5, 0.5), 2) for i in range(n_hidden + 1)]
    } for i in range(n_last)]
    network.append(first_layer)
    network.append(hidden_layer)
    network.append(last_layer)
    return network


def sigmoid(activation):
    # return np.tanh(activation)
    return 1.0 / (1.0 + np.exp(-activation))


def sigmoid_derivative(output):
    # return 1.0 - np.tanh(output) * np.tanh(output)
    return sigmoid(output) * (1.0 - sigmoid(output))


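# Weighted sum of the inputs; weights[-1] serves as the bias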
def activate(weights, inputs):
    activation = weights[-1]
    for i in range(len(weights) - 1):
        activation += weights[i] * inputs[i]
    return activation


# Forward pass
def forward_propagate(network, row):
    inputs = row
    for layer in network:
        new_inputs = []
        for neuron in layer:
            activation = activate(neuron['weights'], inputs)
            neuron['output'] = round(sigmoid(activation), 3)
            new_inputs.append(neuron['output'])
        inputs = new_inputs
    return inputs


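# Backward pass: turn output errors into per-neuron 'delta' values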
def backward_propagate_error(network, expected):
    for i in reversed(range(len(network))):
        layer = network[i]
        errors = []
        if i != len(network) - 1:
            for j in range(len(layer)):
                error = 0.0
                for neuron in network[i + 1]:
                    error += (neuron['weights'][j] * neuron['delta'])
                errors.append(error)
        else:
            for j in range(len(layer)):
                neuron = layer[j]
                errors.append(expected[j] - neuron['output'])
        for j in range(len(layer)):
            neuron = layer[j]
            neuron['delta'] = round(
                errors[j] * sigmoid_derivative(neuron['output']), 2)


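# Apply the stored deltas: each weight gets l_rate * delta * input, the bias gets l_rate * delta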
def update_weights(network, row, l_rate):
    for i in range(len(network)):
        inputs = row
        if i != 0:
            inputs = [neuron['output'] for neuron in network[i - 1]]
        for neuron in network[i]:
            for j in range(len(inputs)):
                neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
            neuron['weights'][-1] += l_rate * neuron['delta']


def predict(network, row):
    outputs = forward_propagate(network, row)
    return outputs


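# Train with one-hot targets: the index of each row in the training set is its class label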
def train_network(network, train, l_rate, n_epoch, n_outputs):
    for epoch in range(n_epoch):
        sum_error = 0
        for index, row in enumerate(train):
            output = forward_propagate(network, row)
            expected = [0 for i in range(n_outputs)]
            expected[index] = 1
            sum_error += round(sum([(expected[i] - output[i]) ** 2 for i in range(len(expected))]), 2)
            backward_propagate_error(network, expected)
            update_weights(network, row, l_rate)


if __name__ == '__main__':
    dataset = [one, tow, three]
    inputs = len(dataset[0])
    outputs = len(dataset)
    network = initialize_network(inputs, 4, 9, outputs)
    train_network(network, dataset, 0.1, 500, outputs)
    for row in dataset:
        print(predict(network, row))

I store data like this:
Data
one = [0, 0, 0, 1, 0,
       0, 0, 1, 1, 0,
       0, 1, 0, 1, 0,
       0, 0, 0, 1, 0,
       0, 0, 0, 1, 0,
       0, 0, 0, 1, 0,
       0, 0, 0, 1, 0]

tow = [0, 0, 1, 0, 0,
       0, 1, 0, 1, 0,
       1, 0, 0, 0, 1,
       0, 0, 0, 1, 0,
       0, 0, 1, 0, 0,
       0, 1, 0, 0, 0,
       1, 1, 1, 1, 1]

three = [0, 1, 1, 1, 1,
         0, 0, 0, 0, 1,
         0, 0, 0, 1, 0,
         0, 0, 1, 0, 0,
         0, 0, 0, 1, 0,
         0, 0, 0, 0, 1,
         0, 1, 1, 1, 1]

Result when checking:
[0.323, 0.344, 0.349]
[0.314, 0.345, 0.357]
[0.308, 0.344, 0.368]
It's clear at first glance that the result is wrong.
I would be very grateful for your help!

1 answer
iQQator @iDevPro, 2018-03-07

neuron['weights'][-1] ??
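
Here weights[-1] is the bias: activate() seeds the sum with it, and update_weights() adjusts it separately after the loop over the inputs, so that part is actually consistent. The likelier culprits are sigmoid_derivative and the round() calls. neuron['output'] already holds sigmoid(activation), so applying sigmoid a second time inside the derivative distorts every gradient, and rounding outputs and deltas to 2-3 decimal places truncates small gradients to zero, which matches all three outputs hovering near 1/3. A minimal sketch of the fix, keeping the structure of the code above:

def sigmoid_derivative(output):
    # neuron['output'] is already sigmoid(activation),
    # so the derivative is simply output * (1 - output)
    return output * (1.0 - output)

# In forward_propagate, keep full precision:
#     neuron['output'] = sigmoid(activation)  # no round()
# In backward_propagate_error, likewise:
#     neuron['delta'] = errors[j] * sigmoid_derivative(neuron['output'])

With these changes the three outputs should separate toward their one-hot targets instead of all staying near 1/3.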
