Neural networks
thehyperbit, 2021-05-05 18:36:47

Interpretability issue?

Hello, I am writing a scientific paper on human and artificial consciousness, with an emphasis on the biological side. Many sources describe the so-called "interpretability problem"; I quote: "After training a neural network, we have practically no way to determine on what basis the network makes a particular decision (let alone influence that decision)." It is often said that a neural network is a black box. Is that true? Is it true that there is no way to determine how any neuron, weight, or bias affects the final result? Maybe there is some algorithm, even at the concept stage, or some paper that solves this problem?
Thanks in advance for your time!

2 answers
Vasily Bannikov, 2021-05-05
@vabka

Is it true that there is no way to determine how any neuron, weight or bias affects the final result?

That part is not the problem: all the weights are known. The real problem is understanding why the weights are the way they are.
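To illustrate the point, here is a minimal sketch (assuming PyTorch; the model is a hypothetical stand-in for any trained network) showing that every weight and bias can be read out directly. The numbers themselves are fully accessible; what they mean for a given prediction is the actual interpretability problem.

```python
import torch.nn as nn

# Stand-in for any trained network (weights here are just random init).
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Every parameter tensor is directly inspectable...
for name, param in model.named_parameters():
    print(name, tuple(param.shape))   # e.g. "0.weight (128, 784)"
    # param.data holds the raw numbers; nothing is hidden,
    # but nothing here explains *why* a particular input is classified as it is.
```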

kamenyuga, 2021-05-06
@kamenyuga

Yes, it is true. A neural network is difficult to interpret because it is complex. Even in simple cases a network consists of several layers of tens or hundreds of neurons each, with every layer depending on the previous one, and on top of that activation functions and dropout are applied between the layers. One of the more recent approaches to interpretation is LRP (layer-wise relevance propagation), so google something like "neural network interpretation with LRP method" (a sketch of the idea is below). Expect all the serious sources to be in English and full of heavy math.
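For orientation, here is a minimal sketch of the epsilon rule used in LRP, applied to a toy fully connected ReLU network with hypothetical weights (NumPy assumed). Real use would rely on an existing implementation (e.g. Captum's LRP or the iNNvestigate library) and layer-specific rules, but the core idea is this backward redistribution of relevance:

```python
import numpy as np

def lrp_epsilon(weights, biases, activations, relevance_out, eps=1e-6):
    """One epsilon-rule LRP step through a single linear layer.

    weights       : (n_out, n_in) weight matrix
    biases        : (n_out,) bias vector
    activations   : (n_in,) inputs recorded during the forward pass
    relevance_out : (n_out,) relevance assigned to the layer's outputs
    returns       : (n_in,) relevance redistributed to the inputs
    """
    z = weights @ activations + biases            # pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)     # stabiliser against division by ~0
    s = relevance_out / z                         # normalised output relevance
    return activations * (weights.T @ s)          # redistribute to inputs

# Toy 2-layer ReLU network with hypothetical weights.
W1, b1 = np.random.randn(4, 3), np.zeros(4)
W2, b2 = np.random.randn(2, 4), np.zeros(2)

x  = np.array([1.0, -0.5, 2.0])
a1 = np.maximum(0, W1 @ x + b1)   # hidden activations
y  = W2 @ a1 + b2                 # output scores

# Start relevance from the winning output neuron and propagate backwards.
R2 = np.where(np.arange(2) == y.argmax(), y, 0.0)
R1 = lrp_epsilon(W2, b2, a1, R2)
R0 = lrp_epsilon(W1, b1, x,  R1)

print(R0)   # per-input-feature relevance: which inputs drove the decision
```

The result is a relevance score per input feature (for images, a heatmap over pixels), which is exactly the kind of "why did the network decide this" answer the question is asking about.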
