C++ / C#
Kojanseed, 2016-08-12 23:09:07

How are neural networks different?

Hello.
I decided to start studying neural networks.
Created my first perceptron.
In my understanding, a perceptron is suited to simple tasks, like recognizing a letter.
I decided to read further about convolutional networks and deep learning, but I don't quite understand them.
Can you explain to me (preferably with an example) how a convolutional network differs from deep learning?
As I understand it, a convolutional network lets you optimize the neural network, i.e. it first classifies the object roughly (a square), then in more detail (a traffic sign), and finally pins it down exactly (a "no entry" sign).
And deep learning, as I understand it, determines on its own how many layers to use and what the features will be, i.e. it works out by itself how to find a road sign and identify it.
Point out my mistakes, and suggest articles/literature on the topic. With a convolutional network (as I understand it above) it is more or less clear how to implement it, but with deep learning (as I understand it above) it is not at all clear how it could even be implemented.
C++: What libraries would you recommend for image analysis, and which methods (contours or gradients) are suitable for training neural networks?

3 answers
Alexander Kislinsky, 2016-08-13
@Luonic

A perceptron lets you approximate a function that can classify data or predict its unknown features, learning from data for which the desired features are already known.
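Roughly, in code that idea looks like this: a minimal single-neuron sketch in plain C++ with a made-up toy dataset (not any library's API), which learns logical AND by repeatedly correcting its error.

```cpp
// Minimal single-neuron perceptron sketch (illustration only, not a library API).
// It learns weights for a binary classifier from labelled 2-D points.
#include <cstdio>
#include <vector>

int main() {
    // Toy training set: (x1, x2) -> label 0 or 1 (hypothetical data: logical AND).
    std::vector<std::vector<double>> x = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    std::vector<int> y = {0, 0, 0, 1};

    double w[2] = {0.0, 0.0}, bias = 0.0;
    const double lr = 0.1;                          // learning rate

    for (int epoch = 0; epoch < 20; ++epoch) {
        for (size_t i = 0; i < x.size(); ++i) {
            double sum = w[0] * x[i][0] + w[1] * x[i][1] + bias;
            int out = sum > 0 ? 1 : 0;              // step activation
            int err = y[i] - out;                   // the error to correct
            // Perceptron rule: nudge the weights to reduce the error.
            w[0] += lr * err * x[i][0];
            w[1] += lr * err * x[i][1];
            bias += lr * err;
        }
    }
    std::printf("w = (%f, %f), bias = %f\n", w[0], w[1], bias);
    return 0;
}
```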
A convolutional layer in a neural network is simply a layer that slides small filters over the input to produce feature maps (in English-language literature and lectures these are called "features"), and pooling layers then reduce the dimension of those maps. Convolutions are not the opposite of deep neural networks: deep neural networks are just networks with more layers than a perceptron, that's all. For image classification a few basic layer types are mainly used: convolutional, max pooling, ReLU (Rectified Linear Unit), and, as the last few layers for the classification itself, fully connected layers as in a perceptron, with the number of outputs equal to the number of classes.
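To make the layer types concrete, here is a rough sketch of one convolution, then ReLU, then 2x2 max pooling over a tiny grayscale image; the image, kernel values and sizes are arbitrary toy choices, not taken from any real network.

```cpp
// Sketch of conv -> ReLU -> 2x2 max pooling on a tiny image (illustration only).
#include <algorithm>
#include <cstdio>
#include <vector>

using Map = std::vector<std::vector<double>>;

int main() {
    Map img(6, std::vector<double>(6, 0.0));        // 6x6 toy "image"
    for (int r = 0; r < 6; ++r) img[r][3] = 1.0;    // a vertical bright line

    // 3x3 kernel that responds to vertical edges (hypothetical values).
    double k[3][3] = {{-1, 0, 1}, {-1, 0, 1}, {-1, 0, 1}};

    // Convolution without padding: the 6x6 input gives a 4x4 feature map.
    Map conv(4, std::vector<double>(4, 0.0));
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    conv[r][c] += img[r + i][c + j] * k[i][j];

    // ReLU: keep positive responses, zero out the rest.
    for (auto& row : conv)
        for (auto& v : row) v = std::max(0.0, v);

    // 2x2 max pooling: reduces the 4x4 map to 2x2.
    Map pooled(2, std::vector<double>(2, 0.0));
    for (int r = 0; r < 2; ++r)
        for (int c = 0; c < 2; ++c)
            pooled[r][c] = std::max({conv[2*r][2*c], conv[2*r][2*c+1],
                                     conv[2*r+1][2*c], conv[2*r+1][2*c+1]});

    for (auto& row : pooled) {
        for (double v : row) std::printf("%5.1f ", v);
        std::printf("\n");
    }
    return 0;
}
```

In a real network the fully connected layers at the end would take such pooled maps as input and produce one output per class.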
No, the number of layers, their sizes, and the size of the network's input cannot be determined automatically. These parameters are called hyperparameters. There are methods for selecting them, but they mostly rely on experience and intuition.
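If you do try to automate the selection, it usually comes down to an outer loop over candidate values, retraining and validating the network for each combination. A naive grid-search sketch follows; the candidate ranges and the trainAndValidate placeholder are hypothetical, standing in for a real training run.

```cpp
// Sketch of a naive grid search over two hyperparameters (illustration only).
// trainAndValidate() is a hypothetical placeholder for "train the network and
// return its validation accuracy"; it is not a real library call.
#include <cstdio>
#include <initializer_list>

double trainAndValidate(int numLayers, int layerSize) {
    // Placeholder: a real implementation would build, train and validate a network.
    return 1.0 / (1.0 + numLayers * 0.1) + layerSize * 0.001;  // fake score
}

int main() {
    int bestLayers = 0, bestSize = 0;
    double bestScore = -1.0;

    for (int layers : {2, 3, 4}) {                // candidate depths (arbitrary)
        for (int size : {64, 128, 256}) {         // candidate layer widths (arbitrary)
            double score = trainAndValidate(layers, size);
            if (score > bestScore) {
                bestScore = score;
                bestLayers = layers;
                bestSize = size;
            }
        }
    }
    std::printf("best: %d layers of size %d (score %.3f)\n",
                bestLayers, bestSize, bestScore);
    return 0;
}
```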
There are several main frameworks for working with networks: Caffe (perhaps the most common), Torch, Theano, and TensorFlow. There is also CNTK, whose appeal is that it can use 4 GPUs at once on a single machine. Most of them work with Python; among data scientists, scripting languages or MATLAB are the more common way to drive neural networks.
As advice on how to study this difficult topic, I'll say the following: don't reach for code and practice until you have a clear enough theoretical understanding of how everything works. Every video and every article needs to be picked apart word by word so that everything unclear gets chewed through. If something is unclear, google it, read, figure it out, and come back to the article. I recommend starting with video lectures on YouTube: it's easier to grasp the principles of how the layers work there without burying yourself in mathematics, since the mathematics will make no sense until the basics are clear.
Here are some links:
scs.ryerson.ca/~aharley/vis/conv - great interactive demo of a convolutional network for digit recognition trained on MNIST
https://www.youtube.com/watch?v=2aF_yhVtlH0 - a great video to start with
https://www.youtube.com/watch?v=VhmE_UXDOGs
https://youtu.be/CLSy5WlaWKc - a bit boring but rewarding
https://www.youtube.com/watch?v=ByjaPdWXKJ4&index=... - super interesting and informative, but after understanding the basics

Deerenaros, 2016-08-13
@Deerenaros

Neural networks are a rather old, purely mathematical field that has recently received a very powerful kick in the ass thanks to greatly increased computing performance. However, the material on it is very uneven, with a lot of misinformation, and it is quite difficult to make sense of, especially considering that it is poorly systematized and the industry is developing very quickly.
In short, everything is based on the simple idea of finding an optimum; from the point of view of mathematics, almost any problem is solved this way. Classification rests on something as simple as an error that can be corrected. By correcting it cumulatively, a hundred thousand times over various examples of the training set, you can get something working. Everything else is purely technical issues that come up once you get to implementing all of this.
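A toy illustration of that "correct the error many times" idea: fitting a single weight by repeatedly correcting a squared error over the training examples (the data and step size below are arbitrary).

```cpp
// Toy gradient descent: repeatedly correct one weight to reduce a squared error.
// Everything here (data, learning rate) is made up for illustration.
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    // "Training set": pairs (x, y) generated by y = 3x, which the model must discover.
    std::vector<std::pair<double, double>> data = {{1, 3}, {2, 6}, {3, 9}, {4, 12}};

    double w = 0.0;                 // the single parameter being optimised
    const double lr = 0.01;         // learning rate

    for (int step = 0; step < 1000; ++step) {
        for (const auto& p : data) {
            double err = w * p.first - p.second;   // how wrong the current model is
            w -= lr * err * p.first;               // gradient of 0.5*err^2 w.r.t. w is err*x
        }
    }
    std::printf("learned w = %f (true value 3)\n", w);
    return 0;
}
```

After enough passes the weight settles near 3; scaled up to millions of weights, that is essentially what training a network amounts to.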
So what do we have? A mathematical apparatus. A goal. And the means. That looks like progress. But what is missing? A proper classification of the field, a large body of good material, a long and stable teaching tradition. Still, there do seem to be some beautiful materials out there.
Well, in general, there is an excellent collection of questions which you can try to answer. Asking the right question is half the answer.

xmoonlight, 2016-08-13
@xmoonlight


Here is a good selection of neural network material (text, formulas, principles): here
Video lectures: here
[...] propagation) or as control threshold blockers (for a given neuron's weight, back propagation is blocked).
In fact, this is an analogue of an electrical circuit with cascades of transistors and thyristors.
