data mining
ks_ks, 2013-05-15 10:37:11

What stops you in the process of creating AI?

I'm interested in how many people have tried to build something intelligent: something that thinks logically the way we do and solves problems on an arbitrarily given topic.

By "artificial intelligence", please understand the following:
- We can set the system arbitrary tasks ("perform such-and-such")
- Interaction takes place in natural language (or something as close to it as possible)
- It begins performing a task only once the task is clear to it
- If a task is unclear, the system asks us for clarification
- We can be sure that this clarification process is finite and will not drag on for years
- Tasks are carried out with quality comparable to a human's (preferably better)
- Tasks are completed in reasonable time (a day, two, five...)

Why did you stop the implementation, and do you plan to continue?
What do you personally lack to implement "strong AI"?
What area did you work in, and which one are you planning next?
When do you expect to take the next step?

Here is where my question comes from:
- Coming up with ideas of what could be done is the easy part.
- But when I realize how much time the next idea would take me personally to implement, I understand that I won't even start, because my enthusiasm will fade long before the first results that could reward it.
- There is no opportunity for thinking the task through jointly (in my personal experience, thinking together rather than alone helps a great deal in building a clear picture of what we want).
- And if you put everything on a commercial footing, a year's work for 30-50 developers costs about a million dollars. A team of that size, I believe, is capable of creating a major project that would be not only scientifically interesting but also commercially successful.

1. Accordingly, what stops me at the very start of implementation is the lack of a couple of spare million dollars.
2. The absence of a person, or group of people, who, unlike me, find it easier to implement ideas than to come up with new ones.

That's about it, really. I'd love to hear what anyone thinks about this.


18 answers
lightcaster, 2013-05-17
@lightcaster

Let's write an AI. Let it chat with people and do something useful. As much as possible, in fact: 60 years have passed, and AI still isn't here.
1) So, let's start with language. Let it listen, understand, and respond.
Here we have some text. We break it into words. We handle the endings and so on: the morphology module is ready, and that wasn't hard. Next we need to extract structure from the text somehow. Also not a problem: context-free grammars to the rescue, and the syntax module is ready. So far so good: we've sorted out morphology and built a tree of word relations. But what next? Ideally, the AI should understand the text. This is where the dancing with tambourines begins: nobody really understands what "understand" means :).
a) Option one: logic to the rescue. We don't need "understanding" at all; the main thing is that everything is clear-cut:
All people are mortal
Socrates is a man
Therefore, Socrates is mortal
Modus ponens is all you need, and Prolog knows how to handle it. We extract SUBJ VB OBJ constructions from the text (the syntax tree), stuff them into a predicate VB(SUBJ, OBJ), and we're done.
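That logic route is easy to sketch outside Prolog too. A toy forward-chaining step in Python; the triple encoding and the single hard-coded rule here are purely illustrative, not how a real Prolog engine works:

```python
# Toy forward chaining over VB(SUBJ, OBJ)-style facts.
# Facts are triples; a rule derives a new fact from a matched pattern.

facts = {("is_a", "socrates", "man")}

def apply_rules(facts):
    """One modus ponens step: is_a(X, man) implies is_mortal(X)."""
    derived = set(facts)
    for (pred, subj, obj) in facts:
        if pred == "is_a" and obj == "man":
            derived.add(("is_mortal", subj, None))
    return derived

closure = apply_rules(facts)
print(("is_mortal", "socrates", None) in closure)  # True
```

A real system would loop `apply_rules` to a fixed point over many rules; as the answer goes on to show, the trouble starts with sentences that don't fit the triple mold at all.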
But then, suddenly:
> "Socrates was a man."
Hmm... is that tense? We don't handle tense. Well, no big deal, we'll invent some kind of temporal logic.
> "In my opinion, Socrates is a man"
What's this, a modality? Where did it come from? Now we somehow have to handle the degree of certainty of the facts someone supplies us...
So what, do we carve out a separate logic for every nuance of the language? And then somehow combine them all? And even then inference will be undecidable, and the AI will hang on the phrase "hello world". No, this is too complicated. We need something else.
We could of course turn to the linguists. But they can break spears over a single simple construction for a decade, and here we need to handle the whole language at once. That won't fly.
Actually, this approach does work, but only to a very limited extent. There have been attempts, and they failed. Google "Terry Winograd" and his program SHRDLU. It's also worth googling Montague grammar.
b) To hell with logic. Let's build a graph that can describe any situation.
So, to build the graph we need to define some concepts. Best of all, we describe terms and specify how they interact with one another. Clearly the graph will turn out large, but if we try hard, we'll manage. Right?
No, not right. Google the Cyc project. Work on it started in 1984, before most participants in this discussion were born. So where are the results? They exist, and they're quite mediocre:
- the graph turns out to be huge
- the relations between terms don't really want to line up into a neat graph
- polysemy
- adding knowledge is hard
See also frame languages and other knowledge representations.
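The graph idea itself is trivial to start; the problems listed above appear as soon as the toy grows. A minimal is-a hierarchy with transitive lookup in Python, with all concepts invented for illustration:

```python
# A tiny is-a concept graph with a transitive "is this a kind of that?" query.
ISA = {
    "socrates": {"man"},
    "man": {"mammal"},
    "mammal": {"animal"},
    "bank": {"institution", "riverside"},  # polysemy already breaks the neat tree
}

def is_kind_of(concept, target, graph=ISA):
    """Depth-first walk up the is-a links."""
    seen, stack = set(), [concept]
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

print(is_kind_of("socrates", "animal"))  # True
```

Note how "bank" already needs two unrelated parents; Cyc hit this kind of mess at the scale of millions of assertions.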
So it looks like we've failed here too. What's next?
2) Let's come at this from a completely different angle: we'll write our own programming language. Surely things will be easier with it.
But here's the catch: whatever language we write, it will be equivalent to (or weaker than) a Turing machine, like any Turing-complete computing system. Yes, even Brainfuck.
3) Maybe stuff it all into a NEURAL NETWORK and train it with a GENETIC ALGORITHM? Sounds tempting, but translated into mathematical language we are simply doing optimization, trying to fit some probability distribution. Machine learning in general is quite down-to-earth: we solve narrow, specific tasks by building classifiers or regressions. Not much like strong AI, and the mathematics doesn't look very sexy :) gradient descents, Hessians, entropies.
Yet oddly enough, this is exactly the area serious scientists work on. And there are plenty of problems here. Models without hidden state are too naive; models with hidden state are hard to train. And yes, you need computing power. But this is the most constructive approach, and it already yields interesting results.
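The "it's just optimization" point is easy to make concrete. A minimal sketch in Python: logistic regression fitted by plain gradient descent on a toy dataset (the data, learning rate, and iteration count are made up for illustration):

```python
import math

# Toy data: label is 1 when x > 2, else 0; we fit w, b by gradient descent.
data = [(0.0, 0), (1.0, 0), (3.0, 1), (4.0, 1)]
w, b, lr = 0.0, 0.0, 0.5

for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
        gw += (p - y) * x                          # gradient of the log-loss
        gb += (p - y)
    w -= lr * gw / len(data)                       # descend along the gradient
    b -= lr * gb / len(data)

predict = lambda x: 1 if w * x + b > 0 else 0
print([predict(x) for x, _ in data])  # [0, 0, 1, 1]
```

No understanding anywhere: just a loss function and a direction to move the parameters, which is exactly why this approach, unlike 1) and 2), at least always produces something measurable.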
Forgive the sarcasm; apparently I run into questions like this too often. I hope I've answered. To dive into the field, I suggest writing a simple POS tagger. It's the very first step in computational linguistics, but it lets you feel the complexity of the problem.
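In the spirit of that suggestion, a first POS tagger can be as simple as assigning each word its most frequent tag from a hand-labeled corpus. A Python sketch (the corpus and tagset here are invented for illustration):

```python
from collections import Counter, defaultdict

# Tiny hand-tagged corpus: (word, tag) pairs.
corpus = [
    ("the", "DET"), ("cat", "NOUN"), ("sat", "VERB"),
    ("the", "DET"), ("dog", "NOUN"), ("ran", "VERB"),
    ("a", "DET"), ("cat", "NOUN"), ("ran", "VERB"),
]

counts = defaultdict(Counter)
for word, tag in corpus:
    counts[word][tag] += 1

def tag(sentence, default="NOUN"):
    # Unknown words fall back to NOUN, the usual open-class default.
    return [counts[w].most_common(1)[0][0] if w in counts else default
            for w in sentence.split()]

print(tag("the dog sat"))  # ['DET', 'NOUN', 'VERB']
```

This unigram baseline already scores surprisingly well on real corpora, and every attempt to push it higher (context, unknown words, ambiguity) runs straight into the difficulties the answer describes.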

ProstoTyoma, 2013-05-15
@ProstoTyoma

It seems to me the problem is that the spec for an AI is always written in the same style as yours. With a spec like that the problem cannot be solved, and it is not at all certain that a more detailed one can be written.

lesobrod, 2013-05-15
@lesobrod

To be honest, I have no experience creating AI. But I have long been interested in human intellect and consciousness.
(From a materialistic point of view, of course.)
Two approaches have made a big impression on me:
1. Roger Penrose tries to prove (see, for example, "Shadows of the Mind") that our consciousness is not computable and cannot be modeled, especially when it comes to mathematical thinking.
2. Giulio Tononi put forward the "Integrated Information" hypothesis, according to which any system (including a non-living one) capable of generating information over and above that generated by its individual parts can have consciousness.
It seems to me that anyone who decides to delve into AI should familiarize themselves with these materials.

kzn, 2013-05-17
@kzn

The plain ambiguity of natural language gets in the way, for one thing. No modern method reaches human-level accuracy.
The POS tagger stumbles on syntax.
Syntax stumbles on semantics.
And with semantics, nothing is clear at all.

lesobrod, 2013-05-15
@lesobrod

Excuse me (for God's sake, in whom I don't believe), but when it comes to seeking funding for projects, I always remember this short but killer story by Arkady Averchenko.
The topic of AI really is close to me, so here is one more thought, closer to the topic of funding (o_O).
My not very long (2 years) but real experience studying neuroscience materials suggests:
you are 90 to 10 more likely to receive a grant or other support for work on modeling and studying human consciousness than for creating yet another standalone AI model.

Biga, 2013-05-15
@Biga

In what language should the task be set for the AI: formal or human?
If formal, then that language will be almost a programming language, and you will very quickly stop calling the system artificial intelligence.
If human, then, just for a second, the AI is not human. It will not catch all the meanings you put into a natural-language message. The people working on ontologies are now trying to crack this problem (I don't know what it's properly called); maybe they will succeed.
The bottom line is that we ourselves don't know what we need. And once we do know, we will no longer need AI.

Sergey Galkin, 2013-05-18
@Larrikin

The only thing stopping me is the lack of a computing cluster to run the project on. It could even be diskless: just memory, processors, and network access.

Eddy_Em, 2013-05-15
@Eddy_Em

Science simply hasn't developed far enough yet. That's all.
Once there is a breakthrough in medicine, it will be possible to try writing an AI...

palexisru, 2013-05-16
@palexisru

I propose evaluating Tunnel Modeling, from habrahabr.ru/post/176391/, as a modeling tool for AI.
But contexts matter, of course: "bind" in programming, for example, means something different than it does for sea knots. You need an AI for each context, plus a system for translating between contexts using full transcriptions into a single global language. For each term, the expert system should ask which of the senses listed in the dictionary (encyclopedic entry) is meant.

palexisru, 2013-05-16
@palexisru

Something like this: "Between Love for the Many-Unity and the Many-Unity of Love"
V.E. Voitsekhovich. Love as the One
S.A. Borchikov. The Organic Logic of Love
and further speculations in the same vein.
I think this was an emotional reaction by the editorial board to the inclusion of my thoroughly analytical article.
Don't pay too much attention to it. The authors of those articles are successful philosophers; many of them have university degrees in mathematics.
Did you think nobody would ever ask your AI system a question about "love for the many-unity and the many-unity of love"? You were wrong!
By the way, for analysis I can offer my abstract of an article by the late director of the Institute of Artificial Intelligence, A.S. Narinyani, "Between Knowing and Not Knowing: Naive Topography 2": integral-community.ru/forum/viewtopic.php?f=17&t=49

lilek, 2013-05-22
@lilek

From a materialistic point of view, the problem of implementing AI is as follows.
Thinking is the process of interaction between neurons in the associative areas of the brain. Modeling even a single neuron consumes significant computing resources, and we need to simulate the interaction of a huge number of them. From there, development can go two ways: either keep increasing raw performance until we can run a simulated brain, or develop new hardware whose structural units are not transistors but real neurons (or nodes that fully reproduce their behavior).
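For a sense of the per-neuron cost, here is a minimal leaky integrate-and-fire model in Python; the time step, threshold, and input current are arbitrary illustrative values. Even this crude model takes thousands of arithmetic operations per simulated second per neuron, and biologically detailed models cost far more:

```python
# Leaky integrate-and-fire neuron: the membrane potential v decays toward
# rest while integrating input current; crossing threshold fires a spike
# and resets the potential.

def simulate(i_input=1.6, t_total=1.0, dt=0.001,
             tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = v_rest, 0
    for _ in range(int(t_total / dt)):
        v += (-(v - v_rest) + i_input) * dt / tau  # Euler integration step
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

print(simulate())  # a few dozen spikes in one simulated second
```

One simulated second of one such neuron is a thousand update steps; scale that to tens of billions of neurons with realistic dynamics and the hardware argument above becomes clear.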
In other words, what stops me is that at the moment there is no hardware base sufficient for implementing AI. (Yes, I have studied a great deal of material on AI implementation and on human thinking and brain physiology in general, and I have even tried to build something.)

lilek, 2013-05-22
@lilek

I tried to implement narrowly focused AI for specific problems: word sense disambiguation, for example, and an analytical system for exchange trading. The systems produced acceptable output. Transistors are completely unsuitable for implementing AI, because they have only two states, open/closed. A neuron, as a device, is much more complex: it has many different "states". Read the Wikipedia article; it's all there. The very logic of interaction is on a completely different level.
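Word sense disambiguation of the kind mentioned can be sketched with the classic Lesk idea: pick the sense whose dictionary gloss shares the most words with the surrounding context. A Python sketch, with a mini-dictionary invented for illustration:

```python
# Simplified Lesk: choose the sense whose gloss overlaps most with the context.
SENSES = {
    "bank": {
        "finance": "an institution that accepts deposits and lends money",
        "river": "the sloping land beside a body of water",
    },
}

def disambiguate(word, context):
    ctx = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(ctx & set(gloss.split()))  # shared-word count
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

print(disambiguate("bank", "he sat on the land beside the water"))  # 'river'
```

Real WSD systems replace the word-overlap count with learned sense embeddings, but the shape of the problem is the same.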
Why model the brain at all? Because the brain is a working sample of full-fledged AI: it can perceive information, remember it (learn), and later, in similar situations, analyze it and produce solutions. Isn't that exactly what you need AI for? Or am I wrong?

ks_ks, 2013-05-16
@ks_ks

"Well, I'll have to wait a few years" - why so pessimistic? =)

ks_ks, 2013-05-22
@ks_ks

"Analytical system for exchange trading": on which platform (and if it did real-time analysis, did you use its API to execute trades)? What analysis methods did you use? Why did you decide to stop?
"The neuron, as a device, is much more complex." Well, that isn't equivalent to saying that in tasks requiring analytics it will surpass a PC built on transistors. Or does it have other properties that unambiguously support that claim?
If you mean this article, I don't see a word about many states in it. What I do see is this: "at rest (inactive state) and in the state of discharges (active state)", which corresponds roughly to your statement about the transistor: "open/closed".

Sosiska, 2013-11-30
@Sosiska

It's not clear why you need the developers. In my opinion, Captain Obvious answered the question best of all.) You can draw an analogy with robotics: people have managed to make an artificial arm. The quality of the result depends on understanding how the original works and on the maturity of the technologies used to implement it.

Grubergen, 2014-04-04
@Grubergen

What stops me is realizing just how much knowledge the AI would need to be taught, right up to the point where you could let it learn things on its own. Like-minded people could be found, but it's useless to show anyone a program that understands nothing, let alone to declare that it is artificial intelligence...

andyN, 2014-05-08
@andyN

There are really only two difficulties: computing power (try implementing a neural network with at least a hundred million neurons in hardware) and training (it cannot be fully automated, and the resources required grow in proportion to the amount of data).

Ierihon2014, 2014-06-04
@Ierihon2014

What if we make a system that remembers everything we say to it, constantly accumulating the questions and answers?
