The "right" problem for AI... a personal view

Last updated around 2013; I have changed my mind a little since then, moving away from modelling and more towards AI.

AI needs problems and techniques: problems to be solved, and techniques with which to solve them. I have no idea what the "right" techniques for AI might be, but I have a strong belief that trying to model infant development is the "right" problem to solve. If we try to model infant development, we force ourselves to develop techniques which can do some of the most basic things that humans do with ease but which computers cannot yet do. Furthermore, we are arguably focussing on the easiest such problems, because human competences at later ages are built on what is learnt in infancy. If we try to take short cuts to more advanced stages, we will probably do so in the wrong way, for example by coding in representations which do not make further learning and adjustment easy.

It seems that in looking at infants we can see the whole range of the hallmarks of human intelligence in their embryonic stages: experimenting with known actions in new, unknown situations, always interpreting the new results in terms of what was known previously, exploiting analogies between similar situations, adjusting and extending existing knowledge, and forming new models of how the world works. A computational model of infant development should therefore capture the essence of human intelligence.

I advocate quite a close modelling of the problems infants solve: placing a simulated infant learner in a world with similar physics to the real world, with blocks and rattles which it must manipulate in increasingly sophisticated ways, so as to recapitulate the stages of development observed in infancy. One might argue that this is unnecessary, and that we could research problems like "experimenting with known actions in new, unknown situations" in different domains. This may be so, but it runs the risk of solving the "wrong" problem; copying the infant's problem seems the "safer" approach.
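To make this concrete, here is a minimal sketch, assuming a toy Python setup with hypothetical names such as InfantWorld and Block, of the kind of simulated setting I have in mind: a crude world containing a block and a rattle, a small motor repertoire, and a learner that does nothing more than record the sensorimotor transitions produced by undirected "motor babbling". It illustrates only the shape of the problem, not a proposal for how the learning itself should work.

```python
import random
from dataclasses import dataclass


@dataclass
class Block:
    """A graspable object in the toy world; a 'rattle' is a block that makes noise."""
    position: tuple
    makes_noise: bool = False


class InfantWorld:
    """A minimal grid world with crude physics: objects stay put unless the
    agent carries them, and shaking a held rattle produces a sound."""

    ACTIONS = ["move_up", "move_down", "move_left", "move_right", "grasp", "shake"]

    def __init__(self, size=5):
        self.size = size
        self.objects = [
            Block(position=(1, 1)),                    # a plain block
            Block(position=(3, 2), makes_noise=True),  # a rattle
        ]
        self.hand = (0, 0)
        self.held = None

    def step(self, action):
        """Apply one motor action and return the resulting sensory observation."""
        moves = {"move_up": (0, 1), "move_down": (0, -1),
                 "move_left": (-1, 0), "move_right": (1, 0)}
        sound = False
        if action in moves:
            dx, dy = moves[action]
            x, y = self.hand
            self.hand = (min(max(x + dx, 0), self.size - 1),
                         min(max(y + dy, 0), self.size - 1))
            if self.held is not None:
                self.held.position = self.hand   # carried objects move with the hand
        elif action == "grasp":
            for obj in self.objects:
                if obj.position == self.hand:
                    self.held = obj
                    break
        elif action == "shake" and self.held is not None:
            sound = self.held.makes_noise
        return {"hand": self.hand, "holding": self.held is not None, "sound": sound}


# Undirected motor babbling: try random actions and remember any transition
# in which the observation changed (a crude stand-in for "interesting").
world = InfantWorld()
experience = []
obs = world.step("shake")                 # an initial null observation
for _ in range(500):
    action = random.choice(InfantWorld.ACTIONS)
    new_obs = world.step(action)
    if new_obs != obs:
        experience.append((obs, action, new_obs))
    obs = new_obs

print(f"stored {len(experience)} sensorimotor transitions")
```

The point of posing the problem in a world like this is that the later, more sophisticated manipulations can be set in the same world, so whatever is learnt at the earliest stage stays usable at the next.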

What is the "wrong" problem?

To clarify what I mean by the "right" problem, it is perhaps useful to look at what I mean by the "wrong" ones.

I believe that natural language understanding and machine translation are the "wrong" problems, because we are simply not ready for them yet. They rely on an understanding of commonsense knowledge, which begins to be acquired during infancy; we first need to understand the learning mechanism which could acquire this. That's not to say that the results of current research in these language areas will have no use in some "strong AI" system in the future; it's just that I expect we will not achieve human-level competence in language without first cracking the learning of commonsense knowledge.

Chess is the "wrong" problem because it is something humans are particularly bad at; in solving it we tend to produce systems which shed no light on core human competences, and which cannot solve the kinds of problems that humans handle with ease.

Computer vision is partly the right problem, but it should not be restricted to identifying objects (chair, aeroplane, horse) in images by matching against a modelbase. I agree with Aaron Sloman's writings that we need instead to look at identifying affordances; objects will then be constituted by a particular combination of affordances. I believe this affordance approach should be combined with a sensorimotor learner which learns how to identify affordances; the general approach is to begin by building up the components the system will use later (a rough sketch of the idea follows below). The argument for vision here is similar to that for language: we need the system to learn commonsense things for itself (language concepts, or classes of seen objects). This need to learn for itself has been well argued by Rich Sutton in his "Verification Principle".

(To do: extend with a comment on Hofstadter, or developmental AI, or peak.)
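As a rough sketch of what "objects constituted by a particular combination of affordances" might look like computationally, the Python fragment below represents each percept by the set of affordances it is found to offer, rather than by matching it against a modelbase. The affordance tests, function names and property values are all hypothetical illustrations; in a fuller system the tests themselves would be learnt by the sensorimotor learner rather than hand-coded.

```python
# Hand-coded affordance tests, standing in for tests a sensorimotor learner
# would acquire for itself by acting on objects and observing the outcomes.
def graspable(obj):
    return obj["width_cm"] < 8

def sittable(obj):
    return obj["flat_top"] and 30 < obj["height_cm"] < 60

def rollable(obj):
    return obj["round"]

AFFORDANCE_TESTS = {"graspable": graspable, "sittable": sittable, "rollable": rollable}

def affordance_signature(obj):
    """Represent an object as the combination of affordances it offers."""
    return frozenset(name for name, test in AFFORDANCE_TESTS.items() if test(obj))

# Two percepts described by crude measured properties (illustrative values only).
chair = {"width_cm": 45, "height_cm": 45, "flat_top": True, "round": False}
ball = {"width_cm": 6, "height_cm": 6, "flat_top": False, "round": True}

print("chair ->", sorted(affordance_signature(chair)))  # ['sittable']
print("ball  ->", sorted(affordance_signature(ball)))   # ['graspable', 'rollable']
```

On this view "chair" is not an entry in a modelbase but whatever offers roughly the chair-like bundle of affordances, which is what lets the learner extend the class to objects it has never been shown.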