Every concept we have is essentially nothing but a tightly packaged bundle of analogies.
I'm interested in how we can get human-like knowledge into computer systems. Take, for example, the human concept of "container": the actions associated with it, the spatial relationships it can participate in, and the fact that it has many different instantiations, such as a plastic bag, a box, or a glass bowl. In particular, humans have a facility for fluid, analogical reasoning with concepts: a sheet of paper is not a container, but you can force it to be one by wrapping it around something (to get this idea for the first time, you probably recall an analogous situation where you saw a material wrapped around something).
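One naive way to make this concrete is to treat a concept as a bundle of actions, relations, and known instantiations, and to let an unfamiliar object be coerced into the concept's role when one of its affordances can realise a required relation (paper can wrap *around* something). This is only an illustrative toy sketch; all names and structures here are my own invention, not a representation proposed in this document.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A concept as a bundle of actions, relations, and known instantiations."""
    name: str
    actions: set = field(default_factory=set)          # actions it supports
    relations: set = field(default_factory=set)        # spatial relations it enters into
    instantiations: set = field(default_factory=set)   # known examples

container = Concept(
    name="container",
    actions={"put-in", "take-out", "carry"},
    relations={"inside", "around"},
    instantiations={"plastic bag", "box", "glass bowl"},
)

# A crude stand-in for analogical coercion: an object that is not a known
# instantiation may still play the role if one of its affordances realises
# one of the concept's relations (wrapping paper realises "around").
affordances = {"sheet of paper": {"around"}}  # hypothetical affordance table

def can_serve_as(obj: str, concept: Concept) -> bool:
    if obj in concept.instantiations:
        return True
    return bool(affordances.get(obj, set()) & concept.relations)

print(can_serve_as("glass bowl", container))      # True: known instantiation
print(can_serve_as("sheet of paper", container))  # True: coerced via "around"
print(can_serve_as("fork", container))            # False
```

Real analogical reasoning is of course far richer than a set intersection; the sketch only shows the flavour of "forcing" an object into a conceptual role.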
The two big issues I'm interested in here are:
1. Developmental AI: how new knowledge can build on old knowledge.
2. Concepts: how human-like concepts can be represented and learnt.
The ultimate goal is "learning for itself": a system that can go out into the world of images or video or text or robotics, and learn for itself. It will need some sort of base knowledge to help it get started; how much is an open question. Human assistance is okay, to point out relevant examples or explain things, but not reprogramming. I'm not talking about "seed AI" here that goes beyond humans; I'm just aiming to get some parts of human (toddler) knowledge learnt.
More on 2: Concepts

There are various existing AI efforts at capturing concepts or commonsense knowledge.
A lot of people are sceptical of handcoding knowledge; they believe instead that systems need to learn for themselves. In the long term I agree with that, but in the short term I think it makes sense to do some handcoding as a way to explore the space, with the constraint that the resulting system must transfer very well, ideally in a human-like way.
I plan to investigate the representation of concepts in computer vision (for acting robots; some work on this already appears in my publications) and in text understanding, to see whether it's possible to reach a level of toddler vision or toddler understanding.
More on 1: Developmental AI

I want to find out how new knowledge can build on old knowledge. It seems that some level of knowledge can act as a basis which gives you access to the possibility of learning some new knowledge; after learning some bits of new knowledge you have a new basis, opening up new learning possibilities. For example, if I demonstrate to a typical 8-month-old infant how to knock down a tower of bricks with a wooden spoon, he cannot learn to do it; it is above his level. By 11 months he may be able to learn from this demonstration, and thus acquire a new behaviour which can lead to further new discoveries. What is it that develops in the intervening time that allows him to learn from this demonstration? What are the components that need to be learnt and put together, and how do they explain how a new behaviour can be learnt?

Another example comes from adults: if I pick up an advanced maths book to try to understand some new concept, I cannot understand it without sufficient background knowledge. Again there are some intermediate components which I need to learn and combine somehow, in order to give me the necessary "hooks" to understand the new concept. How are these new components added to what I already know, how are they combined, and how do they facilitate the new learning?
If we could understand this, then we could begin to understand how an infant can start with some pretty basic knowledge and eventually develop into an adult with knowledge of a great many complicated concepts. I think the best place to study this is in human infants, because adults simply have too much knowledge: it's hard to know which bits of knowledge are being leveraged when an adult learns something new.
During the first year, infants develop increasingly sophisticated sensorimotor abilities. Very little is known about how these abilities arise, and what prior knowledge and experience the infant is building on. Artificial Intelligence (AI) can help us to investigate how this knowledge is built: with AI we try to create models of how the process might work, and test whether our hypotheses are sensible. The technique of computational modelling forces us to be precise about our theories; all the details have to be made clear. This work usually throws up new questions, because we realise we don't know enough about infant development, which in turn leads to the need for new research on infants. For example, one thing that is needed from psychology is a detailed path of development, describing behaviours which can build on each other in a sequence, and the training experience necessary for development. I am currently looking for a psychologist to collaborate with to pursue this further.
My original aim was to contribute to making better AI systems; however, I feel that the best way to do this is by first understanding more about the kind of learning which human infants do, and then applying those learning methods back to AI.
The focus of my current computational modelling work is on means-end sequences of action, i.e. where one "means" action is performed so that a second "end" can be achieved. A typical example is removing an obstacle in front of a toy which the infant wants to grab. I am interested in how such means-end behaviour is first learnt, and how it develops.
The goal is to build a simulated system that exhibits behavioural development similar to an infant's, and is able to develop to a stage of behaviour comparable to an infant at about 12 months, i.e. to exhibit the same sophistication in its exploration of and play with objects. I believe that if we can do this, then we will likely have discovered a radically new approach to learning and representation, which can be applied in practical AI systems.
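The obstacle-and-toy example above can be sketched as a minimal two-step plan, where the "means" action establishes the precondition of the "end" action. This is a toy illustration of the structure of a means-end sequence, not the actual model described here; the state keys and action names are my own placeholders.

```python
# Toy means-end sequence: the "means" action clears the
# precondition that the "end" action requires.

def remove_obstacle(state: dict) -> dict:
    """Means: clear the path to the toy."""
    state = dict(state)
    state["path_clear"] = True
    return state

def grab_toy(state: dict) -> dict:
    """End: only possible once the path is clear."""
    if not state.get("path_clear"):
        raise RuntimeError("obstacle in the way")
    state = dict(state)
    state["holding_toy"] = True
    return state

state = {"path_clear": False, "holding_toy": False}
state = remove_obstacle(state)   # means
state = grab_toy(state)          # end
print(state["holding_toy"])      # True
```

The developmental question, of course, is how such a sequence comes to be discovered and chained by the infant in the first place, rather than being hand-specified as it is here.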