Research Overview

Every concept we have is essentially nothing but a tightly packaged bundle of analogies.
    Hofstadter

I'm interested in how we can get human-like knowledge into computer systems. Take, for example, the human concept of "container": the actions associated with it, the spatial relationships it can participate in, and the fact that it can have many different instantiations, such as a plastic bag, a box, or a glass bowl. In particular, humans have a facility for fluid/analogical reasoning with concepts: a sheet of paper is not a container, but you can force it to be one by wrapping it around something (to get this idea for the first time, you probably recall an analogous situation in which you saw a material wrapped around something).

The two big issues I'm interested in here are:

  1. Development
    How do infants acquire/construct concepts through sensorimotor interaction, and later with language input?
    (and: How could these infant mechanisms be borrowed by a cognitive robot interacting with the world, or a program with access to information on the Internet?)
  2. Representation
    How are concepts represented?
    (encompasses issues of compositionality in a hierarchical structure, and connections to related concepts/situations/perceptions)
    Hand in hand with this goes the question "How is reasoning performed on concepts?", because a particular representation lends itself to particular types of reasoning.
Up to 2013 I was primarily interested in 1. But I began to think it is hard to tackle this without some idea of how a concept is to be represented and used (otherwise I don't know what I'm trying to get my system to acquire). I now don't believe we are ready to tackle developmental AI; we first need to solve the problem of knowledge representation. If we can get some idea of a suitable knowledge representation and the knowledge (call it B) of, say, a three-year-old, just for some domain of reasoning rather than for everything, and if we can get some idea of the same for a two-year-old (A), then we will be in a position to tackle the developmental problem: explaining how to go from A to B given some experience. I expect to spend a few more years focussing on 2 (the representation problem), then to return to 1 at some point (provided I'm around for long enough;-)

The ultimate goal is "learning for itself": a system that can go out into the world of images, video, text, or robotics, and learn for itself. It will need some sort of base knowledge to get started; how much is an open question. Human assistance is OK, to point out relevant examples or explain things, but not reprogramming. I'm not talking about "seed AI" here, going beyond humans; I'm just aiming to get some parts of human (toddler) knowledge learnt.

More on 2: Concepts

There are various existing AI efforts at capturing concepts or commonsense knowledge. I feel that these focus on tackling the vastness of human knowledge; I want to focus more on the machinery and principles, rather than the actual breadth of knowledge. In particular I want to investigate how to represent and acquire a concept. I want to focus on a more limited number of concepts, and see if I can get the representation "right". The aspect of concepts I want to focus on is the structure that facilitates analogical reasoning. I believe Minsky when he said '... commonsense is knowing maybe 30 or 60 million things about the world and having them represented so that when something happens, you can make analogies with others'. Concepts need to be multi-faceted (allowing different representations) and compositional (made of parts that are meaningful themselves).
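
To make "multi-faceted and compositional" a little more concrete, here is a minimal sketch in Python. Every name and slot below is my own illustrative assumption, not a committed design, and the analogy test is deliberately crude:

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        # A concept as a bundle of facets: parts, actions, relations, instances.
        # Toy frame representation; a real system would need richer,
        # multi-modal structure and links back to perception.
        name: str
        parts: list = field(default_factory=list)      # compositional structure
        actions: list = field(default_factory=list)    # actions it affords
        relations: list = field(default_factory=list)  # spatial relations it can join
        instances: list = field(default_factory=list)  # known instantiations

    container = Concept(
        name="container",
        parts=["boundary", "cavity", "opening"],
        actions=["put-in", "take-out", "carry"],
        relations=["inside", "contains"],
        instances=["plastic bag", "box", "glass bowl"],
    )

    def can_coerce(concept: Concept, candidate_parts: set) -> bool:
        # Crude analogy test: can the candidate supply the concept's parts?
        return set(concept.parts) <= candidate_parts

    # A flat sheet of paper lacks a cavity, but wrapping it around something
    # creates one, so the sheet can be forced into the container role.
    sheet = {"boundary"}
    wrapped_sheet = sheet | {"cavity", "opening"}
    print(can_coerce(container, sheet))          # False
    print(can_coerce(container, wrapped_sheet))  # True

The point is only that coercing something into a concept can be tested against the concept's compositional structure; getting that structure right is the machinery I want to investigate.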

A lot of people are negative about handcoding knowledge; they believe instead that systems need to learn for themselves. In the long term I agree with that, but in the short term I think it makes sense to do some handcoding as a way to explore the space, with the constraint that the resulting system must transfer very well, ideally in a human-like way.
What kind of knowledge representation could allow the kind of transferable knowledge we want?

I think that is an important question, and I would like to try some handcoding to get a feel for possible answers. If we can handcode it, we can always look into how to learn it as a second step.

I plan to investigate the representation of concepts in computer vision (for acting robots; there is already some work on this in my publications) and in text understanding, to see if it is possible to reach a level of toddler vision or toddler understanding.

More on 1: Developmental AI

I want to find out how new knowledge can build on old knowledge. It seems that a given level of knowledge can act as a basis that opens up the possibility of learning certain new knowledge; after learning some of that new knowledge, you have a new basis, opening up new learning possibilities. For example, if I demonstrate to a typical 8-month-old infant how to knock down a tower of bricks with a wooden spoon, he cannot learn to do it; it is above his level. By 11 months he may be able to learn from this demonstration, and thus learn a new behaviour which can lead to further new discoveries. What is it that develops in the intervening time that allows him to learn from this demonstration? What are the components that need to be learnt and put together, and how do they explain how a new behaviour can be learnt?

Another example comes from adults: if I pick up an advanced maths book to try to understand some new concept, I cannot understand it without sufficient background knowledge. Again there are some intermediate components which I need to learn and combine somehow, in order to give me the necessary "hooks" to understand the new concept. How are these new components added to what I already know, how are they combined, and how do they facilitate the new learning?
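
As a toy illustration of this "basis" idea (the skills and their dependencies below are invented purely for illustration), learning from a demonstration becomes possible only once all the prerequisite components are in place:

    # Illustrative prerequisite structure; the skill names are assumptions.
    prerequisites = {
        "reach": set(),
        "grasp": {"reach"},
        "hold-tool": {"grasp"},
        "swipe-with-tool": {"hold-tool"},
        "knock-down-tower": {"swipe-with-tool"},
    }

    def learnable(skill: str, known: set) -> bool:
        # A skill can be learnt from demonstration only if its basis is in place.
        return skill not in known and prerequisites[skill] <= known

    known_8mo = {"reach", "grasp"}                                   # younger basis
    known_11mo = {"reach", "grasp", "hold-tool", "swipe-with-tool"}  # richer basis

    print(learnable("knock-down-tower", known_8mo))   # False: missing the hooks
    print(learnable("knock-down-tower", known_11mo))  # True: basis now sufficient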

If we could understand this, then we could begin to understand how an infant can start with some fairly basic knowledge and eventually develop into an adult who knows a great many complicated concepts. I think the best place to study this is in human infants, because adults simply have too much knowledge: it is hard to know which bits of knowledge are being leveraged to help an adult learn something new.

During the first year infants develop increasingly sophisticated sensorimotor abilities. Very little is known about how these abilities arise, or about what prior knowledge and experiences the infant is building on. Artificial Intelligence (AI) can help us to investigate how this knowledge is built: with AI we create models of how the process might work, and test whether our hypotheses are sensible. The technique of computational modelling forces us to be precise about our theories; all the details have to be made clear. This work usually throws up new questions, because we realise we don't know enough about infant development, which in turn creates the need for new research on infants. For example, one thing that is needed from psychology is a detailed path of development, describing behaviours which can build on each other in a sequence, and the training experience necessary for development. I am currently looking for a psychologist to collaborate with to pursue this further.

My original aim was to contribute to making better AI systems; however, I feel that the best way to do this is first to understand more about the kind of learning which human infants do, and then to try to apply those learning methods back to AI.

The focus of my current computational modelling work is on means-end sequences of action, i.e. where one "means" action is performed so that a second "end" can be achieved. A typical example is removing an obstacle in front of a toy which the infant wants to grab (a minimal sketch of such chaining appears after the list below). I am interested in:

  • How the infant develops from using individual actions to chaining sequences,
  • What new things are learnt by chaining actions in different situations,
    (An intriguing possibility is Piaget's idea that relationships among objects can be learnt in concrete situations, and then these can be "lifted out" to become new "concepts" that can be recognised wherever they occur in new situations, effectively taking the perception of the world to a new level.)
  • How the knowledge acquired facilitates yet more sophisticated behaviour; for example, how could the understanding of a relationship (such as one thing being "in front of" another) be used as a component in a higher-level behaviour such as manipulating something with a stick, which appears in infants' experimentation towards the end of the first year?
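
As a minimal sketch of what such means-end chaining could look like computationally (the action format, names, and backward-chaining strategy here are all illustrative assumptions, not my actual model):

    def plan_means_end(goal, state, actions, depth=3):
        # Tiny backward-chaining sketch of means-end action selection:
        # pick an action that achieves the goal; if its preconditions don't
        # hold yet, recursively plan "means" actions for them first.
        # (Toy code: no full effect tracking, no search over orderings.)
        if goal in state:
            return []
        if depth == 0:
            return None
        for act in actions:
            if goal in act["adds"]:
                plan, reached = [], set(state)
                for pre in act["pre"]:
                    sub = plan_means_end(pre, reached, actions, depth - 1)
                    if sub is None:
                        break
                    plan += sub
                    reached |= {pre}
                else:
                    return plan + [act["name"]]
        return None

    # Toy domain: a desired toy sits behind an obstacle (names are illustrative).
    actions = [
        {"name": "push-aside(obstacle)", "pre": {"obstacle-in-front"},
         "adds": {"path-clear"}},
        {"name": "grasp(toy)", "pre": {"path-clear"}, "adds": {"holding-toy"}},
    ]
    print(plan_means_end("holding-toy", {"obstacle-in-front"}, actions))
    # -> ['push-aside(obstacle)', 'grasp(toy)']

The developmental question is then what has to be learnt before such chaining becomes available at all, and what new knowledge the chained structure itself provides.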

The goal is to build a simulated system that exhibits behavioural development similar to an infant's, and that can develop to a stage of behaviour comparable to an infant at about 12 months, i.e. exhibiting the same sophistication in its exploration of and play with objects. I believe that if we can do this, we will likely have discovered a radically new approach to learning and representation, one which can be applied in practical AI systems.

Destinations of past PhDs:

Witold Slowinski: Google, Zurich
John Alexander: Arria
Severin Fichtl: Aeolus Robotics
Paulo Abelha: soon to be at Birmingham