Driven by instinct

Style

30 Nov 2014

I first discovered Moravec’s Paradox when watching Brief Encounter on television. I was watching the film with my daughter, who was then about seven; she had wandered in and begun watching it instead of persevering with her 12-times table.

Very quickly something struck me about my daughter’s reaction to the film: she understood it perfectly. Take the scene (which, incidentally, gave Billy Wilder the idea for The Apartment) when Alec borrows his friend’s flat for his assignation and the friend notices Laura’s scarf on a chair… My daughter immediately understood why Alec felt embarrassed. She understood the need for subterfuge when the couple were spotted by people they knew.

What struck me was the contrast between the effort it took her to understand the 12-times table and the ease with which she could decode interpersonal relations (guilt, shame, deceit, desire). Flat-owner Stephen deduces from the presence of the scarf that his friend Alec has been entertaining a female and that, since there was no reason for Alec to borrow the flat to meet his own wife, he must be engaged in some hanky-panky: to a seven-year-old girl that mental leap was easy. But a computer which could multiply 12 by 23,467 in a millisecond would have immense difficulty even recognising that a scarf was a scarf, never mind assessing its significance.

This, in essence, is Moravec’s Paradox — a problem identified by researchers in artificial intelligence. What this says is that, compared to a computer, my young daughter is at the same time completely dumb and a genius. The things we think of as hard, computers find easy, while the things we find instinctively easy computers find almost impossible. From a computer’s point of view, David Beckham is much, much cleverer than Garry Kasparov. A computer can beat Garry Kasparov at chess, but devising a robot to beat David Beckham at football — well, good luck with that one.

As Moravec himself — he’s an Austrian-born academic and robot designer — explains: ‘Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100,000 years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.’

Quite simply, evolution has endowed us with two forms of human intelligence: one automatic, easy and fast, the other deliberative, effortful and slow. Perhaps because it is effortful, we regard the second kind of intelligence as more virtuous than the first — in business or government the ‘rationalisation’ of anything is assumed to be a good thing. But, as all good Burkeans should know, attempts to rationalise things — replacing evolved intelligence with designed intelligence — can be much more problematic than we expect precisely because we underestimate the vital role unconscious intuition plays in life.

We are rapidly discovering this in our attempts to develop driverless cars, which will be legally allowed on British roads from 1 January next year.

‘I came on to the harbour front and I could detect agitation among the spectators. They were not looking at me leading the race, but were looking the other way. I braked very hard.’

This is the great Fangio describing how he avoided a crash in the 1950 Monaco Grand Prix. The pile-up was invisible to him, around a sharp bend. Yet from a trivial piece of information, the behaviour of the crowd, he was able to infer that something was badly wrong. That is a kind of intelligence which it is almost impossible to encode in software. What we don’t yet know is what role such instinctive intelligence plays in driving. Much more than in piloting a large airliner, certainly: compared to a driverless car, an autopilot has an easy job. It simply has to avoid one object: the ground.

So far, driverless cars have proved surprisingly bad at some things humans find easy. They don’t always spot potholes. They find it difficult to distinguish between a takeaway container and a rock, and will swerve to avoid both. Temporary traffic lights confuse them. In the UK, they will also need to learn to handle roundabouts, which demand social intelligence: human drivers instinctively use another driver’s road position or wheel alignment to predict his intended direction.

That is not to say that we cannot do a great deal to automate driving. At the Ford test-track last year, I was shown cars which parked themselves and which automatically swerved to avoid crashes. My own car beeps if it thinks I am going to hit anything — a system which saved me from a rear-end shunt last year.

But there is a huge leap from computer-assisted driving to computer-controlled driving. And here’s where the problem lies. For unless I can fall asleep in a driverless car, it is as good as useless. If I have to sit upright at the wheel at all times in case the computer suddenly spasms and needs my help, there is little gain. Under those conditions, riding in a driverless car might be far worse psychologically than actually driving the car yourself: rather like the working life of a modern airline pilot, which was once memorably described as 99.99 per cent boredom, 0.01 per cent sheer panic.

