Neural Nets that don’t stop learning

Over the weekend, I was reminiscing over my 1990 copy of Rumelhart and McClelland's seminal work on Parallel Distributed Processing (it's about using backpropagation to teach neural nets). It reminded me that most modern efforts have missed a key point about the efficacy of neural nets in artificial intelligence.

Unlike artificial neural nets, we biological neural nets are not taught everything during a learning phase and then released unto the world as a fully trained algorithm. Instead, we never stop learning. This is enormously useful for a number of reasons, but also enormously dangerous for others.

Consider the driverless car that kept incorrectly parking in vacant disabled parking spaces. Engineers had to keep telling the car it had made a mistake until the AI eventually learnt that vacant spaces marked with the relevant symbols are not for parking (presumably unless the car itself is carrying a disabled passenger). The same neural net also has to learn that those same symbols are irrelevant during regular driving and only relevant when undertaking parking maneuvers.

We humans have a major advantage. We don't have to keep all potential contexts in our heads simultaneously, because we can hold the current context in short-term memory, and short-term memory is simply recently learned material. If we are driving past lots of parked cars searching for a vacant spot, we have recent memories of driving slowly over the past minute or so and seeing lots of parked cars. This recently learned material is invaluable in determining context.
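To make that concrete, here is a minimal sketch (plain Python, with hypothetical names like `infer_context` and made-up thresholds) of how a rolling buffer of recent observations can disambiguate the parking-versus-driving contexts from the example above:

```python
from collections import deque

class ShortTermMemory:
    """A rolling buffer of recent observations: a crude stand-in for
    recently learned material."""

    def __init__(self, capacity=60):
        # Keep roughly the last minute of one-second observations.
        self.buffer = deque(maxlen=capacity)

    def observe(self, speed_kmh, parked_cars_seen):
        self.buffer.append((speed_kmh, parked_cars_seen))

    def infer_context(self):
        """Guess the current driving context from recent history alone."""
        if not self.buffer:
            return "unknown"
        avg_speed = sum(s for s, _ in self.buffer) / len(self.buffer)
        parked = sum(p for _, p in self.buffer)
        # Crawling past many parked cars suggests we are hunting for a spot,
        # so the disabled-parking symbols suddenly become relevant.
        if avg_speed < 15 and parked > 10:
            return "searching-for-parking"
        return "regular-driving"

memory = ShortTermMemory()
for _ in range(30):
    memory.observe(speed_kmh=10, parked_cars_seen=1)
print(memory.infer_context())  # -> searching-for-parking
```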

When an AI neural net is no longer in learning mode, it must hold sufficient knowledge in its net to decide a course of action in all potential contexts: parking, driving, recharging/refueling, loading, unloading and so on. It's like trying to work out a story from a photo instead of a video. So why don't we just let our neural nets continue to learn after we feel they are performing sufficiently well at the task at hand? The AI could then take advantage of recently learned, context-relevant information, which should simplify the AI's task during operation, just as it does for us humans (see this MIT Technology Review piece on Google DeepMind's recent work: https://www.technologyreview.com/s/602615/what-happens-when-you-give-an-ai-a-working-memory/).
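As a rough illustration of what "continuing to learn in the field" could mean mechanically (a generic online-learning sketch, not the method from the linked article), the deployed model simply keeps taking small gradient steps on the examples it encounters:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)  # a toy linear "net": prediction = w . x

def predict(x):
    return w @ x

def online_update(x, target, lr=0.01):
    """One small gradient step on a single field observation.
    A frozen, deployment-only net would skip this step entirely."""
    global w
    error = predict(x) - target
    w -= lr * error * x  # gradient of squared error for a linear model

# Streaming experience: the net keeps adapting after "release".
for _ in range(2000):
    x = rng.normal(size=3)
    target = 2.0 * x[0] - x[2]  # the environment's (unknown) true rule
    online_update(x, target)

print(np.round(w, 2))  # drifts towards the true rule: [ 2.  0. -1.]
```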

This sounds good until we realise that neural nets prevented from continuing to learn in the field are more predictable. Imagine a driverless car accident in which the neural net decided to crash the car, killing its passenger, rather than plough through a zebra crossing full of school children. The car's manufacturer can take an identically trained neural net and test it under the same conditions experienced during the accident, and the test responses will be the same as those produced by the crashed car's neural net. However, a net which continues to learn in the field becomes almost immediately unique and impossible to replicate: we are unable to reproduce the conditions and state of the AI at the moment of the accident. In fact, the decision processes of the AI become as unpredictable as those of us human neural nets.
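The replication problem itself fits in a few lines. In this toy sketch (again, made-up numbers, not any manufacturer's actual test process), a frozen copy of the trained weights reproduces the fielded car's decision exactly, until the fielded copy takes even a short run of unlogged online updates:

```python
import numpy as np

rng = np.random.default_rng(42)
trained = rng.normal(size=4)   # weights at the end of training

frozen = trained.copy()        # the manufacturer's reference copy
fielded = trained.copy()       # the copy that keeps learning in the field

x_accident = rng.normal(size=4)
assert frozen @ x_accident == fielded @ x_accident  # perfectly reproducible

# Now the fielded net takes small updates from experiences nobody recorded.
for _ in range(100):
    x = rng.normal(size=4)
    fielded -= 0.01 * (fielded @ x) * x  # one unlogged online step

print(frozen @ x_accident)   # what the manufacturer's test reproduces...
print(fielded @ x_accident)  # ...no longer what the fielded net would do
```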

A machine controlled by a continually learning AI will not necessarily perform as expected. The total impact of all of its uncontrolled experiences on the neural net is essentially unknown. Effectively this is the case for all humans: we trust human neural nets to be airline pilots, yet the odd one may decide to deliberately fly the plane into the ground. Will we be able to accept similar uncertainty in the performance of our machines? And yet, failure to accept this uncertainty may be the key reason our neural nets are being held back from performing general intelligence tasks.
