Google DeepMind’s New Algorithm Adds “Memory” to AI

When DeepMind rose to prominence in 2014, it taught its machine learning systems how to play Atari games. The system could learn to beat the games and score higher than humans, but it could not remember how it had done so.

The performance of a neural network enhanced with DeepMind’s EWC algorithm, compared with its other networks. [Credit: DeepMind / PNAS]

For each of the Atari games, a separate neural network had to be created: the same system could not play both Space Invaders and Breakout unless the information for both games was given to it at the same time. Now, a team of researchers from DeepMind and Imperial College London has created an algorithm that allows a single neural network to learn a task, retain that knowledge, and use it again later.

Previously, we had a system that could learn to play any game, but it could only learn to play one game. Here we are demonstrating a system that can learn to play several games one after the other.

— James Kirkpatrick [Research Scientist at DeepMind, lead author of its new research paper]

The group says it has been able to demonstrate “continual learning” based on “synaptic consolidation”, a process in the human brain described as “the basis of learning and memory”.
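The EWC (elastic weight consolidation) algorithm named in the figure above works, broadly, by penalising changes to the network weights that were most important for previously learned tasks. The snippet below is a minimal sketch of that core idea; the function name, the toy numbers, and the per-weight importance values are illustrative assumptions, not DeepMind’s actual implementation.

```python
import numpy as np

def ewc_penalty(theta, theta_old, importance, lam=1.0):
    """Quadratic penalty that anchors weights near their values after the
    previous task. `importance` estimates how much each weight mattered
    for that task (in the paper this comes from the Fisher information);
    `lam` controls how strongly old knowledge is protected."""
    return 0.5 * lam * np.sum(importance * (theta - theta_old) ** 2)

# Toy example (illustrative values): weights learned on task A,
# and candidate weights partway through training on task B.
theta_a = np.array([1.0, -2.0, 0.5])
theta_b = np.array([1.1, -0.5, 0.5])

# Per-weight importance: the second weight mattered a lot for task A.
importance = np.array([0.1, 10.0, 0.1])

# The penalty is dominated by the drift in the important second weight,
# so training on task B is discouraged from overwriting what was learned
# on task A, while unimportant weights remain free to change.
penalty = ewc_penalty(theta_b, theta_a, importance)
print(penalty)
```

During training on the new task, this penalty would be added to the new task’s ordinary loss, which is what lets one network accumulate several games’ worth of skill instead of overwriting each previous one.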
