Posts

Decoupled Neural Interfaces using Synthetic Gradients

by Max Jaderberg, DeepMind

Neural networks are the workhorse of many of the algorithms developed at DeepMind. For example, AlphaGo uses convolutional neural networks to evaluate board positions in the game of Go, and our DQN and deep reinforcement learning algorithms use neural networks to choose actions and play video games at a super-human level. This post introduces some of our latest research in advancing the capabilities and training procedures of neural networks, called Decoupled Neural Interfaces using Synthetic Gradients. This work gives us a way to allow neural networks to communicate, learning to send messages between themselves in a decoupled, scalable manner, paving the way for multiple neural networks to communicate with each other or for improving the long-term temporal dependencies of recurrent networks. This is achieved by using a model to approximate error gradients, rather than computing error gradients explicitly with backpropagation. The rest of this post assumes…
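The core idea is that a small auxiliary model predicts what the backpropagated error gradient for a layer will be, so that layer can update immediately instead of waiting for the full backward pass. Below is a minimal NumPy sketch of this idea, assuming a toy two-layer regression network and a linear synthetic-gradient model; the variable names and hyperparameters are illustrative and not taken from the paper.

```python
# Minimal sketch of a synthetic-gradient (Decoupled Neural Interface) update.
# Assumptions: toy two-layer regression net, random data, linear synthetic-gradient model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 10))               # batch of inputs
y = rng.normal(size=(32, 1))                # regression targets

W1 = rng.normal(scale=0.1, size=(10, 16))   # layer 1 weights
W2 = rng.normal(scale=0.1, size=(16, 1))    # layer 2 weights
sg_W = np.zeros((16, 16))                   # linear model predicting dLoss/dh from h
lr = 1e-2

for step in range(200):
    # forward pass
    h = np.tanh(x @ W1)                     # layer 1 activations
    pred = h @ W2                           # layer 2 output

    # layer 1 updates immediately using the *predicted* gradient of the loss
    # w.r.t. h, without waiting for the downstream backward pass (the decoupling)
    synthetic_grad_h = h @ sg_W
    grad_W1 = x.T @ (synthetic_grad_h * (1 - h ** 2))   # chain rule through tanh
    W1 -= lr * grad_W1

    # the true gradient arrives later from the downstream mean-squared-error loss
    grad_pred = 2 * (pred - y) / len(x)
    true_grad_h = grad_pred @ W2.T
    W2 -= lr * (h.T @ grad_pred)

    # the synthetic-gradient model is itself trained to match the true gradient
    sg_W -= lr * h.T @ (synthetic_grad_h - true_grad_h)
```

In a genuinely decoupled setting, the layer 1 update and the synthetic-gradient fit would run asynchronously on separate workers; the synchronous loop above only sketches the data flow.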

DeepMind AI reduces Google data centre cooling bill by 40%

by Rich Evans, Research Engineer, DeepMind, and Jim Gao, Data Centre Engineer, Google

From smartphone assistants to image recognition and translation, machine learning already helps us in our everyday lives. But it can also help us tackle some of the world’s most challenging physical problems, such as energy consumption. Large-scale commercial and industrial systems like data centres consume a lot of energy, and while much has been done to stem the growth of energy use, there remains a lot more to do given the world’s increasing need for computing power. Reducing energy usage has been a major focus for us over the past 10 years: we have built our own super-efficient servers at Google, invented more efficient ways to cool our data centres and invested heavily in green energy sources, with the goal of being powered 100 percent by renewable energy. Compared to five years ago, we now get around 3.5 times the computing power out of the same amount of energy, and we continue…

Deep Reinforcement Learning

by David Silver, Google DeepMind

Humans excel at solving a wide variety of challenging problems, from low-level motor control through to high-level cognitive tasks. Our goal at DeepMind is to create artificial agents that can achieve a similar level of performance and generality. Like a human, our agents learn for themselves to discover successful strategies that lead to the greatest long-term rewards. This paradigm of learning by trial and error, solely from rewards or punishments, is known as reinforcement learning (RL). Also like a human, our agents construct and learn their own knowledge directly from raw inputs, such as vision, without any hand-engineered features or domain heuristics. This is achieved through the deep learning of neural networks. At DeepMind we have pioneered the combination of these approaches, deep reinforcement learning, to create the first artificial agents to achieve human-level performance across many challenging domains. Our agents must continually…
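As a concrete illustration of the trial-and-error loop described above, here is a minimal sketch of tabular Q-learning on a toy corridor environment. This is not DeepMind’s deep RL agent; the environment, reward structure, and hyperparameters are illustrative assumptions, but the update rule is the standard Q-learning rule at the heart of DQN.

```python
# Minimal sketch of reinforcement learning by trial and error: tabular Q-learning
# on a toy 5-state corridor where the only reward is reaching the right end.
import numpy as np

n_states, n_actions = 5, 2           # states 0..4; actions: 0 = left, 1 = right
q = np.zeros((n_states, n_actions))  # learned action-value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0                                            # start at the left end
    while s != n_states - 1:                         # episode ends at the right end
        # epsilon-greedy: mostly exploit current knowledge, occasionally explore
        a = rng.integers(n_actions) if rng.random() < epsilon else int(q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0   # sparse reward signal
        # Q-learning update: nudge the estimate toward reward + discounted future value
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
        s = s_next

print(q.argmax(axis=1))   # learned policy: prefers action 1 (move right) in every state
```

Deep reinforcement learning replaces the table `q` with a deep neural network that maps raw inputs such as pixels to action values, which is what lets the same learning rule scale to domains like Atari games and Go.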