Breaking Down the Differentiable Neural Computer

Ever since first working with Recurrent Neural Networks (RNNs) for predicting musical sequences during my PhD, I have been fascinated by these models and have tried to keep up with exciting developments in connectionist machine learning research around them. One such development for me has been the emergence of RNNs augmented with a dedicated memory unit. The idea was notably illustrated as the Neural Turing Machine (NTM) in an arXiv submission by Alex Graves and colleagues at Google DeepMind. This early work, which gathered a fair deal of acclaim in the community, has since been followed up by a publication in the prestigious journal Nature introducing a more evolved variant of the NTM known as the Differentiable Neural Computer (DNC). Over the past couple of weeks, I managed to spend some time learning about the NTM and the DNC, and I prepared a little slide show (with Google Slides) containing my observations to share with others.
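
To give a flavour of what "differentiable" memory means here, below is a minimal NumPy sketch of content-based addressing, the core mechanism both the NTM and the DNC use to read from their memory matrix. This is my own illustration rather than code from the papers, and the function name `content_read` is mine; the slides go into the full mechanism in more detail.

```python
import numpy as np

def content_read(memory, key, beta):
    """One content-based read from an NTM/DNC-style memory matrix.

    memory: (N, W) array of N slots, each W values wide
    key:    (W,) query vector emitted by the controller
    beta:   scalar key strength that sharpens the attention
    """
    # Cosine similarity between the key and every memory slot
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    # A softmax turns the similarities into read weights summing to 1
    w = np.exp(beta * sims)
    w /= w.sum()
    # The read vector is a weighted blend of all slots, so it is
    # differentiable with respect to both the memory and the key,
    # and gradients can flow through the whole read operation
    return w @ memory

# Toy usage: query a 4-slot memory with a key close to slot 2
M = np.random.randn(4, 3)
k = M[2] + 0.1 * np.random.randn(3)
print(content_read(M, k, beta=5.0))
```

Because every step above is a smooth function, the whole read can be trained end to end with backpropagation, which is what distinguishes these models from a conventional computer's hard, discrete memory lookups.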

So here is the link to the slides, and I hope those of you who read them find them useful! Please let me know if you spot anything in them that needs to be corrected; I would appreciate that!
