
Project AGI

Building an Artificial General Intelligence


Sunday 26 July 2015

Reading list - July 2015

This month's reading list continues with a subtheme on recurrent neural networks, and in particular Long Short-Term Memory (LSTM).

First, here's an interesting report on a panel discussion about the future of Deep Learning at the International Conference on Machine Learning (ICML) 2015:

http://deeplearning.net/2015/07/13/a-brief-summary-of-the-panel-discussion-at-dl-workshop-icml-2015/

Participants included Yoshua Bengio (University of Montreal), Neil Lawrence (University of Sheffield), Juergen Schmidhuber (IDSIA), Demis Hassabis (Google DeepMind), Yann LeCun (Facebook, NYU) and Kevin Murphy (Google).

It was great to hear the panel express an interest in some of our favourite topics, notably hierarchical representation, planning and action selection (reported as sequential decision making), and unsupervised learning. This is a new focus for the Deep Learning community - most DL work to date is based on supervised learning.

In the Q&A session, it was suggested that reinforcement learning could be used to motivate the exploration of search spaces when training unsupervised algorithms. In robotics, robustly trading off the potential reward of exploration against exploiting existing knowledge has been a hot topic for several years (example).
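As a toy illustration of that trade-off (our sketch, not anything proposed by the panel), an epsilon-greedy rule is about the simplest way to balance trying unknown actions against exploiting known rewards. The epsilon value and the incremental-average update are illustrative choices:

import random

def epsilon_greedy(q_values, epsilon=0.1):
    # Explore with probability epsilon, otherwise exploit the best estimate.
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])   # exploit

def update_estimate(q_values, counts, action, reward):
    # Incrementally average observed rewards into the chosen action's estimate.
    counts[action] += 1
    q_values[action] += (reward - q_values[action]) / counts[action]

Raising epsilon buys more exploration at the cost of short-term reward - exactly the trade-off the robotics literature tries to manage in a more principled way.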

The theory of Predictive Coding suggests that the brain strives to eliminate unpredictability. This presents difficulties for motivating exploration - critics have asked why, if that were the whole story, we wouldn't simply seek out quiet, dark solitude! Friston suggests that prior expectations balance the need for immediate predictability against improved understanding over the longer term. For a good discussion, see here.
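To make the critics' "dark room" point concrete, here is a toy prediction-error-minimisation loop (our own sketch with a made-up learning rate, not Friston's model):

def predictive_coding_step(prediction, observation, lr=0.1):
    # Prediction error is the 'surprise' that predictive coding minimises.
    error = observation - prediction
    # Nudge the internal model toward the observation to reduce future error.
    prediction = prediction + lr * error
    return prediction, error

# For a constant input (a dark, quiet room) the error quickly falls to zero,
# leaving nothing to drive behaviour - hence the need for prior expectations
# that value learning over the longer term.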

Our in-depth reading this month has continued on the theme of LSTM. The most thorough introduction we have found is Alex Graves' "Supervised Sequence Labelling with Recurrent Neural Networks":


However, a critical limitation of LSTM as presented in Graves' work is that training happens offline, over complete sequences - online training is not possible, so you can't use this variant of LSTM in an embodied agent.
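For readers new to LSTM, the forward pass of a single cell is compact enough to sketch. This is a standard formulation (our own sketch, assuming the four gate weight blocks are packed into one matrix W over the concatenated input and previous hidden state), not code from Graves' book:

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, b):
    # W packs the four gate weight blocks row-wise:
    # [input gate, forget gate, output gate, candidate update].
    z = W.dot(np.concatenate([x, h_prev])) + b
    n = len(c_prev)
    i = sigmoid(z[0*n:1*n])   # input gate: how much new information to admit
    f = sigmoid(z[1*n:2*n])   # forget gate: how much old cell state to keep
    o = sigmoid(z[2*n:3*n])   # output gate: how much cell state to expose
    g = np.tanh(z[3*n:4*n])   # candidate values for the cell update
    c = f * c_prev + i * g    # new cell state - the long short-term 'memory'
    h = o * np.tanh(c)        # new hidden state / output
    return h, c

The gating is what lets gradients survive over long time lags; the training limitation discussed above is entirely about how W and b are updated, not about this forward pass.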

The most promising online variant of LSTM seems to be Derek Monner's Generalized LSTM algorithm, introduced in D. Monner and J. A. Reggia (2012), "A generalized LSTM-like training algorithm for second-order recurrent neural networks". You can download the paper from Monner's website here:


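To show why this matters for an embodied agent, here is a schematic contrast between the two training regimes. The lstm methods below are hypothetical placeholders - this illustrates the property Generalized LSTM provides, not Monner's actual algorithm:

def train_offline(lstm, sequences):
    # BPTT-style training: the gradient needs the complete sequence,
    # so the weights only change after each sequence has finished.
    for seq in sequences:
        outputs = [lstm.forward(x) for x, target in seq]
        lstm.backward_through_time(seq, outputs)   # hypothetical method
        lstm.apply_weight_update()                 # hypothetical method

def train_online(lstm, stream):
    # Online training: the weights are updated at every timestep,
    # which is what an agent that acts while it learns requires.
    for x, target in stream:
        y = lstm.forward(x)                        # hypothetical method
        lstm.update_weights(x, target, y)          # hypothetical method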
We'll be back with some actual code soon, including our implementation of Generalized LSTM. And don't worry - we'll return to unsupervised learning shortly, with a focus on Growing Neural Gas.
