TL;DR:
- An SDR is a Sparse Distributed Representation, described below
- SDRs are biologically plausible data structures
- SDRs have powerful properties
- SDRs have received a lot of attention recently
- There are a few really great new resources on the topic:
  - Presentation by Subutai Ahmad of Numenta
  - Older introductory presentation by Jeff Hawkins
  - Excellent summary of their characteristics in Ferrier’s draft paper (discussed in an earlier blog post)
Background
An SDR is a binary vector in which only a small portion of the bits are ‘on’. There is growing evidence that SDRs are a feature of biological computation, for storing and conveying information; in a biological context, an SDR corresponds to a small number of active cells within a population of cells. SDRs have been adopted by HTM (in the CLA), and this focus has set HTM apart from the bulk of mainstream machine learning research. SDRs have very promising characteristics and are still relatively underutilised in the field of AI and machine learning. It’s an exciting area to be watching as AGI leaps forward.
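To make the definition concrete, here is a minimal sketch of an SDR in Python. The 2048-bit / 40-active sizing follows figures often quoted in HTM material; the names (`N_BITS`, `N_ACTIVE`, `random_sdr`) are illustrative, not from any library:

```python
import numpy as np

# A minimal sketch of an SDR: a binary vector where only a small
# fraction of bits are 'on'. The 2048-bit / ~2% sparsity figures are
# typical of HTM examples, not a requirement.
N_BITS = 2048    # total vector size
N_ACTIVE = 40    # number of 'on' bits (~2% sparsity)

rng = np.random.default_rng(seed=42)

def random_sdr():
    """Return a random SDR as a dense 0/1 vector."""
    sdr = np.zeros(N_BITS, dtype=np.uint8)
    on_bits = rng.choice(N_BITS, size=N_ACTIVE, replace=False)
    sdr[on_bits] = 1
    return sdr

x = random_sdr()
print(x.sum(), "of", x.size, "bits are on")  # 40 of 2048 bits are on
```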
The concept of SDRs is not new, however. Kanerva first proposed sparse distributed memory as a physiologically plausible model of human memory in his influential 1988 book Sparse Distributed Memory [Kanerva1988]. The properties of sparse distributed memory were nicely summarised by Denning in 1989 [Denning1989].
As early as 1997, Hinton and Ghahramani described a generative model, implementable with a neural network, that ‘discovered’ sparse distributed representations for image analysis [Hinton1997].
Then in 1998, SDRs were used in the context of robot control for navigation by both Rajesh et al. and Rao et al. [Rajesh1998, Rao1998], and in 2004 for reinforcement learning by Ratitch [Ratitch2004].
Recently, some great new resources have become available. There is a new video from Numenta of Subutai Ahmad presenting on the topic; a nice companion to that is an older introductory video of Jeff Hawkins. The recent draft paper by Ferrier on a universal cortical algorithm (discussed in an earlier blog post) gives an excellent summary of their characteristics.
What’s all the fuss about? - SDR Characteristics
Given that so much great (and recent) material exists describing SDRs, I won’t go into very much detail. This post would not be complete, though, without at least a cursory look at the ‘highlights’ of SDRs:
- Semantic
  - each bit corresponds to something meaningful, so SDRs that share active bits are semantically similar
- Efficient/Versatile
  - storage: a vector of n bits with w active bits has an enormous number of potential encodings, since capacity grows combinatorially (n choose w) with vector size; the sketch after this list puts a number on this
  - compositionality: because the vectors are sparse, several SDRs can be combined (bitwise OR) into a single vector that represents a set of states, with little risk of confusing members with non-members
  - comparisons: it is very efficient to measure similarity between vectors (just count the overlap of ‘on’ bits), and the same overlap test can check whether a state is part of a set (due to compositionality); see the sketch after this list
- Robust
  - subsampled or noisy vectors still overlap strongly with the original, so they remain semantically similar and can be compared effectively (demonstrated below)
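To put a number on the storage bullet, here is a quick back-of-the-envelope calculation. `math.comb` is standard-library Python; the 2048/40 sizing is just the illustrative figure used above:

```python
from math import comb

# Number of distinct SDRs with exactly w of n bits active: C(n, w).
n, w = 2048, 40
capacity = comb(n, w)
print(f"{capacity:.3e}")   # ~2.37e+84 unique encodings
print(len(str(capacity)))  # an 85-digit number
```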
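And a small self-contained sketch of the comparison, compositionality, and robustness properties. Everything here (`overlap`, the union membership test, the subsampling) is my own illustration of the ideas above, not code from any HTM library:

```python
import numpy as np

N_BITS, N_ACTIVE = 2048, 40
rng = np.random.default_rng(seed=1)

def random_sdr():
    sdr = np.zeros(N_BITS, dtype=np.uint8)
    sdr[rng.choice(N_BITS, size=N_ACTIVE, replace=False)] = 1
    return sdr

def overlap(a, b):
    """Similarity = count of shared 'on' bits."""
    return int(np.sum(a & b))

a, b = random_sdr(), random_sdr()
print("overlap of unrelated SDRs:", overlap(a, b))  # tiny, typically 0-3

# Compositionality: OR several SDRs into one 'set' vector, then test
# membership by checking whether all of a candidate's bits are present.
states = [random_sdr() for _ in range(10)]
union = np.bitwise_or.reduce(states)
print("member?", overlap(states[3], union) == N_ACTIVE)        # True
print("non-member?", overlap(random_sdr(), union) == N_ACTIVE)  # almost surely False

# Robustness: a subsample of an SDR still overlaps strongly with it.
on_bits = np.flatnonzero(a)
sub = np.zeros(N_BITS, dtype=np.uint8)
sub[rng.choice(on_bits, size=N_ACTIVE // 2, replace=False)] = 1  # keep half the bits
print("subsample overlap:", overlap(sub, a))  # 20 of 40
```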