Numenta's Hierarchical Temporal Memory (HTM) theory was previously described in their Cortical Learning Algorithm (CLA) whitepaper.
Now, Jeff Hawkins and Subutai Ahmad have pre-published a new paper (currently on arXiv; peer review will follow):
http://arxiv.org/abs/1511.00083
The paper is interesting for a number of reasons, most notably its combination of computational and biological detail. It expands on the artificial neuron model used in CLA/HTM. A traditional integrate-and-fire artificial neuron has one set of inputs and a transfer function. This doesn't accurately represent the structure or function of cortical neurons, which come in various shapes and sizes. The function of cortical neurons is shaped by their structure and is quite unlike that of the traditional artificial neuron.
Hawkins and Ahmad propose a model that best fits pyramidal cells in neocortex layers 2/3 and 5. They explain the morphology of these neurons by assigning specific roles to the various dendrite types observed.
They propose that each dendrite is individually a pattern-matching system similar to a traditional artificial neuron: the dendrite has a set of inputs to which it responds, and a transfer function that decides whether enough inputs are active to "fire" the output (although in traditional ANNs, nonlinear continuous transfer functions are more common than binary outputs).
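As a minimal sketch of this dendrite-as-pattern-matcher idea (the class name, the set-based synapse representation, and the threshold value below are illustrative assumptions, not details from the paper):

```python
# Minimal sketch: a dendritic segment as a binary pattern matcher.
# Names and parameters are illustrative, not taken from the paper.

class Dendrite:
    def __init__(self, synapses, threshold):
        self.synapses = set(synapses)   # presynaptic cells this segment listens to
        self.threshold = threshold      # active synapses needed to fire

    def active(self, active_inputs):
        """Binary output: fire if enough of this segment's inputs are active."""
        overlap = len(self.synapses & set(active_inputs))
        return overlap >= self.threshold

# Example: a segment connected to cells {2, 5, 7, 11} that fires on any 3 of them.
segment = Dendrite(synapses={2, 5, 7, 11}, threshold=3)
print(segment.active({2, 5, 7}))   # True  -- 3 of 4 inputs active
print(segment.active({2, 5}))      # False -- only 2 active
```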
In the paper, they suggest that a single pyramidal cell has dendrites for recognising feed-forward input (i.e. external data) and other dendrites for feedback input from other cells. The feedback provides contextual input that allows the neuron to "fire" only in specific sequential contexts (i.e. given a particular history of external input).
To produce an output along its axon, the complete neuron needs both an active feed-forward dendrite and an active contextual dendrite; when the neuron fires, it implies that a particular pattern has been observed in a specific historical context.
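Putting the two dendrite types together, here is a toy cell that fires only on the conjunction just described. It reuses the hypothetical `Dendrite` class from the sketch above and collapses the paper's richer dynamics (such as depolarisation acting as a predictive state) into a simple AND:

```python
# Toy pyramidal cell, per the description above: output requires both an
# active feed-forward dendrite and at least one active contextual dendrite.
# A simplified sketch, not Numenta's implementation.

class PyramidalCell:
    def __init__(self, feedforward, contextual):
        self.feedforward = feedforward   # one Dendrite over external input
        self.contextual = contextual     # Dendrites over other cells' prior output

    def fires(self, external_input, prev_active_cells):
        in_context = any(d.active(prev_active_cells) for d in self.contextual)
        return self.feedforward.active(external_input) and in_context
```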
In the original CLA whitepaper, multiple sequential contexts were embodied by a "column" of cells that shared a proximal dendrite, although they acknowledged that this differed from their understanding of the biology.
The new paper suggests that basket cells provide the inhibitory function that ensures sparse output from a column of pyramidal cells having similar receptive fields. Note that this definition of column differs from the one in the CLA whitepaper!
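The inhibitory role attributed to basket cells can be sketched as a k-winners-take-all competition: of the pyramidal cells driven by similar feed-forward input, only the few most strongly driven produce output. The scoring scheme and the value of `k` below are illustrative assumptions:

```python
# Sketch of the basket-cell role as k-winners-take-all: inhibition lets only
# the most strongly driven cells in a competing group produce output.

def inhibit(overlaps, k=2):
    """overlaps: {cell_id: feed-forward overlap score}. Return the k winners."""
    ranked = sorted(overlaps, key=overlaps.get, reverse=True)
    return set(ranked[:k])

print(inhibit({'a': 8, 'b': 3, 'c': 6, 'd': 1}))   # {'a', 'c'}
```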
The other interesting feature of the paper is its explanation of the sparse, distributed sequence memory that arises from a layer of these artificial pyramidal cells with their complex, specialised dendrites. This is also a feature of the older CLA whitepaper, but there are some differences.
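A rough sketch of how sequence memory falls out of such a layer, under the same simplifications as the cell sketches above (the column-and-bursting behaviour follows my reading of the CLA whitepaper, not the new paper's algorithm verbatim):

```python
# One timestep of a layer of such cells (a simplified sketch). A cell whose
# contextual dendrites matched the previous timestep's activity is
# "predicted"; if an active column contains no predicted cell, every cell in
# it activates ("bursting"), signalling input in an unrecognised context.

def layer_step(columns, active_columns, prev_active, context_segments):
    """columns: {col_id: [cell_ids]}; context_segments: {cell_id: [Dendrite]}."""
    active = set()
    for col in active_columns:
        predicted = [c for c in columns[col]
                     if any(d.active(prev_active) for d in context_segments[c])]
        active.update(predicted or columns[col])   # predicted cells, else burst
    return active
```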
Hawkins and Ahmad's model does match the morphology and function of pyramidal cells more closely than traditional artificial neural networks do. Their conceptualisation of a neuron is far more powerful. However, this doesn't mean that it's better to model neurons this way in silico; what we really need to understand is the computational benefit of modelling these extra details. The new paper claims the following advantages over traditional ANNs:
- continuous learning
- robustness of distributed representation
- ability to deal with multiple simultaneous predictions (see the sketch after this list)
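The third claim is the easiest to illustrate: because each prediction is a sparse pattern, the union of several predictions can be held at once with little risk of the patterns interfering. A sketch with illustrative sizes (2048 cells, 40 active; these are not figures from the paper):

```python
import random

# Sketch: sparse patterns support multiple simultaneous predictions because
# their union stays informative -- accidental overlap between random sparse
# patterns is rare. Sizes here are illustrative assumptions.

N, W = 2048, 40        # cells in the layer; active cells per pattern
random.seed(0)

def make_pattern():
    return frozenset(random.sample(range(N), W))

predictions = [make_pattern() for _ in range(3)]  # three candidate next inputs
union = set().union(*predictions)                 # predict all three at once

actual = predictions[1]                           # one of them occurs
print(len(actual & union) / W)                    # 1.0: fully anticipated

novel = make_pattern()                            # an unpredicted input
print(len(novel & union) / W)                     # ~0.05: near-chance overlap
```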
We follow Numenta's work because we believe they have a number of good insights into the AGI problem. It's great to see this new theoretical work and to have a solid foundation for future publications.