tag:blogger.com,1999:blog-11805360241314406382024-03-17T05:52:36.010+11:00Project AGIWe’re researching algorithms for Artificial General Intelligence: Machines with aptitude for anything a human can do. Unlike other AI, a general intelligence does not require tailoring to specific problems. Artificial General Intelligence changes everything: For the first time, the invention could become an inventor. Feedback, discussion and participation are welcome.Unknownnoreply@blogger.comBlogger49125tag:blogger.com,1999:blog-1180536024131440638.post-51769440864470134592017-04-28T16:34:00.002+10:002017-04-28T16:34:48.839+10:00Open Sourcing MNIST and NIST Preprocessing CodeIn our most recent <a href="http://blog.agi.io/2016/09/region-layer-experiments.html">post</a> we discussed the current set of experiments that we are conducting, using the MNIST dataset. We've also been looking at the NIST dataset which is similar, but extends to handwritten letters (as well as digits).<br />
<br />
These are extremely popular datasets and freely available, so they make a great choice for testing an algorithm and comparing it against published benchmarks.<br />
<br />
The MNIST data is not distributed as ordinary image files, though. The IDX format it uses is well documented, but uncommon. It's easy to find snippets of code to convert it into standard images (such as PNG or JPG), but piecing them together and getting them working is not where you want to spend your time - you'd rather be designing and running your experiment!<br />
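For reference, parsing the IDX image format only takes a few lines. Here's a minimal Python sketch (our released projects are Java; the function name here is illustrative):

```python
import struct

def read_idx_images(data: bytes):
    """Parse the contents of an MNIST IDX image file into a list of
    row-major pixel buffers. The 16-byte header holds the magic number
    (0x00000803 for images), image count, rows and columns, all as
    big-endian 32-bit integers."""
    magic, count, rows, cols = struct.unpack(">IIII", data[:16])
    assert magic == 0x803, "not an IDX3 image file"
    size = rows * cols
    images = [data[16 + i * size : 16 + (i + 1) * size] for i in range(count)]
    return images, rows, cols
```

Each returned buffer can then be written out as a PNG or JPG with any imaging library.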
<br />
We've been through that phase, so we're very happy to open source our code and make it easier for others to get going faster.<br />
<br />
These are two simple, small, self-contained Java projects with ZERO dependencies. One preprocesses the MNIST files into images; the other processes the NIST images to make them equivalent to the MNIST images, so that both can easily be used in the same experimental setup. See each project's README for more information about the steps taken.<br />
<br />
<a href="https://github.com/ProjectAGI/Preprocess-MNIST">Preprocess-MNIST</a><br />
<br />
<a href="https://github.com/ProjectAGI/Preprocess_NIST_SD19">Preprocess_NIST_SD19</a><br />
<br />Gideon Kowadlohttp://www.blogger.com/profile/06783501071538911513noreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-22175981016545962672016-09-20T20:00:00.005+10:002016-09-20T20:00:49.873+10:00Region-Layer Experiments<div dir="ltr" style="text-align: left;" trbidi="on">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-eerBJJ0QtHc/V-ECRRMj-jI/AAAAAAAAHiQ/9G9Q4pNTLj02O7nRji5mpwD4pAqyezCqACLcB/s1600/generic.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="276" src="https://3.bp.blogspot.com/-eerBJJ0QtHc/V-ECRRMj-jI/AAAAAAAAHiQ/9G9Q4pNTLj02O7nRji5mpwD4pAqyezCqACLcB/s320/generic.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Typical results from our experiments: Some active cells in layer 3 of a 3 layer network, transformed back into the input pixels they represent. The red pixels are positive weights and the blue pixels are negative weights; absence of colour indicates neutral weighting (ambiguity). The white pixels are the input stimulus that produced the selected set of active cells in layer 3. It appears these layer 3 cells collectively represent a generic '5' digit. The input was a specific '5' digit. Note that the weights of the hidden layer cells differ from the input pixels, but are recognizably the same digit.</td></tr>
</tbody></table>
We are running a series of experiments to test the capabilities of the <a href="http://blog.agi.io/2016/09/the-region-layer-building-block-for-agi.html" target="_blank">Region-Layer component</a>. The objective is to understand to what extent these ideas work, and to expose limitations both in implementation and theory.<br />
<br />
Results will be posted to the blog and written up for publication if, or when, we reach an acceptable level of novelty and rigor.<br />
<br />
We are not trying to beat benchmarks here. We’re trying to show whether certain ideas have useful qualities - the best way to tackle specific AI problems is almost certainly not an AGI way. But what we’d love to see is that AGI-inspired methods can perform close to state-of-the-art (e.g. deep convolutional networks) on a wide range of problems. Now that would be general intelligence!<br />
<h2 style="text-align: left;">
Dataset Choice</h2>
We are going to start with the <a href="http://yann.lecun.com/exdb/mnist/" target="_blank">MNIST digit classification dataset</a>, and perform a number of experiments based on that. In future we will look at some more sophisticated image / object classification datasets such as <a href="https://en.wikipedia.org/wiki/LabelMe" target="_blank">LabelMe</a> or <a href="https://en.wikipedia.org/wiki/Caltech_101" target="_blank">Caltech_101</a>.<br />
<br />
The good thing about MNIST is that it’s simple and has been extremely widely studied. It’s easy to work with the data and the images are a practical size - big enough to be interesting, but not so big as to require lots of preprocessing or too much memory. Despite only 28x28 pixels, variations in digit appearance give considerable depth to the data (example digit '5' above).<br />
<br />
The bad thing about MNIST is that <a href="http://yann.lecun.com/exdb/mnist/" target="_blank">it’s largely “solved” by supervised learning algorithms</a>. A range of different supervised techniques have reached human performance and it’s debatable whether any further improvements are genuine.<br />
<br />
So what’s the point of trying new approaches? Well, supervised algorithms have some odd qualities, perhaps due to the narrowness of training samples or the specificity of the cost function. For example, the discovery of “<a href="https://arxiv.org/pdf/1412.6572.pdf" target="_blank">adversarial examples</a>” - images that look easily classifiable to the naked eye, but are misclassified by a trained network because they exploit weaknesses in the way such networks learn.<br />
<br />
But the biggest drawback of supervised learning is the need to tell it the “correct” answer for every input. This has led to a range of techniques - such as <a href="https://journalofbigdata.springeropen.com/articles/10.1186/s40537-016-0043-6" target="_blank">transfer learning</a> - to make the most of what training data is available, even if not directly relevant. But fundamentally, supervised learning is unlike the experience of an agent learning as it explores its world. Animals can learn without a tutor.<br />
<br />
However, unsupervised results on MNIST are less widely reported. Partly this is because you need to come up with a way to measure the performance of an unsupervised method. The most common approach is to use unsupervised networks to boost the performance of a final supervised network layer - but on MNIST the supervised layer is so powerful that it’s hard to distinguish the contribution of the unsupervised layers. Nevertheless, these experiments are encouraging because <a href="http://yann.lecun.com/exdb/publis/pdf/ranzato-cvpr-07.pdf" target="_blank">having a few unsupervised layers seems to improve overall performance</a>, compared to all-supervised networks. So unsupervised learning doesn’t just sidestep the limited-data problem of supervised learning; it actually seems to add something.<br />
<br />
One possible method of capturing the contribution of unsupervised layers alone is the <a href="https://en.wikipedia.org/wiki/Rand_index" target="_blank">Rand Index</a>, which measures the similarity between two clusters. However, we are intending to use a <a href="http://blog.agi.io/2014/12/sparse-distributed-representations-sdrs_24.html" target="_blank">distributed representation</a> where there will be overlap between similar representations - that’s one of the features of the algorithm!<br />
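For reference, the pair-counting Rand index mentioned above is easy to compute - a minimal sketch:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand index between two clusterings of the same items: the fraction
    of item pairs on which the clusterings agree (same cluster in both,
    or different clusters in both)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)
```

Note that it compares hard cluster assignments, which is exactly why it fits poorly with overlapping distributed representations.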
<br />
So, for now we’re going to go for the simplest approach we can think of, and measure the correlation between the active cells in selected hidden layers and each digit label, and see if the correlation alone is enough to pick the right label given a set of active cells. If the concepts defined by the digits exist somewhere in the hierarchy, they should be detectable as features uniquely correlated with specific labels...<br />
<br />
Note also that we’re not doing any preprocessing of the MNIST images except binarization at threshold 0.5. Since the MNIST dataset is very high contrast, hopefully the threshold doesn’t matter much: It’s almost binary already.<br />
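The binarization step is as simple as it sounds - a sketch, assuming pixel values already scaled to [0, 1]:

```python
def binarize(pixels, threshold=0.5):
    """Threshold grey-scale pixel values (assumed scaled to [0, 1];
    raw MNIST bytes would first be divided by 255) into binary input."""
    return [1 if p >= threshold else 0 for p in pixels]
```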
<h2 style="text-align: left;">
Sequence Learning Tests</h2>
Before starting the experiments proper, we conducted some ad-hoc tests to verify that the features of the Region-Layer are implemented as intended. Remember, the Region-Layer has two key capabilities:<br />
<br />
<ul style="text-align: left;">
<li><b>Classification</b> … of the feedforward input, and</li>
<li><b>Prediction</b> … of future classification results (i.e. future internal states)</li>
</ul>
<br />
See <a href="http://blog.agi.io/2015/11/how-to-build-general-intelligence.html" target="_blank">here</a> and <a href="http://blog.agi.io/2016/09/the-region-layer-building-block-for-agi.html" target="_blank">here</a> to understand the classification role, and <a href="http://blog.agi.io/2014/10/on-predictive-coding-and-temporal.html" target="_blank">here</a> for more information about prediction. Taken together, the ability to classify and predict future classifications allows sequences of input to be learned. This is a topic we have looked at in detail in earlier <a href="http://blog.agi.io/2016/01/how-to-build-general-intelligence.html" target="_blank">blog</a> <a href="http://blog.agi.io/2014/04/introduction.html" target="_blank">posts</a> and we have some fairly effective techniques at our disposal.<br />
<br />
We completed the following tests:<br />
<br />
<ul style="text-align: left;">
<li><b>Cycle 0,1,2:</b> We verified that the algorithm could predict the set of active cells in a short cycle of images. This ensures the sequence learning feature is working. The same image was used for each instance of a particular digit (i.e. there was no variation in digit appearance).</li>
<li><b>Cycle 0,1,...,9:</b> We tested a longer cycle. Again, the Region-Layer was able to predict the sequence perfectly.</li>
<li><b>Cycle 0,1,2,3, 0,2,3,1:</b> We tested an ambiguous cycle. At 0, it appears that the next state can be 1 or 2, and similarly, at 3, the next state can be 0 or 1. However, due to the variable order modelling behaviour of the Region-Layer, a single Region-Layer is able to predict this cycle perfectly. Note that first-order prediction cannot predict this sequence correctly.</li>
<li><b>Cycle 0,1,2,3,1,2,4,0,2,3,1,2,1,5,0,3,2,1,4,5:</b> We tested a complex graph of state sequences and again a single Region-Layer was able to predict the sequence perfectly. We were also able to predict this using only first-order modelling and a deep hierarchy.</li>
</ul>
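To make the ambiguity in the third test cycle concrete, here is a toy n-gram-style sketch in Python (entirely separate from the Region-Layer implementation) showing that one state of context is ambiguous, while a few states of context resolve the sequence:

```python
from collections import defaultdict

def learn_transitions(sequence, order):
    """Record the set of observed successor states for every context of
    `order` preceding states."""
    successors = defaultdict(set)
    for i in range(order, len(sequence)):
        context = tuple(sequence[i - order : i])
        successors[context].add(sequence[i])
    return successors

# The third test cycle above, repeated a few times.
cycle = [0, 1, 2, 3, 0, 2, 3, 1] * 4

# With one state of context, some states have two possible successors.
first_order = learn_transitions(cycle, 1)
ambiguous = any(len(s) > 1 for s in first_order.values())

# With three states of context, every observed context has a unique
# successor, so a variable-order model can predict the cycle perfectly.
third_order = learn_transitions(cycle, 3)
resolved = all(len(s) == 1 for s in third_order.values())
```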
<br />
After completion of the unit tests we were satisfied that our Region-Layer component has the ability to efficiently produce variable order models of observed sequences using unsupervised learning, assuming that the states can reliably be detected.<br />
<h2 style="text-align: left;">
Experiments</h2>
Now we come to the harder part. What if each digit exemplar image is ambiguous? In other words, what if each ‘0’ is represented by a randomly selected ‘0’ image from the MNIST dataset? The ambiguity of appearance means that the observed sequences will appear to be non-deterministic.<br />
<br />
We decided to run the following experiments:<br />
<h3 style="text-align: left;">
Experiment 1: Random image classification</h3>
In this experiment there will be no predictable sequence; each digit must be recognized solely by its appearance. The classic setup is used: up to N training passes over the entire MNIST training set, followed by fixing the internal weights and a single pass to calculate the correlation between each active cell in selected hidden layer[s] and the digit labels. Then, a single pass over the test set, recording for each test image the digit label most highly correlated with its set of active hidden cells. The algorithm scores a “correct” result if the most correlated label is the true label.<br />
<br />
<ul style="text-align: left;">
<li>Passes 1-N: Train networks</li>
</ul>
<br />
Present each digit in the training set once, in a random order. Train the internal weights of the algorithm. Repeat several times if necessary.<br />
<br />
<ul style="text-align: left;">
<li>Pass N+1: Measure correlation of hidden layer features with training images.</li>
</ul>
<br />
Present each digit in the training set once, in a random order. Accumulate the frequency with which each active cell is associated with each digit label. After all images have been seen, convert the observed frequencies to correlations.<br />
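This accumulation step can be sketched as follows (a minimal Python sketch; the class name, and the use of simple conditional frequencies as the "correlation", are illustrative choices rather than the exact statistic we use):

```python
from collections import defaultdict

class LabelCorrelator:
    """Accumulate how often each hidden cell is active together with each
    digit label, then normalize each cell's counts into a per-label
    distribution."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, active_cells, label):
        # One training image: credit the label to every active cell.
        for cell in active_cells:
            self.counts[cell][label] += 1

    def correlations(self):
        # Convert raw frequencies into per-cell label proportions.
        corr = {}
        for cell, by_label in self.counts.items():
            total = sum(by_label.values())
            corr[cell] = {lbl: n / total for lbl, n in by_label.items()}
        return corr
```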
<br />
<ul style="text-align: left;">
<li>Pass N+2: Predict label of test images. </li>
</ul>
<br />
Present each digit in the testing set once, in a random order. Use the correlations between cell activity and training labels to predict the most likely digit label given the set of active cells in selected Region-Layer components (they are arranged into a hierarchy).<br />
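The label prediction in this pass can be sketched like this (a hypothetical helper; `correlations` maps each cell to its per-label scores from the previous pass):

```python
def predict_label(active_cells, correlations):
    """Sum each label's correlation score over the active cells and
    return the most correlated label, or None if no cell is known."""
    scores = {}
    for cell in active_cells:
        for label, score in correlations.get(cell, {}).items():
            scores[label] = scores.get(label, 0.0) + score
    return max(scores, key=scores.get) if scores else None
```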
<h3 style="text-align: left;">
Experiment 2: Image classification & sequence prediction</h3>
What if the digit images are not in a random order? We can use the English language to generate a training set of digit sequences. For example, we can take a book, convert each character to a two-digit number and select random appropriate digit images to represent each number.<br />
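The encoding step can be sketched as follows (the particular character-to-number coding here is an arbitrary illustrative choice):

```python
def text_to_digit_sequence(text):
    """Encode text as a stream of single digits: each character becomes
    a two-digit number (00-99) via its index in a fixed toy alphabet,
    and the two digits are emitted in order."""
    alphabet = "abcdefghijklmnopqrstuvwxyz .,"
    digits = []
    for ch in text.lower():
        code = alphabet.find(ch)
        if code < 0:
            continue  # skip characters outside the toy alphabet
        digits.extend([code // 10, code % 10])
    return digits
```

Each emitted digit would then be replaced by a randomly selected MNIST image of that digit to form the training sequence.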
<br />
The motivation for this experiment is to see how the sequence learning can boost image recognition: Our Region-Layer component is supposed to be able to integrate both sequential and spatial information. This experiment actually has a lot of depth because English isn’t entirely predictable - if we use a different book for testing, there’ll be lots of sub-sequences the algorithm has never observed before. There’ll be uncertainty in image appearance and uncertainty in sequence, and we’d like to see how a hierarchy of Region-Layer components responds to both. Our expectation is that it will improve digit classification performance beyond the random image case.<br />
<br />
In the next article, we will describe the specifics of the algorithms we implemented and tested on these problems.<br />
<br />
A final article will present some results.</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-72865004452534138532016-09-09T19:58:00.000+10:002016-09-09T19:58:58.416+10:00The Region-Layer: A building block for AGI<div dir="ltr" style="text-align: left;" trbidi="on">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-4njvhoYqfAg/V85Y9C_IDbI/AAAAAAAAHhg/K1qaWvE8Th0KlL4Ll5mHg8wBBu8zlTVvgCLcB/s1600/RegionLayer.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="https://4.bp.blogspot.com/-4njvhoYqfAg/V85Y9C_IDbI/AAAAAAAAHhg/K1qaWvE8Th0KlL4Ll5mHg8wBBu8zlTVvgCLcB/s640/RegionLayer.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 1: The Region-Layer component. The upper surface in the figure is the Region-Layer, which consists of Cells (small rectangles) grouped into Columns. Within each Column, only a few cells are active at any time. The output of the Region-Layer is the activity of the Cells. Columns in the Region-Layer have similar - overlapping - but unique Receptive Fields - illustrated here by lines joining two Columns in the Region-Layer to the input matrix at the bottom. All the Cells in a Column have the same inputs, but respond to different combinations of active input in particular sequential contexts. Overall, the Region-Layer demonstrates self-organization at two scales: into Columns with unique receptive fields, and into Cells responding to unique (input, context) combinations of the Column's input. </td></tr>
</tbody></table>
<h2 style="text-align: left;">
Introducing the Region-Layer</h2>
<div>
<div>
From our background reading (see <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">here</a>, <a href="http://blog.agi.io/2015/11/how-to-build-general-intelligence.html" target="_blank">here</a>, or <a href="http://blog.agi.io/2015/12/how-to-build-general-intelligence.html" target="_blank">here</a>) we believe that the key component of a general intelligence can be described as a structure of “Region-Layer” components. As the name suggests, these are finite 2-dimensional areas of cells on a surface. They are surrounded by other Region-Layers, which may be connected in a hierarchical manner, and can be sandwiched by other Region-Layers on <i>parallel</i> surfaces, through which additional functionality can be achieved. For example, <a href="http://blog.agi.io/2016/01/how-to-build-general-intelligence.html" target="_blank">one Region-Layer could implement our concept of the Objective system, another Region-Layer the Subjective system</a>. Each Region-Layer approximates a single Layer within a Region of Cortex, part of one vertex or level in a hierarchy. For more explanation of this terminology, see earlier articles on <a href="http://blog.agi.io/2015/05/a-nomenclature-for-cortical-columns-and.html" target="_blank">Layers and Levels</a>.</div>
<div>
<br /></div>
<div>
The Region-Layer has a biological analogue - it is intended to approximate the collective function of two cell populations within a single layer of a <a href="http://blog.agi.io/2015/04/mini-macro-micro-and-hyper-columns.html" target="_blank">cortical macrocolumn</a>. The first population is a set of pyramidal cells, which we believe perform a sparse classifier function of the input; the second population is a set of inhibitory interneuron cells, which we believe cause the pyramidal cells to become active only in particular sequential contexts, or only when selectively dis-inhibited for other purposes (e.g. attention). Neocortex layers 2/3 and 5 are specifically and individually the inspirations for this model: Each Region-Layer object is supposed to approximate the collective cellular behaviour of a patch of just one of these cortical layers.</div>
<div>
<br /></div>
<div>
We assume the Region-Layer is trained by unsupervised learning only - it finds structure in its input without caring about associated utility or rewards. Learning should be continuous and online: the system learns from experience, as an agent does. It should adapt to non-stationary input statistics at any time.</div>
<div>
<br /></div>
<div>
The Region-Layer should be self-organizing: Given a surface of Region-Layer components, they should arrange themselves into a hierarchy automatically. [We may defer implementation of this feature and initially implement a manually-defined hierarchy]. Within each Region-Layer component, the cell populations should exhibit a form of competitive learning such that all cells are used efficiently to model the variety of input observed.</div>
<div>
<br /></div>
<div>
We believe the function of the Region-Layer is <a href="https://www.scribd.com/book/182534736/On-Intelligence" target="_blank">best described by Jeff Hawkins</a>: To find spatial features and predictable sequences in the input, and replace them with patterns of cell activity that are increasingly abstract and stable over time. Cumulative discovery of these features over many Region-Layers amounts to an incremental transformation from raw data to fully grounded but abstract symbols. </div>
<div>
<br /></div>
<div>
Within a Region-Layer, Cells are organized into Columns (see figure 1). Columns are organized within the Region-Layer to optimally cover the distribution of active input observed. Each Column and each Cell responds to only a fraction of the input. Via these two levels of self-organization, the set of active cells becomes a robust, distributed representation of the input.</div>
<div>
<br /></div>
<div>
Given these properties, a surface of Region-Layer components should have nice scaling characteristics, both in response to changing the size of individual Region-Layer column / cell populations and the number of Region-Layer components in the hierarchy. Adding more Region-Layer components should improve input modelling capabilities without any other changes to the system.</div>
<div>
<br /></div>
<div>
So let's put our cards on the table and test these ideas. </div>
<h2 style="text-align: left;">
Region-Layer Implementation</h2>
<h3 style="text-align: left;">
Parameters</h3>
<div>
<div>
For the algorithm outlined below, very few parameters are required. The few that are mentioned are needed merely to describe the resources available to the Region-Layer. In theory, they are not affected by the qualities of the input data. This is a key characteristic of a general intelligence.</div>
</div>
<div>
<ul style="text-align: left;">
<li><div>
RW: Width of region layer in Columns</div>
</li>
<li><div>
RH: Height of region layer in Columns</div>
</li>
<li><div>
CW: Width of column in Cells </div>
</li>
<li><div>
CH: Height of column in Cells</div>
</li>
</ul>
</div>
<h3 style="text-align: left;">
Inputs and Outputs</h3>
<div>
<ul style="text-align: left;">
<li>Feed-Forward Input (FFI): Must be sparse and binary. Size: a matrix of any dimension*.</li>
<li>Feed-Back Input (FBI): Sparse, binary. Size: a vector of any dimension.</li>
<li>Prediction Disinhibition Input (PDI): Sparse, rare. Size: Region Area+.</li>
<li>Feed-Forward Output (FFO): Sparse, binary and distributed. Size: Region Area+.</li>
</ul>
</div>
<div>
* the 2D shape of input[s] may be important for learning receptive fields of columns and cells, depending on implementation.</div>
<div>
<br /></div>
<div>
+ Region Area = CW * CH * RW * RH</div>
<div>
<h3>
Pseudocode</h3>
<div>
<ul></ul>
</div>
</div>
<div>
Here is some pseudocode for iterative update and training of a Region-Layer. Both occur simultaneously.</div>
<div>
<br /></div>
<div>
We also have fully working code. In the next few blog posts we will describe some of our concrete implementations of this algorithm, and the tests we have performed on it. Watch this space!</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">function: UpdateAndTrain( </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> feed_forward_input, </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> feed_back_input, </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> prediction_disinhibition </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// if no active input, then do nothing</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">if( sum( input ) == 0 ) {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> return</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">}</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// Sparse activation</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// Note: Can be implemented via a Quilt[1] of any competitive learning algorithm, </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// e.g. Growing Neural Gas [2], Self-Organizing Maps [3], K-Sparse Autoencoder [4].</span></div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">activity</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">(t)</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> = 0</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">for-each( column c ) {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> // find cell x that most responds to FFI </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> // in current sequential context given: </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> // a) prior active cells in region </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> // b) feedback input.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> x = findBestCellsInColumn( feed_forward_input, feed_back_input, c )</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> activity</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">(t)</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">[ x ] = 1</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">}</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// Change detection</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// if active cells in region unchanged, then do nothing</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">if( activity(t) == activity(t-1) ) {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> return</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">}</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// Update receptive fields to organize columns</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">trainReceptiveFields( </span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">feed_forward_input,</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> </span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">columns )</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
</div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// Update cell weights given column receptive fields</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// and selected active cells</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">trainCells( </span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">feed_forward_input,</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> </span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">feed_back_input,</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> </span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">activity(t) )</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// Predictive coding: output false-negative errors only [5]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">for-each( cell x in region-layer ) {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> coding = 0</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> if( ( </span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">activity(t)</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">[x] == 1 ) and ( prediction</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">(t-1)</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">[x] == 0 ) ) {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> coding = 1</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> }</span></div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> // optional: mute output from region, for attentional gating of hierarchy</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> if( prediction_disinhibition(t)[x] == 0 ) {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> </span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">coding</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> = 0 </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> }</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> output(t)[x] = coding</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">}</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// Update prediction</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// Note: Predictor can be as simple as first-order Hebbian learning. </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// The prediction model is variable order due to the inclusion of sequential </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">// context in the active cell selection step.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">trainPredictor( activity(t), activity(t-1) )</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">prediction(t) = predict( activity(t) )</span></div>
</div>
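The selection and gating rules above can be condensed into a short runnable sketch. This is our own Python gloss on the pseudocode: the array names follow the pseudocode, and the `HebbianPredictor` class (with its threshold) is an illustrative stand-in for the simple first-order predictor the comments describe.

```python
import numpy as np

def region_output(activity_t, prediction_t1, disinhibition_t=None):
    """Predictive-coding output: a cell fires only if it is active now
    but was NOT predicted at the previous step (unpredicted activity)."""
    coding = ((activity_t == 1) & (prediction_t1 == 0)).astype(int)
    # Optional: mute output from the region, for attentional gating of the hierarchy.
    if disinhibition_t is not None:
        coding *= disinhibition_t
    return coding

class HebbianPredictor:
    """First-order Hebbian predictor: accumulates transition counts from
    cells active at t-1 to cells active at t (illustrative only)."""
    def __init__(self, n_cells):
        self.weights = np.zeros((n_cells, n_cells))

    def train(self, activity_t, activity_t1):
        # Strengthen connections from previously-active to currently-active cells.
        self.weights += np.outer(activity_t1, activity_t)

    def predict(self, activity_t, threshold=0.5):
        # Cells driven above threshold by the current activity are predicted next.
        drive = activity_t @ self.weights
        return (drive > threshold).astype(int)

activity = np.array([1, 1, 0, 1])
prediction_prev = np.array([1, 0, 0, 1])
print(region_output(activity, prediction_prev))  # only the unpredicted active cell fires
```

As in the pseudocode, the prediction model becomes variable-order only because sequential context is folded into the active-cell selection step, not because the predictor itself is sophisticated.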
<div>
<br /></div>
[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.1401&rep=rep1&type=pdf<br />
[2] https://papers.nips.cc/paper/893-a-growing-neural-gas-network-learns-topologies.pdf<br />
[3] http://www.cs.bham.ac.uk/~jxb/NN/l16.pdf<br />
[4] https://arxiv.org/pdf/1312.5663<br />
[5] http://www.ncbi.nlm.nih.gov/pubmed/10195184</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-26733888140238160082016-05-18T10:06:00.000+10:002016-05-18T10:06:06.702+10:00Reading list - May 2016<div dir="ltr" style="text-align: left;" trbidi="on">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-W8pzq6v4Iz4/Vzpek2Nph3I/AAAAAAAAHao/zMcra5q5O6A3mWNAIwVBjPh9r-i0Y2iWQCLcB/s1600/error-series.png" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://1.bp.blogspot.com/-W8pzq6v4Iz4/Vzpek2Nph3I/AAAAAAAAHao/zMcra5q5O6A3mWNAIwVBjPh9r-i0Y2iWQCLcB/s640/error-series.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>Digit classification error over time in our experiments. The image isn't very helpful but it's a hint as to why we're excited :)</i></td></tr>
</tbody></table>
<h2>
Project AGI</h2>
A few weeks ago we paused the "How to build a General Intelligence" series (<a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">part 1</a>, <a href="http://blog.agi.io/2015/11/how-to-build-general-intelligence.html" target="_blank">part 2</a>, <a href="http://blog.agi.io/2015/12/how-to-build-general-intelligence.html" target="_blank">part 3</a>, <a href="http://blog.agi.io/2016/01/how-to-build-general-intelligence.html" target="_blank">part 4</a>). We paused it because the next article in the series requires us to specify everything in detail, and we need working code to do that.<br />
<br />
We have been testing our algorithm on a variety of <a href="http://yann.lecun.com/exdb/mnist/" target="_blank">MNIST-derived handwritten digit datasets</a>, to better understand how well it generalizes its representation of digit-images and how it behaves when exposed to varying degrees of predictability. Initial results look promising: We will post everything here once we've verified them and completed the first batch of proper experiments. The series will continue soon!<br />
<h2 style="text-align: left;">
Deep Unsupervised Learning</h2>
Our algorithm is a type of Online Deep Unsupervised Learning, so naturally we're looking carefully at similar algorithms.<br />
<br />
We recommend <a href="https://www.youtube.com/watch?v=n1ViNeWhC24" target="_blank">this video of a talk by Andrew Ng</a>. It starts with a good introduction to the methods and importance of feature representation, and touches on types of automatic feature discovery. He looks at some of the important feature detectors in computer vision, such as SIFT and HOG, and shows how feature detectors - such as edge detectors - can emerge from more general pattern recognition algorithms such as sparse coding. For more on sparse coding see <a href="http://blog.shakirm.com/2016/04/learning-in-brains-and-machines-2/" target="_blank">Shakir's excellent machine learning blog</a>.<br />
<br />
For anyone struggling to intuit deep feature discovery, I also loved <a href="https://news.ycombinator.com/item?id=11483934&utm_term=comment" target="_blank">this Hacker News comment</a>, which nicely illustrates how and why deep networks discover useful features, and why the depth helps.<br />
<br />
The latter part of the video covers Ng's latest work on deep hierarchical sparse coding, built from sparse autoencoders trained greedily layer-by-layer in the style of Deep Belief Networks. He reports benchmark-beating results on video activity and phoneme recognition with this framework. You can find details of his deep unsupervised algorithm here:<br />
<br />
<a href="http://deeplearning.stanford.edu/wiki">http://deeplearning.stanford.edu/wiki</a><br />
<br />
Finally, he presents a plot suggesting that training dataset size is a more important determinant of eventual supervised network performance than algorithm choice! This is a fundamental limitation of supervised learning, where the necessary training data is much more limited than in unsupervised learning (in the latter case, the real world provides a handy training set!)<br />
<div>
<br />
<div style="text-align: left;">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-yi5NJyj9jMw/Vzpf2OgmhkI/AAAAAAAAHaw/XWVs2qG8lFIzYVPXLwN4lQ_VO167jgZ0gCLcB/s1600/training-set-size.png" imageanchor="1" style="margin-left: auto; margin-right: auto; text-align: center;"><img border="0" height="361" src="https://4.bp.blogspot.com/-yi5NJyj9jMw/Vzpf2OgmhkI/AAAAAAAAHaw/XWVs2qG8lFIzYVPXLwN4lQ_VO167jgZ0gCLcB/s640/training-set-size.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>Effect of algorithm and training set size on accuracy. Training set size more significant. This is a fundamental limitation of supervised learning.</i></td></tr>
</tbody></table>
<h2>
Online K-sparse autoencoders (with some deep-ness)</h2>
We've also been reading this <a href="http://arxiv.org/abs/1312.5663" target="_blank">paper by Makhzani and Frey</a> about deep online learning with auto-encoders (a neural network trained with supervised techniques to reconstruct its own input, which makes the overall scheme unsupervised - sometimes described as self-supervised learning). Actually, we've struggled to find any comparison of autoencoders to earlier methods of unsupervised learning, both in terms of computational efficiency and ability to cover the search space effectively. Let us know if you find a paper that covers this.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
The Makhzani paper has some interesting characteristics - the algorithm is online, which means it receives data as a stream rather than in batches. It is also sparse, which we believe is desirable from a representational perspective. </div>
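The core trick of the paper is simple to state: after the linear encoding step, keep only the k largest hidden activations and zero the rest. A minimal sketch (our own illustration, not the paper's code; the names and the tied-weights forward pass are assumptions):

```python
import numpy as np

def k_sparse(hidden, k):
    """Keep only the k largest hidden activations; zero (inhibit) the rest."""
    z = np.zeros_like(hidden)
    top_k = np.argsort(hidden)[-k:]   # indices of the k strongest units
    z[top_k] = hidden[top_k]
    return z

# Toy forward pass of a k-sparse autoencoder with tied weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4)) * 0.1   # 8 hidden units, 4 inputs
x = rng.standard_normal(4)
h = k_sparse(W @ x, k=2)                # sparse hidden code: at most 2 nonzero units
x_hat = W.T @ h                         # linear reconstruction from the sparse code
```

Because only the surviving units receive gradient during training, the learned features end up competing for inputs, which is where the sparsity of the representation comes from.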
<div style="text-align: left;">
<br />
One limitation is that the solution is most likely unable to handle changes in input data statistics (i.e. non-stationary problems). The reason this is an important quality is that in any arbitrarily deep network the typical position of a vertex is between higher and lower vertices. If all vertices are continually learning, the problem being modelled by any single vertex is constantly changing. Therefore, intermediate vertices must be capable of online learning of <a href="https://en.wikipedia.org/wiki/Stationary_process" target="_blank">non-stationary problems</a>, otherwise the network will not be able to function effectively. Makhzani and Frey instead use the greedy layerwise training approach from Deep Belief Networks. The authors describe this approach:</div>
<div style="text-align: left;">
<br />
<i>"4.6. Deep Supervised Learning Results The k-sparse autoencoder can be used as a building block of a deep neural network, using greedy layerwise pre-training (Bengio et al., 2007). We first train a shallow k-sparse autoencoder and obtain the hidden codes. We then fix the features and train another k-sparse autoencoder on top of them to obtain another set of hidden codes. Then we use the parameters of these autoencoders to initialize a discriminative neural network with two hidden layers."</i><br />
<i><br /></i>
The limitation introduced can be thought of as an inability to escape from local minima that result from prior training. This <a href="http://jmlr.org/proceedings/papers/v40/Choromanska15.pdf" target="_blank">paper by Choromanska et al</a> tries to explain why this happens.</div>
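The quoted procedure can be sketched in a few lines: train one autoencoder, freeze its features, encode the data, and train the next layer on the codes. This is a toy linear autoencoder of our own invention, purely illustrative of the greedy layerwise idea - the `train_autoencoder` helper is not from the paper.

```python
import numpy as np

def train_autoencoder(X, n_hidden, lr=0.001, epochs=200, seed=0):
    """Minimal linear autoencoder trained by batch gradient descent."""
    rng = np.random.default_rng(seed)
    W_enc = rng.standard_normal((X.shape[1], n_hidden)) * 0.1
    W_dec = rng.standard_normal((n_hidden, X.shape[1])) * 0.1
    for _ in range(epochs):
        H = X @ W_enc                       # encode
        E = H @ W_dec - X                   # reconstruction error
        grad_dec = H.T @ E                  # gradient w.r.t. decoder weights
        grad_enc = X.T @ (E @ W_dec.T)      # gradient w.r.t. encoder weights
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc

def greedy_layerwise(X, layer_sizes):
    """Train each layer on the (frozen) codes of the layer below."""
    weights, codes = [], X
    for n_hidden in layer_sizes:
        W = train_autoencoder(codes, n_hidden)
        weights.append(W)
        codes = codes @ W                   # fix the features; feed codes upward
    return weights, codes

X = np.random.default_rng(1).standard_normal((50, 10))
weights, top_codes = greedy_layerwise(X, [6, 3])
```

Note that once a layer is frozen, it never revisits its features - which is exactly why this scheme struggles when the input statistics later change.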
<div style="text-align: left;">
<br />
Greedy layerwise training is an attempt to work around the fact that deep networks built from autoencoders cannot effectively handle non-stationary problems.<br />
<br />
For more information here's some papers on deep sparse networks built from autoencoders:<br />
<br />
<ul style="text-align: left;">
<li><a href="http://web.stanford.edu/class/archive/cs/cs294a/cs294a.1104/sparseAutoencoder.pdf">http://web.stanford.edu/class/archive/cs/cs294a/cs294a.1104/sparseAutoencoder.pdf</a></li>
<li><a href="https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2010-73.pdf">https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2010-73.pdf</a></li>
<li><a href="http://www.jmlr.org/proceedings/papers/v22/zhou12b/zhou12b.pdf">http://www.jmlr.org/proceedings/papers/v22/zhou12b/zhou12b.pdf</a></li>
</ul>
<h2 style="text-align: left;">
Variations on Supervised Learning - a Taxonomy</h2>
Back to supervised learning, and the limitation of training dataset size. Thanks to a discussion with Jay Chakravarty we have this brief taxonomy of supervised learning workarounds for insufficient training datasets:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<b>Weakly supervised learning:</b> for poorly labelled training data - e.g. you have object labels for images, but no localization (such as a bounding box) for the object in the image (there might be other objects in the image as well). You could use a Latent SVM to localize the objects in the images while simultaneously learning a classifier for them.</div>
<div style="text-align: left;">
<br />
Another example of weakly supervised learning is that you have a bag of positive samples mixed up with negative training samples, but also have a bag of purely negative samples - you would use Multiple Instance Learning for this.<br />
<br />
<b>Cross-modal adaptation: </b>where one mode of data supervises another - e.g. audio supervises video or vice-versa.<br />
<br />
<b>Domain adaptation:</b> model learnt on one set of data is adapted, in unsupervised fashion, to new datasets with slightly different data distributions.<br />
<br />
<b>Transfer learning:</b> using the knowledge gained in learning one problem on a different, but related problem. Here's a good <a href="https://blogs.nvidia.com/blog/2016/03/25/mapping-poverty-data-gpus/" target="_blank">example of transfer learning, a finalist in the NVIDIA 2016 Global Impact Award</a>. The system learns to predict poverty from day and night satellite images, with very few labelled samples.<br />
<br />
Full paper:<br />
<br />
<a href="http://arxiv.org/pdf/1510.00098v2.pdf">http://arxiv.org/pdf/1510.00098v2.pdf</a><br />
<h2 style="text-align: left;">
Interactive Brain Concept Map</h2>
We enjoyed this <a href="http://gallantlab.org/huth2016/" target="_blank">interactive map of the distribution of concepts within the cortex</a> captured using fMRI and produced by the Gallant Lab (<a href="http://gallantlab.org/index.php/publications/" target="_blank">source papers here</a>).</div>
<div style="text-align: left;">
<br />
Using the map you can find the voxels corresponding to various concepts, which, although perhaps not generalizable due to the small sample size (7), gives you a good idea of the hierarchical structure the brain has produced, and what the intermediate concepts represent.<br />
<br />
Thanks to David Ray @ <a href="http://cortical.io/">http://cortical.io</a> for the link.</div>
<div style="text-align: left;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-z0TB00iIgpQ/Vzpn3m1QNSI/AAAAAAAAHbA/rtBg0ot2Q5IcxBSXuLI4fpMrdrDbgrNfQCLcB/s1600/brain-concept-map.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="262" src="https://4.bp.blogspot.com/-z0TB00iIgpQ/Vzpn3m1QNSI/AAAAAAAAHbA/rtBg0ot2Q5IcxBSXuLI4fpMrdrDbgrNfQCLcB/s400/brain-concept-map.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>Interactive brain concept map</i></td></tr>
</tbody></table>
<div style="text-align: left;">
<h2 style="text-align: left;">
OpenAI Gym - Reinforcement Learning platform</h2>
We also follow the OpenAI project with interest. OpenAI have just released their "Gym" - a platform for training and testing reinforcement learning algorithms. Have a play with it here:<br />
<br />
<a href="https://openai.com/blog/openai-gym-beta/">https://openai.com/blog/openai-gym-beta/</a><br />
<br />
<a href="http://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/" target="_blank">According to Wired magazine</a>, OpenAI will continue to release free and open source software (FOSS) for the wider impact this will have on uptake. <a href="http://mobile.nytimes.com/2016/03/26/technology/the-race-is-on-to-control-artificial-intelligence-and-techs-future.html?referer=https://t.co/jWYZbiAK7A" target="_blank">There are many companies now competing to win market share in this space</a>.</div>
<div style="text-align: left;">
<h2>
The Talking Machines Blog</h2>
<br />
<ul style="text-align: left;">
<li><a href="http://www.thetalkingmachines.com/blog/">http://www.thetalkingmachines.com/blog/</a></li>
</ul>
</div>
<div style="text-align: left;">
We're regular readers of this blog and have been meaning to mention it for months. Worth reading.<br />
<h2>
How the brain generates actions</h2>
<div style="text-align: left;">
A big gap in our knowledge is how the brain generates actions from its internal representation. This new <a href="http://neurosciencenews.com/habit-basal-ganglia-3970/" target="_blank">paper by Vicente et al</a> challenges the established (rather vague) dogma on how the brain generates actions.</div>
</div>
<div style="text-align: left;">
<br />
<i>“We found that contrary to common belief, the indirect pathway does not always prevent actions from being performed, it can actually reinforce the performance of actions. However, the indirect pathway promotes a different type of actions, habits.”</i><br />
<br />
This is probably quite informative for reverse-engineering purposes. <a href="http://dx.doi.org/10.1016/j.cub.2016.02.036" target="_blank">Full paper here</a>.<br />
<h2 style="text-align: left;">
Hierarchical Temporal Memory</h2>
HTM is an online method for feature discovery and representation, and now we have a baseline result for HTM on the famous MNIST numerical digit classification problem. Since HTM works with time-series data, the paper compares HTM to LSTM (Long Short-Term Memory), the leading supervised-learning approach to this problem domain.<br />
<br />
It is also interesting that the paper deals with adaptation to sudden changes in the input data statistics, the very problem that frustrates the deep belief networks described above. </div>
<div style="text-align: left;">
<br />
<a href="http://arxiv.org/abs/1512.05463" target="_blank">Full paper by Cui et al here</a>.<br />
<br />
For a detailed <a href="http://arxiv.org/pdf/1601.06116v1.pdf" target="_blank">mathematical description of HTM see this paper</a> by Mnatzaganian and Kudithipudi.</div>
</div>
</div>
Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-1180536024131440638.post-82372034293266482312016-03-23T10:50:00.002+11:002016-03-23T10:50:28.010+11:00Reading list: Assorted AGI links. March 2016<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="gE iv gt" style="cursor: auto; padding: 12px 0px 3px;">
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/--Dlkcgiqh_0/VvHSi1s4RFI/AAAAAAAAHVI/yfNXHRv8H6QBOW_vWgrQRLKX6Hl2Jr5tg/s1600/minecraft.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="180" src="https://2.bp.blogspot.com/--Dlkcgiqh_0/VvHSi1s4RFI/AAAAAAAAHVI/yfNXHRv8H6QBOW_vWgrQRLKX6Hl2Jr5tg/s320/minecraft.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A Minecraft API is now available to train your AGIs</td></tr>
</tbody></table>
<h2 style="text-align: left;">
Our News</h2>
We are working hard on experiments, and software to run experiments. So this week there is no normal blog post. Instead, here’s an eclectic mix of links we’ve noticed recently.<br /><br /> First, AlphaGo continues to make headlines. Of interest to Project AGI is Yann LeCun <a href="https://www.facebook.com/yann.lecun/posts/10153426023477143" target="_blank">agreeing with us that unsupervised hierarchical modelling is an essential step in building intelligence with humanlike qualities</a> [1]. We also note this <a href="http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-alphago-is-not-ai" target="_blank">IEEE Spectrum post by Jean-Christophe Baillie</a> [2] which argues, <a href="http://blog.agi.io/2016/03/what-after-alphago.html" target="_blank">as we did</a> [3], that we need to start creating embodied agents. <br /><h2 style="text-align: left;">
Minecraft </h2>
</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">
Speaking of which, the <a href="http://www.bbc.com/news/technology-35778288" target="_blank">BBC reports that the Minecraft team are preparing an API for machine learning researchers to test their algorithms in the famous game</a> [4]. The Minecraft team also stress the value of embodied agents and the depth of gameplay and graphics. It sounds like Minecraft could be a crucial testbed for an AGI. We’re always on the lookout for test problems like these.<br /><br />Of course, to play Minecraft well you need to balance local activities - building, mining etc. - with exploration. Another frontier, beyond AlphaGo, is exploration. Monte-Carlo Tree Search (as used in AlphaGo) explores in more limited ways than humans do, <a href="http://cacm.acm.org/blogs/blog-cacm/199663-alphago-is-not-the-solution-to-ai/fulltext" target="_blank">argues John Langford</a> [5].<br /><h2 style="text-align: left;">
Sharing places with robots </h2>
</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">
If robots are going to be embodied, we need to make some changes. <a href="http://www.wired.com/2016/03/self-driving-cars-wont-work-change-roads-attitudes/" target="_blank">Wired magazine says that a few small changes to the urban environment and driver behaviour will make the rollout of autonomous vehicles easier</a> [6]. It’s important to meet the machines halfway, for the benefit of all.<br /><br /><a href="http://arxiv.org/pdf/1603.02199v1.pdf" target="_blank"> This excellent paper on robotic grasping also caught our attention</a> [7]. A key challenge in this area is adaptability to slightly varying circumstances, such as variations in the objects being grasped and their pose relative to the arm. General solutions to these problems will suddenly make robots far more flexible and applicable to a greater range of tasks.</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">
<h2 style="text-align: left;">
Hierarchical Quilted Self-Organizing Maps & Distributed Representations</h2>
Last week I also rediscovered this older <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.1401&rep=rep1&type=pdf" target="_blank">paper on Hierarchical-Quilted Self-Organizing Maps (HQSOMs)</a> [8]. This is close to our hearts because we originally believed this type of representation was the right approach for AGI. With the success of Deep Convolutional Networks (DCNs) it’s worth looking back and noticing the similarities between the two. While HQSOM is purely unsupervised learning (a plus - see the comment from Yann LeCun above), DCNs are trained by supervised techniques. However, both methods use small, overlapping, independent units - analogous to biological cortical columns - to classify different patches of the input. The overlapping and independent classifiers lead to robust and distributed representations, which is probably the reason these methods work so well. <br /><br /> Distributed representation is one of the key features of Hawkins’ Hierarchical Temporal Memory (HTM). Fergal Byrne has recently published <a href="http://arxiv.org/pdf/1509.08255v2.pdf" target="_blank">an updated description of the HTM algorithm</a> [9] for those interested.<br /><br /> We at Project AGI believe that a grid-like “region” of columns employing a <a href="https://en.wikipedia.org/wiki/Winner-take-all_(computing)" target="_blank">“Winner-Take-All” policy</a> [10], with overlapping input receptive fields, can produce a distributed representation. Different regions are then connected together into a tree-like structure (acyclic). The result is a hierarchy. Not only does this resemble the state-of-the-art methods of DCNs, but there’s a lot of biological evidence for this type of representation too. <a href="http://journal.frontiersin.org/article/10.3389/fnana.2010.00017/full" target="_blank">This paper by Rinkus</a> [11] describes columnar features arranged into a hierarchy, with winner-take-all behaviour implemented via local inhibition. 
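<br /><br /> A winner-take-all column of this kind can be sketched in a couple of lines - the cells in each column compete, and local inhibition silences all but the strongest (a toy illustration of the policy, not Rinkus's model):

```python
import numpy as np

def winner_take_all(responses):
    """responses: (n_columns, cells_per_column) array of cell activations.
    Local inhibition keeps only the strongest cell per column active."""
    active = np.zeros_like(responses)
    winners = responses.argmax(axis=1)               # strongest cell in each column
    active[np.arange(responses.shape[0]), winners] = 1  # all other cells inhibited
    return active

r = np.array([[0.2, 0.9, 0.1],
              [0.5, 0.4, 0.6]])
print(winner_take_all(r))  # one winner per column
```

Because exactly one cell per column is active, a region of many overlapping columns yields a sparse, distributed code over the whole input.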
<br /><br /> Rinkus says: <i>“Saying only that a group of L2/3 units forms a WTA CM places no a priori constraints on what their tuning functions or receptive fields should look like. This is what gives that functionality a chance of being truly generic, i.e., of applying across all areas and species, regardless of the observed tuning profiles of closely neighboring units.”</i><br /><h2 style="text-align: left;">
Reinforcement Learning </h2>
But unsupervised learning can’t be the only form of learning. We also need to consider consequences, and so we need reinforcement learning to take account of these. As Yann said, the “cherry on the cake” (this is probably understating the difficulty of the RL component, but right now it seems easier than creating representations).<br /><br /><a href="http://blog.shakirm.com/2016/02/learning-in-brains-and-machines-1/" target="_blank"> Shakir’s Machine Learning blog has a great post exploring the biology of reinforcement learning</a> [12] within the brain. This is a good overview of the topic and useful for ML researchers wanting to access this area.<br /><br />But regular readers of this blog will remember that we’re obsessed with unfolding or inverting abstract plans into concrete actions. We found <a href="https://www.researchgate.net/profile/Masanori_Murayama/publication/277144323_A_Top-Down_Cortical_Circuit_for_Accurate_Sensory_Perception/links/556839e008aec22683011a30.pdf" target="_blank">a great paper by Manita et al</a> [13] that shows biological evidence for the translation and propagation of an abstract concept into sensory and motor areas, where it can assist with perception. This is the hierarchy in action.<br /><h2 style="text-align: left;">
Long Short-Term Memory (LSTM)</h2>
One more tack before we finish. Thanks to Jay for this link to <a href="https://devblogs.nvidia.com/parallelforall/deep-learning-nutshell-sequence-learning/" target="_blank">NVIDIA’s description of LSTMs</a> [14], an architecture for recurrent neural networks (i.e. the state can depend on the previous state of the cells). It’s a good introduction, but we’re still fans of <a href="http://www.overcomplete.net/papers/nn2012.pdf" target="_blank">Monner’s Generalized LSTM </a>[15].<br /><h2 style="text-align: left;">
Fun thoughts</h2>
Now let’s end with something fun. Wired magazine again, describing watching AlphaGo as <a href="http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/" target="_blank">our first taste of a superhuman intelligence</a> [16]. Although this is a “narrow” intelligence, not a general one, it has qualities beyond anything we’ve experienced in this domain before. What’s more, watching these machines can make us humans better, without any nasty bio-engineering:<br /><br /><i> “But as hard as it was for Fan Hui to lose back in October and have the loss reported across the globe—and as hard as it has been to watch Lee Sedol’s struggles—his primary emotion isn’t sadness. As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn’t see before. And that makes him happy. “So beautiful,” he says. “So beautiful.”</i><br /><h2 style="text-align: left;">
References</h2>
[1] <a href="https://www.facebook.com/yann.lecun/posts/10153426023477143">https://www.facebook.com/yann.lecun/posts/10153426023477143</a><br /><br />[2] <a href="http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-alphago-is-not-ai">http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-alphago-is-not-ai</a><br /><br />[3] <a href="http://blog.agi.io/2016/03/what-after-alphago.html">http://blog.agi.io/2016/03/what-after-alphago.html</a><br /><br />[4] <a href="http://www.bbc.com/news/technology-35778288">http://www.bbc.com/news/technology-35778288</a><br /><br />[5] <a href="http://cacm.acm.org/blogs/blog-cacm/199663-alphago-is-not-the-solution-to-ai/fulltext">http://cacm.acm.org/blogs/blog-cacm/199663-alphago-is-not-the-solution-to-ai/fulltext</a><br /><br />[6] <a href="http://www.wired.com/2016/03/self-driving-cars-wont-work-change-roads-attitudes/">http://www.wired.com/2016/03/self-driving-cars-wont-work-change-roads-attitudes/</a><br /><br />[7] <a href="http://arxiv.org/pdf/1603.02199v1.pdf">http://arxiv.org/pdf/1603.02199v1.pdf</a><br /><br />[8] <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.1401&rep=rep1&type=pdf">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.1401&rep=rep1&type=pdf</a><br /><br />[9] <a href="http://arxiv.org/pdf/1509.08255v2.pdf">http://arxiv.org/pdf/1509.08255v2.pdf</a><br /><br />[10] <a href="https://en.wikipedia.org/wiki/Winner-take-all_(computing)">https://en.wikipedia.org/wiki/Winner-take-all_(computing)</a><br /><br />[11] <a href="http://journal.frontiersin.org/article/10.3389/fnana.2010.00017/full">http://journal.frontiersin.org/article/10.3389/fnana.2010.00017/full</a><br /><br />[12] <a href="http://blog.shakirm.com/2016/02/learning-in-brains-and-machines-1/">http://blog.shakirm.com/2016/02/learning-in-brains-and-machines-1/</a><br /><br />[13] <a 
href="https://www.researchgate.net/profile/Masanori_Murayama/publication/277144323_A_Top-Down_Cortical_Circuit_for_Accurate_Sensory_Perception/links/556839e008aec22683011a30.pdf">https://www.researchgate.net/profile/Masanori_Murayama/publication/277144323_A_Top-Down_Cortical_Circuit_for_Accurate_Sensory_Perception/links/556839e008aec22683011a30.pdf</a><br /><br />[14] <a href="https://devblogs.nvidia.com/parallelforall/deep-learning-nutshell-sequence-learning/">https://devblogs.nvidia.com/parallelforall/deep-learning-nutshell-sequence-learning/</a><br /><br />[15] <a href="http://www.overcomplete.net/papers/nn2012.pdf">http://www.overcomplete.net/papers/nn2012.pdf</a><br /><br />[16] <a href="http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/">http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/</a><br /><br /><br /><br /><br /></div>
</div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-73318555540181729862016-03-10T22:11:00.002+11:002016-03-10T22:11:34.628+11:00What's after AlphaGo?<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
What's AlphaGo?</h2>
<div>
<a href="https://deepmind.com/alpha-go.html" target="_blank">AlphaGo</a> is a system that can play <a href="https://en.wikipedia.org/wiki/Go_(game)" target="_blank">Go</a> at least as well as the best humans. Go was widely cited as the hardest (and only remaining) classic board game at which humans could beat machines, so <a href="https://googleblog.blogspot.com.au/2016/01/alphago-machine-learning-game-go.html" target="_blank">this is a big deal</a>. <a href="http://www.theguardian.com/technology/2016/mar/09/google-deepmind-alphago-ai-defeats-human-lee-sedol-first-game-go-contest" target="_blank">AlphaGo has just defeated a top-ranked human expert</a>.</div>
<h2>
<ul style="font-size: medium; font-weight: normal;">
<li><a href="http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html" target="_blank">AlphaGo Nature paper</a> (Silver et al 2016)</li>
</ul>
</h2>
<h2>
Why is Go hard?</h2>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-gJ9mAH9x0Fs/VuAdBjBIBPI/AAAAAAAAHNI/qE-XbY5bn2s/s1600/go.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-gJ9mAH9x0Fs/VuAdBjBIBPI/AAAAAAAAHNI/qE-XbY5bn2s/s1600/go.jpg" /></a></div>
<div>
Go is hard because the search-space of possible moves is so large that tree search and pruning techniques, <a href="https://www.research.ibm.com/deepblue/meet/html/d.3.2.html" target="_blank">such as those used to beat humans at Chess</a>, won't work - or at least, they won't work well enough, with a feasible amount of memory, to play Go better than the best humans. </div>
<div>
<br /></div>
<div>
Instead, to play Go well, you need to have "intuition" rather than brute search power: To look at the board and spot local (or gross) patterns that represent opportunities or dangers. And in fact, AlphaGo is able to play in this way. It beat the next best computer algorithm "Pachi" 85% of the time without any tree search - just predicting the best action based on its interpretation of the current state. The authors of the AlphaGo Nature paper say:<br />
<i><br /></i>
<i>“During the match against Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov; compensating by selecting those positions more intelligently, using the policy network, and evaluating them more precisely, using the value network—an approach that is perhaps closer to how humans play.”</i></div>
<h2 style="text-align: left;">
How does AlphaGo work?</h2>
<div>
<div>
AlphaGo is trained by both <a href="https://en.wikipedia.org/wiki/Supervised_learning" target="_blank">supervised</a> and <a href="https://en.wikipedia.org/wiki/Reinforcement_learning" target="_blank">reinforcement learning</a>. Supervised learning feedback comes from recordings of moves in expert games. However, these are finite in size and, used naively, would lead to <a href="https://en.wikipedia.org/wiki/Overfitting" target="_blank">overfitting</a>. </div>
<div>
<br /></div>
<div>
Instead, in AlphaGo a Supervised Learning deep neural network learns to model and predict expert behaviour in the recorded games, via conventional deep learning techniques. Then, a reinforcement learning network is used to generate reward data for novel games that AlphaGo plays against itself! This mitigates the limited size of the supervised learning dataset.</div>
<div>
<br /></div>
<div>
Of course, AlphaGo also wants to play <i>better</i> than the best play observed in the training data. To achieve this, the reinforcement learning network is further trained by playing pairs of these networks against each other - mixing the pairs up to prevent the policies overfitting each other. This is a really clever feature because it allows AlphaGo to go beyond its training data.</div>
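A minimal sketch of this self-play idea, with a single strength number standing in for a whole policy network and a noisy strength comparison standing in for a full game of Go - both illustrative assumptions, not AlphaGo's actual training procedure:

```python
import random

random.seed(0)

def play(p, q):
    """Hypothetical game: returns +1 if the policy with strength p beats the
    one with strength q. A real system would simulate full games of Go; a
    noisy strength comparison stands in so the loop is runnable."""
    return 1 if random.random() < p / (p + q) else -1

# Train by self-play against a pool of earlier policy snapshots, rather than
# always the latest network, so the current policy cannot overfit one
# opponent's habits.
policy = 1.0                  # imagine: a strength initialised by supervised learning
pool = [policy]               # frozen snapshots of past selves
for step in range(1, 501):
    opponent = random.choice(pool)    # mix up the pairings
    policy = max(0.01, policy + 0.05 * play(policy, opponent))  # crude policy-gradient stand-in
    if step % 50 == 0:
        pool.append(policy)           # periodically freeze a snapshot

print(f"final strength {policy:.2f}, opponent pool size {len(pool)}")
```

The key design choice is the opponent pool: sampling past selves keeps the training signal diverse, so improvement reflects genuinely stronger play rather than exploiting one fixed opponent.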
<div>
<br /></div>
<div>
Note also that the neural networks cannot possibly fully represent a sufficiently deep tree of board outcomes within their limited set of weights. Instead, the network has to learn to represent good and bad situations with limited resources. It has to form its own representation of the most salient features, during training.</div>
<div>
<br /></div>
<div>
The neural networks function without pre-defined rules specific to Go; instead they have learned from training data collected from many thousands of human and simulated games.</div>
</div>
<h2 style="text-align: left;">
Key advances</h2>
<div>
AlphaGo is an important advance because it is able to make good judgments about play situations based on a lossy interpretation held within a finitely-sized deep neural network.<br />
<br />
What’s more, AlphaGo wasn’t simply taught to copy human experts - it went further, and improved, by playing against itself.</div>
<div>
<h2>
So, what doesn't it do?</h2>
</div>
<div>
The techniques used in deep neural networks have recently been scaled to work effectively on a wide range of problems. In some subject areas, <a href="https://www.technologyreview.com/s/600889/google-unveils-neural-network-with-superhuman-ability-to-determine-the-location-of-almost/" target="_blank">narrow AIs are reaching superhuman performance</a>. However, it is not clear that these techniques will scale indefinitely. Problems such as <a href="http://neuralnetworksanddeeplearning.com/chap5.html" target="_blank">vanishing gradients</a> have been pushed back, but not necessarily eliminated.</div>
<div>
<br /></div>
<div>
Much greater scale is needed to get intelligent agents into the real world without them being immediately smashed by cars or stuck in holes. But already, it is time to consider what features or characteristics constitute an artificial general intelligence (AGI), beyond raw intelligence (which narrow AIs now demonstrate).</div>
<div>
<br /></div>
<div>
AlphaGo isn't a <i>general </i>intelligence; it's designed specifically to play Go. Sure, it's trained rather than programmed manually, but it was designed for this purpose. The same techniques are likely to generalize to many other problems, but they'll need to be applied thoughtfully and retrained.</div>
<div>
<br /></div>
<div>
AlphaGo isn't an <i>Agent</i>. It doesn't have any sense of self, or intent, and its behaviour is pretty static - its policies would probably work the same way in all similar situations, learning only very slowly. You could say that it doesn't have moods, or other transient biases. Maybe this is a good thing! But this also limits its ability to respond to dynamic situations.</div>
<div>
<br /></div>
<div>
AlphaGo doesn't have any desire to explore, to seek novelty or to try different things. AlphaGo couldn't ever choose to teach itself to play Go because it found it interesting. On the other hand, AlphaGo <i>did</i> teach itself to play Go… </div>
<div>
<br /></div>
<div>
All in all, it's a very exciting time to study artificial intelligence!</div>
<div>
<br /></div>
<div>
<i>by David Rawlinson & Gideon Kowadlo</i></div>
</div>
<h2>Some interesting finds: Acyclic hierarchical modelling and sequence unfolding (2016-02-09)</h2><div dir="ltr" style="text-align: left;" trbidi="on">
This week we have a couple of interesting links to share.<br />
<br />
From our experiments with generative hierarchical models, we <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">claimed</a> that the model produced by feed-forward processing should not have loops. Now we have discovered a paper by Bengio et al titled "<a href="http://arxiv.org/pdf/1502.04156v2.pdf" target="_blank">Towards biologically plausible deep learning</a>" [1] that supports this claim. The paper looks for biological mechanisms that mimic key features of deep learning. The credit assignment problem - ensuring each weight is updated correctly in response to its contribution to the overall output of the network - is probably the most difficult feature to substantiate, but the paper does leave me thinking it's plausible.<br />
<br />
Anyway the reason I'm talking about it is this quote:<br />
<br />
<i>"There is strong biological evidence of a distinct pattern of connectivity between cortical areas that distinguishes between “feedforward” and “feedback” connections (Douglas et al., 1989) at the level of the microcircuit of cortex (i.e., feedforward and feedback connections do not land in the same type of cells). Furthermore, the feedforward connections form a directed acyclic graph with nodes (areas) updated in a particular order, e.g., in the visual cortex (Felleman and Essen, 1991)."</i><br />
<br />
This says that the feedforward modelling process (which we believe is constructing a hierarchical model) is a directed acyclic graph (DAG) - which means it does not have loops, as we predicted. Secondly, it is another source claiming that the representation produced is hierarchical (in this case, a DAG). The cited work is a much older paper - "<a href="http://www.ncbi.nlm.nih.gov/pubmed/1822724" target="_blank">Distributed hierarchical processing in the primate cerebral cortex</a>" [2]. We're still reading, but there's a lot of good background information here.<br />
<br />
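The DAG property has a simple operational form: a depth-first search over the feedforward edges should never find a back-edge. A small sketch, using a toy edge list with visual-area-style names purely for illustration:

```python
def is_dag(edges):
    """Depth-first search for cycles: the feed-forward claim predicts that
    inter-area "feedforward" connections pass this test."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / done
    state = dict.fromkeys(graph, WHITE)

    def visit(node):
        state[node] = GREY
        for nxt in graph[node]:
            if state[nxt] == GREY:        # back-edge: found a loop
                return False
            if state[nxt] == WHITE and not visit(nxt):
                return False
        state[node] = BLACK
        return True

    return all(visit(n) for n in graph if state[n] == WHITE)

visual = [("V1", "V2"), ("V2", "V4"), ("V4", "IT"), ("V1", "V4")]
print(is_dag(visual))                     # True: the hierarchy has no loops
print(is_dag(visual + [("IT", "V1")]))    # False: one feedback-as-feedforward edge breaks it
```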
The second item to look at this week is a <a href="http://viewer.gorilla-repl.org/view.html?source=gist&id=95da4401dc7293e02df3&filename=seq-replay.clj" target="_blank">demo by Felix Andrews featuring temporal pooling</a> [3] and sequence unfolding. "Unfolding" means transforming the pooled sequence representation back into its constituent parts - i.e. turning a sequence into a series of steps.<br />
<br />
Felix demonstrates that high-level sequence selection can successfully be used to track and predict through observation of the corresponding lower-level sequence. This is achieved by causing the high-level sequence to predict all steps, and then tracking through the predicted sequence using first-order predictions in the lower level. Both levels are necessary - the high level prediction provides guidance for the low-level to ensure it predicts correctly through forks. The low level prediction keeps track of what's next in the sequence.<br />
<br />
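A minimal sketch of this two-level unfolding, using set intersection as a stand-in for the HTM mechanics in Felix's demo (the sequence labels and steps are invented). The low level's first-order prediction is ambiguous at the fork after "B", but intersecting it with the high level's pooled prediction of all the sequence's steps resolves it:

```python
# Two learned sequences fork after "B": a first-order prediction alone
# cannot decide what follows, but high-level guidance can.
sequences = {"seq1": ["A", "B", "C", "D"],
             "seq2": ["A", "B", "X", "Y"]}

def first_order(prev):
    """Low level: every step ever observed to follow `prev`."""
    return {b for seq in sequences.values()
              for a, b in zip(seq, seq[1:]) if a == prev}

def unfold(label):
    """Turn a pooled sequence back into its series of steps: the high level
    predicts the whole set of steps, the low level tracks through them."""
    high_level = set(sequences[label])      # unordered prediction of all steps
    steps = [sequences[label][0]]
    while True:
        candidates = first_order(steps[-1]) & high_level   # fork disambiguated
        if not candidates:
            return steps
        steps.append(next(iter(candidates)))

print(unfold("seq1"))   # ['A', 'B', 'C', 'D']
print(unfold("seq2"))   # ['A', 'B', 'X', 'Y']
```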
[1] "Towards Biologically Plausible Deep Learning" Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein and Zhouhan Lin (2015) <a href="http://arxiv.org/pdf/1502.04156v2.pdf">http://arxiv.org/pdf/1502.04156v2.pdf</a><br />
<div>
<br />
[2] "Distributed hierarchical processing in the primate cerebral cortex" Felleman DJ, Van Essen DC (1991) <a href="http://www.ncbi.nlm.nih.gov/pubmed/1822724">http://www.ncbi.nlm.nih.gov/pubmed/1822724</a><br />
<br />
[3] Felix Andrews HTM temporal pooling and sequence unfolding demo <a href="http://viewer.gorilla-repl.org/view.html?source=gist&id=95da4401dc7293e02df3&filename=seq-replay.clj">http://viewer.gorilla-repl.org/view.html?source=gist&id=95da4401dc7293e02df3&filename=seq-replay.clj</a></div>
</div>
<h2>Intuition over reasoning for AI (2016-02-03)</h2>
By Gideon Kowadlo<br />
<br />
I’m reading a fascinating book called <a href="http://righteousmind.com/">The Righteous Mind</a>, by <a href="https://en.wikipedia.org/wiki/Jonathan_Haidt">Jonathan Haidt</a>. It’s one of those reads that can fundamentally shift the way that you see the world. In this case, the human world, everyone around you, and yourself. <br />
<br />
A central idea of the book is that our behaviour is mainly dictated by intuition rather than reasoning and that both are aspects of cognition. <br />
<br />
Many will be able to identify, in themselves and others, the tendency to act first and rationalise later - even though it feels like the opposite. But more than that, our sense of morality arises from intuition, and it enables us to act quickly and make good decisions. <br />
<br />
A compelling biological correlate is the ventromedial prefrontal cortex. The way it enables us to use emotion/intuition for decision making is described well in this passage:<br />
<br />
<div>
<blockquote class="tr_bq">
<span style="font-size: small;">
Damasio had noticed an unusual pattern of symptoms in patients who had suffered brain damage to a specific part of the brain - the ventromedial (i.e., bottom-middle) prefrontal cortex (abbreviated vmPFC; it’s the region just behind and above the bridge of the nose). Their emotionality dropped nearly to zero. They could look at the most joyous or gruesome photographs and feel nothing. They retained full knowledge of what was right and wrong, and they showed no deficits in IQ. They even scored well on Kohlberg’s tests of moral reasoning. Yet when it came to making decisions in their personal lives and at work, they made foolish decisions or no decisions at all. They alienated their families and their employers, and their lives fell apart.
<br /><br />
Damasio’s interpretation was that gut feelings and bodily reactions were necessary to think rationally, and that one job of the vmPFC was to integrate those gut feelings into a person’s conscious deliberations. When you weigh the advantages and disadvantages of murdering your parents … you can’t even do it, because feelings of horror come rushing in through the vmPFC.
<br /><br />
But Damasio’s patients could think about anything, with no filtering or coloring from their emotions. With the vmPFC shut down, every option at every moment felt as good as every other. The only way to make a decision was to examine each option, weighting the pros and cons using conscious verbal reasoning. If you’ve ever shopped for an appliance about which you have few feelings - say, a washing machine - you know how hard it can be once the number of options exceeds six or seven (which is the capacity of our short-term memory). Just imagine what your life would be like if at every moment, in every social situation, picking the right thing to do or say became like picking the best washing machine among ten options, minute after minute, day after day. You’d make foolish decisions too.</span></blockquote>
</div>
<div>
<span style="font-size: small;"><br /></span>Our aim has always been to build a general reasoning machine that can be scaled up. We aren’t interested in building an artificial human, which carries the legacy of a long evolution through many incarnations. <br />
<br />
This is the first time I’ve considered the importance of building intuition into the algorithm as a fundamental component. ‘Gut’ reactions are not to be underestimated. It may be the only way to make effective AGI, not to mention the need to create ‘pro-social’ agents with which we can interact in daily life.<br />
<br />
It is possible though, that this is an adaptation to the limitations of our reasoning, rather than a fundamentally required feature. If the intelligence were implemented in silicon and not bound by 'cognitive effort' in the same way that we are, it could potentially select favourable actions efficiently based on intellectual reasoning, without the ‘intuition’.<br />
<br />
This is fascinating to think about in terms of human intelligence and behaviour. It raises exciting questions about the nature of intelligence itself and the relationship between cognition and both reasoning and intuition. We’ll be sure to consider these questions as we continue to develop an algorithm for AGI.<br />
<br />
<h4>
Addendum</h4>
From a functional perspective the vmPFC appears to be a separate parallel ‘component’ that is richly connected to many other brain areas.<br />
<br />
<blockquote class="tr_bq">
<span style="font-size: small;">"The ventromedial prefrontal cortex is connected to and receives input from the <a href="https://en.m.wikipedia.org/wiki/Ventral_tegmental_area">ventral tegmental area</a>, <a href="https://en.m.wikipedia.org/wiki/Amygdala">amygdala</a>, the <a href="https://en.m.wikipedia.org/wiki/Temporal_lobe">temporal lobe</a>, the <a href="https://en.m.wikipedia.org/wiki/Olfactory_system">olfactory system</a>, and the <a href="https://en.m.wikipedia.org/wiki/Medial_dorsal_nucleus">dorsomedial thalamus</a>. It, in turn, sends signals to many different brain regions including; The temporal lobe, amygdala, the <a href="https://en.m.wikipedia.org/wiki/Lateral_hypothalamus">lateral hypothalamus</a>, the <a href="https://en.m.wikipedia.org/wiki/Hippocampal_formation">hippocampal formation</a>, the <a href="https://en.m.wikipedia.org/wiki/Cingulate_cortex">cingulate cortex</a>, and certain other regions of the <a href="https://en.m.wikipedia.org/wiki/Prefrontal_cortex">prefrontal cortex</a>.<a href="https://en.m.wikipedia.org/wiki/Ventromedial_prefrontal_cortex#cite_note-Carlson-4">[4]</a> This huge network of connections affords the vmPFC the ability to receive and monitor large amounts of sensory data and to affect and influence a plethora of other brain regions, particularly the amygdala."</span> </blockquote>
Wikipedia, <a href="https://en.wikipedia.org/wiki/Ventromedial_prefrontal_cortex">Ventromedial prefrontal cortex</a></div><h2>How to build a General Intelligence: An interpretation of the biology (2016-01-24)</h2><div dir="ltr" style="text-align: left;" trbidi="on">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-1-LudUN-bqo/VqSvtDpT2rI/AAAAAAAAG5M/cDh7mRD-hvM/s1600/outline.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="245" src="http://1.bp.blogspot.com/-1-LudUN-bqo/VqSvtDpT2rI/AAAAAAAAG5M/cDh7mRD-hvM/s320/outline.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 1: Our interpretation of the Thalamocortical system as 3 interacting sub-systems (objective, subjective and executive). The structure of the diagram indicates the dominant direction of information flow in each system. The objective system is primarily concerned with feed-forward data flow, for the purpose of building a representation of the actual agent-world system. The executive system is responsible for making desired future agent-world states a reality. When predictions become observations, they are fed back into the objective system. The subjective system is circular because its behaviour depends on internal state as much as external. The subjective system builds a filtered, subjective model of observed reality, that also represents objectives or instructions for the executive. This article will describe how this model fits into the structure of the Thalamocortical system.</td></tr>
</tbody></table>
<br />
<i>Authors: David Rawlinson and Gideon Kowadlo</i><br />
<br />
This is part 4 of our series on how to build an artificial general intelligence (AGI).<br />
<br />
<ul style="text-align: left;">
<li><a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">Part 1</a>: An overview of hierarchical general intelligence</li>
<li><a href="http://blog.agi.io/2015/11/how-to-build-general-intelligence.html" target="_blank">Part 2</a>: Reverse engineering (the physical perspective - cells and layers - and the logical perspective - a hierarchy).</li>
<li><a href="http://blog.agi.io/2015/12/how-to-build-general-intelligence.html" target="_blank">Part 3</a>: Circuits and pathways; we introduced our canonical cortical micro-circuit and fitted pathways to it.</li>
</ul>
<br />
In this article, part 4, we will try to interpret all the information provided so far, fitting what we know about biological general intelligence to our theoretical expectations.<br />
<h2 style="text-align: left;">
Systems</h2>
We believe cortical activity can be usefully interpreted as 3 integrated systems. These are:<br />
<br />
<ul style="text-align: left;">
<li>Objective system</li>
<li>Subjective system</li>
<li>Executive system</li>
</ul>
<br />
So, what are these systems, why are they needed and how do they work?<br />
<h2 style="text-align: left;">
Objective System</h2>
We theorise that the purpose of the objective system is to construct a hierarchical, generative model of both the external world and the actual state of the agent. This includes internal plans & goals already executed or in progress. From our <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">conceptual overview</a> of General Intelligence we think that this representation should be <a href="http://blog.agi.io/2014/12/sparse-distributed-representations-sdrs_24.html" target="_blank">distributed, compositional</a> and therefore robust, and able to model novel situations immediately and meaningfully.<br />
<br />
The objective system models varying timespans depending on the level of abstraction, but events are anchored to the current state of the world and agent. Abstract events may cover long periods of time - for example, “I made dinner” might be one conceptual event.<br />
<br />
We propose that the objective system is implemented by pyramidal cells in layers 2/3 and by <a href="http://journal.frontiersin.org/article/10.3389/neuro.01.1.1.002.2007/full#h5" target="_blank">spiny excitatory cells in layer 4</a>. Specifically, we suggest that the purpose of the spiny excitatory cells is primarily dimensionality reduction, by performing a classifier function, analogous to the ‘Spatial Pooling’ function of <a href="http://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf" target="_blank">Hawkins’ HTM theory</a>. This is supported by analysis of C4 spiny stellate connectivity: <a href="http://www.jneurosci.org/content/23/7/2961.long" target="_blank">“... spiny stellate cells act predominantly as local signal processors within a single barrel...”</a>. We believe the pyramidal cells are more complex and have two functions. First, they perform dimensionality reduction by requiring a set of active inputs on specific apical (distal) dendrite branches to be simultaneously observed before the apical dendrite can output a signal (an action potential). Second, they use basal (proximal) dendrites to identify the sequential context in which the apical dendrite has become active. Via a local competitive process, pyramidal cells learn to become active only when observing a set of specific input patterns in specific historical contexts.<br />
<br />
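The proposed classifier function of the C4 cells might be sketched as a k-winners-take-all competition, loosely in the spirit of HTM Spatial Pooling. This is an illustrative toy, not a biological model: the random receptive fields stand in for learned dendritic connectivity:

```python
import random

random.seed(1)
INPUTS, CELLS, K = 64, 16, 2

# Each cell gets a fixed random receptive field over the input bits,
# standing in for learned connectivity.
fields = [set(random.sample(range(INPUTS), 12)) for _ in range(CELLS)]

def pool(active_inputs):
    """Dimensionality reduction by local competition: score each cell by the
    overlap between its receptive field and the active inputs, then let only
    the top K cells fire (k-winners-take-all)."""
    overlaps = [len(field & active_inputs) for field in fields]
    winners = sorted(range(CELLS), key=lambda c: -overlaps[c])[:K]
    return set(winners)

pattern = set(random.sample(range(INPUTS), 20))   # 20 active input bits...
code = pool(pattern)                              # ...reduced to K active cells
print(f"{len(pattern)} active inputs -> {len(code)} active cells")
```

Many active inputs are replaced by a small set of active cells, each representing a particular sub-pattern.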
The output of pyramidal cells in C2/3 is routed via the Feed-Forward Direct pathway to a <a href="http://shermanlab.uchicago.edu/files/RP155-CONEUR498.pdf" target="_blank">“higher” or more abstract cortical region, where it enters in C4</a> (or in some parts of the Cortex, <a href="http://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf" target="_blank">C2/3 directly</a>). In this “higher” region, the same classifier and context recognition process is repeated. If C4 cells are omitted, we have less dimensionality reduction and a greater emphasis on sequential or historical context.<br />
<br />
We propose these pyramidal cells only output along their axons when they become active without entering a “predicted” state first. Alternatively, interneurons could play a role in inhibiting cells via prediction to achieve the same effect. If pyramidal cells only produce an output when they make a False-Negative prediction error (i.e. they fail to predict their active state), output is equivalent to Predictive Coding (<a href="http://blog.agi.io/2014/10/on-predictive-coding-and-temporal.html" target="_blank">link</a>, <a href="http://blog.agi.io/2014/11/toward-universal-cortical-algorithm.html" target="_blank">link</a>). Predictive Coding produces an output that is more stable over time, which is a form of <a href="http://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf" target="_blank">Temporal Pooling as proposed by Numenta</a>.<br />
<br />
To summarize, the computational properties of the objective system are:<br />
<br />
<ol style="text-align: left;">
<li>Replace simultaneously active inputs with a smaller set of active cells representing particular sub-patterns, and</li>
<li>Replace predictable sequences of active cells with a false-negative error coding to transform the output into a simpler sequence of prediction errors</li>
</ol>
<br />
These functions will achieve the stated purpose of <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">incrementally transforming input data into simpler forms with accumulating invariances</a>, while propagating (rather than hiding) errors, for further analysis in other Columns or cortical regions. In combination with a tree-like hierarchical structure, higher Columns will process data with increasing breadth and stability over time and space.<br />
<br />
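Function 2 above - outputting only false-negative prediction errors - can be sketched in a few lines (the cell names are invented for illustration):

```python
def predictive_code(actual, predicted):
    """Output only the false-negative errors: cells that became active
    without having been predicted. Perfectly predicted activity is silent,
    so a familiar sequence propagates as a short, stable error stream."""
    return actual - predicted

stream = [
    ({"a", "b"}, set()),        # nothing predicted yet: everything is novel
    ({"b", "c"}, {"b", "c"}),   # fully predicted: no output at all
    ({"c", "d"}, {"c"}),        # only "d" was unpredicted
]
outputs = [predictive_code(actual, predicted) for actual, predicted in stream]
# outputs == [{"a", "b"}, set(), {"d"}]
```

Predictable sequences collapse to near-silence, while surprises propagate upward for further analysis - a simple form of temporal pooling.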
The Feed-Forward direct pathway is not filtered by the Thalamus. This means that Columns always have access to the state of objective system pyramidal cells in lower columns. This could explain the phenomenon that we can process data without being aware of it (aka “<a href="https://en.wikipedia.org/wiki/Blindsight" target="_blank">Blindsight</a>”); essentially the objective system alone does not cause conscious attention. This is a very useful quality, because it means the data required to trigger a change in attention is available throughout the cortex. The <a href="http://protoscience.wikia.com/wiki/Phenomenal_and_Access_Conciousness" target="_blank">“access” phenomenon</a> is well documented and rather mysterious; the organisation of the cortex into objective and subjective systems could explain it.<br />
<br />
Another purpose of the objective system is to ensure internal state cannot become detached from reality. This can easily occur in <a href="https://en.wikipedia.org/wiki/Graphical_model" target="_blank">graphical models</a>, when <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">cycles form that exclude external influence</a>. To prevent this, we believe that the roles of feed-forward and feed-back input must be separated to break the cycles. However, C2/3 pyramidal cells’ dendrites receive both feed-forward (from C4) and feed-back input (via C1).<br />
<br />
One way that this problem might be avoided is by different treatment of feed-forward and feed-back input, so that the latter can be discounted when it is contradicted by feed-forward information. There is evidence that <a href="http://www.pnas.org/content/111/40/14332.full" target="_blank">feed-forward and feedback signals are differently encoded</a>, which would make this distinction possible.<br />
<br />
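One way to picture this asymmetric treatment (purely illustrative; the dictionary of named features is an invented stand-in for neural state):

```python
def combine(feed_forward, feedback):
    """Asymmetric merge: feedback fills in expectations about features, but
    feed-forward evidence overrides it wherever the two contradict, so the
    model cannot become detached from observed reality."""
    merged = dict(feedback)        # start from fed-back expectations...
    merged.update(feed_forward)    # ...then let observation win every conflict
    return merged

state = combine(feed_forward={"edge": True},
                feedback={"edge": False, "face": True})
# state == {"edge": True, "face": True}: feedback's "face" expectation
# survives, but its contradicted "edge" belief is discounted.
```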
We speculate that the set of states represented by the cells in C2/3 could be defined only using feed-forward input, and that the purpose of feedback data in the objective system is restricted to improved prediction, because feedback contains state information from a larger part of the hierarchy (see figure 2).<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-BmQB30PslS8/VqSzbPX9MiI/AAAAAAAAG5Y/sntu_DRmwv8/s1600/feedback.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="167" src="http://3.bp.blogspot.com/-BmQB30PslS8/VqSzbPX9MiI/AAAAAAAAG5Y/sntu_DRmwv8/s320/feedback.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 2: The benefit of feedback. This figure shows part of a hierarchy. The hierarchy structure is defined by the receptive fields of the columns (shown as lines between cylinders, left). Each Column has receptive fields of similar size. Moving up the hierarchy, Columns receive increasingly abstract input with a greater scope, being at the top of a pyramid of lower Columns whose receptive fields collectively cover a much larger area of input. Feedback has the opposite effect, summarizing a much larger set of Column states from elsewhere and higher in the hierarchy. Of course there is information loss during these transfers, but all data is fully represented somewhere in the hierarchy.</td></tr>
</tbody></table>
So although the objective system makes use of feedback, the hierarchy it defines should be predominantly determined by feed-forward information. The feed-forward direct pathway (see figure 3) enables the propagation of this data and consequently the formation of the hierarchy.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-lkVg1jL3aS8/VqS0cPCK3vI/AAAAAAAAG5g/dtYejpu7IsY/s1600/Neocortical%2Bcircuit%2BFF-direct.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://4.bp.blogspot.com/-lkVg1jL3aS8/VqS0cPCK3vI/AAAAAAAAG5g/dtYejpu7IsY/s640/Neocortical%2Bcircuit%2BFF-direct.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 3: Feed-Forward Direct pathway within our canonical cortical micro-circuit. Data travels from C4 to C2/3 and then to C4 in a higher Column. This pattern is repeated up the hierarchy. This pathway is not filtered by the Thalamus or any other central structure, and note that it is largely uni-directional (except for feedback to improve prediction accuracy). We propose this pathway implements the Objective System, which aims to construct a hierarchical generative model of the world and the agent within it.</td></tr>
</tbody></table>
<h2 style="text-align: left;">
Subjective System</h2>
We think that the subjective system is a selectively filtered model of both external and internal state including filtered predictions of future events. We propose that filtering of input constitutes selective attention, whereas filtering of predictions constitutes action selection and intent. So, the system is a subjective model of reality, rather than an objective one, and it is used for both perception and planning simultaneously.<br />
<br />
The time span encompassed by the system includes a subset of both present and future event-concepts, but as with the objective system, this may represent a long period of real-world time, depending on the abstraction of the events (for example, “now” I am going to work, and “next” I will check my email [in 1 hour’s time]).<br />
<br />
It makes good sense to have two parallel systems, one filtered (subjective) and one not (objective). Filtering external state reduces distraction and enhances focus and continuity. Filtering of future predictions allows selected actions to be maintained and pursued effectively, to achieve goals.<br />
<br />
In addition to events the agent can control, it is important to be aware of negative outcomes outside the agent’s control. Therefore the state of the subjective system must include events with both positive and negative reward outcomes. There is a big difference between a subjective model and a goal-oriented planning model. The subjective system should represent all outcomes, but preferentially select positive outcomes for execution.<br />
<br />
The subjective system represents potential future states, both internal and external. It does not necessarily represent reality; it represents a biased interpretation of intended or expected outcomes based on a biased interpretation of current reality! These biases and omissions are useful; they provide the ability to “imagine” future events by serially “predicting” a pruned tree of potential futures.<br />
<br />
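This serial “prediction” of a pruned tree of futures resembles a beam search over a learned transition model. A toy sketch, with an entirely invented transition model and reward values:

```python
# Toy transition model: from each state, the predicted next states and how
# positive each outcome feels (reward). All names and numbers are invented.
model = {
    "home":  [("work", 0.2), ("beach", 0.6)],
    "work":  [("paid", 0.9), ("meeting", -0.3)],
    "beach": [("sunburn", -0.5), ("swim", 0.7)],
}

def imagine(state, depth, beam=2):
    """Serially "predict" a pruned tree of futures: expand predicted
    successors, keeping only the `beam` most promising branches per step."""
    futures = [([state], 0.0)]
    for _ in range(depth):
        expanded = []
        for path, value in futures:
            for nxt, reward in model.get(path[-1], []):
                expanded.append((path + [nxt], value + reward))
        if not expanded:
            break
        expanded.sort(key=lambda pv: -pv[1])
        futures = expanded[:beam]          # prune: keep the best branches only
    return futures

for path, value in imagine("home", depth=2):
    print(" -> ".join(path), f"(value {value:+.1f})")
```

Note that pruning keeps both positive and, where salient, negative branches in view: the representation is biased toward promising futures, not blind to bad ones.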
More speculatively, differences between the subjective and objective systems may be the cause of phenomena such as selective awareness and <a href="http://protoscience.wikia.com/wiki/Phenomenal_and_Access_Conciousness" target="_blank">“access” consciousness</a>.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-DeRXznldJQI/VqS0xnfMFZI/AAAAAAAAG5o/IoVRED6pFhg/s1600/Neocortical%2Bcircuit%2BFF-indirect.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://4.bp.blogspot.com/-DeRXznldJQI/VqS0xnfMFZI/AAAAAAAAG5o/IoVRED6pFhg/s640/Neocortical%2Bcircuit%2BFF-indirect.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 4: Feed-Forward Indirect pathway, particularly involved in the Subjective system due to its influence on C5. The Thalamus is involved in this pathway, and is believed to have a gating or filtering effect. Data flows from the Thalamus to C4, to C2/3, to C5 and then to a different Thalamic nuclei that serves as the input gateway to another cortical Column in a different region of the Cortex. We propose that the Feed-Forward Indirect pathway is a major component of the subjective system.</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-LM91ZVCv2RY/VqS09CwZS0I/AAAAAAAAG5w/DT-VAEGPllY/s1600/Neocortical%2Bcircuit%2Binhibit%2Bloop.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://3.bp.blogspot.com/-LM91ZVCv2RY/VqS09CwZS0I/AAAAAAAAG5w/DT-VAEGPllY/s640/Neocortical%2Bcircuit%2Binhibit%2Bloop.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 5: The inhibitory micro-circuit, which we suggest makes the subjective system subjective! The red highlight shows how the Thalamus controls activity in C5 by activating inhibitory cells in C4. The circuit is completed by C5 pyramidal cells driving C6 cells that modulate the activity of the same Thalamic nuclei that selectively activate C5.</td></tr>
</tbody></table>
The subjective system primarily comprises C5 (where subjective states are represented) and the Thalamus (which controls subjectivity), but it draws input from the objective system via C2/3. The latter provides context and defines the role and scope (within the hierarchy) of C5 cells in a particular column. Between each cortical region (and therefore every hierarchy level), input to the subjective system is filtered by the Thalamus (figure 5). This implements the selection process. The Feed-Forward Indirect pathway includes these Thalamo-Cortical loops.<br />
<br />
We suggest the Thalamus implements selection within C5 using special cells in C4 that are activated by axons (outputs) from the Thalamus (see figure 6). These inhibitory C4 cells target C5 pyramidal cells and inhibit them from becoming active. Therefore, thalamic axons are both informative (“this selection has been made”) and executive (the axon drives inhibition of selected C5 pyramidal cells).<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-dJq2GtC6it8/VqS1pvrgpQI/AAAAAAAAG54/ONZRFYvH8NM/s1600/c4inhibit.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="500" src="http://2.bp.blogspot.com/-dJq2GtC6it8/VqS1pvrgpQI/AAAAAAAAG54/ONZRFYvH8NM/s640/c4inhibit.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 6: Thalamocortical axons (afferents) are shown driving inhibitory cells in C4 (leftmost green cell) that in turn inhibit pyramidal cells in C5 (red). They also provide information about these selections to other layers, including C2/3. When a selection has been made, it becomes objective rather than subjective, hence provision of a copy to C2/3. <a href="http://changelog.ca/quote/2013/09/02/many_types_of_neurons" target="_blank">Image source</a>.</td></tr>
</tbody></table>
Note that selection may be a process of selective dis-inhibition rather than direct control: Selection alone may not be enough to activate the C5 cells. Instead, C5 pyramidal cells likely require both selection by the Thalamus, and feed-forward activation via input from C2/3. The feed-forward activation could occur anywhere within a window of time in which the C5 cell is “selected”. This would relax timing requirements on the selection task, making control easier; you only need to ensure that the desired C5 cell is disinhibited when the right contextual information arrives from other sources (such as C2/3). This also ensures C5 cell activation fits into its expected sequence of events and doesn’t occur without the right prior context.<br />
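This gating rule is easy to express as a toy simulation. The sketch below (in Python; the `C5Cell` class, the discrete time steps and the window length are purely illustrative assumptions, not biology) shows a C5 cell that only fires when thalamic disinhibition and matching C2/3 context coincide:

```python
class C5Cell:
    """Toy C5 pyramidal cell: fires only if it is currently disinhibited
    (selected by the Thalamus) AND matching feed-forward C2/3 input arrives."""

    def __init__(self, window=3):
        self.window = window          # time steps the selection stays open
        self.selected_until = -1      # last step at which the cell is disinhibited

    def select(self, t):
        """Thalamic disinhibition opens a window of opportunity."""
        self.selected_until = t + self.window

    def step(self, t, c23_match):
        """Active only when selection and matching C2/3 context coincide."""
        return c23_match and t <= self.selected_until

cell = C5Cell(window=3)
cell.select(t=0)                        # disinhibited at t=0, window open until t=3
print(cell.step(2, c23_match=True))     # True: context arrived inside the window
print(cell.step(5, c23_match=True))     # False: the window has closed
print(cell.step(1, c23_match=False))    # False: no feed-forward support
```

Note that only the coincidence matters, not the exact order of events - which is exactly what relaxes the timing requirements on the selection task.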
<br />
C5 also benefits from informational feedback from higher regions and neighbouring cells that help to define unique contexts for the activation of each cell.<br />
<br />
We suggest that C5 pyramidal cells are similar to C2/3 pyramidal cells but with some differences in the way the cells become active. Whereas C2/3 cells require both matching input via the apical dendrites and valid historical input to the basal dendrites to become active, C5 cells additionally need to be disinhibited for full activation to occur.<br />
<br />
As mentioned in the <a href="http://blog.agi.io/2015/12/how-to-build-general-intelligence.html" target="_blank">previous article</a>, output from C5 cells sometimes drives motors very directly, so full activation of C5 cells may immediately result in physical actions. We can consider C5 to be the “output” layer of the cortex. This makes sense if the representation within C5 includes selected future states.<br />
<br />
Management of C5 activity will require a lot of inhibition; we would expect most of the input connections to C5 to be inhibitory because in every context, for every potential outcome, there are many alternative outcomes that must be inhibited (ignored). At any given time, only a sparse set of C5 cells would be fully active, but many more would be potentially-active (available for selection).<br />
<br />
Given predictive encoding and filtering inhibition, it would be common for few pyramidal cells to be active in a Column at any time. Separately, we would expect objective C2/3 pyramidal activity to be more consistent and repeatable than subjective C5 pyramidal activity, given a constant external stimulus.<br />
<h2 style="text-align: left;">
Executive System</h2>
So far we have defined a mechanism for generating a hierarchical representation and a mechanism for selectively filtering activity within that representation. In our original <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">conceptual look</a> at general intelligence, we also desired that filtering predictions would be equivalent to action selection. But if we have selected predictions of future actions at various levels of abstraction within the hierarchy, how can we make these abstract prediction-actions actually happen?<br />
<br />
The purpose of the executive system is to execute hierarchical plans reliably. As previously discussed, this is no trivial matter due to problems such as <a href="http://blog.agi.io/2014/12/agency-and-hierarchical-action-selection.html" target="_blank">vanishing agency at higher hierarchy levels</a>. If a potential future outcome represented within the subjective system is selected for action, the job of the executive system is to make it occur.<br />
<br />
We know that we want abstract concepts at high levels within the hierarchy to be faithfully translated into their equivalent patterns of activity at lower levels. Moving towards more concrete forms would result in increasing activity as the incremental dimensionality reduction of the feed-forward hierarchy is reversed.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-bv1-E0eyxmY/VqS3mbdFNWI/AAAAAAAAG6I/-k6VMbx0U-Q/s1600/flow.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="99" src="http://2.bp.blogspot.com/-bv1-E0eyxmY/VqS3mbdFNWI/AAAAAAAAG6I/-k6VMbx0U-Q/s320/flow.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 7: Differences in dominant direction of data flow between objective and executive systems. Whereas the Objective system builds increasingly abstract concepts of greater breadth, the Executive system is concerned with decomposing these concepts into their many constituent parts, so that hierarchically-represented plans can be executed.</td></tr>
</tbody></table>
We also know that we need to <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">actively prioritize execution of a high level plan</a> over local prediction / action candidates in lower levels. So, we are looking for a cascade of activity from higher hierarchy levels to lower ones.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-wpincok8zrU/VqS39aXYT3I/AAAAAAAAG6Q/7w8Kdrpk2ys/s1600/Neocortical%2Bcircuit%2BFB-control.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://3.bp.blogspot.com/-wpincok8zrU/VqS39aXYT3I/AAAAAAAAG6Q/7w8Kdrpk2ys/s640/Neocortical%2Bcircuit%2BFB-control.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 8: One of two Feed-Back direct pathways. This pathway may well be involved in cascading control activity down the hierarchy towards sensors and motors. Activity propagates from C6 to C6 directly; C6 modulates the activity of local C5 cells and relevant Thalamic nuclei that activate local C5 cells by selective disinhibition in conjunction with matching contextual information from C2/3.</td></tr>
</tbody></table>
It turns out that such a system does exist: The feed-back direct pathway from C6 to C6. Cortex layer 6 is directly connected to Cortex layer 6 in the hierarchy levels immediately below. What’s more, these connections are direct, i.e. unfiltered (which is necessary to avoid the vanishing agency problem). Note that C5 (the subjective system) is still the output of the Cortex, particularly in motor areas. C6 must modulate the activity of cells in C5, biasing C5 to particular predictions (selections) and thereby implementing a cascading abstract plan. Finally, C6 also modulates the activity of Thalamic nuclei that are responsible for disinhibiting local C5 cells. This is obviously necessary to ensure that the Thalamus doesn’t override or interfere with the execution of a cascading plan already selected at a higher level of abstraction.<br />
<br />
Our theory is that ideally, all selections originate centrally (e.g. in the Thalamus). When C5 cells are disinhibited and then become predicted, an associated set of local C6 cells is triggered to make these C5 predictions become reality.<br />
<br />
These C6 cells have a number of modulatory outputs to achieve this goal:<br />
<br />
<ul style="text-align: left;">
<li>They remove local competition by directly inhibiting C5 cells in the local Column (see <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2885865/" target="_blank">link</a>, <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2826182/" target="_blank">link</a>, <a href="http://journal.frontiersin.org/article/10.3389/fnana.2010.00013/full" target="_blank">link</a>)</li>
<li>They drive<a href="http://shermanlab.uchicago.edu/files/RP155-CONEUR498.pdf%20http://www.frontiersin.org/files/cognitiveconsilience/index.html" target="_blank"> C6 cells in the hierarchy level below</a>, in Columns that have a Feed-Forward relationship with the local column. In effect, the execution of the task is delegated to a set of hierarchically-lower C6 cells that are recruited to the task.</li>
<li>They <a href="http://shermanlab.uchicago.edu/files/RP155-CONEUR498.pdf" target="_blank">modulate cells in the Thalamus that provide input to the local Column</a>, to ensure that the desired C5 cells in the local column are <a href="http://www.frontiersin.org/files/cognitiveconsilience/index.html" target="_blank">disinhibited</a>.</li>
</ul>
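As a minimal sketch of these three outputs (Python; the class names, the `id()`-based bookkeeping and the recursive delegation are our illustrative assumptions):

```python
class C5:
    def __init__(self):
        self.inhibited = False   # set by competing C6 cells

class Column:
    def __init__(self, c5_cells):
        self.c5_cells = c5_cells

class Thalamus:
    def __init__(self):
        self.disinhibited = set()
    def disinhibit(self, c5):
        self.disinhibited.add(id(c5))

class C6:
    def __init__(self, column, preferred_c5=(), lower_c6_targets=()):
        self.column = column
        self.preferred_c5 = list(preferred_c5)
        self.lower_c6_targets = list(lower_c6_targets)

def trigger_c6(cell, thalamus):
    # 1. Remove local competition: inhibit rival C5 cells in the same Column.
    for c5 in cell.column.c5_cells:
        if c5 not in cell.preferred_c5:
            c5.inhibited = True
    # 2. Delegate execution to C6 cells one hierarchy level down.
    for lower in cell.lower_c6_targets:
        trigger_c6(lower, thalamus)
    # 3. Modulate the Thalamus so the desired C5 cells stay disinhibited.
    for c5 in cell.preferred_c5:
        thalamus.disinhibit(c5)

# A two-level toy hierarchy: triggering the top C6 cell cascades downwards.
a, b = C5(), C5()
top = C6(column=Column([a, b]), preferred_c5=[a],
         lower_c6_targets=[C6(column=Column([C5()]))])
thalamus = Thalamus()
trigger_c6(top, thalamus)   # b is inhibited; a stays available and disinhibited
```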
<br />
<h3 style="text-align: left;">
Executive Training</h3>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-ctoJtWAXdvs/VqS9_YWS0eI/AAAAAAAAG6o/VsvGgmEXmcE/s1600/training.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-ctoJtWAXdvs/VqS9_YWS0eI/AAAAAAAAG6o/VsvGgmEXmcE/s1600/training.png" /></a></div>
No, this is not a personal development course for CEOs. This section checks whether C6 cells can learn to replay specific action sequences via C5 activity. This is an essential feature of our interpretation, because only C6 cells participate in a direct, modulatory feedback pathway.<br />
<br />
We propose that C6 pyramidal neurons are taught by historical activity in the subjective system. Patterns of subjective activity become available as “stored procedures” (sequences of disinhibition and excitatory outputs) within C6.<br />
<br />
Let’s start by assuming that C6 pyramidal cells have similar functionality to C2/3 and C5 pyramidal cells, due to their common morphology. Assume that C5 cells in motor areas are direct outputs, and when active will cause the agent to take actions without any further opportunity for suppression or inhibition (<a href="http://blog.agi.io/2015/12/how-to-build-general-intelligence.html" target="_blank">see previous article</a>).<br />
<br />
In other cortical areas, we assume that the role of C5 cells is to trigger more abstract “plans” that will be incrementally translated into activity in motor areas, and therefore will also become actions performed by the agent.<br />
<br />
To hierarchically compose more abstract action sequences from simpler ones, we need activity of an abstract C5 cell to trigger a sequence of activity in more concrete C5 cells. C6 cells will be responsible for linking these C5 cells. So, activating a C6 cell should trigger a replay of a sequence of C5 cell activity in a lower Column. How can C6 cells learn which sequences to trigger, and how can these sequences be interpreted correctly by C6 cells in higher hierarchy levels?<br />
<br />
C6 pyramidal cells are <a href="http://journal.frontiersin.org/article/10.3389/fnana.2010.00013/full" target="_blank">mostly oriented</a> with their dendrites pointing towards the more superficial cortex layers C1,...,C5 and their axons emerging from the opposite end. Activity from C5 to C6 is transferred via axons from C5 synapsing with dendrites from C6. Given <a href="http://blog.agi.io/2015/11/new-htm-paper-why-neurons-have.html" target="_blank">a particular model of pyramidal cell learning rules</a>, C6 pyramidal cells will come to recognize patterns of simultaneous C5 activity in a specific sequential context, and C6 interneurons will ensure that unique sets of C6 pyramidal cells respond in each context.<br />
<br />
So how will these C6 cells learn to trigger sequences of C5 cells? We know that the axons of C6 cells <a href="http://journal.frontiersin.org/article/10.3389/fnana.2010.00013/full" target="_blank">bend around</a> and <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2826182/" target="_blank">reach up into C5</a>, down to the Thalamus and directly to hierarchically-lower C6 cells. At all targets they can be <a href="http://journal.frontiersin.org/article/10.3389/neuro.01.1.1.002.2007/full#h5" target="_blank">excitatory or inhibitory</a>.<br />
<br />
All we need beyond this is for C6 axons to seek out target cells that become active immediately after the originating C6 cell is stimulated by active C5 cells. This will cause each C6 cell to trigger the C5 and C6 cells that are observed to be activated afterwards. Note that we require that the C6 cells themselves be organised into sequences (technically, a graph of transitions).<br />
<br />
Target seeking by axons is known as “<a href="https://en.m.wikipedia.org/wiki/Axon_guidance" target="_blank">Axon Guidance</a>” and C6 pyramidal cells’ axons do seem to <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0082954" target="_blank">target electrically active cells by ceasing growth when activity is detected</a>. We have not yet found biological evidence for the predicted timing.<br />
<br />
C6 axons can also target C4 inhibitory cells (<a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2885865/" target="_blank">evidence</a>) and <a href="http://europepmc.org/articles/PMC3462588" target="_blank">Thalamic cells</a>, which again is compatible with our interpretation, as long as they are cells that become active after the originating C6 cell. If we want to “replay” some activity that followed a particular C6 cell, then all the cells described above should be excited or inhibited to ensure that the same events occur again. Activating a C6 cell directly should reproduce the same outcome as incidental activation of the C6 cell via C5 - a chain of sequential inhibition and promotion will result. Note that the same learning rule could work to discover all axon targets mentioned.<br />
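The predicted learning rule - grow connections to whichever cells fire on the very next time step - can be sketched in a few lines of Python (the sets-of-active-cells representation is an assumption for illustration):

```python
from collections import defaultdict

def learn_successor_targets(activity):
    """activity: a list of sets of cell ids, one set per time step.
    Each cell 'grows axons' towards the cells active on the very next step,
    mimicking axon guidance that halts growth on detecting activity."""
    targets = defaultdict(set)
    for t in range(len(activity) - 1):
        for src in activity[t]:
            targets[src] |= activity[t + 1]
    return targets

# After observing one rehearsal, triggering 'c6_a' points at its successors.
history = [{"c6_a"}, {"c5_x", "c5_y"}, {"c5_z"}]
links = learn_successor_targets(history)
print(sorted(links["c6_a"]))   # ['c5_x', 'c5_y']
```

Replaying the learned links from any starting cell then reproduces the observed chain of activation, whether the targets are C5 cells, lower C6 cells or inhibitory C4 cells.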
<br />
Collectively, the C6 cells within a Column will become a repertoire of “stored procedures” that can be triggered and replayed by a cascade of activity from higher in the hierarchy or by direct selection via C5. C6 cells would behave the same way whether activated by local C5 cells, or by C6 cells in the hierarchy level above. This allows cascading, incremental execution of hierarchical plans.<br />
<br />
C6 cells do not need to replace sequences of C5 cell activity with a single C6 cell (i.e. label replacement for symbolic encoding), but they do need to collectively encode transitions between chains of C5 cells, individually trigger at least one C5 cell, and collectively allow a single C6 cell to trigger a sequence of C6 cells in both the current and lower hierarchy regions.<br />
<br />
C6 interneurons can resolve conflicts when multiple C6 triggers coincide within a column. We can expect C6 interneurons to inhibit competing C6 pyramidal cells until the winners are found, resulting in a locally consistent plan of action.<br />
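Interneuron-mediated conflict resolution can be approximated as a winner-take-all competition over trigger strengths. A sketch (the scalar strengths and the `k` parameter are illustrative assumptions):

```python
def resolve_conflicts(triggers, k=1):
    """triggers: dict mapping a C6 pyramidal cell id to its summed trigger
    strength. Interneurons suppress all but the k strongest candidates,
    yielding a locally consistent plan of action."""
    winners = set(sorted(triggers, key=triggers.get, reverse=True)[:k])
    return {cell: (cell in winners) for cell in triggers}

active = resolve_conflicts({"reach": 0.9, "withdraw": 0.4, "hold": 0.7}, k=1)
print([c for c, on in active.items() if on])   # ['reach']
```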
<br />
As with layers C2/3 and C5, C6 inhibitory interneurons will also support training C6 pyramidal cells for collective coverage of the space of observed inputs, in this case from C5 and C2/3.<br />
<h3 style="text-align: left;">
Bootstrapping</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-fSqwVvAEn54/VqS-IFFLmgI/AAAAAAAAG6w/Sd6rTspf2E8/s1600/boots.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="175" src="http://2.bp.blogspot.com/-fSqwVvAEn54/VqS-IFFLmgI/AAAAAAAAG6w/Sd6rTspf2E8/s200/boots.png" width="200" /></a></div>
Now we are only left with a <a href="https://en.wikipedia.org/wiki/Bootstrapping" target="_blank">bootstrapping</a> problem: How can the system develop itself? Specifically, how do the sequences of C5 activity come to be defined so that they can be learned by C6?<br />
<br />
We suggest that conscious choice of behaviour via the Thalamus is used to build the hierarchical repertoire from simple primitive actions to increasingly sophisticated sequences of fine control. Initially, thalamic filtering of C5 state would be used to control motor outputs directly, without the involvement of C6. Deliberate practice and repetition would provide the training for C6 cells to learn to encode particular sequences of behaviour, making them part of the repertoire available to C6 cells in hierarchically “higher” Columns.<br />
<br />
Initially, concentration is needed to perform actions via direct C5 selections; these activities need to be carefully and centrally coordinated using selective attention. However, once C6 has learnt to encode these sequences, they become more reliable and require less effort to execute, needing only a trigger to a single C6 cell.<br />
<br />
After training, only minimal thalamic interventions are needed to execute complex sequences of behaviour learned by C6 cells. Innovation can continue by combining procedures encoded by C6 with interventions via the Thalamus, which can still excite or inhibit C5 cells. In most other cases, C6 training is accelerated by the independence of Columns: when a C6 cell learns to control other cells within its Column, this learning remains valid no matter how many higher hierarchy levels are placed on top. By analogy, once you’ve learned to drink from a cup, you don’t need to relearn that skill to drink in restaurants, at home, or at work.<br />
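The bootstrapping progression can be caricatured in a few lines (Python; the `stored_procedures` dictionary and function names are illustrative assumptions): before practice every step needs its own central selection, after practice a single trigger replays the whole sequence:

```python
stored_procedures = {}   # the C6 repertoire: name -> learned C5 sequence

def perform_with_attention(steps, thalamus_select):
    """Novice mode: every step needs its own deliberate thalamic selection."""
    return [thalamus_select(step) for step in steps]

def practice(name, steps):
    """Repetition lets local C6 cells encode the sequence as a stored procedure."""
    stored_procedures[name] = list(steps)

def perform_learned(name):
    """Expert mode: a single C6 trigger replays the entire sequence."""
    return stored_procedures[name]

steps = ["grip cup", "lift", "tilt", "swallow"]
perform_with_attention(steps, lambda s: s)   # four separate central selections
practice("drink", steps)
perform_learned("drink")                     # one trigger, same behaviour
```

Once "drink" is in the repertoire, it is available as a single unit to hierarchically higher Columns - matching the cup analogy above.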
<br />
As C6 learns and starts to play a role in the actions and internal state of the agent, it becomes important to provide the state of C6 to the objective and subjective systems as contextual input.<br />
<br />
Axons from C6 to other, hierarchically lower Columns take two paths: To C6, and to C1. We propose that the copy provided to C1 is used as informational feedback in C2/3 and C5 pyramidal cells (<a href="http://journal.frontiersin.org/article/10.3389/fnana.2010.00013/full" target="_blank">these axons synapse with Pyramidal cell Apical dendrites</a>). We suggest the copy to C6 allows C6 cells to execute plans hierarchically, by delegating execution to a number of more concrete C6 cells. Therefore, the feedback direct pathway from C6 to C6 is part of the executive system. These axons should synapse on cell bodies, or nearby, to inhibit or trigger C6 activation artificially (rather than via C5).<br />
<h3 style="text-align: left;">
Interpretation of the Thalamus</h3>
Rather than as merely a <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2753250/" target="_blank">relay</a>, we propose that a better concept of the Thalamus is as a control centre. Its job is to centrally control cortical activity in C5 (the subjective system). Abstract activity in C5 is propagated down the hierarchy by C6, and translated into its concrete component states, eventually resulting in specific motor actions. Therefore, via this feedback pathway the filtering performed by the Thalamus assumes an executive role also.<br />
<br />
We believe that filtering predictions of oneself performing an action or experiencing a reward is the <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">mechanism by which objectives and plans are selected</a>. We believe there is only one representation of the world in our heads. There is no separate “goal-oriented” or “action-based” representation. This means that filtering predictions is the mechanism of behaviour generation. Note that in a hierarchical system, you can simultaneously select novel combinations of predictions to achieve innovation without changing the hierarchical model.<br />
<br />
Our interpretation of the Thalamus depends on some theoretical assumptions about how general intelligence works. Crucially, we believe there is no difference between selective awareness of externally-caused and self-generated events, except some of the latter have <a href="http://blog.agi.io/2014/12/agency-and-hierarchical-action-selection.html" target="_blank">agency</a> in the real world via the agent’s actions. This means that selective attention and action selection can both be consequences of the same subjective modelling process.<br />
<br />
But where does selection actually occur?<br />
<br />
For a number of <a href="http://blog.agi.io/2014/12/agency-and-hierarchical-action-selection.html" target="_blank">practical reasons</a>, action and attentional selection should be centralized functions. For one thing, the reward criteria for selecting actions are of much smaller dimension than the cortical representations - for example, the set of possible pain sensations is far more limited than the potential external causes of pain. We essentially need to compare the rewards of all potential actions against each other, rather than against an absolute scale.<br />
<br />
It is also important that conflicts between items competing for attention or execution are resolved so that incompatible plans are replaced by a single clear choice. Conflict resolution is difficult to do in a highly parallel & distributed system; instead, it is preferable to force all alternatives to compete against each other until a few clear winners are found.<br />
<br />
Finally, once an action or attentional target is selected, it should be maintained for a long period (if still relevant), to avoid vacillation. (See Scholarpedia for a good introduction to the <a href="http://www.scholarpedia.org/article/Action_selection" target="_blank">difficulties of conflict resolution and the importance of sticking to a decision for long enough to evaluate it</a>).<br />
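These three requirements - relative comparison, forced competition and decision stickiness - suggest a selector along these lines (a sketch; the `switch_margin` hysteresis parameter is our assumption):

```python
def select(candidates, incumbent=None, switch_margin=0.2):
    """Centralized selection: candidates compete against each other (relative,
    not absolute, reward), and the current choice is kept unless a challenger
    beats it by a clear margin - avoiding vacillation."""
    best = max(candidates, key=candidates.get)
    if incumbent in candidates:
        # Stick with the incumbent unless it is clearly beaten.
        if candidates[best] <= candidates[incumbent] + switch_margin:
            return incumbent
    return best

choice = select({"eat": 0.5, "walk": 0.45})
print(choice)                                                  # eat
print(select({"eat": 0.5, "walk": 0.6}, incumbent=choice))     # eat (sticks)
print(select({"eat": 0.5, "walk": 0.9}, incumbent=choice))     # walk (clear win)
```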
<br />
We believe the Thalamus plays this role via its interactions with the Cortex. It interacts with the Cortex in two ways. First, the Thalamus selectively dis-inhibits particular C5 cells, allowing them to become active when the right circumstances are later observed objectively (i.e. via C2/3, which is not subjective).<br />
<br />
Second, the Thalamus must also co-operate with the Feed-Back cascade via C6. While the Thalamus generates new selections by controlling C5, it must also permit the execution of existing, more abstract Thalamic selections by allowing cascading feedback activity to override local selections. Together, these mechanisms ensure that execution of abstract plans is as easily accomplished as simpler, concrete actions.<br />
<h3 style="text-align: left;">
Interpretation of the Basal Ganglia</h3>
The <a href="https://en.wikipedia.org/wiki/Basal_ganglia" target="_blank">Basal Ganglia</a> are involved in so many distinct functions that they can’t be fully described within this article. They consist of a set of discrete structures located adjacent to the Thalamus.<br />
<br />
In our model, selection is implemented by the Thalamus manipulating the subjective system within the Cortex. We propose that the selections themselves are generated by the Basal Ganglia, which then controls the behaviour of the Thalamus.<br />
<br />
Crucially, we believe the Striatum within the Basal Ganglia uses reward values (such as pleasure and pain) to make adaptive selections. In other words, the Basal Ganglia are responsible for picking good actions, biasing the entire Thalamo-Cortical system towards futures that are expected to be more pleasant for the agent.<br />
<br />
However, to make adaptive choices it is necessary to have accurate context and predictions (candidate actions). The hierarchical model defined within the Cortex is an efficient and powerful source for this data, and in fact, this pathway (Cortex → Basal Ganglia → Thalamus → Cortex) does exist within the brain (see figure 9 below).<br />
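Functionally, this loop resembles a simple value-based selector: the Striatum maintains reward estimates for (context, action) pairs, and the Basal Ganglia pick the candidate that the Thalamus should disinhibit. A sketch (the table-based value store and learning rate are illustrative assumptions, not a model of striatal learning):

```python
reward_estimates = {}   # Striatum: (context, action) -> learned value

def striatum_update(context, action, reward, lr=0.1):
    """Dopamine-like update of the stored value after an outcome."""
    key = (context, action)
    old = reward_estimates.get(key, 0.0)
    reward_estimates[key] = old + lr * (reward - old)

def basal_ganglia_select(context, candidate_actions):
    """Pick the candidate with the highest learned value for this context;
    the Thalamus would then disinhibit the matching C5 cells."""
    return max(candidate_actions,
               key=lambda a: reward_estimates.get((context, a), 0.0))

striatum_update("see ice-cream", "buy it", reward=1.0)
striatum_update("see ice-cream", "walk past", reward=0.1)
print(basal_ganglia_select("see ice-cream", ["buy it", "walk past"]))  # buy it
```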
<br />
Thanks to studies of relevant disorders such as Parkinson’s and Huntington’s, it is known that this pathway is <a href="https://en.wikipedia.org/wiki/Basal_ganglia_disease" target="_blank">associated with behaviour initiation and selection based on adaptive criteria</a>.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-5o6U13rcD84/VqS7oAdR0LI/AAAAAAAAG6c/HKK7QW5-2CQ/s1600/wikipedia_bg.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="393" src="http://3.bp.blogspot.com/-5o6U13rcD84/VqS7oAdR0LI/AAAAAAAAG6c/HKK7QW5-2CQ/s400/wikipedia_bg.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 9: Pathways forming a circuit from Cortex to Basal Ganglia to Thalamus and back to Cortex. <a href="https://en.wikipedia.org/wiki/Basal_ganglia#/media/File:Basal_ganglia_circuits.svg" target="_blank">Image source</a>.</td></tr>
</tbody></table>
<h2 style="text-align: left;">
Lifecycle of an idea</h2>
Using our interpretation of biological general intelligence, we can follow the lifecycle of an idea from conception to execution. Let’s walk through the theorized response to a stimulus, resulting in an action.<br />
<br />
Although the brain is operating constantly and asynchronously, we can define the start of our idea as some sensory data that arrives at the visual cortex. In this example, it’s an image of an ice-cream in a shop.<br />
<h3 style="text-align: left;">
Objective Modelling</h3>
Sensor data propagates unfiltered up the Feed-Forward Direct pathway, activating cells in C4 and C2/3 in numerous cortical areas as it is transformed into its hierarchical form. The visual stimuli become a rich network of associated concepts, including predictions of near-future outcomes, such as experiencing the taste of ice-cream. These concepts represent an objective external reality and are now active and available for attention.<br />
<h3 style="text-align: left;">
Subjective Prediction</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-QZ0B0oLin0E/VqS-Rbi1QWI/AAAAAAAAG64/7FfXIHZbVLA/s1600/icecream.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-QZ0B0oLin0E/VqS-Rbi1QWI/AAAAAAAAG64/7FfXIHZbVLA/s1600/icecream.png" /></a></div>
Activity within the Objective system triggers activity in the Subjective system. Some C5 cells become “predicted”, but are inhibited by the Thalamus. These cells represent potential future actions and outcomes. Things that, from experience, we know are likely to occur after the current situation.<br />
<br />
The Cortex projects data from C2/3 to the Striatum where it is weighted according to reward criteria. A strong response to the flavour of the frozen treat percolates through the Basal Ganglia and manipulates the activity of the Thalamus.<br />
<br />
Between the Thalamus and the Cortex, an iterative negotiation takes place resulting in the selection (via dis-inhibition) of some C5 cells. The Basal Ganglia have learned which manipulations of the Thalamus maximize the expected Reward given the current input from Cortex.<br />
<br />
The way that the Thalamus stimulates particular C5 cells is somewhat indirect. The path of activity to “select” C5 cells in hierarchy level n is C5[n-1] → Thalamus → C4[n] → C5[n]. The signal is re-interpreted at each stage of this pipeline - that is, connections do not carry a specific meaning from point to point. Therefore, you can’t just adjust one “wire” to trigger a particular C5 cell. Rather, you must adjust the inhibition of input to many C4 → C5 cells until you’ve achieved the conditions to “select” a target C5 cell. Many target C5 cells might be simultaneously selected.<br />
<br />
In addition to requiring disinhibition, C5 cells also wait for specific patterns of cell activity in C2/3 prior to becoming “predicted”. This means that it’s very difficult to select a C5 cell that is not “predicted”; it simply doesn’t have the support to out-compete its neighbours in the column and become “selected”. This prevents unrealistic outcomes being “selected”, or output commencing, before the right circumstances have arrived to match the expectation.<br />
<br />
Eventually, a subset of C5 cells become “predicted” and “selected”, representing a subjective model of potential futures for the agent in the world. In this case, the anticipated future involves eating ice-cream.<br />
<h3 style="text-align: left;">
Execution</h3>
When C5 cells become active, they in turn drive C6 pyramidal cells that are responsible for causing the future represented by “contextual, selected & predicted” C5 cells. In this case, C6 cells are charged with executing the high-level plan to “buy some ice-cream and eat it”.<br />
<br />
The plan is embodied by many C5 cells, distributed throughout the hierarchy; each represents a subset of the “<a href="https://en.wikipedia.org/wiki/Qualia" target="_blank">qualia</a>” relating to the eating of ice-cream. C6 cells begin to translate these C5 cells into concrete actions, via the C6-C6 Feed-Back Direct pathway. Crucially, they no longer require the Thalamus to modulate the input that makes C5 cells “selected”. Instead, C6 cells stimulate C5 and C6 cells in hierarchically-lower Columns directly, moving them to “selected” status and allowing them to become active as soon as the corresponding Feed-Forward evidence arrives to match.<br />
<br />
C6 cells also modulate relay cells in the Thalamus, guiding the Thalamus to disinhibit C5 cells in lower hierarchy regions. This helps to ensure the parts of the decomposed plan are executed as intended. In turn, these newly selected “lower” C5 cells drive associated C6 cells, and the plan cascades down the hierarchy.<br />
<br />
Note that the plan is also flowing in the “forward” direction, as it incrementally becomes reality rather than expectation. As motor actions take place, they are sensed and signalled through the Feed-Forward pathways. When C5 cells become “selected”, this information becomes available to higher columns in the hierarchy, if not filtered. This also helps the Feed-Forward Indirect pathway and C6 cells to keep track of activity and execute the plan in a coordinated manner.<br />
<br />
At the lowest levels of the hierarchy, the plan becomes a sequence of motor activity, which is activated by C5 cells directly, and also by other brain components that are not covered by our general intelligence model.<br />
<br />
A few moments later, the ice-cream is enjoyed, triggering a release of Dopamine into the Striatum and reinforcing the rewards associated with recent active Cortical input. Delicious!<br />
<h2 style="text-align: left;">
Summary</h2>
In the previous articles we <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">explored the characteristics</a> of a general intelligence and looked at some of the features we expected it to have. In <a href="http://blog.agi.io/2015/11/how-to-build-general-intelligence.html" target="_blank">part 2</a> and <a href="http://blog.agi.io/2015/12/how-to-build-general-intelligence.html" target="_blank">part 3</a> we reviewed some relevant computational neuroscience research. In this article we’ve described our interpretation of this background material.<br />
<br />
We presented a model of general intelligence built from 3 interacting systems - Objective, Subjective and Executive. We described how these systems could learn and bootstrap via interaction with the world, and how they could be implemented by the anatomy of the brain. As an example, we traced an experience from sensation, through planning and to execution.<br />
<br />
Let’s assume that our understanding of biology is approximately correct. We can use this as inspiration to build an artificial general intelligence with a similar architecture and test whether the systems behave as described in these articles.<br />
<br />
The next article in this series will look specifically at how these concepts could be implemented in software, resulting in a system that behaves much like the one described here.</div>
How to build a General Intelligence: Circuits and Pathways (2015-12-22)<div dir="ltr" style="text-align: left;" trbidi="on">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-JTf-4s3k15U/VniVuLmZZ2I/AAAAAAAAGSo/eQCvlLrvXVs/s1600/consilience.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="457" src="http://3.bp.blogspot.com/-JTf-4s3k15U/VniVuLmZZ2I/AAAAAAAAGSo/eQCvlLrvXVs/s640/consilience.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 1: Our headline image is from the Cognitive Consilience: An atlas of key pathways cross-referenced to supporting literature articles. The complexity and variety of routing within the brain can be appreciated with this beautiful illustration. Note in particular the specialisation of cortical cells and the way this affects their interactions with other cells in the cortex and elsewhere in the brain. <a href="http://www.frontiersin.org/files/cognitiveconsilience/index.html" target="_blank">Explore this fantastic resource yourself</a>.</td></tr>
</tbody></table>
<br />
<i>By David Rawlinson and Gideon Kowadlo</i><br />
<br />
This is part 3 of our series “how to build an artificial general intelligence” (AGI). <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">Part 1</a> was a theoretical look at General Intelligence (follow the link if you don’t know what General Intelligence is).<br />
<br />
We believe that the Thalamo-Cortical system is the origin of General Intelligence in people. In <a href="http://blog.agi.io/2015/11/how-to-build-general-intelligence.html" target="_blank">Part 2</a> we presented very broadly how the Thalamo-Cortical system is structured and organised. We applied some core concepts, such as hierarchy, to help us describe the system.<br />
<br />
We also looked at the cellular structure of the Cortex and in particular introduced Pyramidal cells.<br />
<br />
This article is again about what we can learn from reverse-engineering the Thalamo-Cortical system, but this time from its connectivity, which we present in terms of circuits and pathways.<br />
<h2 style="text-align: left;">
Pathways and Circuits</h2>
A <a href="https://en.wikipedia.org/wiki/Neural_pathway" target="_blank">pathway</a> is a gross pattern of sequential connectivity between brain regions - for example, if part A is highly connected to part B, and activity in A <a href="http://brainu.org/glossary-neuroscience-terms" target="_blank">is followed by</a> activity in B, we say there exists a pathway between A and B. Cells in the Thalamo-Cortical system are connected to each other in quite restricted and specific ways, so these pathways are quite informative.<br />
<br />
Circuits describe, in more specific and precise detail, the <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2570080/" target="_blank">connectivity and functional interaction between neurons</a>. In computational neuroscience there exists a concept called the <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3777738/" target="_blank">Canonical Cortical Micro-Circuit</a>. The <a href="http://journal.frontiersin.org/article/10.3389/fnins.2015.00303/full" target="_blank">specifics of this circuit</a> are <a href="http://www.nature.com/neuro/journal/v18/n2/full/nn.3917.html" target="_blank">not widely agreed</a>, because (a) the Cortex is complex and (b) much of the evidence comes from statistical observations (e.g. “X% of outputs from A and Y% of outputs from B project to region C”), which may obscure fundamental functional or topological features. For example, outputs from A and B may project to cells with exclusive roles that are physically co-located in C; statistical, regional approaches will not capture such distinctions.<br />
<br />
In the neuroscience literature there’s a frustrating habit of selectively reporting supporting details while ignoring others. Perhaps this is simply because it's impossible to describe any part exhaustively. In particular, there is a lot of contradictory information about Cortical circuits. But the research can still shed some light on what is happening. Just don’t expect all sources to be consistent or complete!<br />
<h2 style="text-align: left;">
Key Cortical Pathways</h2>
There are several widely-cited and well established cortical pathways (i.e. routes with at least one end in the Cortex). To understand these, it is important to remember both the <a href="http://blog.agi.io/2015/11/how-to-build-general-intelligence.html" target="_blank">physical and logical structure of the Cortex as described in the previous article</a>. Physically, the cortex is made of layers, and logically, Columns within the Cortex form a hierarchy.<br />
<br />
The hierarchy defines a structure made of Columns, and determines which Columns interact. Pathways describe the patterns of interaction between cells within a Column, and between Columns. We assume that all Columns are functionally identical prior to training.<br />
<br />
Cells within Columns are usually identified by both the physical location of cell bodies within particular Layers in the Column, and by the morphology (shape) of the cell. Data flow to and from Cortical cells is largely restricted to a handful of core pathways that begin and terminate in particular cell types in specific cortical layers.<br />
<br />
There are many descriptions of cortical pathways and circuits in the literature. We will first introduce just 4 well-established cortical-cortical pathways, and then some thalamo-cortical pathways. Note that although the existence of these pathways is unambiguous, their purpose and function are poorly understood. They appear to be <a href="http://web.mit.edu/hst.722/www/Topics/CorticoThalamic/Corticothalamic%20feedback%20and%20sensory%20processing.pdf" target="_blank">consistent across various somatosensory regions of the cortex</a>, <a href="http://www.nature.com/neuro/journal/v18/n12/full/nn.4171.html" target="_blank">especially in comparison to variations in other brain tissues</a>.<br />
<br />
Hawkins’ <a href="http://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf" target="_blank">Hierarchical Temporal Memory</a> (HTM) introduces 3 of the 4 pathways in a single, coherent scheme and relates them to a general intelligence algorithm. We will borrow this terminology and describe them in detail below. Their names describe the direction of data flow and the routing used:<br />
<br />
<b>Feed-Forward Direct Pathway</b>: C2/3 → C4 → C2/3<br />
<b>Feed-Forward Indirect Pathway</b>: C5 → Thalamus → C4 → C2/3<br />
<b>Feed-Back Direct Pathway #1</b>: from C6 → C1<br />
<br />
We are also interested in a second Feed-Back Direct “pathway” implemented by cortically projecting C6 pyramidal cells with axons that terminate <a href="http://www.frontiersin.org/files/cognitiveconsilience/index.html" target="_blank">in both C6 and C1 in hierarchically lower regions</a> (<a href="http://epaxon.blogspot.com.au/2012/08/feedforward-feedback-and-inhibitory.html" target="_blank">see here for a diagram</a>).<br />
<br />
<b>Feed-Back Direct Pathway #2</b>: from C6 → C6<br />
<br />
Note that cells in all cortical layers (except, perhaps, C4) receive input via their dendrites in C1. In other words, feedback from C6 to C1 is then used as input to many layers. Feedback from C6 to C6 is generally not input for other layers.<br />
<br />
In neuroscience, <a href="https://en.wikipedia.org/wiki/Feed_forward_(control)" target="_blank">Feed-Forward</a> usually means the flow of data away from external sources such as sensors (towards greater abstraction, if you believe in a cortical hierarchy). Feed-Back means the opposite - data flow towards regions that have direct interaction with external sensors and motors.<br />
<br />
Direct pathways are so-called because data is routed directly from one cortical column or region to another, without a stop along the way. Indirect pathways are routed via other structures. The “Feed-Forward-Indirect” pathway described by Hawkins is routed via the Thalamus.<br />
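The routes above can be written down as simple data, which also makes the direct/indirect distinction mechanical. This is a descriptive sketch only; “T” stands for the Thalamus and the layer names follow the text:

```python
# The four cortical pathways named above, written as routing tables.
# "T" denotes the Thalamus; "C"-prefixed entries denote cortical layers.
PATHWAYS = {
    "feed_forward_direct":   ["C2/3", "C4", "C2/3"],       # column to higher column
    "feed_forward_indirect": ["C5", "T", "C4", "C2/3"],    # routed via the Thalamus
    "feed_back_direct_1":    ["C6", "C1"],                 # higher column to lower
    "feed_back_direct_2":    ["C6", "C6"],                 # higher C6 to lower C6
}

def is_direct(route):
    """A pathway is 'direct' if it is not routed via the Thalamus."""
    return "T" not in route
```

By this definition, only the Feed-Forward Indirect pathway fails the `is_direct` test.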
<br />
Figure 2, derived from a <a href="http://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf" target="_blank">Hawkins/Numenta publication</a>, shows graphically how information flows between columns and between layers within columns, as part of these 3 pathways according to the HTM theory. As mentioned before, the community is welcome to <a href="http://blog.agi.io/2014/05/thalamocortical-architecture.html" target="_blank">contribute by updating and adding to the figure</a>.<br />
<br />
Hawkins assigns specific roles to these pathways, but we will be re-interpreting them in the next article.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-S69o73PEvHM/U377x8M97NI/AAAAAAAAGP8/dCWK8YngB8Y/s1600/cortical+layers+(2).png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="364" src="http://1.bp.blogspot.com/-S69o73PEvHM/U377x8M97NI/AAAAAAAAGP8/dCWK8YngB8Y/s1600/cortical+layers+(2).png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 2: Routing of 3 core pathways, based on a diagram from the <a href="http://numenta.com/assets/pdf/whitepapers/hierarchical-temporal-memory-cortical-learning-algorithm-0.2.1-en.pdf?" target="_blank">HTM/CLA White Paper</a>. Note the involvement of specific cortical layers with each pathway, and the central role of the Thalamus. The names of the pathways indicate direct (cortex-to-cortex) and indirect (cortex-thalamus-cortex) variants, with direction being either forward (away from external sensors and motors, towards increasing abstraction) or backward (towards more concrete regions dealing with specific sensor/motor input). </td></tr>
</tbody></table>
<h2 style="text-align: left;">
The role of the Thalamus</h2>
Let’s recap: The Cortex is composed of Columns, organised into a hierarchy. Cells pass messages directly to other Columns that are higher or lower in the hierarchy. Messages may also be transmitted indirectly between Columns, via the Thalamus.<br />
<br />
The Thalamus is often viewed as having a <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2753250/" target="_blank">gating or relaying function</a>. The Thalamus is particularly associated with control of <a href="http://www.scholarpedia.org/article/Thalamus" target="_blank">attention</a>.<br />
<br />
This section will describe indirect pathways involving the Thalamus. Figure 3 is a reproduction of a figure from Sherman and Guillery (2006) that has two new features of interest. These authors use the terminology “first order” to denote cortical regions receiving direct sensor input and “higher order” to denote cortical regions receiving input from “first order” cortical regions. This corresponds with the notion of hierarchy levels 1 and 2. <br />
<br />
The Thalamus is a significant part of the “Feed-Forward Indirect” pathway. This pathway originates at Cortex layer 5 and propagates to a nucleus in the Thalamus. There, the nucleus may react by transmitting a (presumably corresponding) signal to one or more other Cortical Columns, in a different region. In some theories of cortical function, the target Column is conceptually “higher” in the hierarchy. The Thalamic input enters the Cortex via Thalamic axons terminating in Cortex layer 4 and is then propagated to Cortex Layer 5 where the pathway begins again.<br />
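A minimal sketch of this relay step, treating the Thalamus as a per-channel gate between a source region's C5 output and a target region's C4 input. The gating policy itself (what opens or closes a channel) is assumed, not established:

```python
def thalamic_relay(c5_output, gate):
    """Relay a source region's C5 output through a Thalamic nucleus
    toward the target region's C4. A channel passes only when its
    gate is open; closed channels arrive as zero (filtered out)."""
    return [x if open_ else 0.0 for x, open_ in zip(c5_output, gate)]
```

For example, `thalamic_relay([1.0, 2.0, 3.0], [True, False, True])` delivers only the first and third channels to the higher region's C4.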
<br />
Figure 3 also shows that cells in Cortex layer 6 form reciprocal modulatory connections to the Thalamic nuclei that provide input to their Column via C4 and C5. Therefore, a Column within the Cortex has influence over the data it receives from the Thalamus: the Cortex is not a passive recipient, but works with the Thalamus to control its own input. The figure also depicts C6 cells projecting to C6 in lower regions (our second feedback pathway).<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-2axa1STxuOc/VnicXTT_JWI/AAAAAAAAGS4/tarNm9x5eVE/s1600/sherman.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="384" src="http://4.bp.blogspot.com/-2axa1STxuOc/VnicXTT_JWI/AAAAAAAAGS4/tarNm9x5eVE/s640/sherman.gif" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 3: Pathways between cortical columns in different regions, showing layer involvement in each pathway and the role of the Thalamus. Sherman and Guillery use the terminology “first order” to denote cortical regions receiving direct sensor input and “higher order” to denote cortical regions receiving input from lower (e.g. “first order”) cortical regions. This corresponds with the notion of hierarchy levels 1 and 2. Note that in addition to the 3 pathways shown in the previous figure, we see additional direct feedback pathways and reciprocal feedback from Cortex layer 6 to the Thalamic nuclei that stimulate the cortical region. <a href="http://www.scholarpedia.org/article/Thalamus" target="_blank">Image source</a>.</td></tr>
</tbody></table>
<h2 style="text-align: left;">
Motor output</h2>
At this point it is interesting to look at how the Cortex can influence or control behaviour, particularly the generation of motor output. There are two pathways by which it can do so:<br />
<br />
<b>Cortical Control</b>: <span class="Apple-tab-span" style="white-space: pre;"> </span>Basal Ganglia → Thalamus → Cortex → Motors<br />
<b>Cortical Influence</b>: <span class="Apple-tab-span" style="white-space: pre;"> </span>Cortex → Basal Ganglia → Motors<br />
<br />
Note that in both cases, the origin of action selection is the Basal Ganglia. In the first case, the Basal Ganglia control the signals emitted by the Thalamus, with these signals in turn affecting activity within Cortex layer 5 (C5). C5, particularly in motor areas, has been studied in detail. 10-15% of the cells in these areas are <a href="https://en.wikipedia.org/wiki/Primary_motor_cortex#Betz_cells_as_the_final_common_pathway" target="_blank">very large pyramidal neurons known as Betz cells</a>, which can be observed to drive muscles almost directly, with few synapses in between. These cells are <a href="https://www.princeton.edu/~graziano/Papers/neuron_rev_02.pdf" target="_blank">more prevalent in primates and are especially important for control of the hands</a>. This makes sense given that manual tasks are typically more complex and require greater dexterity than movements by other parts of the body. The human Cortex is believed to be crucial for innovative and sophisticated manual tasks such as tool-making.<br />
<br />
Within the Cortical layers, C5 seems to be uniquely involved in motor output. Figure 4 shows some of the ways Pyramidal cells in C5 project output to areas of the brain associated with motor output and control. In contrast, pyramidal cells in C2/3 predominantly project to other areas of the cortex and are not directly involved in control.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-QgRdmBxsxbw/VnidzXZXb4I/AAAAAAAAGTE/Su2YJlPrxYA/s1600/cortex_motor_outputs.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-QgRdmBxsxbw/VnidzXZXb4I/AAAAAAAAGTE/Su2YJlPrxYA/s400/cortex_motor_outputs.jpg" width="273" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 4: Pyramidal cells in C5 project output to areas of the brain associated with motor output and control. In contrast, pyramidal cells in C2/3 predominantly project to other areas of the cortex and are not directly involved in control. <a href="http://what-when-how.com/neuroscience/the-thalamus-and-cerebral-cortex-integrative-systems-part-1/" target="_blank">Image source</a>.</td></tr>
</tbody></table>
The second way that the Cortex can influence motor output is via the Basal Ganglia. In this case, we propose that the Cortex might provide contextual information to assist the Basal Ganglia in its direct control outputs, but we found no evidence that the Cortex is able to exert control over the Basal Ganglia.<br />
<br />
We suggest Cortical influence over the Basal Ganglia is less interesting from a General Intelligence perspective, because the hierarchical representations formed within the Cortex are not exploited, and execution is performed by more ancient brain systems not associated with General Intelligence qualities.<br />
<br />
For the rest of this article series, we will ignore control pathways that do not involve the Cortex, and will focus on direct control output from Cortex layer 5.<br />
<h2 style="text-align: left;">
Action Selection</h2>
It is widely believed that <a href="http://www.scholarpedia.org/article/Action_selection" target="_blank">action selection</a> occurs within the flow of information from Cortex through the <a href="http://www.scholarpedia.org/article/Basal_ganglia" target="_blank">Basal Ganglia</a>, a group of deep, centralised brain structures adjacent to the Thalamus. There are a number of <a href="http://www.scholarpedia.org/article/Models_of_basal_ganglia" target="_blank">theories</a> about how this occurs, but it is generally believed to involve a form of Reinforcement Learning used to select ideas from the options presented by the Cortex, with competitive mechanisms for clean switching and conflict resolution.<br />
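As an illustration only: one common way to model this combination of reinforcement learning and competitive selection is a value table with a winner-take-all choice and occasional exploration. Nothing here is specific to the Basal Ganglia; it is a generic sketch, and all names are our own:

```python
import random

def select_action(values, options, epsilon=0.1):
    """Competitive selection among the options presented: usually pick
    the highest-valued option (winner-take-all), but occasionally
    explore a random one."""
    if random.random() < epsilon:
        return random.choice(options)
    return max(options, key=lambda a: values.get(a, 0.0))

def update_value(values, action, reward, lr=0.1):
    """Simple reinforcement: move the stored value of the chosen
    action toward the observed reward."""
    v = values.get(action, 0.0)
    values[action] = v + lr * (reward - v)
```

With exploration disabled (`epsilon=0.0`), the selection is a clean winner-take-all over the learned values.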
<br />
A major output of the Basal Ganglia is to the Thalamus; one prevailing theory of this relationship is that the Basal Ganglia controls the gating or filtering function performed by the Thalamus, effectively manipulating the state of the Cortex in consequence. The full loop then becomes Cortex → Basal Ganglia → Thalamus → Cortex (<a href="https://en.wikipedia.org/wiki/Basal_ganglia" target="_blank">see Wikipedia for a good illustration</a>, or figure 5).<br />
<br />
As discussed above, this article will focus on motor output generated directly by the Cortex.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-yuXMmITO7rs/Vnie2hGyjJI/AAAAAAAAGTM/6CtAgVKlUq8/s1600/wikipedia_bg.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="393" src="http://1.bp.blogspot.com/-yuXMmITO7rs/Vnie2hGyjJI/AAAAAAAAGTM/6CtAgVKlUq8/s400/wikipedia_bg.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 5: Pathways forming a circuit from Cortex to Basal Ganglia to Thalamus and back to Cortex. <a href="https://en.wikipedia.org/wiki/Basal_ganglia#/media/File:Basal_ganglia_circuits.svg" target="_blank">Image Source</a>.</td></tr>
</tbody></table>
<h2 style="text-align: left;">
Canonical Cortical Circuit</h2>
We now have all the background information needed to define a “Canonical Cortical micro-Circuit” at a cellular level. All the information presented so far has been relatively uncontroversial, but this circuit is definitely our interpretation, not an established fact. However, we will present some evidence to (inconclusively) support our interpretation.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-pA5u2x4sWs4/VnifagGt7sI/AAAAAAAAGTU/HY2f3X0gckc/s1600/Neocortical%2Bcircuit%2B1%2Blevel.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://4.bp.blogspot.com/-pA5u2x4sWs4/VnifagGt7sI/AAAAAAAAGTU/HY2f3X0gckc/s640/Neocortical%2Bcircuit%2B1%2Blevel.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 6: Our interpretation of the canonical cortical micro-circuit. Only a single cortical region or Column is shown. Arrow endings indicate the type of connection - driver, modulator or inhibitor. The numbers 2/3, 4, 5, and 6 refer to specific cortical layers. Each shape represents a set of cells of a particular type, not an individual cell. Self-connections and connections within each set are not shown, but often exist. Shapes T and B refer to Thalamus and Basal Ganglia, not broken down into specific cell layers or types. Data enters the diagram at 4 points, labelled A-D, but does not exit; in general the system forms a circuit not a linear path. Note that shape T occurs twice, because the circuit receives data from only one part of the Thalamus but projects to two areas in forward and backward directions.</td></tr>
</tbody></table>
<h3 style="text-align: left;">
Diagram Explanation</h3>
We will use variants of the diagram shown in figure 6 to explain our interpretation of cortical function. In this diagram, only a single Cortical region or Column (used interchangeably here) is shown. In later diagrams, we will show 3 hierarchy levels together so the flow of information between hierarchy levels is apparent.<br />
<br />
In these diagrams, shapes represent a class of Neurons within a specific Cortical Layer. The numbers 2/3, 4, 5 and 6 refer to the Cortical layers in which these cell classes occur. The shapes labelled T and B refer to the Thalamus and Basal Ganglia (internal cell types and layers are not shown). Arrows on the diagram show the effect of each connection: driving (providing information or input that causes another cell to become active), modulating (stimulating or suppressing the activity of a target cell) or inhibiting (exclusively suppressing the activity of a target cell).<br />
<br />
If you want more detail on the thalamic end of the thalamocortical circuitry, an excellent source is this paper by <a href="http://shermanlab.uchicago.edu/files/RP155-CONEUR498.pdf" target="_blank">Sherman</a>.<br />
<br />
There are many interneurons (described in the <a href="http://blog.agi.io/2015/11/how-to-build-general-intelligence.html" target="_blank">previous article</a>) that are not shown in this diagram. We chose to omit these because we believe they are integral to the function of a layer of pyramidal cells within a Column, rather than an independent system. Specifically, we suggest that inhibitory interneurons implement local self-organising and local competitive functions (e.g. winner-take-all), ensuring sparse activation of the cell types represented by shapes in our diagram (C2/3, C4, C5, and C6). The self-organising behaviour also ensures that cells within each column optimise coverage of observed input patterns given a finite cell population. Inclusion of the interneurons would clutter the diagram without adding much explanatory value.<br />
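Our suggested competitive function could be sketched as a k-winners-take-all operation over the cells of one layer. The parameter k and the thresholding scheme are our assumptions; real interneuron dynamics are far richer:

```python
def k_winners_take_all(activations, k):
    """Keep only the k most active cells in a layer; inhibitory
    interneurons are modelled as silencing all the others, yielding
    a sparse activation pattern. (Ties at the threshold may admit
    a few extra winners.)"""
    if k <= 0:
        return [0.0] * len(activations)
    if k >= len(activations):
        return list(activations)
    threshold = sorted(activations, reverse=True)[k - 1]
    return [a if a >= threshold else 0.0 for a in activations]
```

For example, with `k=2` the layer `[0.1, 0.9, 0.5, 0.3]` reduces to `[0.0, 0.9, 0.5, 0.0]` — only the two strongest cells remain active.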
<br />
We also omit self-connections within a class of cells represented by a shape. These self-connections likely provide context and contribute to learning and exclusive activity within the class, but don’t make it easier to understand circuits in terms of cortical layers and hierarchy levels.<br />
<h3 style="text-align: left;">
Excitatory Circuit</h3>
Figure 7 shows a multilevel version of the cortical circuit, similar to the multi-level figure from Sherman and Guillery (figure 3). We can now understand where the inputs to the circuit come from, in terms of other layers and external Sensors (S) and Motors (M). Note that Motors are driven directly from C5.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-LPUBeLZsBqo/VnigSGwR72I/AAAAAAAAGTg/-X12eYptPmA/s1600/Neocortical%2Bcircuit%2Binfo.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://1.bp.blogspot.com/-LPUBeLZsBqo/VnigSGwR72I/AAAAAAAAGTg/-X12eYptPmA/s640/Neocortical%2Bcircuit%2Binfo.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 7: The cortical micro-circuit across several levels of Cortex with involvement of Thalamus and Basal Ganglia. The red highlight shows a single excitatory ‘circuit’. See text for details.</td></tr>
</tbody></table>
The red path in figure 7 shows our excitatory “canonical circuit”: Data flows from the Thalamus to spiny stellate (star-shape in figures) cells in C4 (see <a href="http://www.jneurosci.org/content/23/7/2961.long" target="_blank">source</a> and <a href="http://journal.frontiersin.org/article/10.3389/neuro.01.1.1.002.2007/full#h5" target="_blank">source</a>), from where it propagates to pyramidal cells in C2/3, and then to pyramidal cells in C5. C6 is known as the multiform layer, but also contains many pyramidal cells of unusual proportions and orientations. C6 cells are driven by C5, and in turn modulate the Thalamus. Note that C6 cells within a region modulate the same Thalamic nuclei that provide input to that region of Cortex.<br />
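The red path can be traced as a chain of stages. This sketch only names the flow shown in Figure 7; the per-stage transform `f` is a placeholder, not a model of real cell responses:

```python
def excitatory_circuit(thalamic_input, f=lambda x: x):
    """Trace one pass of the excitatory canonical circuit:
    Thalamus -> C4 (spiny stellate) -> C2/3 -> C5 -> C6, with C6
    returning a modulatory signal to the same Thalamic nucleus that
    supplied the input. Each stage applies a placeholder transform f."""
    c4 = f(thalamic_input)       # spiny stellate cells, driven by the Thalamus
    c23 = f(c4)                  # pyramidal cells in C2/3
    c5 = f(c23)                  # pyramidal cells in C5
    c6 = f(c5)                   # multiform layer, driven by C5
    modulation_to_thalamus = c6  # closes the loop back to the Thalamus
    return c5, modulation_to_thalamus
```

The key structural point the sketch captures is that the circuit is a loop: the C6 output modulates the very nucleus that provided the input.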
<h3 style="text-align: left;">
Inhibitory Circuit</h3>
A second, inhibitory circuit exists alongside our excitatory circuit. In addition to providing input to the Cortex via C4, axons from the Thalamus also drive inhibitory Parvalbumin-expressing (PV) neurons in C4 (shown as circles in the diagram). These inhibitory neurons make up a large fraction of all the cells in C4, and inhibit pyramidal cells in C5 (see <a href="http://knowingneurons.com/2014/11/05/inhibitory-neurons-keeping-the-brains-traffic-in-check/" target="_blank">source</a> or <a href="http://clinicalgate.com/the-cerebral-cortex-2/" target="_blank">source</a>).<br />
<br />
This means that the input from the Thalamus can be both informative and executive. It is executive in that it actually manipulates the activity of layer 5 within the Cortex, and informative by providing a copy of the signal driving the manipulation to C4. Figure 8 shows our inhibitory circuit. We believe this circuit is of critical importance because it provides a mechanism for the Thalamus to centrally manipulate the state of the Cortex, specifically layer 5 and 6 pyramidal cells. This hypothesis will be expanded in the next article.<br />
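A minimal sketch of this dual “informative and executive” role, with the simplifying assumption (ours) that PV inhibition subtracts linearly from C5 activity:

```python
def thalamic_drive(signal, c5_activity, inhibition_strength=1.0):
    """The thalamic axon is both informative and executive:
    - informative: a copy of the signal becomes input to C4;
    - executive: the same signal drives PV interneurons in C4,
      which inhibit (here: linearly suppress) C5 pyramidal cells."""
    c4_input = signal                   # copy of the signal, as C4 input
    pv = signal * inhibition_strength   # PV interneuron activation
    c5 = max(0.0, c5_activity - pv)     # inhibited C5 activity
    return c4_input, c5
```

The point of the sketch is that a single thalamic signal simultaneously informs C4 and manipulates C5, which is why we describe the Thalamus as able to centrally manipulate the state of the Cortex.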
<br />
Figure 9 catalogues inhibitory cells, notably showing the cells used in our inhibitory circuit.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-3MQtjm9qBak/Vnig_yrj1jI/AAAAAAAAGTo/khibemXbhKA/s1600/Neocortical%2Bcircuit%2Binhibit%2Bloop.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://3.bp.blogspot.com/-3MQtjm9qBak/Vnig_yrj1jI/AAAAAAAAGTo/khibemXbhKA/s640/Neocortical%2Bcircuit%2Binhibit%2Bloop.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 8: The inhibitory micro-circuit. The red highlight shows how the Thalamus controls activity in C5 within a Column by activating inhibitory cells in C4. The circuit is completed by C5 pyramidal cells driving C6 cells, which in turn modulate the activity of the same Thalamic nuclei that selectively activates C5. Each shape denotes a population of cells of a specific type within a single Column, excluding ‘T’ and ‘B’ that refer to the Thalamus and Basal Ganglia respectively.</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-AEK3w2YjFhQ/VnihzetEAGI/AAAAAAAAGTw/ga-K-BgO1_g/s1600/interneuron_650.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-AEK3w2YjFhQ/VnihzetEAGI/AAAAAAAAGTw/ga-K-BgO1_g/s400/interneuron_650.jpg" width="300" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 9: Inhibitory interneurons in the Cortex. Of particular interest are the “PV” cells that are driven by axons from the Thalamus terminating in layer 4 and in turn inhibit pyramidal cells in layer 5. <a href="http://knowingneurons.com/2014/11/05/inhibitory-neurons-keeping-the-brains-traffic-in-check/" target="_blank">Image source</a>. </td></tr>
</tbody></table>
<h2 style="text-align: left;">
Pathways and the Canonical Circuits</h2>
Now let’s look at how pathways emerge from our cortical micro-circuit. Figures 10, 11, 12 show the Feed-Forward Direct, Feed-Forward Indirect and first Feed-Back pathways respectively. We also include another direct, Feed-Back pathway terminating at C6 (figure 13). Feed-back direct pathways terminating at C1, where many fibres are intermingled, are harder to interpret than feedback terminating directly at C6. Pyramidal neurons from many layers have dendrites in C1.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-JkcMJ6wiZzg/Vnii5dMY_eI/AAAAAAAAGT4/7weMqzN_fRI/s1600/Neocortical%2Bcircuit%2BFF-direct.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://1.bp.blogspot.com/-JkcMJ6wiZzg/Vnii5dMY_eI/AAAAAAAAGT4/7weMqzN_fRI/s640/Neocortical%2Bcircuit%2BFF-direct.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 10: Feed-Forward Direct pathway within our canonical cortical micro-circuit.</td></tr>
</tbody></table>
Figure 10 highlights the Feed-Forward direct pathway. Signals propagate from C4 to C2/3 and then to C4 in a higher Column. This pattern is repeated up the hierarchy. This pathway is not filtered by the Thalamus or any other central structure. Although activity from C2/3 propagates to C5, it does not ascend the hierarchy via this route: C5 in one Column does not directly connect to C5 in a higher Column, only via an indirect pathway (see below).<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-7CZLR2LpWho/Vnii87X5k5I/AAAAAAAAGUA/5q4bjvr6QmY/s1600/Neocortical%2Bcircuit%2BFF-indirect.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://3.bp.blogspot.com/-7CZLR2LpWho/Vnii87X5k5I/AAAAAAAAGUA/5q4bjvr6QmY/s640/Neocortical%2Bcircuit%2BFF-indirect.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 11: Feed-Forward Indirect pathway.</td></tr>
</tbody></table>
Figure 11 highlights the Feed-Forward Indirect pathway. The Thalamus is involved in this pathway, and may have a gating or filtering effect. Data flows from the Thalamus to C4, to C2/3, to C5 and then to a different Thalamic nucleus that serves as the input gateway to another cortical Column in a different region of the Cortex.<br />
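As a toy illustration of this routing, the Feed-Forward Indirect pathway can be sketched as a relay whose messages pass through a gate on each hop between Columns. This is our own sketch in Python, not a claim about the real gating policy; all class and function names are illustrative.

```python
# A toy sketch of the Feed-Forward Indirect pathway described above:
# Thalamus -> C4 -> C2/3 -> C5 -> (another Thalamic nucleus) -> next Column.
# The Thalamus is modelled as a gate that can pass or block each message.
# All names here are illustrative, not from the neuroscience literature.

class Thalamus:
    """Relays messages between Columns, optionally filtering them."""
    def __init__(self, gate_open=True):
        self.gate_open = gate_open

    def relay(self, message):
        # A real gating policy would be learned and context-dependent;
        # here we simply pass or block the whole message.
        return message if self.gate_open else None

class Column:
    """Processes input through layers C4 -> C2/3 -> C5."""
    def __init__(self, name):
        self.name = name

    def process(self, message):
        # Placeholder transformations standing in for each cortical layer.
        c4 = f"C4({message})"
        c23 = f"C2/3({c4})"
        c5 = f"C5({c23})"
        return c5

def feed_forward_indirect(columns, thalamus, sensor_input):
    """Propagate input up the hierarchy via the Thalamus at each hop."""
    message = sensor_input
    for column in columns:
        message = thalamus.relay(message)
        if message is None:   # gated out: activity does not ascend further
            return None
        message = column.process(message)
    return message

result = feed_forward_indirect(
    [Column("V1"), Column("V2")], Thalamus(gate_open=True), "x")
print(result)  # C5(C2/3(C4(C5(C2/3(C4(x))))))
```

The point of the sketch is only the topology: every inter-Column hop passes through the central hub, which can therefore act as a filter on the whole pathway.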
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-DR3Z7BPu6Ho/VnijEvbBWvI/AAAAAAAAGUI/1g55deO2Jzk/s1600/Neocortical%2Bcircuit%2BFB-info.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://2.bp.blogspot.com/-DR3Z7BPu6Ho/VnijEvbBWvI/AAAAAAAAGUI/1g55deO2Jzk/s640/Neocortical%2Bcircuit%2BFB-info.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 12: The first of two Feed-Back Direct pathways.</td></tr>
</tbody></table>
Figure 12 highlights the first type of Feed-Back Direct pathway. This pathway may be more concerned with providing broader and more abstract (i.e. hierarchically higher) contextual information to be used in the Feed-Forward pathways for better prediction. This suggestion is supported by evidence that <a href="http://journal.frontiersin.org/article/10.3389/neuro.01.1.1.002.2007/full#h5" target="_blank">axons from C6 via C1 synapse with apical dendrites of pyramidal cells in C2/3, C5 and C6, in hierarchically lower regions</a>.<br />
<br />
Figure 13 highlights the second of two Feed-Back Direct pathways. This pathway might be involved in cascading control activity down the hierarchy towards sensors and motors - the next article will expand on this idea. Activity propagates from C6 to C6 directly. C6 <a href="http://shermanlab.uchicago.edu/files/RP155-CONEUR498.pdf" target="_blank">modulates the activity of local C5 cells and relevant Thalamic nuclei that drive local C5 cells</a>. Note that connections from a Column to the Thalamus are reciprocal; feedback from C6 to the Thalamus targets the same nuclei that project axons to C4.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-sbh3p-HF4dw/VnijLh8YGZI/AAAAAAAAGUQ/BGrZm-RjBW0/s1600/Neocortical%2Bcircuit%2BFB-control.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="http://2.bp.blogspot.com/-sbh3p-HF4dw/VnijLh8YGZI/AAAAAAAAGUQ/BGrZm-RjBW0/s640/Neocortical%2Bcircuit%2BFB-control.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 13: The second of two Feed-Back Direct pathways.</td></tr>
</tbody></table>
<h2 style="text-align: left;">
Summary</h2>
We’ve presented some additional, detailed perspectives on the organisation and function of circuits and pathways within the Thalamo-Cortical system and presented our interpretation of the canonical cortical micro-circuit.<br />
<br />
So what’s the point of all this information? What do these circuits and pathways do, and why are they connected this way? How do they work?<br />
<br />
It might seem that we’ve stopped short of really trying to interpret all this information and that’s because we are, indeed, holding back. After having spent so much time presenting background information, the next article <i>finally</i> attempts to understand why the thalamocortical system is connected in the ways described here, and how this system might give rise to general intelligence. </div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-90094585566953831502015-11-29T13:41:00.003+11:002016-01-16T12:39:38.075+11:00How to Build a General Intelligence: Reverse Engineering<div dir="ltr" style="text-align: left;" trbidi="on">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-IY2mmVK4Bws/Vlpg2YwyxMI/AAAAAAAAGBo/FNckUa1_9Ko/s1600/structure.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="237" src="http://2.bp.blogspot.com/-IY2mmVK4Bws/Vlpg2YwyxMI/AAAAAAAAGBo/FNckUa1_9Ko/s320/structure.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 1: The physical architecture of general intelligence in the brain, namely the Thalamo-Cortical system. The system comprises a central hub (the Thalamus) surrounded by a surface (the Cortex, shown in blue). The Cortex is made up of a number of functionally-equivalent units called Columns. Each Column is a stack of cell layers. Columns share data, often via the central hub. The hub filters the data relayed between Columns.</td></tr>
</tbody></table>
<i>Authors: Rawlinson and Kowadlo</i><br />
<br />
This is part 2 of our series on how to build an artificial general intelligence (AGI). This article is about what we can learn from reverse-engineering mammalian brains. Part 1 is <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">here</a>.<br />
<br />
The next few articles will try to interpret some well-established neuroscience in the context of general intelligence. We’ll ignore anything we believe is unrelated to general intelligence, and we’ll simplify things in ways that will hopefully help us to think about how general intelligence happens in the brain.<br />
<br />
It doesn’t matter if we are missing some details, if the overall picture helps us understand the nature of general intelligence. In fact, excluding irrelevant detail will help, as long as we keep all the important bits!<br />
<br />
These articles are not peer reviewed. Do assume everything here is speculation, even when linked to a source reference (our interpretation may be skewed). There isn’t space to repeatedly add this caveat throughout these articles.<br />
<h2 style="text-align: left;">
1. Physical Architecture</h2>
First we’ll review and interpret the gross architecture of the brain, focusing on the <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2626162/" target="_blank">Thalamo-Cortical system</a>, which we believe is primarily responsible for general intelligence.<br />
<br />
The Thalamo-Cortical system comprises a central hub (the Thalamus and Basal Ganglia), surrounded by a thin outer surface (the Cortex). The surface consists of a large number of functionally-equivalent units, called Columns. The Cortex is wrinkled so that it’s possible to pack a large surface area into a small space.<br />
<br />
Why are the units called Columns? The name comes from the physical structure of their connectivity patterns. Cells within each Column are highly interconnected, but connections to cells in other Columns are fewer and less varied. Columns occupy the full thickness of the surface, comprising approximately 6 distinct layers of cells. Since these layers are stacked on top of each other and only loosely connected between stacks, the surface appears to be made of Columns.<br />
<br />
Confusingly, there are both <a href="http://blog.agi.io/2015/04/mini-macro-micro-and-hyper-columns.html" target="_blank">Macro and Micro-Columns</a>, and these terms are used inconsistently. In these articles we will simply say ‘Column’ when referring to a <a href="http://blog.agi.io/2015/05/a-nomenclature-for-cortical-columns-and.html" target="_blank">Macro-Column as defined in a previous post</a>.<br />
<br />
In the <a href="http://blog.agi.io/2015/10/how-to-build-general-intelligence-what.html" target="_blank">previous article</a> we described the ideal general intelligence as a structure made of many identical units that have each learned to play a small part in a distributed system. These theoretical units are analogous to Columns in real brains.<br />
<br />
Columns can be imagined as independent units that interact by exchanging data. However, data travelling between Columns often takes an indirect path, via the central hub.<br />
<br />
The hub filters messages passed between Columns. In this way, the filter acts as a central executive that manages the distributed system made up of many Columns.<br />
<br />
We believe this is a fundamental aspect of the architecture of general intelligence.<br />
<br />
In mammalian brains, the filter function is primarily the role of the <a href="http://www.scholarpedia.org/article/Thalamus" target="_blank">Thalamus</a>, although its actions are supported and modulated by other parts, particularly the <a href="https://en.wikipedia.org/wiki/Basal_ganglia" target="_blank">Basal Ganglia</a>.<br />
<br />
Other brain components, such as the Cerebellum, are essential for effective motor control but are perhaps not essential for general intelligence. They are not within the scope of this article.<br />
<h2 style="text-align: left;">
2. Logical Architecture</h2>
The Cortex has both a physical structure (a layered surface, partitioned into columns) and a logical structure. The logical structure is a hierarchy - a tree-like structure that describes which columns are connected to each other (see figure 2).<br />
<br />
Connections between columns are reciprocal: “Higher” columns receive input from “Lower” columns, and return data to the same columns. This scheme is advantageous: Higher columns have (indirect) input from a wider range of sources; lower columns use the same resources to model more specific input in greater detail. This occurs naturally because each column tries to simplify the data it outputs to higher columns, allowing columns of fixed complexity to manage greater scope in higher levels, as data is incrementally transformed and abstracted.<br />
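The reciprocal scheme described above can be sketched in a few lines of Python. The sketch is our own illustration, not from the neuroscience literature: averaging stands in for whatever real simplification a column performs on the way up, and blending stands in for how feedback context modifies lower columns.

```python
# A toy sketch of the reciprocal hierarchy: each column "simplifies" its
# children's outputs on the way up (here, crudely, by averaging), and returns
# its own more abstract state back down the same connections as feedback.
# All names and the arithmetic are illustrative assumptions.

class Column:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.state = 0.0

    def feed_forward(self):
        """Summarise children's outputs; leaf columns keep sensor-set state."""
        if self.children:
            self.state = sum(c.feed_forward() for c in self.children) / len(self.children)
        return self.state

    def feed_back(self, context=None):
        """Return broader context down the same reciprocal connections."""
        if context is not None:
            self.state = 0.5 * (self.state + context)  # blend in higher context
        for c in self.children:
            c.feed_back(self.state)

# Two sensor-level columns feeding one higher column:
low_a, low_b = Column("A"), Column("B")
high = Column("H", [low_a, low_b])
low_a.state, low_b.state = 1.0, 3.0   # external (sensor) input
print(high.feed_forward())            # 2.0: a simplified summary ascends
high.feed_back()
print(low_a.state)                    # 1.5: broader context descends
```

Note how the higher column's state is a fixed-size summary regardless of how many children it has, which is the sense in which fixed-complexity columns can manage greater scope at higher levels.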
<br />
Only Columns in the lowest levels receive external input and control external outputs (albeit, often indirectly via subcortical structures).<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-yWAv0nTA7cs/VlphXI08H2I/AAAAAAAAGBw/n6eyV80tYyY/s1600/hierarchy.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="199" src="http://2.bp.blogspot.com/-yWAv0nTA7cs/VlphXI08H2I/AAAAAAAAGBw/n6eyV80tYyY/s320/hierarchy.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 2: The logical architecture of the cortex, a hierarchy of Columns. The hierarchy gives us the notion of “levels”, with the lowest level having external input and output, and higher levels being increasingly abstract. The logical architecture is superimposed on the physical architecture. Note that inter-Column connections may be gated by the central hub (not shown).</td></tr>
</tbody></table>
Note that there are not necessarily fewer columns in each hierarchy level; there may be, but this is not essential. However, abstraction increases and scope broadens as we move to higher hierarchy levels.<br />
<br />
We can jump between the physical and logical architectures of the Cortex. Moving over the surface implies moving within the hierarchy. It also implies that moving between areas we will observe responses to different subsets of input data. Moving to higher hierarchy levels implies an increase in abstraction. We can observe this effect in human brains, for example by following the flow of information from the processing of raw sensor data to more abstract brain areas that deal with language and understanding (see figure 3).<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-_j1-YbKu4Qk/VlpibD0YkXI/AAAAAAAAGCA/cGnqyjI0EIo/s1600/v1v2v4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="269" src="http://1.bp.blogspot.com/-_j1-YbKu4Qk/VlpibD0YkXI/AAAAAAAAGCA/cGnqyjI0EIo/s320/v1v2v4.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 3: Flow of information across the physical human Cortex also represents movement higher or lower in the logical hierarchy (increasing or decreasing abstraction). In fact, we can observe this phenomenon in human studies. Different parts of the hierarchy are specialised to conceptual roles such as understanding what, why and where things are happening. <a href="http://www.intechopen.com/books/visual-cortex-current-status-and-perspectives/adaptation-and-neuronal-network-in-visual-cortex" target="_blank">Image source</a>. </td></tr>
</tbody></table>
One final point about the logical architecture. The hierarchical structure of the Cortex is mirrored in the central hub, particularly in the Thalamus and Basal Ganglia, where we see the topology of the cortical Columns preserved through indirect pathways via central hub structures (figure 4).<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-HoqnNy1XNsg/Vlph5-29tEI/AAAAAAAAGB4/WKjB18RMMiQ/s1600/Basal_Ganglia_fig3.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="275" src="http://1.bp.blogspot.com/-HoqnNy1XNsg/Vlph5-29tEI/AAAAAAAAGB4/WKjB18RMMiQ/s320/Basal_Ganglia_fig3.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 4: Data flows between different Columns within the Cortex either directly, or via our conceptual “central hub”. Our hub includes Basal Ganglia such as the Striatum, and the Thalamus. Throughout this journey the topology of the Cortex is preserved. <a href="http://www.scholarpedia.org/article/Basal_ganglia" target="_blank">Image source</a>.</td></tr>
</tbody></table>
<h2 style="text-align: left;">
3. Layers and Cells</h2>
Each Column has <a href="http://blog.agi.io/2015/05/a-nomenclature-for-cortical-columns-and.html" target="_blank">approximately 6 distinct “layers”</a>. As with every biological rule there are exceptions, but this suffices for the level of detail we require here. The layers are visual artefacts resulting from variations between the layers in cell type, morphology, connectivity patterns and, therefore, function (figure 5).<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-UHdfWvEw378/VlpjKUXM01I/AAAAAAAAGCI/ToNpplRQnrQ/s1600/cortex_cell_stain_layers.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="301" src="http://1.bp.blogspot.com/-UHdfWvEw378/VlpjKUXM01I/AAAAAAAAGCI/ToNpplRQnrQ/s320/cortex_cell_stain_layers.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 5: Various stainings showing variation in cell type and morphology between the layers of the Cortex. <a href="http://what-when-how.com/neuroscience/the-thalamus-and-cerebral-cortex-integrative-systems-part-1/" target="_blank">Image source</a>.</td></tr>
</tbody></table>
The Cortex has only 5 functional layers. Structurally it has 6 gross layers, but one of them is just wiring, with no computation. In addition, the functional distinction between layers 2 and 3 is uncertain, so we will group them together. This leaves just 4 unique functional layers to explain.<br />
<br />
We will use the notation C1 ... C6 to refer to the gross layers:<br />
<br />
<ul style="text-align: left;">
<li>C1 - just wiring, no computation; not functional</li>
<li><b>C2/3</b> (indistinct)</li>
<li><b>C4</b></li>
<li><b>C5</b></li>
<li><b>C6</b> (known as the “multiform” layer due to the variety of cell types)</li>
</ul>
<br />
The cortex is made of a veritable menagerie of oddly shaped cells (i.e. Neurons) that are often confined to specific layers (see figure 6). Neurons have a body (soma), dendrites and axons. Dendrites provide input to the cell, and reach out to find that data. Axons transmit the output of the cell to places where it can be intercepted by other cells. Both dendrites and axons have branches.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-SgC-dF7oR18/VlpkU2TeF_I/AAAAAAAAGCc/PlH900UeVTk/s1600/cortical_cell_types.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="411" src="http://2.bp.blogspot.com/-SgC-dF7oR18/VlpkU2TeF_I/AAAAAAAAGCc/PlH900UeVTk/s640/cortical_cell_types.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 6: Some of the cell types found in different cortical layers. <a href="http://clinicalgate.com/the-cerebral-cortex-2/" target="_blank">Image source</a>. </td></tr>
</tbody></table>
An important feature of the Cortex is the presence of specialised Neurons with pyramidal Somas (bodies) (figure 7). Pyramidal cells are predominantly found in C2/3, C5 and C6, where they are among the largest and most common cells.<br />
<br />
Pyramidal cells do not behave like classical artificial neurons. We agree with <a href="http://arxiv.org/abs/1511.00083" target="_blank">Hawkins' characterisation of them</a>. Pyramidal cells have two or three dendrite types: <a href="https://en.wikipedia.org/wiki/Apical_dendrite#Background" target="_blank">Apical (proximal or distal) dendrites and Basal dendrites</a>. <a href="http://www.princeton.edu/~sswang/frontiers_STDP/larkum_nevian08_curr_opin_neurobiol.pdf" target="_blank">Distal Apical dendrites seem to behave like a classical integrate-and-fire neuron in their own right</a>, requiring a complete pattern of input to “fire” a signal to the body of the cell. As a consequence, each cell can respond to a number of different input patterns, depending on which Apical dendrites are activated by their input.<br />
<br />
Hawkins suggests that the Basal dendrites provide a sequential or temporal context in which the Pyramidal cell can become active. Output from the cell along its axon branches only occurs if the cell observes particular instantaneous input patterns in a particular historical context of previous Pyramidal cell activity.<br />
<br />
Within one layer of a Column, Pyramidal cells exhibit a self-organising property that results in sparse activation. Only a few Pyramidal cells respond to each input stimulus. The Pyramidal cells are powerful pattern and sequence classifiers that also perform a <a href="https://en.wikipedia.org/wiki/Dimensionality_reduction" target="_blank">dimensionality-reduction</a> function; when active, the activity of a single Pyramidal cell represents a pattern of input over a period of time.<br />
<br />
The training mechanism for sparsity and self-organisation is local inhibition. In addition to Pyramidal cells, most of the other Neurons in the Cortex are so-called “<a href="https://en.wikipedia.org/wiki/Interneuron" target="_blank">Interneurons</a>”, which we believe play a key role in training the Pyramidal cells by implementing a competitive learning process. For example, Interneurons could inhibit the Pyramidal cells around an active Pyramidal cell, ensuring that the local population of Pyramidal cells responds uniquely to different inputs.<br />
<br />
Unlike Pyramidal cells, which receive input from outside the Column and transmit output outside the Column, Interneurons generally only work within a Column. Since we consider that Interneurons play a supporting role to Pyramidal cells, we won’t have much more to say about them.<br />
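One simple computational stand-in for this competitive process is a k-winners-take-all rule: keep the k most active Pyramidal cells in the local population and inhibit the rest to zero. This sketch is our own illustration of the idea, not a model of the actual interneuron circuitry.

```python
# k-winners-take-all: a common computational stand-in for local inhibition.
# Only the k most active cells keep their activity; the rest are "inhibited".

def k_winners_take_all(activations, k):
    """Return a sparse copy of `activations`, keeping only the k largest."""
    if k >= len(activations):
        return list(activations)
    threshold = sorted(activations, reverse=True)[k - 1]
    # Cells below the threshold are suppressed; ties at the threshold survive.
    return [a if a >= threshold else 0.0 for a in activations]

layer = [0.1, 0.9, 0.3, 0.8, 0.05, 0.7]
print(k_winners_take_all(layer, 2))  # [0.0, 0.9, 0.0, 0.8, 0.0, 0.0]
```

The resulting sparse activity pattern is what lets a few active cells stand for a whole input, which is the dimensionality-reduction function attributed to Pyramidal cell populations above.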
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-tM54u2hp3uY/VlpkT1Q3qgI/AAAAAAAAGCY/_OG4M4K8sZM/s1600/pyramidal%2Bcell.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="http://4.bp.blogspot.com/-tM54u2hp3uY/VlpkT1Q3qgI/AAAAAAAAGCY/_OG4M4K8sZM/s320/pyramidal%2Bcell.jpg" width="219" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 7: A Pyramidal cell as found in the Cortex. Note the Apical and Basal dendrites, hypothesised to recognise simultaneous and historical input patterns respectively. The complete Pyramidal cell is then a powerful classifier that, when active, represents a particular set of inputs in a specific historical context. <a href="http://clinicalgate.com/the-cerebral-cortex-2/" target="_blank">Image source</a>. </td></tr>
</tbody></table>
<h2 style="text-align: left;">
Summary</h2>
<div>
That's all we feel is necessary to say about the gross physical structure of the Thalamo-Cortical system and the microscopic structure of its cells and layers. The next article will look at the circuits and pathways by which these cells are connected, and the computational properties that result.</div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-30302322012635443572015-11-12T03:06:00.001+11:002015-11-12T03:06:46.702+11:00New HTM paper - “Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex”<div dir="ltr" style="text-align: left;" trbidi="on">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-Yt0MLkiyb1w/VkHy_e8ngtI/AAAAAAAAF0k/qIFkAPH-dS0/s1600/ann.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="398" src="http://2.bp.blogspot.com/-Yt0MLkiyb1w/VkHy_e8ngtI/AAAAAAAAF0k/qIFkAPH-dS0/s640/ann.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The artificial neuron model used by Jeff Hawkins and Subutai Ahmad in their new paper (image reproduced from their paper, and cropped). Their neuron model is inspired by the pyramidal cells found in neocortex layers 2/3 and 5.</td></tr>
</tbody></table>
It has been several years since Jeff Hawkins and Numenta published the <a href="http://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf" target="_blank">Cortical Learning Algorithm (CLA) whitepaper</a>.<br />
<br />
Now, Hawkins and Subutai Ahmad have pre-published a new paper (currently to arXiv, but peer-review will follow):<br />
<br />
<a href="http://arxiv.org/abs/1511.00083" target="_blank">http://arxiv.org/abs/1511.00083</a><br />
<br />
The paper is interesting for a number of reasons, most notably the combination of computational and biological detail. This paper expands on the artificial neuron model used in CLA/HTM. A traditional <a href="https://en.wikipedia.org/wiki/Artificial_neural_network" target="_blank">integrate-and-fire artificial neuron</a> has one set of inputs and a transfer function. This doesn't accurately represent the structure or function of cortical neurons, which come in various <a href="http://betarhythm.blogspot.com/2009/04/that-which-i-cannot-build-i-do-not.html" target="_blank">shapes & sizes</a>. The <a href="http://blog.agi.io/2015/03/another-look-at-retina.html" target="_blank">function of cortical neurons is affected by their structure</a> and is quite unlike that of the <a href="https://en.wikipedia.org/wiki/Artificial_neural_network" target="_blank">traditional artificial neuron</a>.<br />
<br />
Hawkins and Ahmad propose a model that best fits <a href="https://en.wikipedia.org/wiki/Pyramidal_cell" target="_blank">Pyramidal cells in neocortex layers 2/3 and 5</a>. They explain the morphology of these neurons by assigning specific roles to the various <a href="https://en.wikipedia.org/wiki/Dendrite" target="_blank">dendrite</a> types observed.<br />
<br />
They propose that each dendrite is individually a pattern-matching system similar to a traditional artificial neuron: The dendrite has a set of inputs to which it responds, and a transfer function that decides whether enough inputs are observed to "fire" the output (although nonlinear continuous transfer functions are more widely used than binary output).<br />
<br />
In the paper, they suggest that a single pyramidal cell has dendrites for recognising feed-forward input (i.e. external data) and other dendrites for feedback input from other cells. The feedback provides contextual input that allows the neuron to "fire" only in specific sequential contexts (i.e. given a particular history of external input).<br />
<br />
To produce an output along its <a href="https://en.wikipedia.org/wiki/Axon" target="_blank">axon</a>, the complete neuron needs both an active feed-forward dendrite and an active contextual dendrite; when the neuron fires, it implies that a particular pattern has been observed in a specific historical context.<br />
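This firing rule can be sketched very compactly. The sketch below is our own illustration of the idea, not Numenta's code: each dendrite is modelled as a set of synapses with a match threshold, and the neuron fires only when at least one feed-forward dendrite and at least one contextual dendrite both match.

```python
# A toy sketch of the neuron model described above. Each dendrite is an
# independent pattern matcher; the whole neuron fires only when a feed-forward
# dendrite AND a contextual (feedback) dendrite are simultaneously active.
# Synapse names, patterns, and the threshold are invented for illustration.

def dendrite_active(synapses, active_inputs, threshold):
    """A dendrite 'fires' if enough of its synapses see active input."""
    return len(synapses & active_inputs) >= threshold

def neuron_fires(ff_dendrites, context_dendrites, active_inputs, threshold=2):
    ff = any(dendrite_active(d, active_inputs, threshold) for d in ff_dendrites)
    ctx = any(dendrite_active(d, active_inputs, threshold) for d in context_dendrites)
    # Firing implies: a known pattern was seen in a known sequential context.
    return ff and ctx

ff = [{"a", "b", "c"}]   # feed-forward dendrite: an external input pattern
ctx = [{"p", "q"}]       # contextual dendrite: feedback from other cells
print(neuron_fires(ff, ctx, {"a", "b", "p", "q"}))  # True
print(neuron_fires(ff, ctx, {"a", "b"}))            # False: pattern, no context
```

Because each dendrite matches independently, adding dendrites lets one neuron recognise its pattern in several different contexts, which is what makes this model more powerful than a single-dendrite artificial neuron.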
<br />
In the original CLA whitepaper, multiple sequential contexts were embodied by a "column" of cells that shared a proximal dendrite, although they acknowledged that this differed from their understanding of the biology.<br />
<div>
<br /></div>
<div>
The new paper suggests that basket cells provide the inhibitory function that ensures sparse output from a column of pyramidal cells having similar receptive fields. Note that this definition of column differs from the one in the CLA whitepaper!</div>
<div>
<br /></div>
<div>
The other interesting feature of the paper is its explanation of the sparse, distributed sequence memory that arises from a layer of the artificial pyramidal cells with complex, specialised dendrites. This is also a feature of the older CLA whitepaper, but there are some differences.<br />
<br />
Hawkins and Ahmad's paper does match the morphology and function of pyramidal cells more accurately than traditional artificial neural networks. Their conceptualisation of a neuron is far more powerful. However, this doesn't mean that it's better to model it this way <i>in silico</i>. What we really need to understand is the computational benefit of modelling these extra details. The new paper claims that their method has the following advantages over traditional ANNs:<br />
<br />
<ul style="text-align: left;">
<li>continuous learning</li>
<li>robustness of distributed representation</li>
<li>ability to deal with multiple simultaneous predictions</li>
</ul>
<br />
We follow Numenta's work because we believe they have a number of good insights into the AGI problem. It's great to see this new theoretical work and to have a solid foundation for future publications. </div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-70274394963497433102015-10-30T02:33:00.004+11:002015-12-09T23:20:48.341+11:00How to build a General Intelligence: What we think we already know<div dir="ltr" style="text-align: left;" trbidi="on">
<i>Authors: D Rawlinson and G Kowadlo</i><br />
<br />
This is the first of three articles detailing our latest thinking on <i>general</i> intelligence: A one-size-fits-all algorithm that, like people, is able to learn how to function effectively in almost any environment. This differs from most Artificial Intelligence (AI), which is designed by people for a specific purpose. This article will set out assumptions, principles, insights and design guidelines based on what we <i>think</i> we <i>already</i> know about general intelligence. It turns out that we can describe general intelligence in some detail, although not enough detail to actually build it...yet.<br />
<br />
The second article will look at how these ideas fit existing computational neuroscience, which helps to refine and filter the design; and the third article will describe a (high-level) algorithm that is, at least, not contradictory to the design goals and biology already established.<br />
<br />
As usual, our plans have got ahead of implementation, so code will follow a few weeks (or months...) after the end of the series.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-01ZL601Bcg4/VjI0ZD82M0I/AAAAAAAAFss/gR3-Gco5WUo/s1600/part1fig1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="243" src="http://1.bp.blogspot.com/-01ZL601Bcg4/VjI0ZD82M0I/AAAAAAAAFss/gR3-Gco5WUo/s640/part1fig1.png" width="640" /></a></div>
<br />
<i>FIGURE 1: A hierarchy of units. Although units start out identically, they become differentiated as they learn from their unique input. The input to a unit depends on its position within the hierarchy and the state of the units connected to it. The hierarchy is conceptualized as having levels; the lowest levels are connected to sensors and motors. Higher levels are separated from sensors and motors by many intermediate units. The hierarchy may have a tree-like structure without cycles, but the number of units per level does not necessarily decrease as you move higher.</i><br />
<h2 style="text-align: left;">
Architecture of General Intelligence</h2>
Let’s start with some fundamental assumptions and outline the structure of a system that has general intelligence characteristics.<br />
<h3 style="text-align: left;">
It Exists</h3>
We assume there exists a “general intelligence algorithm” that is not irreducibly complex. That is, we don’t need to understand it in excruciating detail. Instead, we can break it down into simpler models that we can easily understand in isolation. This is not <i>necessarily</i> a reasonable assumption, but there is evidence for it:<br />
<h3 style="text-align: left;">
Units</h3>
A general intelligence algorithm can be described more simply as a collection of many simpler, functionally-identical units. Again, this is a big assumption, but it is supported by at least two pieces of evidence. First, it has often been observed that the <a href="https://en.wikipedia.org/wiki/Vernon_Benjamin_Mountcastle" target="_blank">human cortex has quite uniform structure across areas having greatly varying functional roles</a>. Second, examination of this structure has revealed that the cortex is made up of many smaller units (called columns, at one particular scale). It is reasonable to decompose the cortex in this way due to <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1569491/" target="_blank">high and varied <b>intra</b>-column connectivity and limited variety of <b>inter</b>-column connectivity</a>. The patterns of inter- and intra-column connectivity are very similar throughout the cortex. <a href="https://en.wikipedia.org/wiki/Cortical_column" target="_blank">“Columns” contain only a few thousand neurons organized into layers and micro-columns</a> that further simplify understanding of the structure. That’s not overwhelmingly complex, although we are making simplifying assumptions about neuron function.<br />
<h3 style="text-align: left;">
Hierarchy</h3>
Our reading and experimentation have suggested that <a href="https://www.scribd.com/book/182534736/On-Intelligence" target="_blank">hierarchical representation</a> is <a href="https://en.wikipedia.org/wiki/How_to_Create_a_Mind#Pattern_Recognition_Theory_of_Mind" target="_blank">critical for the types of information processing involved in general intelligence</a>. Hierarchies are built from many units connected together in layers. Typically, only the lowest level of the hierarchy receives external input. Other levels receive input from lower levels of the hierarchy instead. For more background on hierarchies, see <a href="http://blog.agi.io/2014/04/architecture-of-memory-prediction.html" target="_blank">earlier</a> <a href="http://blog.agi.io/2014/11/toward-universal-cortical-algorithm.html" target="_blank">posts</a>. Hierarchy allows units in higher layers to model more complex and abstract features of input, despite the fixed complexity of each unit. Hierarchy also allows units to cover all available input data while allowing combinations of features to be jointly represented within a reasonable memory limit. It’s a crucial concept.<br />
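A toy sketch may help make the wiring concrete. The following Python fragment is purely illustrative - the fan-in of two and the index-based wiring scheme are invented for the example - but it shows how identical units can be assembled into a tree where only the lowest level touches external input:

```python
def build_hierarchy(n_inputs, fan_in=2):
    """Wire identical units into a tree. Level 0 units each receive
    fan_in external inputs; each higher unit receives the outputs of
    fan_in units in the level below."""
    levels = []
    sources = list(range(n_inputs))   # external sensor indices
    width = n_inputs // fan_in
    while width >= 1:
        # Each unit is described by the list of sources it reads from.
        levels.append([sources[i * fan_in:(i + 1) * fan_in]
                       for i in range(width)])
        sources = list(range(width))  # the next level reads these units
        width //= fan_in
    return levels

hierarchy = build_hierarchy(8, fan_in=2)
# 8 sensor inputs -> 4 units -> 2 units -> 1 top unit
```

Note that each level's units model combinations of the level below, so representational abstraction grows with height even though every unit is identical.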
<h3 style="text-align: left;">
Synchronization</h3>
Do we need synchronization between units? Synchronization can simplify sequence modelling in a hierarchy by restricting the number of possible permutations of events. However, synchronization between units may significantly hinder fast execution on parallel computing hardware, so this question is important. A point of confusion may be the difference between synchronization and timing / clock signals. <a href="https://en.wikipedia.org/wiki/Asynchronous_circuit" target="_blank">We can have synchronization without clocks</a>, but in any case there is biological evidence of timing signals within the brain. <a href="http://blog.agi.io/2015/01/is-missing-data-actually-missing.html" target="_blank">Pathological conditions can arise without a sense of time</a>. In conclusion we’re going to assume that <a href="http://rstb.royalsocietypublishing.org/content/370/1668/20140174" target="_blank">units should be functionally asynchronous</a>, but might make use of clock signals.<br />
<h3 style="text-align: left;">
Robustness</h3>
<a href="https://en.wikipedia.org/wiki/Cognitive_reserve" target="_blank">Your brain doesn’t completely stop working if you damage it</a>. Robustness is a characteristic of a distributed system and one we should hope to emulate. Robustness applies not just to internal damage but external changes (i.e. it doesn't matter if your brain is wrong or the world has changed; either way you have to learn to cope).<br />
<h3 style="text-align: left;">
Scalability</h3>
Adding more units should improve capability and performance. The algorithm must scale effectively without changes other than having more of the same units appended to the hierarchy. Note the specific criteria for how scalability is to be achieved (i.e. enlarge the hierarchy rather than enlarge the units). It is important to test for this feature to demonstrate the generality of the solution.<br />
<h3 style="text-align: left;">
Generality</h3>
The same unit should work reasonably well for all types of input data, without preprocessing. Of course, tailored preprocessing could make it <b>better</b>, but it shouldn’t be essential.<br />
<h3 style="text-align: left;">
Local interpretation</h3>
The unit must locally interpret all input. In real brains it isn’t plausible that neuron X evolved to target neuron Y <i>precisely</i>. Neurons develop dendrites and synapses with sources and targets that are carefully guided, but not to the extent of identifying specific cells amongst thousands of peers. Any algorithm that requires exact targeting or mapping of long-range connections is biologically implausible. Rather, units should locally select and interpret incoming signals using characteristics of the input. Since many AI methods require exact mapping between algorithm stages, this principle is actually quite discriminating.<br />
<h3 style="text-align: left;">
Cellular plausibility</h3>
Similarly, we can validate designs by questioning whether they could develop by biologically plausible processes, such as <a href="https://www.jsmf.org/about/j/neural_connections.htm" target="_blank">cell migration</a> or preferential affinity for specific <a href="http://www.brainfacts.org/brain-basics/brain-development/articles/2012/making-connections" target="_blank">signal coding or molecular markers</a>. However, be aware that brain neurons rarely match the traditional integrate-and-fire model.<br />
<h2 style="text-align: left;">
Key Insights</h2>
It’s surprising that in careers cumulatively spanning more than 25 years we (the authors) had very little idea how the methods we used every day could lead to general intelligence. It is only in the last 5 years that we have begun to research the particular sub-disciplines of AI that may lead us in that direction.<br />
<br />
Today, those who have studied this area can talk in some detail about the nature of general intelligence without getting into specifics. Although we don’t yet have all the answers, the problem has become more approachable. For example, we’re really looking to understand a much simpler unit, not an entire brain holistically. Many complex systems can be easily understood when broken down in the right way, because we can selectively ignore detail that is irrelevant to the question at hand.<br />
<br />
From our experience, we've developed some insights we want to share. Many of these insights were already known, and we just needed to find the right terminology. By sharing this terminology we can help others to find the right research to read.<br />
<h3 style="text-align: left;">
We’re looking for a stackable building block, not the perfect monolith</h3>
We must find a unit that can be assembled into an arbitrarily large - yet still functional - structure. In fact, a similar feature was instrumental in the success of “deep” learning: <a href="http://papers.nips.cc/paper/1679-modeling-high-dimensional-discrete-data-with-multi-layer-neural-networks.pdf" target="_blank">Networks could suddenly be built up to arbitrary depths</a>. Building a stackable block is surprisingly hard and astonishingly important.<br />
<h3 style="text-align: left;">
We’re not looking to beat any specific benchmark</h3>
... but if we could do reasonably well at a wide range of benchmarks, that would be exciting. This is why the <a href="https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf" target="_blank">DeepMind Atari demos</a> are so exciting; the same algorithm could succeed in very different problems.<br />
<h3 style="text-align: left;">
Abstraction by accumulation of invariances</h3>
This insight comes from Hawkins’ work on <a href="https://www.scribd.com/book/182534736/On-Intelligence" target="_blank">Hierarchical Temporal Memory</a>. He proposes that abstraction towards symbolic representation comes about <i>incrementally</i>, rather than as a single mapping process. Concepts accumulate invariances - such as appearance from different angles - until labels can correctly be associated with them. This neatly avoids the fearful “<a href="https://en.wikipedia.org/wiki/Symbol_grounding_problem" target="_blank">symbol grounding problem</a>” from the early days of AI.<br />
<h3 style="text-align: left;">
Biased Prediction and Selective Attention are both action selection</h3>
We believe that selective bias of predictions and expectations is responsible for both narrowing of the range of anticipated futures (selective ignorance of potential outcomes) and the mechanism by which motor actions are generated. A selective prediction of oneself performing an action is a great way to generate or “select” that action. Similarly, selective attention to external events affects the way data is perceived and in turn the way the agent will respond. Filtering data flow between hierarchy units implements both selective attention and action selection, if data flowing towards motors represents candidate futures including both self-actions and external consequences.<br />
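One simple way to express this idea in code - a sketch of the concept, not our implementation, and the bias parameter <code>beta</code> is invented for illustration - is to reweight a predictive distribution over candidate futures by their expected reward:

```python
import math

def biased_prediction(probs, rewards, beta=2.0):
    """Reweight a predictive distribution over candidate futures toward
    rewarding outcomes: p'(a) is proportional to p(a) * exp(beta * R(a)).
    beta = 0 gives pure, unbiased prediction; larger beta increasingly
    turns prediction into action selection."""
    weights = {a: p * math.exp(beta * rewards[a]) for a, p in probs.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

# Predicted futures: staying put is most likely, but reaching is rewarded.
p = {"stay": 0.7, "reach": 0.3}
r = {"stay": 0.0, "reach": 1.0}
biased = biased_prediction(p, r)
# The biased "prediction" now favours the rewarding self-action.
```

The same reweighting covers selective attention: amplifying some anticipated futures and suppressing others changes both what is perceived and what is done.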
<h3 style="text-align: left;">
The importance of spatial structure in data</h3>
As you will see in later parts of this article series, the spatial structure of input data is actually quite important when training our latest algorithms. This is <i>not true</i> of many algorithms, especially in Machine Learning where each input scalar is often treated as an independent dimension. Note that we now believe spatial structure is important both in raw input <i>and in data communicated between units</i>. We’re not simply saying that external data structure is important to the algorithm - we’re claiming that simulated spatial structure is actually an essential part of algorithms for dynamically dividing a pool of resources between hierarchy units.<br />
<h3 style="text-align: left;">
Binary data</h3>
There's a lot of simplification and assumption here, but we believe this is the most useful format for input and internal data. In any case, the algorithms we’re finding most useful can't easily be refactored for the obvious alternative (continuous input values). However, <a href="https://github.com/numenta/nupic/wiki/Encoders" target="_blank">continuous input can be encoded with some loss of precision as subsets of bits</a>. There is some <a href="https://en.wikipedia.org/wiki/Neural_coding" target="_blank">evidence</a> that this is biologically plausible, but it is not definitive. Why binary? <a href="https://en.wikipedia.org/wiki/Dimensionality_reduction" target="_blank">Dimensionality reduction</a> is an essential feature of a hierarchical model; it may be that <a href="http://blog.agi.io/2014/12/sparse-distributed-representations-sdrs_24.html" target="_blank">sparse binary representations</a> are simply a good compromise between data loss and qualities such as compositionality.<br />
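As a sketch of the kind of encoding linked above (the parameters are invented for illustration, not taken from any particular encoder), a continuous value can be mapped to a contiguous run of active bits, so that nearby values share bits:

```python
def encode_scalar(value, min_val, max_val, n_bits=64, n_active=8):
    """Encode a continuous value as a binary vector containing one
    contiguous run of n_active 1-bits; the run's position reflects the
    value. Nearby values share bits, so overlap measures similarity."""
    # Clamp into range, then map to the start index of the active run.
    value = max(min_val, min(max_val, value))
    span = n_bits - n_active
    start = int(round((value - min_val) / (max_val - min_val) * span))
    return [1 if start <= i < start + n_active else 0 for i in range(n_bits)]

v_half = encode_scalar(0.50, 0.0, 1.0)
v_near = encode_scalar(0.51, 0.0, 1.0)
v_far  = encode_scalar(0.95, 0.0, 1.0)
# Nearby values share most of their active bits; distant values share few.
```

Precision is traded for robustness: the resolution is limited by the number of bit positions, but similar inputs produce measurably similar codes.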
<h3 style="text-align: left;">
Sparse, Distributed Representations</h3>
We will be using <a href="http://blog.agi.io/2014/12/sparse-distributed-representations-sdrs_24.html" target="_blank">Sparse, Distributed Representations</a> (SDRs) to represent agent and world state. SDRs are binary data (i.e. all values are 1 or 0). SDRs are sparse, meaning that at any moment only a small fraction of the bits are 1's (active). The most complex feature to grasp is that SDRs are distributed: no individual bit uniquely represents anything. Instead, data features are <i>jointly</i> represented by sets of bits. <a href="http://lists.numenta.org/pipermail/nupic-theory_lists.numenta.org/2015-August/003122.html" target="_blank">SDRs are overcomplete representations</a> - not all bits in a feature-set are required to “detect” a feature, which means that degrees of similarity can be expressed as if the data were continuous. These characteristics also make SDRs robust to noise - missing bits are unlikely to affect interpretation.<br />
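A toy example may help. Representing SDRs as sets of active bit indices (the particular bits below are invented for illustration), overlap gives a graded similarity measure that degrades gracefully under noise:

```python
def overlap(a, b):
    """Shared active bits - a graded similarity measure over SDRs."""
    return len(a & b)

# Toy SDRs as sets of active bit indices (imagine 2048-bit vectors with
# ~2% of bits active; these particular indices are invented).
cat = {7, 33, 90, 120, 250, 511, 700, 901}
dog = {7, 33, 90, 121, 250, 512, 700, 905}   # shares many bits with cat
car = {2, 48, 77, 130, 300, 601, 750, 999}   # mostly disjoint from both

# Degrees of similarity fall out of the representation itself, and
# dropping a few bits (noise) rarely changes which match is best.
noisy_cat = cat - {90, 250}
```

No single bit means "cat"; similarity and robustness both come from the joint, overlapping sets of active bits.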
<h3 style="text-align: left;">
Predictive Coding</h3>
SDRs are a specific form of <a href="http://www.scholarpedia.org/article/Sparse_coding" target="_blank">Sparse (Population) Coding</a> where state is jointly represented by a set of active bits. Transforming data into a sparse representation is necessarily lossy and balances representational capacity against bit-density. The most promising sparse coding scheme we have identified is <a href="http://blog.agi.io/2014/10/on-predictive-coding-and-temporal.html" target="_blank">Predictive Coding</a>, in which internal state is represented by prediction errors. PC has the benefit that errors are propagated rather than hidden in local states, and data dimensionality automatically reduces in proportion to its predictability. Perfect prediction implies that data is fully understood, and produces no output. A specific description of PC is given by <a href="https://en.wikipedia.org/wiki/Bayesian_approaches_to_brain_function#Predictive_coding" target="_blank">Friston et al</a>, but a more general framework has been discussed in several papers by <a href="http://journal.frontiersin.org/article/10.3389/fpsyg.2012.00254/full" target="_blank">Rao, Ballard et al</a> since about 1999. The latter is quite similar to the inter-region coding via temporal pooling described in the <a href="http://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf" target="_blank">HTM Cortical Learning Algorithm</a>.<br />
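The core idea - output only what was not predicted - can be sketched in a few lines. This is a deliberately minimal caricature of Predictive Coding (a single unit predicting its own next input), not the hierarchical generative models of Friston or Rao and Ballard:

```python
class PredictiveUnit:
    """Toy predictive-coding unit: it maintains a running prediction of
    its input and outputs only the prediction ERROR. Well-predicted
    input produces near-zero output, so the data flowing upward shrinks
    in proportion to its predictability."""
    def __init__(self, size, rate=0.1):
        self.prediction = [0.0] * size
        self.rate = rate

    def step(self, observed):
        error = [o - p for o, p in zip(observed, self.prediction)]
        # Move the prediction toward the observation (online learning).
        self.prediction = [p + self.rate * e
                           for p, e in zip(self.prediction, error)]
        return error

unit = PredictiveUnit(size=3)
for _ in range(200):
    error = unit.step([1.0, 0.0, 0.5])   # a perfectly predictable input
# After learning, there is (almost) nothing left to propagate upward.
```

Unpredictable input passes through as large errors until some unit, somewhere in the hierarchy, learns to predict it.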
<h3 style="text-align: left;">
Generative Models</h3>
Training an SDR typically produces a <a href="https://en.wikipedia.org/wiki/Generative_model" target="_blank">Generative Model</a> of its input. This means that the system encodes observed data in such a way that it can generate novel instances of observed data. In other words, the system can generate predictions of all inputs (with varying uncertainty) from an arbitrary internal state. This is a key prerequisite for a general intelligence that must simulate outcomes for planned novel action combinations.<br />
<h3 style="text-align: left;">
Dimensionality Reduction</h3>
<div>
In constructing models, we will be looking to extract stable features and in doing so reduce the complexity of input data. This is known as <a href="https://en.wikipedia.org/wiki/Dimensionality_reduction" target="_blank">dimensionality reduction</a>, for which we can use algorithms such as <a href="https://en.wikipedia.org/wiki/Autoencoder" target="_blank">auto-encoders</a>. To cope with the vast number of possible permutations and combinations of input, an incredibly efficient incremental process of compression is required. So how can we detect stable features within data?</div>
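To make this concrete, here is a minimal tied-weight linear auto-encoder in plain Python - an illustration of dimensionality reduction in general, not our algorithm; the data, learning rate, and iteration count are invented. Twenty-one 2-D points that lie on a line are compressed to a single latent value and reconstructed:

```python
# Toy data: 2-D points on the line y = 2x, so one latent value suffices.
data = [(x, 2.0 * x) for x in [i / 10.0 for i in range(-10, 11)]]

# Tied-weight linear auto-encoder: h = w1*x1 + w2*x2, recon = (h*w1, h*w2).
w = [0.3, 0.1]   # arbitrary small starting weights
lr = 0.05
for _ in range(2000):
    for x1, x2 in data:
        h = w[0] * x1 + w[1] * x2   # encode: 2 dims -> 1 dim
        e1 = h * w[0] - x1          # reconstruction errors
        e2 = h * w[1] - x2
        # Gradient of (e1^2 + e2^2)/2 with respect to the tied weights.
        common = e1 * w[0] + e2 * w[1]
        w[0] -= lr * (e1 * h + common * x1)
        w[1] -= lr * (e2 * h + common * x2)

# Mean squared reconstruction error after training (should be tiny).
loss = sum((w[0] * (w[0] * x1 + w[1] * x2) - x1) ** 2 +
           (w[1] * (w[0] * x1 + w[1] * x2) - x2) ** 2
           for x1, x2 in data) / len(data)
```

The weights converge to a unit vector along the data's principal direction, so the single hidden value preserves almost all of the input's information - the stable feature has been extracted.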
<h3 style="text-align: left;">
<a href="https://en.wikipedia.org/wiki/Unsupervised_learning" target="_blank">Unsupervised Learning</a></h3>
By the definition of general intelligence, we can’t possibly hope to provide a tutor-algorithm that provides the optimum model update for every input presented. It’s also worth noting that internal representations of the world and agent should be formed without consideration of the <i>utility</i> of the representations - in other words, internal models should be formed for completeness, generality and accuracy rather than task-fulfilment. This allows less abstract representations to become part of more abstract, long-term plans, despite lacking immediate value. It requires that we use unsupervised learning to build internal representations.<br />
<h3 style="text-align: left;">
Hierarchical Planning & Execution</h3>
We don’t want to have to model the world twice: Once for understanding what’s happening, and again for planning & control. The same model should be used for both. This means we <a href="http://blog.agi.io/2014/12/agency-and-hierarchical-action-selection.html" target="_blank">have to do planning & action selection within the single hierarchical model used for perception</a>. It also makes sense, given that the agent’s own actions will help to explain sensor input (for example, turning your head will alter the images received in a predictable way). As explained earlier, we can generate plans by simply biasing “predictions” of our own behaviour towards actions with rewarding outcomes.<br />
<h3 style="text-align: left;">
<a href="https://en.wikipedia.org/wiki/Reinforcement_learning" target="_blank">Reinforcement Learning</a></h3>
In the context of an intelligent agent, it is generally impossible to discover the “correct” set of actions or output for any given situation. There are many alternatives of varying quality; we don’t even insist on the best action but expect the agent to usually pick rewarding actions. In these scenarios, we will require a Reinforcement Learning system to model the quality of the actions considered by the agent. Since there is value in exploration, we may also expect the agent to occasionally pick suboptimal strategies, to learn new information.<br />
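A standard way to balance exploitation against exploration is an epsilon-greedy policy; the sketch below uses invented action names and value estimates:

```python
import random

def select_action(q_values, epsilon=0.1, rng=random.Random(42)):
    """Usually exploit the most rewarding known action, but with
    probability epsilon explore a random one, to keep learning."""
    if rng.random() < epsilon:
        return rng.choice(list(q_values))      # explore
    return max(q_values, key=q_values.get)     # exploit

# Hypothetical learned action-value estimates for one situation.
q = {"reach": 0.2, "grasp": 0.9, "wait": 0.1}
choices = [select_action(q) for _ in range(1000)]
# The agent usually, but not always, picks the best-known action.
```

The agent mostly picks the rewarding action, yet every action is occasionally sampled, so its value estimates can keep improving.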
<h3 style="text-align: left;">
<a href="https://en.wikipedia.org/wiki/Supervised_learning" target="_blank">Supervised Learning</a></h3>
There is still a role for supervised learning within general intelligence. Specifically, during the execution of hierarchical control tasks we can describe both the ideal outcome and a metric measuring how closely the actual outcome matches it. Supervised learning is ideal for discovering actions with <a href="https://en.wikipedia.org/wiki/Sense_of_agency" target="_blank">agency</a> to bring about desired results. Supervised Learning can tell us how best to execute a plan that was constructed in an Unsupervised Learning model and later selected by Reinforcement Learning.<br />
<h2 style="text-align: left;">
Challenges Anticipated </h2>
The features and constraints already identified mean that we can expect some specific difficulties when creating our general intelligence.<br />
<br />
Among other problems, we are particularly concerned about:<br />
<br />
1. Allocation of limited resources<br />
2. Signal dilution<br />
3. Detached circuits within the hierarchy<br />
4. Dilution of executive influence<br />
5. Conflict resolution<br />
6. Parameter selection<br />
<br />
Let’s elaborate:<br />
<h3 style="text-align: left;">
Allocation of limited resources</h3>
This is an inherent problem when allocating a fixed pool of computational resources (such as memory) to a hierarchy of units. Often, resources per unit are fixed, ensuring that there are sufficient resources for the desired hierarchy structure. However, this is far less efficient than dynamically allocating resources to units to globally maximize performance. It also presupposes the ideal hierarchy structure is known, and not a function of the data. If the hierarchy structure is also dynamic, this becomes particularly difficult to manage because resources are being allocated at two scales simultaneously (resources → units and units → hierarchy structure), with constraints at both scales.<br />
<br />
In our research we will initially adopt a fixed resource quota per hierarchy unit and a fixed branching factor for the hierarchy, allowing the structure of the hierarchy to be determined by the data. This arrangement is the one most likely to work given a universal unit with constant parameters, as the number of inputs to each unit is constrained (due to the branching factor). It is interesting that the human cortex is a continuous sheet, and exhibits dynamic resource allocation in the form of <a href="https://en.wikipedia.org/wiki/Neuroplasticity" target="_blank">neuroplasticity</a> - resources can be dynamically reassigned to working areas and sensors when others fail.<br />
<h3 style="text-align: left;">
Signal Dilution</h3>
As data is transformed from raw input into a hierarchical model, information will be lost (not represented anywhere). This problem is certain to occur in all realistic tasks because input data will be modelled locally in each unit without global oversight over which data is useful. Given local resource constraints, this will be a lossy process. Moreover, we have also identified the need for units to identify patterns in the data and output a simplified signal for higher-order modelling by other units in the hierarchy (dimensionality reduction). Therefore, each unit will deliberately and necessarily lose data during these transformations. We will use techniques such as Predictive Coding to allow data that is not understood (i.e. not predictable) to flow through the system until it can be modelled accurately (predicted). However, it will still be important to characterise the failure modes in which important data is eliminated before it can be combined with other data that provides explanatory power.<br />
<h3 style="text-align: left;">
Detached circuits within the hierarchy</h3>
Consider figure 2. Here we have a tree of hierarchy units. If the interactions between units are reciprocal (i.e. X outputs to Y and receives data <i>from</i> Y) there is a strong danger of small self-reinforcing circuits forming in the hierarchy. These feedback circuits exchange mutually complementary data between a pair or more units, causing them to ignore data from the rest of the hierarchy. In effect, the circuit becomes “detached” from the rest of the hierarchy. Since sensor data enters via leaf-units at the bottom of the hierarchy, everything above the detached circuit is also detached from the outside world and the system will cease to function satisfactorily.<br />
<br />
In any hierarchy with reciprocal connections, this problem is very likely to occur, and disastrous when it does. In <a href="https://en.wikipedia.org/wiki/Belief_propagation" target="_blank">Belief Propagation</a>, an inference algorithm for graphical models, this problem manifests as “<a href="http://www.cs.huji.ac.il/~yweiss/cbcl.pdf" target="_blank">double counting</a>” and is avoided by nodes carefully ignoring their own evidence when it is returned to them.<br />
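The fix can be sketched as follows, using toy additive evidence scores rather than true sum-product messages: when a unit computes its output for a particular neighbour, it excludes whatever that neighbour sent it, so no evidence is ever reflected back to its source.

```python
def message_to(unit, target, incoming):
    """Combine local evidence with messages from every neighbour EXCEPT
    the target, so the target never receives its own evidence back -
    the standard Belief Propagation fix for double counting. Evidence
    here is a toy additive score, not a real BP message."""
    return unit["local_evidence"] + sum(
        value for neighbour, value in incoming.items() if neighbour != target)

x = {"local_evidence": 1.0}
incoming_to_x = {"Y": 0.5, "Z": 0.25}   # invented example values
# What X tells Y excludes what Y told X, and likewise for Z.
```

With this exclusion, a pair of reciprocally connected units cannot amplify each other's output into a detached, self-reinforcing circuit.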
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-O3ejNXjLUhs/VjI0xmMyhhI/AAAAAAAAFs0/XEr8MFyj6Cg/s1600/part1fig2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-O3ejNXjLUhs/VjI0xmMyhhI/AAAAAAAAFs0/XEr8MFyj6Cg/s320/part1fig2.png" width="199" /></a></div>
<br />
<i>FIGURE 2: Detached circuits within the hierarchy. Units X and Y have formed a mutually reinforcing circuit that ignores all data from other parts of the hierarchy. By doing so, they have ceased to model the external world and have divided the hierarchy into separate components.</i><br />
<h3 style="text-align: left;">
Dilution of executive influence</h3>
A generally-intelligent agent needs to have the ability to execute abstract, high-level plans as easily as primitive, immediate actions. As people we often conceive plans that may take minutes, hours, days or even longer to complete. How is execution of lengthy plans achieved in a hierarchical system?<br />
<br />
If abstract concepts exist only in higher levels of the hierarchy, they need to control large subtrees of the hierarchy over long periods of time to be executed successfully. However, if each hierarchy unit is independent, how is this control to be achieved? If higher units do not effectively <a href="http://ai.eecs.umich.edu/cogarch3/Brooks/Brooks.html" target="_blank">subsume</a> lower ones, executive influence will dilute as plans are incrementally re-interpreted from abstract to concrete (see figure 3). Ideally, abstract units will have quite specific control over concrete units. However, it is impractical for abstract units to have the complexity to "micro-manage" an entire tree of concrete units.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-QIq5Ub1GvpM/VjI1DaXAtPI/AAAAAAAAFs8/cCoR2OurvUk/s1600/part1fig3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://3.bp.blogspot.com/-QIq5Ub1GvpM/VjI1DaXAtPI/AAAAAAAAFs8/cCoR2OurvUk/s320/part1fig3.png" width="267" /></a></div>
<br />
<i>FIGURE 3: Dilution of executive influence. A high-level unit within the hierarchy wishes to execute a plan; the plan must be translated towards the most concrete units to be performed. However, each translation and re-interpretation risks losing details of the original intent which cannot be fully represented in the lower levels. Somehow, executive influence must be maintained down through an arbitrarily deep hierarchy. </i><br />
<br />
Let’s define <a href="http://en.wikipedia.org/wiki/Sense_of_agency" target="_blank">“agency” as the ability to influence or control outcomes</a>. An agent that cannot bring about a particular outcome lacks agency over it. By making each hierarchy unit responsible for the execution of goals defined in the hierarchy level immediately above, we indirectly maximise the agency of more abstract units. Without this arrangement, more abstract units would have little or no agency at all.<br />
<br />
Figure 4 shows what happens when an abstract plan gets “lost in translation” to concrete form. I walked up to my car and pulled my keys from my pocket. The car key is on a ring with many others, but it’s much bigger and can’t be mistaken by touch. It can only be mistaken if you don’t care about the differences.<br />
<br />
In this case, when I got to the car door I tried to unlock it with the house key! I only stopped when the key wouldn't fit in the keyhole. Strangely, all low-level mechanical actions were performed skillfully, but high level knowledge (which key) was lost. Although the plan was put in motion, it was not successful in achieving the goal.<br />
<br />
Obviously this is just a hypothesis about why this type of error happens. What’s surprising is that it isn't more common. Can you think of any examples?<br />
<br />
<div style="text-align: center;">
<span id="docs-internal-guid-f64fd95a-b414-f167-7466-f891429453c0"><span style="font-family: "arial"; font-size: 14.6666666666667px; vertical-align: baseline; white-space: pre-wrap;"><img alt="car_key_ampf_agi_translation.jpg" height="468px;" src="https://lh4.googleusercontent.com/-LGPIl5D6jbYlq_BmOGtx9tB2HW6j0tKY7FuZWOhUgCeTVHAnBD43dKst4xem0tG9vFQeX7dSUJTgHJYXd2btwVkAz6Bm6yyozfcoVMCp2fGY1OiEb7gIKlghmLDsIvYahPZQKUt" style="-webkit-transform: rotate(0rad); border: none; transform: rotate(0rad);" width="624px;" /></span></span></div>
<i>FIGURE 4: Abstract plan translation failure: Picking the wrong key but skilfully trying it in the lock. This may be an example of abstract plans being carried out, but losing relevant details while being transformed into concrete motor actions by a hierarchy of units.</i><br />
<br />
In our model, planning and action selection occur as biased prediction. There is an inherent conflict between accurate prediction and bias. Attempting to bias predictions of events beyond your control leads to unexpected failure, which is even worse than expected failure.<br />
<br />
The alternative is to predict accurately, but often the better outcome is the less likely one. There must be a mechanism to increase the probability of low-frequency events where the agent has agency over the real-world outcome.<br />
<br />
Where possible, lower units must separate learning to predict and trying to use that learning to satisfy higher units’ objectives. Units should seek to maximise the probability of goal outcomes, given an accurate estimate of the state of the local unit as prior knowledge. But units should not become blind to objective reality in the process.<br />
<h3 style="text-align: left;">
Conflict resolution</h3>
General intelligence must be able to function effectively in novel situations. Modelling and prediction must work in the first instance, without time for re-learning. This means that existing knowledge must be combined effectively to extrapolate to a novel situation.<br />
<br />
We also want the general intelligence to spontaneously create novel combinations of behaviour as a way to innovate and discover new ways to do things. Since we assume that behaviour is generated by filtering predictions, we are really saying we need to be able to predict (simulate) accurately when extrapolating combinations of existing models to new situations. So we also need conflict resolution for non-physical or non-action predictions. The agent needs a clear and decisive vision of the future, even when simulating outcomes it has never experienced.<br />
<br />
The downside of all this creativity is that there’s really no way to tell whether these combinations are valid. Often they will be, but not always. For example, you can’t touch two objects that are far apart at the same time. When incompatible, we need a way to resolve the conflict.<br />
<br />
<a href="http://www.scholarpedia.org/article/Basal_ganglia#Action_selection" target="_blank">There’s a good discussion of different conflict resolution strategies on Scholarpedia</a>; our preferred technique is selecting a solitary active strategy in each hierarchy unit, choosing locally to optimise for a single objective when multiple are requested.<br />
<br />
Evaluating alternative plans is most easily accomplished as a centralised task - you have to bring all the potential alternatives together where they can be compared. This is because we can only assign relative rewards to each alternative; it is impossible to calculate meaningful absolute rewards for the experiences of an intelligent agent. It is also important to place all plans on a level playing-field regardless of the level of abstraction; therefore abstract plans should be competing against more concrete ones and vice-versa.<br />
<br />
Therefore, unlike most of the pieces we've described, action selection should be a centralised activity rather than a distributed one.<br />
<h3 style="text-align: left;">
Parameter Selection</h3>
In a hierarchical system the input to “higher” units will be determined by modelling in “lower” units and interactions with the world. The agent-world system will develop in markedly different ways each time. It will take an unknown amount of time for stable modelling to emerge, first in the lower units and then moving higher in the hierarchy.<br />
<br />
As a result of all these factors it will be very difficult to pick suitable values for time-constants and other parameters that control the learning processes in each unit, due to compounded uncertainty about lower units’ input. Instead, we must allow recent input to each unit to determine suitable values for parameters. This is <a href="https://en.wikipedia.org/wiki/Online_machine_learning" target="_blank">online learning</a>. Some parameters cannot be automatically adjusted in response to data. For these, to have any hope of debugging a general intelligence, a fixed parameter configuration must work for all units in all circumstances. This constraint will limit the use of some existing algorithms.<br />
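As one concrete example of letting recent data choose the parameters, a unit can normalize its input using running estimates of the input's statistics rather than fixed constants. The sketch below is illustrative; the decay value is invented, not a recommendation:

```python
class OnlineNormalizer:
    """Track exponential moving estimates of the input's mean and
    variance, and use them to normalize each new sample - so the unit
    adapts to whatever statistics the units below happen to produce,
    instead of relying on hand-picked constants."""
    def __init__(self, decay=0.99):
        self.decay = decay
        self.mean = 0.0
        self.var = 1.0

    def update(self, x):
        d = self.decay
        self.mean = d * self.mean + (1 - d) * x
        self.var = d * self.var + (1 - d) * (x - self.mean) ** 2
        return (x - self.mean) / (self.var ** 0.5 + 1e-8)

norm = OnlineNormalizer()
# Whatever scale a lower unit happens to output, the estimates track it.
for i in range(5000):
    norm.update(100.0 + 5.0 * ((i % 7) - 3))   # samples spread around 100
```

The same fixed decay constant works regardless of the input's scale, which is exactly the property we need: one configuration for all units in all circumstances.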
<h2 style="text-align: left;">
Summary</h2>
That wraps up our theoretical overview of what we think a general intelligence algorithm must look like. The next article in this series will explain what we've learnt from biology’s implementation of general intelligence - ourselves! The final article will describe how we hope to build an algorithm that satisfies all these requirements.<br />
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-77702880239679908272015-10-15T01:49:00.003+11:002015-10-15T01:54:25.209+11:00Digital Reconstruction of Neocortical Microcircuitry (resource)<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-6st8IbkUpN0/Vh5pd5BPhuI/AAAAAAAAFbs/lVrO03Quc6w/s1600/Screenshot%2Bfrom%2B2015-10-15%2B01%253A40%253A32.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="360" src="http://2.bp.blogspot.com/-6st8IbkUpN0/Vh5pd5BPhuI/AAAAAAAAFbs/lVrO03Quc6w/s640/Screenshot%2Bfrom%2B2015-10-15%2B01%253A40%253A32.png" width="640" /></a></div>
<br />
We have found a fantastic resource, part of the <a href="https://en.wikipedia.org/wiki/Blue_Brain_Project" target="_blank">EPFL Blue Brain Project</a>, that clearly and interactively maps out interactions between neocortical neurons. The data comes from their attempts to simulate a piece of cortex down to the level of biologically-realistic neurons.<br />
<div>
<br /></div>
Interactive neocortex browser tool here:<br />
<div>
<br />
<a href="https://bbpnmc.epfl.ch/nmc-portal/web/guest/microcircuit">https://bbpnmc.epfl.ch/nmc-portal/web/guest/microcircuit</a><br />
<br />
The paper containing the original research on which the website is based is here:<br />
<div>
<br /></div>
<div>
<a href="http://www.cell.com/abstract/S0092-8674(15)01191-5">http://www.cell.com/abstract/S0092-8674(15)01191-5</a></div>
<div>
Thanks to Yuwei Cui who posted the link to the <a href="http://lists.numenta.org/mailman/listinfo/nupic-theory_lists.numenta.org" target="_blank">NuPIC Theory mailing list</a>.</div>
</div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-9665167837369733292015-10-15T01:39:00.006+11:002015-10-15T01:53:37.063+11:00SDR-RL (Sparse, Distributed Representation with Reinforcement Learning)<div dir="ltr" style="text-align: left;" trbidi="on">
Erik Laukien is back with a demo of Sparse, Distributed Representation with Reinforcement Learning.<br />
<br />
This topic is of intense interest to us, although the problem is quite a simple one. SDRs are a natural fit with Reinforcement Learning because bits jointly represent a state. If you associate each bit-pattern with a reward value, it is easy to determine the optimum action.<br />
<div>
<br /></div>
<div>
However, since this is an enormous state-space, it is not practical to do so. Instead, one might associate only the observed bit patterns with reward, or cluster them somehow to reduce the number of reward values that must be stored. Anyway, these are thoughts for another day.<br />
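As a toy sketch of the idea above (our own illustration, not Laukien's code; all names are hypothetical): store a reward estimate only for observed SDR/action pairs, and act greedily on those estimates:

```python
# Minimal sketch: reward estimates are stored only for observed
# (SDR, action) pairs, sidestepping the enormous full state-space.
from collections import defaultdict

class SdrRewardTable:
    def __init__(self, actions, learning_rate=0.1):
        self.actions = actions
        self.lr = learning_rate
        self.value = defaultdict(float)  # (sdr, action) -> reward estimate

    def observe(self, sdr, action, reward):
        # Incrementally update the estimate for this observed pattern.
        key = (frozenset(sdr), action)
        self.value[key] += self.lr * (reward - self.value[key])

    def best_action(self, sdr):
        # Greedy selection over the stored estimates for this pattern.
        s = frozenset(sdr)
        return max(self.actions, key=lambda a: self.value[(s, a)])
```

Clustering similar SDRs together, as mentioned above, would be the natural next step to generalise across patterns that have never been observed exactly.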
<br />
Here's his explanation of the demo:<br />
<br />
<a href="http://twistedkeyboardsoftware.com/?p=90">http://twistedkeyboardsoftware.com/?p=90</a><br />
<br />
Here's the demo itself. Note, we had to set the stepsPerFrame parameter to 100 to get it working quickly.<br />
<br />
<a href="http://twistedkeyboardsoftware.com/?p=1">http://twistedkeyboardsoftware.com/?p=1</a></div>
</div>
Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-1180536024131440638.post-74135009174931224282015-10-09T15:48:00.001+11:002015-10-09T15:48:50.196+11:00"Quantum computing" via Sparse distributed coding?<div dir="ltr" style="text-align: left;" trbidi="on">
An interesting article by Gerard Rinkus comparing the qualities of sparse distributed representation and quantum computing. In effect, he argues that because distributed representations can simultaneously represent multiple states, you get the same effect as a quantum superposition.<br />
<br />
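A toy illustration of the claimed "superposition" property (our own sketch, not Rinkus's formulation): the union of several sparse codes still allows each stored code to be read out by overlap, so one vector effectively carries multiple states at once:

```python
# Toy illustration: a single sparse representation (sets of active
# bit indices over a large bit space) can hold the union of several
# codes, and each code remains detectable by its overlap.

def overlap(a, b):
    return len(a & b)

codes = {
    "cat":  {2, 17, 41, 88},
    "dog":  {5, 17, 63, 90},
    "fish": {9, 30, 52, 77},
}

# Superpose "cat" and "dog" in one representation.
state = codes["cat"] | codes["dog"]

# Stored codes are recognised by high overlap; absent codes are not.
matches = {name: overlap(state, c) for name, c in codes.items()}
```

In a realistic setting the bit space would be far larger and the codes far sparser, so the chance of accidental overlap between unrelated codes stays small.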
The article was originally titled "sparse distributed coding via quantum computing" but I think that gets the key conclusions backwards (maybe I'm wrong?).<br />
<br />
The full article is here:<br />
<br />
<a href="http://people.brandeis.edu/~grinkus/SDR_and_QC.html">http://people.brandeis.edu/~grinkus/SDR_and_QC.html</a><br />
<br />
Rinkus says:<br />
<br />
<div>
<i>"<span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;">I believe that SDR constitutes a classical instantiation of quantum superposition and that switching from localist representations to SDR, which entails no new, esoteric technology, is </span><span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;">the</span><span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;"> key to achieving quantum computation in a </span><span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;">single-processor</span><span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;">, classical (Von Neumann) computer."</span></i></div>
<div>
<span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;"><br /></span></div>
<div>
<span style="font-family: Tahoma, Geneva, sans-serif;"><span style="background-color: white; font-size: 14.3999996185303px; line-height: 18.7199993133545px;">I think that goes a bit too far. Yes, it</span></span><span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;"> would seem to have some of the same advantages as quantum computing, with the additional benefit of fitting classical computing technology that is mass manufactured at low cost.</span></div>
<div>
<span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;"><br /></span></div>
<div>
<span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;">However, this may be moot now that true quantum computing looks likely to become practical:</span></div>
<div>
<span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;"><br /></span></div>
<div>
<span style="font-family: Tahoma, Geneva, sans-serif;"><span style="font-size: 14.3999996185303px; line-height: 18.7199993133545px;"><a href="http://www.quantumcomputingtechnologyaustralia.com/2015/10/06/australian-engineers-build-world-first-two-qubit-logic-gate-in-silicon/">http://www.quantumcomputingtechnologyaustralia.com/2015/10/06/australian-engineers-build-world-first-two-qubit-logic-gate-in-silicon/</a></span></span></div>
<div>
<span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;"><br /></span></div>
<div>
All in all I think the analogy between sparse distributed representation and quantum computing is very thought-provoking.</div>
<div>
<span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;"><br /></span></div>
<div>
<span style="background-color: white; font-family: Tahoma, Geneva, sans-serif; font-size: 14.3999996185303px; line-height: 18.7199993133545px;"><br /></span></div>
<br /></div>
Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-1180536024131440638.post-29480040195098662582015-09-10T17:08:00.001+10:002015-09-10T18:00:10.096+10:00AGI Experimental Framework: A platform for AGI R&D<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
By Gideon Kowadlo and David Rawlinson<br />
<br />
<h2>
Introduction</h2>
We’ve been building and testing AGI algorithms for the last few years. As the systems become more complex, we have found it ever more difficult to run meaningful experiments. To summarise, the main challenges are:<br />
<ul>
<li>testing a version of the algorithm repeatedly and over some range of parameters or conditions,</li>
<li>scaling it up so that it can run quickly,</li>
<li>debugging: the complexity of the ‘brain’ makes visualising and interpreting its state almost as hard as the problem itself! </li>
</ul>
Platforms for testing AIs already exist, such as the <a href="http://www.arcadelearningenvironment.org/">Arcade Learning Environment</a>. There are also a number of <a href="http://homepages.inf.ed.ac.uk/rbf/IAPR/researchers/MLPAGES/mldat.htm" target="_blank">standard datasets</a> and frameworks for testing them. What we want is a framework for understanding the behaviour of an AI that can be applied successfully to any problem - it is supposed to be an Artificial <i>General</i> Intelligence, after all. The goal isn't to advance the gold-standard incrementally; instead we want to better understand the behaviour of algorithms that might work reasonably well on many different problems.<br />
<div>
<br />
Whereas most AI testing frameworks are designed around a particular problem, we want to facilitate understanding of the algorithms themselves. Further, the algorithms will have complex internal state, and will be variably parameterised from small instances on trivial problems to large instances - comprising many computers - on complex problems. As such, there will be a lot of emphasis on interfaces that allow the state of the algorithm to be explored.</div>
<div>
<br /></div>
<div>
These design goals mean that we need to look more at the enterprise and web-scale frameworks for <a href="https://en.wikipedia.org/wiki/Distributed_computing" target="_blank">distributed systems</a>, than test harnesses for AIs. There's a huge variety of tools out there: <a href="https://en.wikipedia.org/wiki/Apache_Hadoop" target="_blank">Distributed filesystems</a>, cloud resourcing (such as <a href="https://en.wikipedia.org/wiki/Amazon_Elastic_Compute_Cloud" target="_blank">Elastic Compute</a>), and cluster job management (e.g. many scientific packages available in <a href="https://wiki.python.org/moin/ParallelProcessing" target="_blank">Python</a>). We'll design a framework with the capability to jump between platforms as available technologies evolve.</div>
<div>
<br /></div>
<div>
Developing distributed applications is significantly harder than single-process software. Synchronization and coordination is harder (c.f. <a href="https://zookeeper.apache.org/" target="_blank">Apache Zookeeper</a>), and there's a lot of <a href="http://docs.mongodb.org/master/crud/" target="_blank">crud</a> to get right before you can actually get to the interesting bits (i.e. the AGI). We're going to try to get the boring stuff done nicely, so that others can focus on the interesting bits!<br />
<h3>
Foundational Principles</h3>
<ul>
<li style="font-weight: bold;"><b>Agent/World conceptualisation</b></li>
<ul>
<li>For AGI, we have developed a system based around Experiments, with each Experiment having Agents situated in a World.</li>
</ul>
<li><b>Reproducible</b></li>
<ul>
<li>All data is persisted by default so that any experiment can be reproduced from any time step.</li>
</ul>
<li><b>Easy to run and use</b></li>
<ul>
<li>Minimal setup and dependencies.</li>
<li>No knowledge of the framework internals is required to implement a custom module (primarily the intelligent Agent, or the World in which it operates).</li>
</ul>
<li><b>Highly modular</b></li>
<ul>
<li>Different parts of the system can be customised, extended or overridden independently.</li>
</ul>
<li><b>Distributed architecture (Scalability)</b></li>
<ul>
<li>Modules can be run on physically separated machines, without any modification to the interactions between modules (i.e. the programmer’s perspective is not affected by scaling of the system to multiple computers).</li>
</ul>
<li><b>Easy to develop</b></li>
<ul>
<li>Code is open source.</li>
<li>Code is well documented.</li>
<li>All APIs are well documented and use standard protocols (at the moment <a href="https://en.wikipedia.org/wiki/Representational_state_transfer" target="_blank">RESTful</a>; in future this could be <a href="https://en.wikipedia.org/wiki/WebSocket" target="_blank">websockets</a> or another protocol).</li>
</ul>
<li><b>Explorable / Visualisable</b></li>
<ul>
<li>High priority placed on debugging and understanding of data rather than simply efficiency and throughput. We don’t yet know what the algorithm should look like!</li>
<li>All state is accessible, and relations can be explored.</li>
<li>Execution is on demand (step-by-step) or automatic (until criteria are met, or a batch of experiments completes).</li>
<li>It must be easy for anyone to build a UI client that can explore the state of all parts of the system.</li>
</ul>
</ul>
<h3>
Conceptual Entities</h3>
</div>
<div>
We have defined a number of components that make up an experiment. We refer to these components as Entities, and give them a specific <a href="https://en.wikipedia.org/wiki/Application_programming_interface" target="_blank">interface</a>.<br />
<ul>
<li><b>World</b></li>
<ul>
<li>The simulated environment within which all the other simulated components exist.</li>
</ul>
<li><b>Agent </b></li>
<ul>
<li>The intelligent agent itself. It operates within a World, and interacts with that World and (optionally) other Agents via a set of Sensors and Actuators.</li>
</ul>
<li><b>Sensor</b></li>
<ul>
<li>A means by which the Agent senses the world. The output is a function of a subset of the World state. For example, a unidirectional light sensor may provide the perceived brightness at the location of the sensor.</li>
</ul>
<li><b>Actuator</b></li>
<ul>
<li>A means by which an Agent acts on the World. The output is a simulated physical action. For example, a motor rotating a wheel.</li>
</ul>
<li><b>Experiment</b></li>
<ul>
<li>The Experiment Entity is a container for a World, a set of Agents (each of which has a set of Sensors and Actuators), and an ObjectiveFunction that determines the terminating condition of the experiment (which may simply be a time duration).</li>
</ul>
<li><b>Laboratory</b></li>
<ul>
<li>A collection of Experiments that form a suite to be analysed collectively. This may be a set of Experiments that have similar setups with minor parameter variations.</li>
</ul>
<li><b>ObjectiveFunction</b></li>
<ul>
<li>The objective function computes metrics about the World and/or Agents that are necessary to provide Supervised Learning or Reinforcement Learning signals. It might instead provide a multivariate Optimization function. The ObjectiveFunction is a useful encapsulation because it is often easy to separate objective measurements from the AI that is needed to achieve them.</li>
</ul>
</ul>
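To illustrate how these Entities might fit together, here is a hypothetical sketch. It is written in Python for brevity and is not the actual AGIEF API (the framework itself will be Java); every name and signature below is invented for the example:

```python
# Hypothetical sketch of the Entity relationships described above.
# Names and signatures are illustrative, not the actual AGIEF API.

class World:
    def __init__(self):
        self.light = 0.7  # example state: brightness at one location

class Sensor:
    def sense(self, world):
        return world.light  # output is a function of World state

class Actuator:
    def act(self, world, command):
        world.light = max(0.0, world.light - command)  # simulated action

class Agent:
    def __init__(self, sensors, actuators):
        self.sensors, self.actuators = sensors, actuators

class Experiment:
    def __init__(self, world, agents, objective):
        self.world, self.agents, self.objective = world, agents, objective

    def run(self, max_steps):
        # Step until the ObjectiveFunction reports a terminating
        # condition (here modelled as a simple callable).
        for step in range(max_steps):
            if self.objective(self.world, step):
                return step
            for agent in self.agents:
                for sensor in agent.sensors:
                    sensor.sense(self.world)
        return max_steps
```

A Laboratory would then simply be a collection of such Experiments with varied parameters, analysed collectively.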
<h3>
Architecture</h3>
To enforce good design principles, the architecture is multi-layered and highly modular. Multiple layers (also known as multi-tier architecture) allows you to work with concepts that are at the appropriate level of abstraction, which simplifies development and use of the system.<br />
<br />
Each Entity is a module. Use of particular Entities is optional and extensible: a user inherits the Entities they choose and implements the desired functionality. A second axis of modularity is the AGIEF Node. Nodes communicate via interprocess conventions, so components can be split across multiple host computers.<br />
<br />
Interprocess communication occurs via a central interface called the Coordinator, which is a single point of contact for all Entities and the shared system state. This also enables graphical user interfaces to be built to control and explore the system.<br />
<br />
These concepts are expanded in the sections below.<br />
<br />
<h4>
Design Considerations</h4>
The various components of the system may have huge in-memory data-structures. This is an important consideration for persisting state, distributed operation, and ability to visualise the state.<br />
<br />
Processing to update the state of Worlds and Agents will be compute-intensive. Many AI methods can easily be accelerated by parallel execution. Therefore, the system can be broken down into many computing nodes, each tasked with performing a specific computational function on some part of the shared system state. We hope to support massively parallel hardware such as GPUs in these compute nodes.<br />
<br />
We will write the bulk of the framework and initial algorithm implementations in Java. Others can extend on this, or develop against the framework in other languages. We will also write a graphical user interface using web technologies that will allow easy management of the system.<br />
<br />
<h4>
Perspectives on the system design</h4>
The architectural layers are shown in the diagram below.<br />
<div style="text-align: justify;">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-HcG6Yup0nsw/VfExsOOHDCI/AAAAAAAASRM/QFx4bEPqeUE/s1600/AGIEF%2BArchitectural%2BLayers%2B%25281%2529.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="394" src="http://1.bp.blogspot.com/-HcG6Yup0nsw/VfExsOOHDCI/AAAAAAAASRM/QFx4bEPqeUE/s640/AGIEF%2BArchitectural%2BLayers%2B%25281%2529.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: 12.8px;">Figure 1: 'Architectural Layers'</span></td></tr>
</tbody></table>
<br />
<span style="text-align: left;">Each layer is distinct, with strict separation. No layer has access to the layers above, which operate at a higher level of abstraction.</span></div>
<ul>
<li style="text-align: left;"><b>State:</b></li>
<ul>
<li>State persistence: storage and retrieval of state of all parts of the system at every time step. This comprises the shared filesystem.</li>
</ul>
<li><b>Interprocess:</b></li>
<ul>
<li>Communications between all modules running in the system, locally and/or across a network.</li>
<li>Provides a single point of contact via a local interface, to any part of the system (which may be running in different physical locations), for both control signals and state.</li>
</ul>
<li><b>Experiment:</b></li>
<ul>
<li>Provides all of the entities that are required for an experiment. These are expanded shortly.</li>
</ul>
<li><b>UI:</b></li>
<ul>
<li>The user interface that an experimenter uses to run experiments, debug and visualise results.</li>
<li>The typical features would be:</li>
<ul>
<li>set up parameters of an experiment,</li>
<li>run, stop, step through an experiment,</li>
<li>save/load an experiment,</li>
<li>visualise the state of any part of the experiment.</li>
</ul>
</ul>
<li><b>Specific Experiments:</b></li>
<ul>
<li>This is defined by the person experimenting with the system. For example, a specific Agent that seeks light, a specific World that contains a light source, and an objective function that defines the time span for operation.</li>
</ul>
</ul>
Another perspective on the design is to view the Services and Entities and their lines of communication. The diagram is colour coded to indicate Layers, as per the diagram above.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-rbR_VggsWsQ/VfEvlpGtm6I/AAAAAAAASQ4/BTH5yy6ejCU/s1600/Untitled%2Bdrawing.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="451" src="http://4.bp.blogspot.com/-rbR_VggsWsQ/VfEvlpGtm6I/AAAAAAAASQ4/BTH5yy6ejCU/s640/Untitled%2Bdrawing.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: 12.8px;">Figure 2: 'Services and Entities'</span></td></tr>
</tbody></table>
<br />
The Coordinator and Database are services. The Coordinator is shown at the centre, as described earlier (Architecture section), being the primary point of contact for Entities and potentially other clients such as a Graphical User Interface.<br />
<br />
A similar perspective is shown in an expanded diagram below that illustrates the Database API module and the distributed implementation of the Coordinator in the Interprocess layer, enabling Entities to run on separate machines. This is just one possible configuration; there can be multiple slaves, each with multiple entities.<br />
<br />
<div style="text-align: left;">
Each bounding box indicates what we refer to as an AGIEF Node (or node for short). The node comprises a process that provides a context for execution of one or more entities, or other clients such as the GUI.</div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img height="360" src="https://lh3.googleusercontent.com/o5dlT8S72ajOCEo3-m1GgteUECjylKFq_d7VHH3IXizuAp7j-cbs1MF8BjS8r5aGokFywgfsnW1QaE7Sz-8NPVz1g4co84tXGafsw3wSLA8wH4D6F24zr0sUUzIPCT27VRWZPQQ" style="margin-left: auto; margin-right: auto;" width="640" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 3: 'AGIEF Nodes'</td></tr>
</tbody></table>
<div style="text-align: center;">
<br /></div>
We looked at popular <a href="https://www.mongodb.com/nosql-explained" target="_blank">No-SQL</a> web storage systems (basically key-value stores) which are very convenient and flexible due to the inherently dynamic, software-defined <a href="https://en.wikipedia.org/wiki/Database_schema" target="_blank">schemas</a> and HTTP interfaces. However, we have a relatively static schema for our data, on which we will build utilities for managing experiments and visualising data. In addition, <a href="https://en.wikipedia.org/wiki/Relational_database" target="_blank">relational databases</a> such as <a href="https://www.mysql.com/" target="_blank">MySQL</a> and <a href="http://www.postgresql.org/" target="_blank">PostgreSQL</a> are beginning to offer HTTP interfaces as well. Whether we pick a NoSQL or relational database, we will require an HTTP interface.<br />
<br />
A third perspective is the data model that represents the system in its entirety. This is the model implemented in the database.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-5l-TZIrVnSQ/VfEwIQ_19rI/AAAAAAAASRA/UoXzTYhb-P0/s1600/Data%2BModel.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="331" src="http://2.bp.blogspot.com/-5l-TZIrVnSQ/VfEwIQ_19rI/AAAAAAAASRA/UoXzTYhb-P0/s640/Data%2BModel.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: 12.8px;">Figure 4: 'Data Model'</span></td></tr>
</tbody></table>
<br />
The data model stores the entire system state, including hierarchy and relationship between entities, as well as the state of each entity. With a <a href="https://en.wikipedia.org/wiki/Representational_state_transfer" target="_blank">RESTful API</a> exposing the database, we have a shared filesystem accessible as a service, essential for distributed operation and restoring the system at any point in time.<br />
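As a purely hypothetical illustration of this kind of addressable state (the endpoint scheme below is invented for the example, not the actual AGIEF API): if every entity's state at every time step has a stable URL, then restoring or inspecting the system at any point in time is just a matter of fetching the right resource:

```python
# Hypothetical REST addressing scheme for persisted experiment state.
# The path structure is invented for illustration only.

def entity_state_url(base, experiment_id, entity, step):
    # One URL per (experiment, entity, time step): this is what makes
    # "reproduce any experiment from any time step" a simple GET.
    return f"{base}/experiments/{experiment_id}/entities/{entity}/steps/{step}"

url = entity_state_url("http://localhost:8080", 42, "agent-1", 100)
```

A UI client, a debugger, or a restarted Node could all share this one access path, which is the point of exposing the database as a service.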
<h3>
Future Work</h3>
We will shortly be releasing an initial version of our framework and we'll post about the technology choices we've made, and some alternatives. We'll include a demonstration problem with the initial release and then start rolling out some more exciting algorithms and graphics, including lots of AI methods from the literature (we have hundreds in our old codebase ready to go).<br />
<br /></div>
</div>
Gideon Kowadlohttp://www.blogger.com/profile/06783501071538911513noreply@blogger.com2tag:blogger.com,1999:blog-1180536024131440638.post-78244986483478952212015-07-26T23:58:00.002+10:002015-07-26T23:58:17.297+10:00Reading list - July 2015<div dir="ltr" style="text-align: left;" trbidi="on">
This month's reading list continues with a subtheme on recurrent neural networks, and in particular Long Short Term Memory (LSTM).<br />
<br />
First here's an interesting report on a panel discussion about the future of Deep Learning at the International Conference on Machine Learning (ICML), 2015:<br />
<br />
<a href="http://deeplearning.net/2015/07/13/a-brief-summary-of-the-panel-discussion-at-dl-workshop-icml-2015/">http://deeplearning.net/2015/07/13/a-brief-summary-of-the-panel-discussion-at-dl-workshop-icml-2015/</a><br />
<br />
Participants included Yoshua Bengio (University of Montreal), Neil Lawrence (University of Sheffield), Juergen Schmidhuber (IDSIA), Demis Hassabis (Google DeepMind), Yann LeCun (Facebook, NYU) and Kevin Murphy (Google).<br />
<div>
<br /></div>
<div>
It was great to hear the panel express an interest in some of our favourite topics, notably hierarchical representation, planning and action selection (reported as sequential decision making) and unsupervised learning. From the Deep Learning community this is a new focus - most DL is based on supervised learning.</div>
<div>
<br /></div>
<div>
In the Q&A session, it was suggested that reinforcement learning be used to motivate the exploration of search-spaces to train unsupervised algorithms. In robotics, robustly trading off the potential reward of exploration vs using existing knowledge has been a hot topic for several years (<a href="http://yann.lecun.com/exdb/publis/pdf/sermanet-iros-08.pdf" target="_blank">example</a>).</div>
<div>
<br /></div>
<div>
The theory of Predictive Coding suggests that the brain strives to eliminate unpredictability. This presents difficulties for motivating exploration - critics have asked why we don't seek out quiet, dark solitude! Friston suggests that prior expectations balance the need for immediate predictability with improved understanding in the longer term. For a good discussion, see <a href="http://www.fil.ion.ucl.ac.uk/~karl/Whatever%20next.pdf" target="_blank">here</a>.</div>
<div>
<br /></div>
<div>
Our in-depth reading this month has continued on the theme of LSTM. The most thorough introduction we have found is Alex Graves' "Supervised Sequence Labelling with Recurrent Neural Networks":</div>
<div>
<br /></div>
<div>
<a href="http://www.cs.toronto.edu/~graves/preprint.pdf">www.cs.toronto.edu/~graves/preprint.pdf</a></div>
<div>
<br /></div>
<div>
However, a critical limitation of LSTM as presented in Graves' work is that online training is not possible - so you can't use this variant of LSTM in an embodied agent.</div>
<div>
<br /></div>
<div>
The best online variant of LSTM seems to be Derek Monner's Generalized LSTM algorithm, introduced in D. Monner and J. A. Reggia (2012), "A generalized LSTM-like training algorithm for second-order recurrent neural networks". You can download the paper from Monner's website here:</div>
<div>
<br /></div>
<div>
<a href="http://www.overcomplete.net/papers/nn2012.pdf">http://www.overcomplete.net/papers/nn2012.pdf</a></div>
<div>
<br /></div>
<div>
We'll be back with some actual code soon, including our implementation of Generalized LSTM. And don't worry - we'll return to unsupervised learning with a focus on Growing Neural Gas.</div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-25966313951910756062015-06-01T18:58:00.000+10:002015-07-26T23:59:01.571+10:00Reading List - May 2015<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
John Lisman, "<a href="http://www.cell.com/neuron/abstract/S0896-6273%2815%2900256-1">The Challenge of Understanding the Brain: Where We Stand in 2015</a>", <i>Neuron</i>, 2015<br />
<blockquote class="tr_bq">
For many in ML and AI, biological knowledge is focussed on cortex. This paper gives an excellent broad overview of current biological understanding of intelligence.</blockquote>
<br />
Sebastian Billaudelle and Subutai Ahmad, "<a href="http://arxiv.org/abs/1505.02142">Porting HTM Models to the Heidelberg Neuromorphic Computing Platform</a>", <i>arXiv.org</i>, 2015<br />
<blockquote class="tr_bq">
Progress in Numenta's HTM and particularly interesting to see collaboration with the Human Brain Project. Does this signify growing collaboration with computational biologists?</blockquote>
<br />
Yann LeCun, Yoshua Bengio and Geoffrey Hinton<i>, "</i><a href="http://www.nature.com/nature/journal/v521/n7553/full/nature14539.html">Deep learning</a>", <i>Nature</i>, 2015<br />
<blockquote class="tr_bq">
Review of Deep learning by the guys that are recognised as THE fathers of the field - they have been working on NN for years without due acclaim, <a href="http://blog.agi.io/2015/01/deep-learning-history-in-context.html">until recently</a>.</blockquote>
<br />
<a href="http://lstm.iupr.com/">LSTM Tutorial</a>, Department of Computer Science, University of Kaiserslautern, 2015<br />
<i><br /></i>
Nitish Srivastava, Elman Mansimov and Ruslan Salakhutdinov, "<a href="http://arxiv.org/abs/1502.04681">Unsupervised Learning of Video Representations using LSTMs</a>", <i>arXiv.org</i>, 2015<br />
<i><br /></i>Andrej Karpathy, "<a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/">The Unreasonable Effectiveness of Recurrent Neural Networks</a>", <i>Andrej Karpathy blog, </i><i>2015</i><br />
<blockquote class="tr_bq">
We have a renewed interest in RNNs and LSTMs and their great potential. Here are a few papers and tutorials that are well worth reading to learn about this area.</blockquote>
</div>
Gideon Kowadlohttp://www.blogger.com/profile/06783501071538911513noreply@blogger.com1tag:blogger.com,1999:blog-1180536024131440638.post-88403954931025899342015-05-15T17:59:00.000+10:002015-11-23T17:49:42.882+11:00Consciousness & "Free Will": The Elephants in the Room<div dir="ltr" style="text-align: left;" trbidi="on">
<h4 style="text-align: left;">
By David Rawlinson and Gideon Kowadlo</h4>
The aim of this post is to take a short departure from more technical issues and deal with some of the philosophical questions about <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence" target="_blank">Artificial General Intelligence </a>(AGI).<br />
<h2 style="text-align: left;">
The Elephant in the room</h2>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-Gjyxz-tM0Wg/VUXPMlQPOUI/AAAAAAAADow/iJ51hvtT-Ds/s1600/elephant.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="245" src="http://3.bp.blogspot.com/-Gjyxz-tM0Wg/VUXPMlQPOUI/AAAAAAAADow/iJ51hvtT-Ds/s1600/elephant.jpg" width="320" /></a></div>
<a href="http://en.wikipedia.org/wiki/%20Elephant_in_the_room" target="_blank">This metaphor</a> implies willful ignorance of something blindingly obvious. Something that's difficult to ignore. It requires effort to avoid talking about it.<br />
<br />
While consciousness & "free will" are hot topics for philosophers, most "serious" Artificial Intelligence (AI) literature avoids them (exceptions include <a href="http://www.scholarpedia.org/article/Adaptive_resonance_theory" target="_blank">1</a>,<a href="http://www.scholarpedia.org/article/Global_workspace_theory" target="_blank">2</a>,<a href="http://en.wikipedia.org/wiki/Neuroscience_of_free_will#Free_will_as_illusion" target="_blank">3</a>). However, I'm yet to meet someone with an interest in AI who hasn't thought about these deep questions. We just don't talk about them in scholarly company.<br />
<br />
The <a href="https://en.wikipedia.org/wiki/History_of_artificial_intelligence" target="_blank">history of AI</a> is a series of speculative bubbles: False hopes, dashed promises and unrealized dreams have created the mainstream perception that AI has gone nowhere for decades. We seem to re-brand AI every 10 years to shed past disappointment. For example, Machine Learning currently enjoys huge popularity, yet <a href="http://en.wikipedia.org/wiki/Machine_learning#History_and_relationships_to_other_fields" target="_blank">half the community do not consider ML to be a subset of AI</a>, viewing the latter as a narrower discipline concerned with symbolic reasoning. A decade ago everything was about explicit treatment of uncertainty.<br />
<br />
Today, it is rare to write about “AI” in scholarly journals; we talk about specific approaches instead. AI is tainted terminology.<br />
<br />
Having been repeatedly burnt, AI & ML researchers now focus on short-term, achievable goals while (publicly) ignoring questions about fundamental, qualitative distinctions between artificial intelligence methods that already work, and the natural intelligence of people and animals.<br />
<br />
Many others make the same implicit distinction, assuming that no form of software has "true" intelligence. When challenged, <a href="https://en.wikipedia.org/wiki/Explanatory_gap" target="_blank">few are able to describe what is missing</a>; for others the goalposts keep moving. Despite computers being able to <a href="http://en.wikipedia.org/wiki/Google_driverless_car" target="_blank">drive cars on busy roads</a>, <a href="http://www.nasa.gov/mission_pages/msl/" target="_blank">fly on another planet</a>, <a href="http://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov" target="_blank">excel in games</a>, <a href="http://en.wikipedia.org/wiki/List_of_Jeopardy!_tournaments_and_events#IBM_Challenge" target="_blank">reasoning & insight</a>, <a href="http://deeplearning.net/demos/" target="_blank">perception</a>, <a href="https://www.youtube.com/watch?v=W1czBcnX1Ww" target="_blank">fine, adaptive motor control</a> and many other intellectual pursuits, there remains a popular suspicion that this is merely sophisticated trickery and not true intelligence at all!<br />
<br />
We have a “<a href="http://en.wikipedia.org/wiki/God_of_the_gaps" target="_blank">God of the Gaps</a>” problem: The qualitative differences between man and machine keep receding into ineffable gaps in machine capability. Humans keep raising the bar.<br />
<br />
Physicists have repeatedly proposed (e.g. <a href="http://en.wikipedia.org/wiki/Roger_Penrose#Physics_and_consciousness" target="_blank">Penrose</a>) that consciousness requires some special kind of physics. This rationalizes machine inferiority, but without strong prior evidence of such unusual phenomena it seems like an answer in search of a problem.<br />
<br />
The idea that consciousness requires some form of exceptional phenomena may be a peculiarly Western philosophical trait, resulting from exposure to Descartes' <a href="http://en.wikipedia.org/wiki/Dualism_(philosophy_of_mind)" target="_blank">Dualism</a>.<br />
<br />
An opposing view is that when properly defined these problems might simply be <a href="http://en.wikipedia.org/wiki/Emergence" target="_blank">emergent</a> characteristics of a particular type of algorithm, making the type of physical embodiment unimportant.<br />
<br />
The "mere algorithm" claim makes many people uncomfortable, as if it removes part of their humanity or individuality. We would argue it does not. Besides, as the philosopher Daniel Dennett said, <a href="http://en.wikipedia.org/wiki/Consciousness_Explained" target="_blank">"Only a theory that explained conscious events in terms of unconscious events, could explain consciousness at all"</a>.<br />
<h2 style="text-align: left;">
A Problem of Definition</h2>
<div class="separator" style="clear: both; text-align: center;">
</div>
Our position is that the ongoing difficulty with consciousness & free will is simply lack of a satisfactory definition for these features. For those of us who regard Artificial Intelligence as a practical toolkit akin to spanners or wrenches, it's time to hold the less applied thinkers to account. Exactly what performance, qualities or abilities do people accept as demonstrating consciousness?<br />
<br />
To improve the specificity of these problems let's replace “Free Will” with Self-Determination and swap Consciousness with Self-Awareness.<br />
<h3 style="text-align: left;">
Self-Determination</h3>
The most problematic aspect of free will seems to be <a href="https://en.wikipedia.org/wiki/Determinism#With_free_will" target="_blank">determinism</a>. In a deterministic system the relationship between inputs and outputs is fixed; there are no choices. Instead the inputs determine the outputs. Clearly there is no room for free will if our sensory inputs exactly determine our actions in response.<br />
<br />
"Free" will implies that normal causation doesn't apply to thinking - as if it were "free" from any and all constraints. This smacks of a preoccupation with Dualism and the theoretical constraints of hard determinism derived from Newtonian models of the universe. Both Descartes and Newton lived and wrote 300 years ago. We have moved on since then.<br />
<br />
A much better description might be "self determination": deciding what to do via some reasoning process, making use of knowledge, experience and personal biases. The big issue is then whether this definition is lacking some essential “freedom” quality.<br />
<br />
What if our sensory inputs don't determine our actions, but an unconscious neurological process does instead? Brain imaging suggests that conscious awareness of decisions often occurs long after the decisions are actually made, and even after action begins. In this model, due to "our" lack of access to and control over decisions, <a href="http://en.wikipedia.org/wiki/Neuroscience_of_free_will#Free_will_as_illusion" target="_blank">free will is merely an illusion</a>. Although this looks hopelessly conclusive, the word "our" is key to unlocking a greater range of more satisfying answers. It is ownership and access that is lacking. We will argue that self-determination restores both.<br />
<h3 style="text-align: left;">
Self-Awareness</h3>
If we are to make our own choices, we need a framework for modelling the world and the values we attach to concepts, events and entities within it. The framework enables us to become aware of ourselves and the world we inhabit.<br />
<br />
Consciousness is often described as <a href="https://en.wikipedia.org/wiki/Consciousness" target="_blank">awareness</a> - of the self, of the past and future, and of the world around. There's also awareness of sensation - <a href="https://en.wikipedia.org/wiki/Qualia" target="_blank">Qualia</a> - and an ongoing "stream" of consciousness, such as an <a href="https://en.wikipedia.org/wiki/Internal_monologue" target="_blank">internal monologue</a>. <a href="https://en.wikipedia.org/wiki/Attention" target="_blank">Attention</a> - selective awareness - is also entwined with consciousness.<br />
<br />
This article isn’t going to be able to describe how to artificially reproduce human awareness. For one thing, evidence from animals and from inter-personal variation suggests that the qualities and properties of human awareness are a matter of degree, not a binary feature. Most of the other articles on this blog discuss ways to generate artificial representations of the world from sensory data, but there’s more about current progress in artificial awareness in a postscript to this article.<br />
<br />
In contrast, we can ask some pretty definitive questions about the feasibility of self-determination, which will be the focus of the rest of this article.<br />
<h2 style="text-align: left;">
Models of Self-Determination</h2>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-sLhN9Pk8ybA/VUXPWNII3jI/AAAAAAAADo4/nSedNFHIUK4/s1600/Fork_in_the_road_sign.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="133" src="http://1.bp.blogspot.com/-sLhN9Pk8ybA/VUXPWNII3jI/AAAAAAAADo4/nSedNFHIUK4/s1600/Fork_in_the_road_sign.jpg" width="200" /></a></div>
Let's explore some thought experiments to see if we can find a set of physically-plausible qualities that fulfill a satisfying definition of self-determination.<br />
<br />
Free Will is typically described as the ability to make decisions, or choose actions, in a way that is not determined by external events. In some sense, we are "free" to express personal preference and values over strong external persuasion or evidence.<br />
<br />
But Free Will is not simply a random or unintelligent reaction; we would like it to be a choice of some kind. We would like to make informed, deliberate choices that balance our interpretation of objective and subjective criteria: Let our history, knowledge and experiences shape our choices.<br />
<br />
A choice without knowledge and understanding is not really a choice at all. Only via an understanding of consequences can we assign value to the choices available. Comprehension of available choices relies on existing knowledge and experience. Execution of decisions requires ongoing, grounded understanding of the world and our action capabilities, to ensure we execute choices as intended. And you're not going to "choose" an action that has no personal meaning - for example, if you live in a desert where it never rains, you won’t invent an umbrella*.<br />
<br />
We believe the answer to the whole Free Will problem is also the source of limits to our freedom: Our knowledge, experience and personal values. This is no bad thing: Individual characteristics develop from our personal history, and experience of consequences affects the values we express in future decisions. We don't have to express these limits negatively; we can say that experience, knowledge and beliefs guide us to choices we find reasonable. Personal identity is defined by experience and knowledge and expressed in choices: It’s what makes you, you. And it makes the choices yours.<br />
<br />
This is not a new idea. In philosophy, those who believe that Free Will is compatible with some level of determinism are known as <a href="https://en.wikipedia.org/wiki/Compatibilism" target="_blank">compatibilists</a>. Compatibilists (such as us) believe that the free-will vs determinism debate is a false dilemma that can be sidestepped by clearly defining the objective.<br />
<br />
Self-determination is a positive way to summarise the influence of internal constraints. So, if we were to accept these constraints, what freedom <i>do</i> we have, to make choices?<br />
<h4 style="text-align: left;">
<i>* The etymology of the word “Umbrella” is relevant. It derives from the Latin Umbra, meaning shade. The novel application of an existing sunshade tool to rain protection may be an example of ideas being guided, inspired and yet constrained by previous experience. The adaptation of an existing idea to a new problem is a common innovation technique.</i></h4>
<h2 style="text-align: left;">
Uncertainty and Stability</h2>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/1/13/A_Trajectory_Through_Phase_Space_in_a_Lorenz_Attractor.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/1/13/A_Trajectory_Through_Phase_Space_in_a_Lorenz_Attractor.gif" /></a></div>
The universe we inhabit appears to be mostly deterministic, at human scales, but also quite random, at smaller scales. Some systems, such as weather, are unpredictable at large scales because they are very sensitive to small-scale changes. This phenomenon is known as “<a href="http://en.wikipedia.org/wiki/Butterfly_effect" target="_blank">sensitive-dependence</a>”: <a href="http://en.wikipedia.org/wiki/Chaos_theory" target="_blank">Chaos theory</a> is the study of such systems. A famous example is the “Butterfly Effect”, in which the beating of a butterfly’s wings alters the path of a tropical storm thousands of miles away.<br />
<br />
Other large-scale systems are stable against small-scale changes. For example, the behaviour of a brick house or a steel girder does not change significantly in response to butterflies’ flapping. The structure of these systems absorbs these changes, and the behaviour we care about is not affected.<br />
<br />
We can easily design information-processing systems to have specific sensitivity to random events. <a href="http://www.johndcook.com/blog/2013/11/12/sensitive-dependence-on-initial-conditions/" target="_blank">Even simple functions can exhibit sensitive-dependence</a>.<br />
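To make this concrete, here is a minimal sketch (illustrative Python, not from the linked article) of sensitive-dependence in one of the simplest such functions, the logistic map: two trajectories whose starting points differ by only 0.00000001 agree closely at first, then diverge completely within 50 iterations.

```python
# Illustrative sketch: the logistic map x -> r*x*(1-x) is chaotic at r = 4.0,
# so a tiny difference in the initial condition is amplified at every step.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.20000000)
b = logistic_trajectory(0.20000001)  # perturbed by 1e-8

print(abs(a[1] - b[1]))                        # still tiny after one step
print(max(abs(u - v) for u, v in zip(a, b)))   # order 1 within 50 steps
```

The same mechanism, run in reverse, gives robustness: a function whose derivative is everywhere less than one in magnitude shrinks perturbations instead of amplifying them.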
<br />
Conversely, we can design robust information processing systems. For example, your computer’s processor can execute millions of instructions per second without error. Nature can do this too: Error-checking mechanisms ensure remarkably accurate copying of DNA into every cell in your body, although mutations still occur at a low but necessarily nonzero rate. Natural selection then amplifies the frequency of certain variants, resulting in evolution.<br />
<br />
In <a href="http://en.wikipedia.org/wiki/Estimation_theory" target="_blank">Estimation theory</a>, events are represented by a combination of explicit models and terms that capture the effects of random events. We call these random events “noise”. In a sensitive-dependent system, these random events can make the future state of the system unpredictable.<br />
<br />
Noise can be selectively amplified or suppressed by carefully designed systems. Complex, large-scale systems can be sensitive to small-scale random events, or they can be designed to be almost entirely unaffected by noise.<br />
<h3 style="text-align: left;">
Exploiting Uncertainty</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-ycl7OfOP_No/VUXb8bj1lRI/AAAAAAAADpo/rsqhXU_pOyY/s1600/die-20.jpeg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://1.bp.blogspot.com/-ycl7OfOP_No/VUXb8bj1lRI/AAAAAAAADpo/rsqhXU_pOyY/s1600/die-20.jpeg" width="200" /></a></div>
For our self-determination model, the ideal solution is a mix of the two extremes. We need deterministic reasoning to accurately evaluate potential actions and outcomes. And we can exploit uncertainty and stability in a system to amplify the consequences of random events when exploring options, and stabilize decisions once made.<br />
<br />
Consider perception. You want your senses to reliably tell you what's going on outside your head. This is essential: If all you saw were crazy swirls and flashing imaginary lights, you wouldn't be able to find your way around. But we also want some uncertainty in perception to allow us to test different hypotheses for what we're seeing and decide which fits best. For example, the perception of <a href="http://en.wikipedia.org/wiki/Ambiguous_image" target="_blank">ambiguous images</a> depends on expectations.<br />
<br />
Imagine a scenario with two alternative choices: A and B. Our internally "preferred" choice is A. But perceptual cues and third-party advice strongly suggest action B. Let’s assume this "external bias" means that the probability of us even conceiving plan A is small (e.g. 0.1). In this case, external causes dominate internal preferences: We're simply responding to external cues, like a puppet. Not ideal.<br />
<br />
Now let's change the system a bit. We include a method of repeatedly, randomly generating combinations of ideas, and strongly reinforcing the ideas with the greatest <i>internal</i> value. Imagine rolling a 10-sided die. If we get a '1', we are lucky enough to think of plan A. If we get any other number, we only think of plan B, due to the external bias.<br />
<br />
The die will only roll a '1' rarely. But if we allow ourselves 100 rolls we will get quite a few '1's. As long as we produce a strong response to rolling the occasional '1' (due to the anticipated outcome of plan A), we can design a system whose behaviour is largely determined by internal preferences, not dictated by chance properties of the outside world.<br />
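The arithmetic of this toy model is easy to check. The sketch below (illustrative Python; the 0.1 probability and 100 attempts are the assumed values from the example above, not measurements) simulates repeated idea-generation rolls per decision:

```python
import random

def conceives_plan_a(p=0.1, attempts=100):
    """True if plan A is conceived on at least one of `attempts` rolls;
    a strong internal response then selects it over externally-biased plan B."""
    return any(random.random() < p for _ in range(attempts))

random.seed(0)
trials = 10_000
freq = sum(conceives_plan_a() for _ in range(trials)) / trials
# Analytically the chance is 1 - 0.9**100, about 0.99997:
# plan A almost always surfaces despite the strong external bias.
print(freq)
```

Despite a 90% external bias on any single roll, the internally-preferred plan is available for selection in virtually every decision.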
<br />
This setup is relatively easy to create in software. For example, Arunava Banerjee has designed an <a href="http://www.cise.ufl.edu/~arunava/papers/jcns-sensitivity.pdf" target="_blank">artificial neural network that behaves just like this</a>. What’s more, <a href="http://www.scholarpedia.org/article/Models_of_basal_ganglia" target="_blank">models of human decision-making in the Basal Ganglia</a> appear very similar to the dynamics described here. A recursive, competitive process among potential actions eventually reduces them to a set of selected actions, which are then reinforced and executed. A similar process occurs in the <a href="https://en.wikipedia.org/wiki/Superior_colliculus" target="_blank">Superior Colliculus</a>, the brain component that decides what your eyes will look at.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/5/5f/My_Wife_and_My_Mother-In-Law_(Hill).svg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><br class="Apple-interchange-newline" /><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/5/5f/My_Wife_and_My_Mother-In-Law_(Hill).svg" height="320" width="231" /></a></div>
<br />
<i>Figure: An <a href="http://en.wikipedia.org/wiki/Ambiguous_image" target="_blank">ambiguous image</a>; the same sensory input can be perceived in two ways (in this case either an old or young lady), depending on the prior or internal bias of the viewer.</i><br />
<br />
The <a href="https://en.wikipedia.org/wiki/Cortex_(anatomy)" target="_blank">Cortex</a> is widely accepted as the origin of high-level, abstract thought, including strategy and planning. In essence, the Cortex provides understanding of contexts and potential choices. The design of the brain <a href="http://www.scholarpedia.org/article/Thalamocortical_circuit_model" target="_blank">routes inter-cortical connections via central hubs</a> - the Basal Ganglia, and the Thalamus. During routing, messages are filtered. This design closely matches the architecture needed for a competitive evaluation of competing plans influenced by an understanding of their consequences.<br />
<br />
If the part of your brain that generates ideas is sensitive-dependent, then it can overcome the influence of external factors and suggest all sorts of things until an internally resonating plan is found. Of course, your evaluation of the ideas should be quite deterministic, based on previous experience and knowledge.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images.scholarpedia.org/w/images/1/12/Basal_Ganglia_fig3.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://images.scholarpedia.org/w/images/1/12/Basal_Ganglia_fig3.jpg" height="275" width="320" /></a></div>
<br />
<i>Figure: Flow of information from the cortex through deeper structures such as the Basal Ganglia and Thalamus, before routing back to Cortex. This architecture allows the deeper structures to filter and select activity in the cortex. <a href="http://www.scholarpedia.org/article/Basal_ganglia" target="_blank">Image from Scholarpedia</a>.</i><br />
<br />
We can predict the weather a couple of days into the future, but beyond that we're no better than random chance at guessing whether it will be sunny. The impact of noise increases over time. Noise in the brain could also lead to vastly different outcomes even given similar initial states.<br />
<br />
In just a few seconds of thought, our brains are sensitive and complex enough to generate and evaluate thousands of different ideas. With a little time for consideration, the probability of generating your "preferred" plan may be very high. It’s not the end of the world if this doesn't always happen; sometimes we need expert or friendly advice!<br />
<h3 style="text-align: left;">
Attributes of Self-Determinism</h3>
Let's review the attributes of self-determinism, as defined above:<br />
<br />
<ul style="text-align: left;">
<li>Choices cannot be exactly predicted</li>
<li>Demonstrates internal motivations and preference</li>
<li>Stable despite irrelevant and/or gross variations in external stimuli</li>
<li>Sensitive to external changes that do impact internal value system</li>
<li>Consistent perception and decision-making in value-equivalent scenarios</li>
<li>Can't decide to do things completely outside our previous experience, although innovation possible via novel composition and extrapolation</li>
<li>Values based on previous decisions, lived consequences, experience and acquired knowledge</li>
<li>Continuity of experience (i.e. an <a href="https://en.wikipedia.org/wiki/Online_machine_learning" target="_blank">online</a> learning system)</li>
</ul>
<br />
As discussed above, this form of self-determination requires only a highly tuned, sensitive, but perfectly ordinary physical mechanism.<br />
<h3 style="text-align: left;">
Retrospective Future Self-Determination!</h3>
Sometimes we have to pause and think carefully about things. Sometimes we will mull over a big decision for a few days. Other “decisions” are trivial and are made instantly – perhaps most. But in these cases retrospective awareness of instinctive choices makes us more than helpless bystanders: We can reflect consciously on decisions already made, and re-assess for next time.<br />
<br />
Our imagined experience of potential outcomes might modify the values we associate with various actions. We can imagine better outcomes from other choices, which will then be more readily selected in future. In this model, you can improve yourself by conscious reflection on your actions.<br />
<br />
Retrospective re-evaluation of actions and consequences allows self-determination even in the face of <a href="http://www.nature.com/news/2008/080411/full/news.2008.751.html" target="_blank">evidence that decisions can be made before we're aware of them</a>. As long as we later take the time to review the consequences, we can modify the values that will determine future choices.<br />
<h2 style="text-align: left;">
Conclusion</h2>
The issue of free will has profound moral implications: Are we responsible for our actions? The answer is often said to lie in the existence (or otherwise) of a magic property of conscious awareness that allows understanding and decision-making to occur outside conventional physical processes. This process is often called “free-will”: The ability to choose a course that is not pre-determined, but instead a novel and unpredictable response: Chosen, not inevitable. The problem is, we know of no physical mechanism for this type of choice; we would argue that it can't even be defined coherently.<br />
<br />
Instead, this article offers a limited but positive alternative to free will as intelligent self-determinism, exploiting both sensitive-dependence and stability via feedback within ordinary physics.<br />
<br />
It's interesting to consider our concept of identity in relation to this type of self-determinism. Preferences and emotional expectations resulting from past experiences and personal consequences define our personalities and character via the values and emotional responses we learn to attach to events. This means that self-determined decisions define our identities. We are literally the product of our choices: Your personality is formed by a continuity of experience, choice and consequence.<br />
<br />
Having described self-determinism without resorting to special physics or magic beans, there’s nothing to prevent us creating artificial copies of it. We have every reason to believe we can create machines that self-determine their own destiny just as we do. I can't wait to have this debate with one of them.<br />
<h2 style="text-align: left;">
Postscript</h2>
<h3 style="text-align: left;">
Capabilities of Artificial Self-Awareness</h3>
Machines can already construct sophisticated internal representations of the world. Machines can interpret data in ways that have similar qualities and performance to human vision (e.g. <a href="http://how-old.net/">how-old.net</a>). So is there anything that fundamentally divides machines' internal representations from the experience of awareness we humans enjoy? Some philosophers call this the "<a href="http://en.wikipedia.org/wiki/Hard_problem_of_consciousness" target="_blank">hard problem</a>" of consciousness; other philosophers say <a href="http://en.wikipedia.org/wiki/Hard_problem_of_consciousness#Deflationary_accounts" target="_blank">the problem doesn't exist</a>, because the missing qualities can't be properly defined or do not really exist! This paradox is best illustrated with the Zombie thought experiment.<br />
<h3 style="text-align: left;">
The Zombie Conundrum</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-ksOumxLAYQs/VUXU0P6R79I/AAAAAAAADpY/oBnilaDlD5o/s1600/zombie_silhouettes_by_symbiopticstudios-d5n8rmk.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="100" src="http://4.bp.blogspot.com/-ksOumxLAYQs/VUXU0P6R79I/AAAAAAAADpY/oBnilaDlD5o/s1600/zombie_silhouettes_by_symbiopticstudios-d5n8rmk.png" width="200" /></a></div>
A <a href="https://en.wikipedia.org/wiki/Philosophical_zombie" target="_blank">philosophical "zombie"</a> is an entity that has the external appearance of consciousness, but internally is merely a simulation of the real thing. In fact, we might all be zombies, depending on the quality of consciousness required for the "real" thing. If you set impossible requirements, then we are all zombies.<br />
<br />
Actually, it is easier to prove the veracity of conscious experience in software than in humans. This is because we can “pause” and explore software brains in great detail, with access to all internal state. Given an algorithm that is expected to produce the qualities of consciousness, we can inspect and measure these qualities directly. It might even be easier to build genuine consciousness, than a convincing simulation.<br />
<br />
Consciousness may be a continuum rather than a binary feature: Awareness with varying degrees of quality and depth. Chimpanzees deliberately <a href="http://www.scientificamerican.com/article/chimpanzee-plans-throws-stones-zoo/" target="_blank">construct tools for later use</a>. Dogs are capable of sophisticated social interactions. So there may not be a yes/no answer to the Hard Problem.<br />
<h3 style="text-align: left;">
Progress in Artificial Awareness</h3>
The <a href="http://blog.agi.io/2014/04/introduction.html" target="_blank">purpose of this blog</a> is to look at practical techniques for automatically creating hierarchical, increasingly abstract representations of an embodied, adaptive agent in its environment. There's also some discussion of "<a href="https://en.wikipedia.org/wiki/Symbol_grounding_problem" target="_blank">symbol grounding</a>" - making the jump between sensory and symbolic representations (we believe the problem goes away when defined as "<a href="https://www.scribd.com/book/182534736/On-Intelligence" target="_blank">accumulating invariances</a>" instead).<br />
<br />
Persistent belief in the hard problem of consciousness may stem from our inability to imagine both how our awareness would scale down to current computer levels of complexity, and how computer representations would feel when scaled to human proportions.<br />
<br />
I find it fascinating to visualize internal representations created by AI, such as the image below. They do seem to capture the essential qualities and variations of broad classes, such as “Cat”. Often, when machines are wrong, their mistakes are reminiscent of the errors made by young children - e.g. <a href="https://en.wikipedia.org/wiki/Generalization_error" target="_blank">overgeneralizations</a> such as “sheeps” or “shoeses”.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-BIf1fByKb_Q/VUXTXNAJI7I/AAAAAAAADpM/hnMd3aotBVo/s1600/cat%2Bdetection.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-BIf1fByKb_Q/VUXTXNAJI7I/AAAAAAAADpM/hnMd3aotBVo/s1600/cat%2Bdetection.jpeg" /></a></div>
<br />
<i>Figure: <a href="http://googleblog.blogspot.com/2012/06/using-large-scale-brain-simulations-for.html" target="_blank">Google trained an artificial neural network</a> (ANN) on stills from YouTube videos. Reverse engineering of one of the resulting neurons reveals this input; it is effectively a self-taught class-label for a common set of inputs. We call this class "Cat". Although the ANN didn't give it a name, it was able to experience this concept. If we had also taught it to speak like <a href="https://en.wikipedia.org/wiki/Siri_(Software)" target="_blank">Siri</a>, it is likely the ANN would correctly associate this visual perception with the word "cat". How is this different to the human qualia of Cat?</i></div>
<h2 style="text-align: left;">
A nomenclature for Cortical Columns and related concepts</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<h4>
<span style="font-weight: normal;">By Gideon Kowadlo and David Rawlinson</span></h4>
In our last <a href="http://blog.agi.io/2015/04/mini-macro-micro-and-hyper-columns.html">blog post</a>, we discussed the repeating functional columnar structure of the neocortex, and the inconsistent terminology used to discuss it throughout the literature. As mentioned in that post, the function of the column is an important concept for understanding the function of the neocortex and, as a consequence, for designing algorithms that are inspired by the neocortex. We therefore require a clear nomenclature for discussing and working with these concepts.<br />
<br />
As promised, here is a follow-up post with definitions of columns and associated concepts. The definitions are based on a paper by Rinkus [1] (introduced in the previous post). For decades it was widely accepted that the structure of columns in the neocortex is uniform across species and individuals. Recent studies have shown that to be not entirely correct [3] (summarised <a href="http://www.kurzweilai.net/neuroscientists-find-cortical-columns-in-brain-not-uniform-challenging-large-scale-simulation-models">here</a> and in [4]). Rinkus provides a well-founded functional basis for the definition of columns. This approach is more meaningful and robust, and directly relevant to understanding the neocortex algorithmically.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img height="319" src="https://lh6.googleusercontent.com/07li35ot3t7ku6JP4J8ur2KPqonUcx9l_zlpqtB9StDl8aoiRNkF6gbzj3EpHDh4NoiJkcTuxoi9Cnv3cik7_PSfDSVCIv6FQmlfiPGmIP5qmj03Utb2mAUWKjE7p-04ZzXdAgc" style="margin-left: auto; margin-right: auto;" width="320" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-family: Times, Times New Roman, serif; font-size: x-small;"><span style="text-align: start;">Illustration of layers and columns in the neocortex.</span><br style="text-align: start;" /><span style="text-align: start;">Reproduced from “</span><a href="http://www.benbest.com/science/anatmind/anatmd5.html" style="text-align: start;">Basic Cerebral Cortex Function with Emphasis on Vision</a><span style="text-align: start;">” by Ben Best.</span></span></td></tr>
</tbody></table>
<div>
<h3>
Layer</h3>
<div>
<h4>
Function</h4>
</div>
Defining the cortical layers is necessary for any discussion of the cortex. The cortex is a surface consisting of several layers of cells. The density, morphology and function of cells vary between layers. The distribution of connections to other layers varies for each layer, but is relatively constant within a layer.<br />
<br />
Although cells in any layer may connect to cells in all other layers, they do this only for cells within the same macrocolumn.<br />
<br />
This means that columns extend through all cortex layers. Columns are organised perpendicularly to layers. Since the layers consist of different patterns of cell connectivity and type, layer distinctions are also functional distinctions.<br />
<h4>
Anatomy</h4>
Typically 5-7 layers, described as:<br />
<ul>
<li>L1 Molecular layer (non-cellular, just axons)</li>
<li>[L2, L3] Small pyramidal cells (of two sizes)</li>
<li>L4 Spherical neurons</li>
<li>[L5a, L5b] Large pyramidal cells (a & b often distinguished)</li>
<li>L6 Multiform layer</li>
</ul>
<div>
<br /></div>
<div>
<h3>
Macrocolumn (also referred to as a Region or Hypercolumn)</h3>
<h4>
Function</h4>
“The function of a macrocolumn is to store <a href="http://blog.agi.io/2014/12/sparse-distributed-representations-sdrs_24.html">sparse distributed representations</a> of its overall input patterns, and to act as a recognizer of those patterns.” [1]<br />
<br />
Overall input includes bottom up input from thalamus and lower cortical areas, top down from higher cortical areas and horizontal from adjacent cortical areas. This is also referred to as the context. The macrocolumn responds to context dependent input patterns.<br />
<br />
A standard definition of a macrocolumn is a set of cells that have the same receptive field. In this definition, we specify that all cells in the macrocolumn don’t necessarily have the same learned receptive field, but do have the same potential receptive field.<br />
<h4>
Anatomy</h4>
<ul>
<li>300–600 μm</li>
<li>60–80 minicolumns per macrocolumn</li>
</ul>
<div>
<br /></div>
<h3>
Minicolumn</h3>
<h4>
Function</h4>
A subset of cells in the macrocolumn, from which a single winner-take-all (WTA) cell becomes active for a given macrocolumn context (overall input pattern). By this definition, the function of the minicolumn is to enforce sparseness.<br />
<br />
Because each minicolumn yields only one winner, the activity of the macrocolumn forms an SDR: the macrocolumn output contains a signal from one winning cell in each minicolumn, in each layer (~70 cells in total per layer). In most implementations, WTA is implemented with a competitive process.<br />
<br />
A standard definition of a minicolumn is that all cells within it describe a similar feature within the receptive field of the macrocolumn. This will occur in most cases, but it emerges from the function, which is the basis of our definition.<br />
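As a toy sketch of the WTA function described above (assuming plain numeric activation scores per cell and ignoring learning entirely; all names here are illustrative, not from any particular implementation), each minicolumn passes through only its strongest cell, yielding a sparse code for the whole layer:

```python
import random

def macrocolumn_layer_output(minicolumns):
    """Winner-take-all per minicolumn: keep only the strongest cell.

    `minicolumns` is a list of lists of cell activation scores.
    Returns a binary SDR with exactly one active bit per minicolumn.
    """
    sdr = []
    for cells in minicolumns:
        winner = max(range(len(cells)), key=lambda i: cells[i])
        sdr.append([1 if i == winner else 0 for i in range(len(cells))])
    return sdr

# ~70 minicolumns of ~20 cells each, matching the anatomy figures above
layer = [[random.random() for _ in range(20)] for _ in range(70)]
sdr = macrocolumn_layer_output(layer)
active = sum(sum(col) for col in sdr)  # one winner per minicolumn: 70
```

Whatever the input, the sparsity of the output is fixed by the number of minicolumns, which is the sense in which the minicolumn "enforces sparseness".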
<h4>
Anatomy</h4>
<ul>
<li>~20 cells (physically localised)</li>
<li>20–50 μm</li>
</ul>
<div>
<br /></div>
<h3>
Potential Receptive Field</h3>
<h4>
Function</h4>
A set of input bits that can be connected to a cell.<br />
<h4>
Anatomy</h4>
A set of axons that potentially could be synapsed by the dendrites of a neuron.</div>
<div>
<br />
<h3>
Learned Receptive Field</h3>
<h4>
Function</h4>
The actual set of input bits synapsed to a cell after learning and the effects of mutual inhibition or self-organisation with its neighbours.<br />
<h4>
Anatomy</h4>
The synapses formed by the dendrites of a neuron on input axons.<br />
<br />
Many researchers believe that the set of active cells in a single macrocolumn layer can be described as a Sparse Distributed Representation (SDR). We assume this to be the case in our definitions. SDRs can be understood as having the following properties:<br />
<ul>
<li>Attributes</li>
<li>Compositionality</li>
<li>Distribution</li>
</ul>
<div>
<br /></div>
<h3>
SDR: Attributes</h3>
A subset of an SDR that has some semantic meaning: one or more bits, not the whole set of active bits in the SDR.</div>
<div>
<br /></div>
<div>
<h3>
SDR: Compositionality</h3>
Compositionality of SDRs emerges from the fact that an SDR contains many attributes in combination.</div>
<div>
<br />
<h3>
SDR: Distributed</h3>
A distributed representation is one that consists of multiple attributes; those attributes can exist independently, be shared between representations, and overlap.</div>
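These three properties can be illustrated by treating an SDR as a set of active bit indices (a simplification; the example names and bit indices are purely illustrative):

```python
# Two SDRs over, say, a 2048-bit space, represented as sets of active bits
cat = {3, 41, 107, 512, 900, 1333}
dog = {3, 41, 107, 600, 901, 1500}

# Attribute: a semantically meaningful subset of the active bits,
# e.g. the bits these two representations share (an "animal-ness" attribute)
shared = cat & dog

# Distribution / compositionality: each representation is composed of
# several attributes, and attributes can be shared and can overlap
overlap = len(shared)
```

Here set intersection measures semantic similarity: representations with more shared attributes overlap in more bits.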
<div>
<br />
<h3>
References</h3>
[1] Rinkus G.J., “<a href="http://journal.frontiersin.org/article/10.3389/fnana.2010.00017/full#B79">A cortical sparse distributed coding model linking mini- and macrocolumn-scale functionality</a>”, <i>Frontiers in Neuroanatomy</i>, vol. 4, no. 17, 2010.<br />
[2] Rakic P., “<a href="http://www.pnas.org/content/105/34/12099.full">Confusing cortical columns</a>”, <i>Proceedings of the National Academy of Sciences</i>, vol. 105, no. 34, 2008.<br />
[3] Meyer H.S., et al., "<a href="http://www.pnas.org/content/110/47/19113">Cellular organization of cortical barrel columns is whisker-specific</a>", <i>Proceedings of the National Academy of Sciences</i>, vol. 110, no. 47, 2013.<br />
[4] Herculano-Houzel S., et al., "<a href="http://www.pnas.org/content/105/34/12593.abstract?ijkey=24753c9cca038accb2d81662721bd359a9fb2e42&keytype2=tf_ipsecsha">The basic nonuniformity of the cerebral cortex</a>", <i>Proceedings of the National Academy of Sciences</i>, vol. 105, no. 34, 2008.</div>
</div>
</div>
Gideon Kowadlohttp://www.blogger.com/profile/06783501071538911513noreply@blogger.com1tag:blogger.com,1999:blog-1180536024131440638.post-70213279199508089162015-04-27T20:26:00.000+10:002015-05-16T18:52:47.582+10:00Mini, macro, micro and hyper columns: confusing terminology<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" trbidi="on">
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-nLhrBSa-0ok/VT4OaBjlC7I/AAAAAAAAGsQ/XTiJdmczyhs/s1600/cortical-column.jpeg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="245" src="http://2.bp.blogspot.com/-nLhrBSa-0ok/VT4OaBjlC7I/AAAAAAAAGsQ/XTiJdmczyhs/s1600/cortical-column.jpeg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="text-align: start;"><span style="font-family: Times, Times New Roman, serif; font-size: x-small;">Cell-type-specific 3D reconstruction of five neighboring barrel columns in rat vibrissal cortex <br />(credit: Marcel Oberlaender et al.)</span></span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
It is well established that there is a functional columnar structure repeated throughout the cortex across species (the concept received significant attention after a study by Mountcastle in 1957 [<a href="http://www.ncbi.nlm.nih.gov/pubmed/13439410">3</a>]; see [<a href="http://brain.oxfordjournals.org/content/120/4/701.short">1</a>] and [<a href="http://www.ncbi.nlm.nih.gov/pubmed/15937015">2</a>] for reviews). This suggests a universal cortical algorithm, which is fascinating if you are trying to understand the computational underpinnings of the mind from a biological perspective, or, like us, to implement AI informed by neural principles.<br />
<br />
Assuming that there is in fact a universal cortical algorithm, we would like to define the column, and understand its function by first understanding how it is composed. A columnar structure can be found at multiple scales, which has produced a range of terms in the literature, from micro-column to mini-column, to macro-column and hyper-column. Many studies have approached it differently, leading to inconsistent definitions. It can be really confusing to switch between sources that make different assumptions; it has confused us too, and we have alternated between different internal definitions over time.<br />
<br />
The situation is beautifully expressed by Rakic in [<a href="http://www.pnas.org/content/105/34/12099.full">4</a>] below.<br />
<br />
<div dir="ltr" style="line-height: 1.2; margin-bottom: 0pt; margin-left: 36pt; margin-right: 51pt; margin-top: 0pt;">
<span style="font-family: Times, Times New Roman, serif;">“Although the anatomical and functional columnarity of the neocortex has never been in doubt, the size, cell composition, synaptic organization, expression of signaling molecules, and function of various types of “columns” are dramatically different. Columns could be defined by cell constellation, pattern of connectivity, myelin content, staining property, magnitude of gene expression, or functional properties. For example, there are ocular dominance columns, orientation columns, hypercolumns, and color columns, to mention only those described in the primary visual cortex (<a href="http://www.pnas.org/content/105/34/12099.full#ref-12">12</a>), that differ from each other as well as from the columns of the alternating callosal and ipsilateral projection in the frontal lobe (<a href="http://www.pnas.org/content/105/34/12099.full#ref-8">8</a>) or various minicolumns advocated by Szentgahotai (<a href="http://www.pnas.org/content/105/34/12099.full#ref-7">7</a>), Eccles (<a href="http://www.pnas.org/content/105/34/12099.full#ref-9">9</a>), Buxhoeveden and Casanova (<a href="http://www.pnas.org/content/105/34/12099.full#ref-10">10</a>), and a more recent detailed reconstruction of barrel field columns by Sakmann and colleagues (<a href="http://www.pnas.org/content/105/34/12099.full#ref-13">13</a>) and their visibility in vivo by neuroimaging (<a href="http://www.pnas.org/content/105/34/12099.full#ref-14">14</a>). The only connections between these diverse structures and concepts is that they refer to the vertical or radial columnar organization of its elements as opposed to the horizontal or laminar organization that is more explicit in histological preparations of the mature neocortex. Thus, the term cortical “column” is used in so many ways that it can be very confusing to the nonspecialist if not more precisely defined.”</span><br />
<span style="font-family: Times, Times New Roman, serif;"><br />
See the original paper for the citations included in the quote.</span></div>
</div>
<div dir="ltr" style="line-height: 1.2; margin-bottom: 0pt; margin-left: 36pt; margin-right: 51pt; margin-top: 0pt;">
<span style="background-color: white; color: #333333; font-family: Arial; font-size: 15px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
This quote is from an editorial in PNAS, where Rakic discusses a paper in that edition which disproves a long-held belief that the anatomy of the cortex is uniform. This may appear to undermine the notion of a universal cortical algorithm, but the finding concerns anatomical features rather than the presence of functional columns.<br />
<br />
So we were keen to define a clear nomenclature consistent with the most accepted definitions, and stick to it. We discovered a paper by Rinkus from 2010 [<a href="http://journal.frontiersin.org/article/10.3389/fnana.2010.00017/full#B79">5</a>], which is part of a whole <a href="http://journal.frontiersin.org/researchtopic/the-neocortical-column-22">edition</a> of Frontiers on the cortical column. It’s an excellent resource, highly recommended. There are many good papers, but I found Rinkus’s the most useful: it gives the best definition of cortical columns that I’ve read, because it is nuanced, based on function, and part of an understanding (or at least a proposal) of the algorithm of a region of neocortex. From this, we’ve created our canonical definitions of mini and macrocolumns, to be published in the next blog post.<br />
<h4>
References</h4>
[1] Mountcastle, V. B., “<a href="http://brain.oxfordjournals.org/content/120/4/701.short">The columnar organization of the neocortex</a>”, <i>Brain</i>, vol. 120, no. 4, 1997.<br />
[2] Horton, J. C., and Adams, D. L., “<a href="http://www.ncbi.nlm.nih.gov/pubmed/15937015">The cortical column: a structure without a function</a>”, <i>Philos. Trans. R. Soc. Lond., B, Biol. Sci</i>, vol. 360, no. 1456, 2005.<br />
[3] Mountcastle V.B., Davies P.W., Berman A.L., “<a href="http://www.ncbi.nlm.nih.gov/pubmed/13439410">Modality and topographic properties of single neurons of cat’s somatic sensory cortex</a>”, <i>J Neurophysiol</i>, vol. 20, no. 4, 1957.<br />
[4] Rakic P., “<a href="http://www.pnas.org/content/105/34/12099.full">Confusing cortical columns</a>”, <i>Proceedings of the National Academy of Sciences</i>, vol. 105, no. 34, 2008.<br />
[5] Rinkus G.J., “<a href="http://journal.frontiersin.org/article/10.3389/fnana.2010.00017/full#B79">A cortical sparse distributed coding model linking mini- and macrocolumn-scale functionality</a>”, <i>Frontiers in Neuroanatomy</i>, vol. 4, no. 17, 2010.</div>
Gideon Kowadlohttp://www.blogger.com/profile/06783501071538911513noreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-35471564929714103452015-03-16T21:22:00.001+11:002015-03-16T21:22:21.859+11:00Another look at the retina<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: left;">
by David Rawlinson and Gideon Kowadlo</div>
<h2 style="text-align: left;">
Why the retina is worth a deeper look</h2>
Recently, we have been looking at the retina: the light-sensitive cell layers inside the eyeball that detect wavelength and intensity, and compute an initial encoding of this data, which is transmitted to the brain proper via the optic nerve. The retina is very interesting and informative because:<br />
<br />
<ul>
<li>The retina is smaller and less complex than the cortex, making it easier to reverse-engineer.</li>
<li>The retina has fewer layers of largely local processing than, for example, the visual cortex.</li>
<li>The retina does not receive feedback from other parts of the CNS, so processing is unidirectional rather than recurrent.</li>
<li>The retina only receives input from the outside world, in the form of light. Therefore, unlike other parts of the Central Nervous System (CNS), the input data to the retina can be controlled, allowing precise experimentation.</li>
<li>You can experiment on the retina without removing it or inserting foreign objects. For example, the retina encodes the light wavelengths detected by cells sensitive to different frequencies into an opponent-color representation <a href="https://en.wikipedia.org/wiki/Opponent_process" target="_blank">[1]</a>. This gives rise to perceptual artifacts we can all observe, such as "impossible colours" we can only perceive via careful trickery <a href="https://en.wikipedia.org/wiki/Impossible_color" target="_blank">[2]</a>.</li>
<li>The retina is "an extension of the brain", not an entirely unique structure. During foetal development, the retina initially develops within the brain but is later pinched off, remaining attached only via the optic nerve <a href="http://www.britannica.com/EBchecked/topic/500012/retina" target="_blank">[3]</a>. There are many similarities between CNS tissue and retinal tissue, for example in immune response and disease pathologies, that enable retinal experiments to inform brain research <a href="http://www.ncbi.nlm.nih.gov/pubmed/23165340" target="_blank">[4]</a>.</li>
</ul>
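As a minimal sketch of the kind of opponent-color transform mentioned above (the coefficients are illustrative only and do not model actual retinal ganglion cell weightings):

```python
def opponent_channels(r, g, b):
    """Map cone-like (R, G, B) responses to opponent channels.

    Returns (luminance, red-green, blue-yellow). The coefficients here
    are illustrative, not a model of real retinal circuitry.
    """
    luminance = (r + g + b) / 3.0
    red_green = r - g                  # + for reddish, - for greenish
    blue_yellow = b - (r + g) / 2.0    # + for bluish, - for yellowish
    return luminance, red_green, blue_yellow

# A pure red stimulus: drives red-green positive, blue-yellow negative
lum, rg, by = opponent_channels(1.0, 0.0, 0.0)
```

In such a code, "reddish-green" would require `red_green` to be simultaneously positive and negative, which is one intuition for why some colour combinations are perceptually "impossible".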
<h2 style="text-align: left;">
Retinal Cell Dendritic Morphology and Function</h2>
<div>
Fergal Byrne recently referred us to this <a href="https://www.youtube.com/watch?v=rWBW-OrVGAA" target="_blank">fascinating lecture by Christof Koch </a>concerning the retina <a href="https://www.youtube.com/watch?v=rWBW-OrVGAA" target="_blank">[5]</a>. I was struck by the variety of <a href="http://youtu.be/rWBW-OrVGAA?t=29m" target="_blank">cell types catalogued [6]</a> and by the discussion of <a href="http://synapses.clm.utexas.edu/pubs/dendrites.pdf" target="_blank">dendritic morphology and its relationship to cell function [7]</a>. Dendrites of some retinal cells can perform simple computations on their input synapses that affect the signal relayed to the cell body.<br />
<br />
The role of dendrite computation is significant because the traditional artificial neuron (used in e.g. most recurrent and convolutional artificial neural networks today) has linearly weighted dendrites that are all integrated simultaneously (not hierarchically). The weighted sum is then passed to an activation function, which is usually nonlinear. It's a good reminder that this simplified model may not be good enough, or at least not efficient.<br />
<br />
Retinal cells with multiple levels of dendrite branching and simple dendrite threshold testing would seem to fit the HTM cell model <a href="http://numenta.com/assets/img/pages/blog/2013-02-19/main.jpg" target="_blank">[8]</a> better than the conventional ANN one. The complex dendrite architecture of some retinal and HTM cells might be functionally hierarchical; at the very least, if there is an integration and test in each dendrite prior to the soma's integration, then each cell is a two-level hierarchy.</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://numenta.com/assets/img/pages/blog/2013-02-19/main.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://numenta.com/assets/img/pages/blog/2013-02-19/main.jpg" height="232" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The HTM cell model. Note that the cell can be activated by any of the dendrites on the right using a logical OR function; each dendrite responds to a unique combination of active input synapses (blue dots). A separate type of dendrite encodes sequential synapses from prior cells (green). Image from Numenta.com.</td></tr>
</tbody></table>
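The contrast between the two models can be sketched as follows: a point neuron computes one weighted sum, while an HTM-like cell fires if any dendrite segment's active synapse count crosses a threshold (a toy sketch only, not Numenta's actual implementation; all names and thresholds are illustrative):

```python
def point_neuron(inputs, weights, bias=0.0):
    """Traditional ANN unit: one weighted sum, then a nonlinearity."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if s > 0 else 0.0  # step activation, for simplicity

def htm_like_cell(active_inputs, segments, threshold=2):
    """Two-level cell: each dendrite segment tests its own synapse
    subset against a threshold; the cell fires if ANY segment does
    (a logical OR over segments, as in the figure above)."""
    active_inputs = set(active_inputs)
    return any(len(seg & active_inputs) >= threshold for seg in segments)

# Each segment responds to a distinct combination of active inputs
segments = [{0, 1, 2}, {5, 6, 7}]
fires = htm_like_cell(active_inputs={5, 6}, segments=segments)
```

The integrate-and-test inside each segment, before the OR at the soma, is the "two-level hierarchy" referred to above.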
<h2 style="text-align: left;">
A success story</h2>
<div>
The video then looks at some impressive results from an artificial retinal encoder that appears to accurately mimic the output of a natural retina, enabling high-quality prostheses. The results are from <a href="http://www.pnas.org/content/109/37/15012.full" target="_blank">this paper [9]</a> by Nirenberg and Pandarinath. Nirenberg also gave a <a href="http://www.ted.com/talks/sheila_nirenberg_a_prosthetic_eye_to_treat_blindness#" target="_blank">TED talk on the encoder [10]</a>. It looks likely that the retina will soon be well understood. Perhaps the information gateway to the brain will also become the gateway to our understanding of the brain.</div>
<h2 style="text-align: left;">
References</h2>
[1] <a href="https://en.wikipedia.org/wiki/Opponent_process">https://en.wikipedia.org/wiki/Opponent_process</a><br />
<div>
[2] <a href="https://en.wikipedia.org/wiki/Impossible_color">https://en.wikipedia.org/wiki/Impossible_color</a></div>
<div>
[3] <a href="http://www.britannica.com/EBchecked/topic/500012/retina">http://www.britannica.com/EBchecked/topic/500012/retina</a></div>
<div>
[4] <a href="http://www.ncbi.nlm.nih.gov/pubmed/23165340">http://www.ncbi.nlm.nih.gov/pubmed/23165340</a></div>
<div>
[5] <a href="https://www.youtube.com/watch?v=rWBW-OrVGAA">https://www.youtube.com/watch?v=rWBW-OrVGAA</a></div>
<div>
[6] <a href="http://youtu.be/rWBW-OrVGAA?t=29m">http://youtu.be/rWBW-OrVGAA?t=29m</a></div>
<div>
[7] <a href="http://synapses.clm.utexas.edu/pubs/dendrites.pdf">http://synapses.clm.utexas.edu/pubs/dendrites.pdf</a><br />
[8] <a href="http://numenta.com/assets/img/pages/blog/2013-02-19/main.jpg">http://numenta.com/assets/img/pages/blog/2013-02-19/main.jpg</a></div>
<div>
[9] <a href="http://www.pnas.org/content/109/37/15012.full">http://www.pnas.org/content/109/37/15012.full</a><br />
[10] <a href="http://www.ted.com/talks/sheila_nirenberg_a_prosthetic_eye_to_treat_blindness#">http://www.ted.com/talks/sheila_nirenberg_a_prosthetic_eye_to_treat_blindness#</a></div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-18934551763022379002015-03-08T19:46:00.002+11:002015-03-08T19:46:44.618+11:00The Arcade Learning Environment - a test suite for AGI<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: left;">
This might be our new test suite for the algorithms!</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<a href="http://www.arcadelearningenvironment.org/">http://www.arcadelearningenvironment.org/</a></div>
<div style="text-align: left;">
<i><br /></i>
<i>"The Arcade Learning Environment (ALE) is a simple object-oriented framework that allows researchers and hobbyists to develop AI agents for Atari 2600 games. It is built on top of the Atari 2600 emulator <a href="http://stella.sourceforge.net/">Stella</a> and separates the details of emulation from agent design."</i><br />
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-u5Z-VQl-fqc/VPqUO3LeWNI/AAAAAAAAC_Q/8wPzXokDW8o/s1600/arcade.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-u5Z-VQl-fqc/VPqUO3LeWNI/AAAAAAAAC_Q/8wPzXokDW8o/s1600/arcade.png" height="243" width="320" /></a></div>
<br />
Why are old computer games good tests for general intelligence? Well, these games can't easily be described with simple rules. You have to learn the rules by playing them. This is an acquired, general skill. In contrast, the best chess software includes heuristics and rules written by humans that help it to play. The variety of arcade gameplay also ensures that algorithms aren't too tailored to specific problem types. Some games are strategic; others are simply reactive.<br />
<br />
Here's an article on 538.com that discusses the varying difficulty and relevance of training general purpose artificial intelligence algorithms on older computer games vs classical board games:<br />
<br />
<a href="http://fivethirtyeight.com/features/computers-are-learning-how-to-treat-cancer-and-diabetes-by-playing-poker-and-atari/">http://fivethirtyeight.com/features/computers-are-learning-how-to-treat-cancer-and-diabetes-by-playing-poker-and-atari/</a><br />
<br />
The 538 article also discusses the issue of imperfect and incomplete information. In board games such as chess, the entire state of the game is usually visible to both players. However, in arcade games, there are often graphical or design choices that make total knowledge of the game state impossible (e.g. events can occur out of sight).<br />
<br />
Here's a link to the developers' paper, which describes how to connect to the environment and visual encoding of game screens:<br />
<br />
<a href="http://www.arcadelearningenvironment.org/wp-content/uploads/2012/07/bellemare13arcade.pdf">http://www.arcadelearningenvironment.org/wp-content/uploads/2012/07/bellemare13arcade.pdf</a><br />
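The interaction the paper describes boils down to the standard agent-environment loop: observe the screen, pick a legal action, receive a reward, repeat until game over. A schematic sketch (the class and method names below are illustrative, not the actual ALE API):

```python
import random

class ToyEnvironment:
    """Stand-in for an emulated game: exposes legal actions, rewards,
    and a terminal flag, the kind of interface ALE provides to agents."""
    def __init__(self, episode_length=100):
        self.steps_left = episode_length

    def legal_actions(self):
        return [0, 1, 2, 3]              # e.g. noop, fire, left, right

    def act(self, action):
        self.steps_left -= 1
        return 1 if action == 1 else 0   # reward rule of this toy game

    def game_over(self):
        return self.steps_left <= 0

def run_episode(env, policy):
    """Generic episode loop: act until terminal, accumulate reward."""
    total = 0
    while not env.game_over():
        total += env.act(policy(env.legal_actions()))
    return total

random_score = run_episode(ToyEnvironment(), policy=random.choice)
best_score = run_episode(ToyEnvironment(), policy=lambda actions: 1)
```

The point of ALE is that the same loop, with the same agent code, runs against dozens of different games; only the reward structure and screen contents change.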
<br />
If you can write an algorithm that does well on a range of these games, you're in a very good position to advance the state of the art in artificial general intelligence (AGI). This is aptly demonstrated by the response to <a href="http://arxiv.org/abs/1312.5602" target="_blank">DeepMind's paper</a>, which also uses the Arcade Learning Environment.</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1180536024131440638.post-69281934542883477372015-02-25T19:56:00.000+11:002015-03-10T19:50:54.557+11:00Eric Horvitz on the new era of AI<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-4hxc1h5O68U/VOh-5MMWGuI/AAAAAAAAGpc/Yx1og-zXsfo/s1600/ericHorvitz.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-4hxc1h5O68U/VOh-5MMWGuI/AAAAAAAAGpc/Yx1og-zXsfo/s1600/ericHorvitz.jpeg" /></a></div>
<br />
<br />
I found this video interview with Eric Horvitz, the head of Microsoft Research, to be a really interesting window into their work and vision - a large part of which is AI. As Eric states, there is a "growing consensus that the next, if not last, enduring competitive battlefield, among major IT companies will be AI". Very well said.<br />
<br />
The full interview, <a href="http://research.microsoft.com/en-us/about/luminaries/eric-horvitz.aspx">here</a>, is worth the watch.Gideon Kowadlohttp://www.blogger.com/profile/06783501071538911513noreply@blogger.com0