
Project AGI

Building an Artificial General Intelligence

This site has been deprecated. New content can be found at https://agi.io

Tuesday 9 February 2016

Some interesting finds: Acyclic hierarchical modelling and sequence unfolding

This week we have a couple of interesting links to share.

From our experiments with generative hierarchical models, we claimed that the model produced by feed-forward processing should not have loops. Now we have discovered a paper by Bengio et al. titled "Towards biologically plausible deep learning" [1] that supports this claim. The paper looks for biological mechanisms that mimic key features of deep learning. The credit assignment problem is probably the most difficult feature to substantiate - ensuring each weight is updated correctly in response to its contribution to the overall output of the network - but the paper does leave me thinking it's plausible.
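
For readers unfamiliar with the term, here is a minimal numpy sketch of what credit assignment means in deep learning - my own illustration, not taken from the paper. In backpropagation each weight's update is derived, via the chain rule, from that weight's contribution to the output error; the hard part the paper tackles is finding a biologically plausible mechanism that achieves the same thing.

```python
# Toy two-layer network: credit assignment via backpropagation.
# Each weight's update is proportional to its contribution to the output error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4,))            # input
t = np.array([1.0])                  # target
W1 = rng.normal(size=(3, 4)) * 0.5   # hidden-layer weights
W2 = rng.normal(size=(1, 3)) * 0.5   # output-layer weights

# Forward pass
h = np.tanh(W1 @ x)                  # hidden activations
y = W2 @ h                           # network output
loss = 0.5 * np.sum((y - t) ** 2)

# Backward pass: assign credit (blame) to every weight
dy = y - t                           # dLoss/dy
dW2 = np.outer(dy, h)                # each W2 weight credited via its input h
dh = W2.T @ dy                       # error propagated back to hidden units
dW1 = np.outer(dh * (1 - h ** 2), x) # chain rule through the tanh nonlinearity

# Gradient step: each weight moves according to its contribution to the error
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
print(f"loss before update: {loss:.4f}")
```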

Anyway, the reason I'm talking about it is this quote:

"There is strong biological evidence of a distinct pattern of connectivity between cortical areas that distinguishes between “feedforward” and “feedback” connections (Douglas et al., 1989) at the level of the microcircuit of cortex (i.e., feedforward and feedback connections do not land in the same type of cells). Furthermore, the feedforward connections form a directed acyclic graph with nodes (areas) updated in a particular order, e.g., in the visual cortex (Felleman and Essen, 1991)."

This says that the feedforward modelling process (which we believe is constructing a hierarchical model) forms a directed acyclic graph (DAG) - which means it has no loops, as we predicted. Secondly, it is another source claiming that the representation produced is hierarchical (in this case, a DAG). The cited work is a much older paper - "Distributed hierarchical processing in the primate cerebral cortex" [2]. We're still reading, but there's a lot of good background information here.
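
To make the "updated in a particular order" point concrete, here is a small sketch using a toy connectivity graph - the area names and edges are chosen purely for illustration, not taken from either paper. Because a DAG admits a topological ordering, a feedforward pass can update each area exactly once, with no loops to iterate over.

```python
# If the feedforward connections between areas form a DAG, the areas can be
# updated in a single pass in topological order.
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical feedforward connectivity: area -> areas it projects to.
feedforward = {
    "V1": {"V2"},
    "V2": {"V4", "MT"},
    "V4": {"IT"},
    "MT": {"MST"},
    "IT": set(),
    "MST": set(),
}

# TopologicalSorter expects predecessors, so invert the edge direction.
predecessors = {area: set() for area in feedforward}
for src, targets in feedforward.items():
    for dst in targets:
        predecessors[dst].add(src)

# static_order() raises CycleError if the graph has a loop; otherwise it
# yields a valid feedforward update order.
order = list(TopologicalSorter(predecessors).static_order())
print(order)  # e.g. ['V1', 'V2', 'V4', 'MT', 'IT', 'MST']
```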

The second item to look at this week is a demo by Felix Andrews featuring temporal pooling [3] and sequence unfolding. "Unfolding" means transforming the pooled sequence representation back into its constituent parts - i.e. turning a sequence into a series of steps.

Felix demonstrates that high-level sequence selection can successfully be used to track and predict through observation of the corresponding lower-level sequence. This is achieved by having the high-level sequence predict all of its steps, and then tracking through the predicted sequence using first-order predictions in the lower level. Both levels are necessary - the high-level prediction provides guidance for the low level to ensure it predicts correctly through forks, while the low-level prediction keeps track of what's next in the sequence.
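
As a rough illustration of why both levels are needed, here is a toy Python sketch - my own simplification, not the HTM/Clojure demo itself. The low level only knows first-order transitions, so it cannot resolve a fork on its own; the high-level pooled label can be unfolded into its steps and used to pick the right branch.

```python
# Two sequences sharing a prefix, forking after 'C'.
SEQUENCES = {
    "song1": list("ABCD"),
    "song2": list("ABCE"),
}

# Low level: first-order transitions learned from both sequences.
first_order = {}
for steps in SEQUENCES.values():
    for a, b in zip(steps, steps[1:]):
        first_order.setdefault(a, set()).add(b)

def predict_next(current, high_level_label=None):
    """Predict the next step from the current one.

    Without high-level guidance the prediction after 'C' is ambiguous;
    with the pooled label it is narrowed to a single step.
    """
    candidates = first_order.get(current, set())
    if high_level_label is None:
        return candidates
    unfolded = SEQUENCES[high_level_label]  # unfold the pooled sequence
    allowed = {b for a, b in zip(unfolded, unfolded[1:]) if a == current}
    return candidates & allowed

print(predict_next("C"))           # {'D', 'E'} - the fork is ambiguous
print(predict_next("C", "song2"))  # {'E'}      - the high level resolves it
```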

[1] "Towards Biologically Plausible Deep Learning" Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein and Zhouhan Lin (2015) http://arxiv.org/pdf/1502.04156v2.pdf

[2] "Distributed hierarchical processing in the primate cerebral cortex" Felleman DJ, Van Essen DC (1991) http://www.ncbi.nlm.nih.gov/pubmed/1822724

[3] Felix Andrews HTM temporal pooling and sequence unfolding demo http://viewer.gorilla-repl.org/view.html?source=gist&id=95da4401dc7293e02df3&filename=seq-replay.clj

Wednesday 3 February 2016

Intuition over reasoning for AI


By Gideon Kowadlo

I’m reading a fascinating book called The Righteous Mind, by Jonathan Haidt. It’s one of those reads that can fundamentally shift the way that you see the world. In this case, the human world, everyone around you, and yourself.

A central idea of the book is that our behaviour is mainly dictated by intuition rather than reasoning and that both are aspects of cognition.

Many will be able to identify, in themselves and others, the tendency to act first and rationalise later - even though it feels like the opposite. More than that, our sense of morality arises from intuition, and it enables us to act quickly and make good decisions.

A compelling biological correlate is the ventromedial prefrontal cortex. The way it enables us to use emotion/intuition for decision making is described well in this passage:

Damasio had noticed an unusual pattern of symptoms in patients who had suffered brain damage to a specific part of the brain - the ventromedial (i.e., bottom-middle) prefrontal cortex (abbreviated vmPFC; it’s the region just behind and above the bridge of the nose). Their emotionality dropped nearly to zero. They could look at the most joyous or gruesome photographs and feel nothing. They retained full knowledge of what was right and wrong, and they showed no deficits in IQ. They even scored well on Kohlberg’s tests of moral reasoning. Yet when it came to making decisions in their personal lives and at work, they made foolish decisions or no decisions at all. They alienated their families and their employers, and their lives fell apart.

Damasio’s interpretation was that gut feelings and bodily reactions were necessary to think rationally, and that one job of the vmPFC was to integrate those gut feelings into a person’s conscious deliberations. When you weigh the advantages and disadvantages of murdering your parents … you can’t even do it, because feelings of horror come rushing in through the vmPFC.

But Damasio’s patients could think about anything, with no filtering or coloring from their emotions. With the vmPFC shut down, every option at every moment felt as good as every other. The only way to make a decision was to examine each option, weighing the pros and cons using conscious verbal reasoning. If you’ve ever shopped for an appliance about which you have few feelings - say, a washing machine - you know how hard it can be once the number of options exceeds six or seven (which is the capacity of our short-term memory). Just imagine what your life would be like if at every moment, in every social situation, picking the right thing to do or say became like picking the best washing machine among ten options, minute after minute, day after day. You’d make foolish decisions too.

Our aim has always been to build a general reasoning machine that can be scaled up. We aren’t interested in building an artificial human, which carries the legacy of a long evolution through many incarnations.

This is the first time I’ve considered the importance of building intuition into the algorithm as a fundamental component. ‘Gut’ reactions are not to be underestimated. Building in intuition may be the only way to make effective AGI, not to mention the need to create ‘pro-social’ agents with which we can interact in daily life.

It is possible, though, that this is an adaptation to the limitations of our reasoning rather than a fundamentally required feature. If the intelligence were implemented in silicon and not bound by ‘cognitive effort’ in the same way that we are, it could potentially select favourable actions efficiently through intellectual reasoning alone, without the ‘intuition’.

This is fascinating to think about in terms of human intelligence and behaviour. It raises exciting questions about the nature of intelligence itself and about how reasoning and intuition each relate to cognition. We’ll be sure to consider these questions as we continue to develop an algorithm for AGI.

Addendum

From a functional perspective the vmPFC appears to be a separate parallel ‘component’ that is richly connected to many other brain areas.

"The ventromedial prefrontal cortex is connected to and receives input from the ventral tegmental area, amygdala, the temporal lobe, the olfactory system, and the dorsomedial thalamus. It, in turn, sends signals to many different brain regions including; The temporal lobe, amygdala, the lateral hypothalamus, the hippocampal formation, the cingulate cortex, and certain other regions of the prefrontal cortex.[4] This huge network of connections affords the vmPFC the ability to receive and monitor large amounts of sensory data and to affect and influence a plethora of other brain regions, particularly the amygdala." 
 Wikipedia Ventromedial prefrontal cortex