Project AGI

Building an Artificial General Intelligence

This site has been deprecated. New content can be found at https://agi.io

Tuesday 9 February 2016

Some interesting finds: Acyclic hierarchical modelling and sequence unfolding

This week we have a couple of interesting links to share.

From our experiments with generative hierarchical models, we claimed that the model produced by feed-forward processing should not have loops. Now we have discovered a paper by Bengio et al. titled "Towards biologically plausible deep learning" [1] that supports this claim. The paper looks for biological mechanisms that could mimic the key features of deep learning. The credit assignment problem - ensuring each weight is updated correctly in response to its contribution to the overall output of the network - is probably the most difficult feature to substantiate, but the paper does leave me thinking it's plausible.
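To make the credit assignment problem concrete, here is a minimal numpy sketch (my own toy example, not from the paper, and with nothing biologically plausible about it): backpropagation assigns credit by computing, for every weight, the gradient of the output error with respect to that weight, and updating the weight in proportion.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))           # 4 examples, 3 input features
    y = rng.normal(size=(4, 1))           # targets
    W1 = 0.1 * rng.normal(size=(3, 5))    # input -> hidden weights
    W2 = 0.1 * rng.normal(size=(5, 1))    # hidden -> output weights

    h = np.tanh(x @ W1)                   # forward pass
    y_hat = h @ W2
    err = y_hat - y                       # output error

    # Backward pass: assign credit (blame) for the error to every weight.
    dW2 = h.T @ err
    dh = (err @ W2.T) * (1.0 - h ** 2)    # error routed back through tanh
    dW1 = x.T @ dh

    lr = 0.01
    W1 -= lr * dW1                        # each weight moves in proportion
    W2 -= lr * dW2                        # to its contribution to the error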

Anyway, the reason I'm talking about it is this quote:

"There is strong biological evidence of a distinct pattern of connectivity between cortical areas that distinguishes between “feedforward” and “feedback” connections (Douglas et al., 1989) at the level of the microcircuit of cortex (i.e., feedforward and feedback connections do not land in the same type of cells). Furthermore, the feedforward connections form a directed acyclic graph with nodes (areas) updated in a particular order, e.g., in the visual cortex (Felleman and Essen, 1991)."

This says that the feedforward modelling process (which we believe constructs a hierarchical model) forms a directed acyclic graph (DAG) - which means it has no loops, as we predicted. Secondly, it is another source claiming that the representation produced is hierarchical (in this case, a DAG). The cited work is a much older paper, "Distributed hierarchical processing in the primate cerebral cortex" [2]. We're still reading it, but there's a lot of good background information here.
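To make the "acyclic" claim concrete, here is a small Python sketch (my own toy example; the area names and connections are a drastic simplification, not taken from Felleman and Van Essen): cortical areas become nodes, feedforward connections become directed edges, and a topological sort succeeds only if the graph has no loops - which also yields the kind of fixed update order the quote mentions.

    # graphlib is in the Python standard library (3.9+)
    from graphlib import CycleError, TopologicalSorter

    # Each area maps to the areas that feed forward into it (its inputs).
    feedforward_inputs = {
        "V1": [],
        "V2": ["V1"],
        "V4": ["V2"],
        "MT": ["V2"],
        "IT": ["V4", "MT"],
    }

    try:
        order = list(TopologicalSorter(feedforward_inputs).static_order())
        print("Acyclic - one valid update order:", order)
    except CycleError:
        print("Contains a loop - not a DAG")

    # Treating a feedback projection as feedforward breaks the DAG property:
    feedforward_inputs["V1"].append("IT")
    try:
        list(TopologicalSorter(feedforward_inputs).static_order())
    except CycleError:
        print("Cycle detected after adding IT -> V1")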

The second item to look at this week is a demo by Felix Andrews featuring temporal pooling [3] and sequence unfolding. "Unfolding" means transforming the pooled sequence representation back into its constituent parts - i.e. turning a single pooled representation back into the series of steps it stands for.

Felix demonstrates that high-level sequence selection can successfully be used to track and predict through observation of the corresponding lower-level sequence. This is achieved by having the high-level sequence predict all of its steps at once, and then tracking through the predicted sequence using first-order predictions in the lower level. Both levels are necessary: the high-level prediction guides the low level so that it predicts correctly through forks, while the low-level prediction keeps track of what comes next in the sequence.
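Felix's demo is written in Clojure, but the mechanism can be sketched in a few lines of Python. The example below is my own simplification, not his code: two stored sequences share a prefix and fork at the final step, a first-order predictor alone cannot choose between the two continuations at the fork, and the selected high-level sequence label "unfolds" into its steps and resolves the ambiguity.

    # Two stored sequences that share a prefix and fork at the final step.
    sequences = {
        "seq-1": ["A", "B", "C", "D"],
        "seq-2": ["A", "B", "C", "E"],
    }

    # Low level: first-order transitions pooled over all known sequences.
    first_order = {}
    for steps in sequences.values():
        for cur, nxt in zip(steps, steps[1:]):
            first_order.setdefault(cur, set()).add(nxt)

    def unfold(label):
        """High level: turn a pooled sequence label back into its steps."""
        return sequences[label]

    def replay(label):
        """Track through a sequence with low-level first-order predictions,
        letting the selected high-level sequence decide at forks."""
        high_level = unfold(label)
        out = [high_level[0]]
        for i, cur in enumerate(high_level[:-1]):
            candidates = first_order[cur]        # low-level prediction
            if len(candidates) == 1:
                out.append(next(iter(candidates)))
            else:                                # fork: high level decides
                out.append(high_level[i + 1])
        return out

    print(replay("seq-1"))   # ['A', 'B', 'C', 'D']
    print(replay("seq-2"))   # ['A', 'B', 'C', 'E']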

[1] "Towards Biologically Plausible Deep Learning" Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein and Zhouhan Lin (2015) http://arxiv.org/pdf/1502.04156v2.pdf

[2] "Distributed hierarchical processing in the primate cerebral cortex" Felleman DJ, Van Essen DC (1991) http://www.ncbi.nlm.nih.gov/pubmed/1822724

[3] Felix Andrews HTM temporal pooling and sequence unfolding demo http://viewer.gorilla-repl.org/view.html?source=gist&id=95da4401dc7293e02df3&filename=seq-replay.clj

2 comments:

  1. I looked up the papers referred to by Bengio for the "acyclic" claim. On the way I found this paper:

    http://www.ini.uzh.ch/admin/extras/doc_get.php?id=42092
    Canonical Cortical Circuits
    Rodney J. Douglas and Kevan A. C. Martin

'More recent experimental and theoretical considerations of the cortical circuits, however, have suggested a rather different architecture: one in which local circuits of cortical neurons are connected in a series of nested positive and negative feedback loops, called “recurrent circuits”'

    I'm guessing you mean "feedback" up and down layers? But I'd like to put my finger on exactly what this "acyclic" claim refers to.

  2. Looking at the Felleman & Essen ref:

    "...the pure form of this hypothesis is difficult to reconcile with the finding of highly reciprocal connectivity and parallel channels discovered in more recent studies of the visual pathway (cf. Rockland and Pandya, 1979; Stone et al., 1979; Lennie, 1980; Lennie et al., 1990; Shapley, 1990). On the other hand, there is no a priori reason to restrict the notion of hierarchical processing to a strictly serial sequence."

    It sounds like they are breaking a sweat trying to account for an avalanche of evidence for feedback.

    Why are you guys against feedback anyway?

    I feel the same about Bengio's paper. Feels like he's working hard to squeeze the possibility of his model out of the evidence, despite it not fitting naturally.

    Like he's saying "we could still get this round peg into this square hole, they don't absolutely exclude each other."
