@cogsci
The notion of role-filler binding as central to cognitive(ish) representations has been around for ages (possibly under different names, such as slot-value in GOFAI: https://en.wikipedia.org/wiki/Frame_(artificial_intelligence)). This is hardly surprising because it's effectively the same as variable-value.
The role is generally treated as though it's an atomic symbol, whereas it's not uncommon for the filler to be taken as a composite value (e.g. a tree). I am toying with embracing the idea of roles also being composite representations.
In a cognitive-agent/robotic context, I think it might be useful for the role to be a "sensorimotor program" and the filler to be the sensory input arising from running the sensorimotor program specified by the role. (This is heading towards a Predictive State Representation: https://en.wikipedia.org/wiki/Predictive_state_representation).
(1) I would greatly appreciate any pointers to discussions of role-filler bindings as sensorimotor predictions (similar or related to the sense above).
"Attention" could be construed as a "run/don't_run" flag in the sensorimotor program. This is basically treating attention as a kind of action and "don't attend" as not doing that action. (If that were true, there may also be other attention mechanisms, e.g. the precision weighting posited by Predictive Coding: https://en.wikipedia.org/wiki/Predictive_coding#Precision_weighting).
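One toy way to render that flag idea in code (a Python sketch with invented names; not a claim about how attention is actually implemented anywhere):

```python
# Sketch: attention as a run/don't_run flag attached to each step of a
# sensorimotor program; "don't attend" is simply not executing that step.
# All names and observations here are illustrative assumptions.

def run_program(steps):
    """Run a sensorimotor program, skipping unattended steps.

    steps: list of (attend_flag, action) pairs, where action is a
    zero-argument callable returning a (mock) sensory observation.
    """
    observations = []
    for attend, action in steps:
        if attend:                       # attention treated as a kind of action
            observations.append(action())
        # unattended steps are simply not performed, so yield no observation
    return observations

program = [
    (True,  lambda: "saw_mark_on_wall"),
    (False, lambda: "felt_tape_texture"),   # unattended: never observed
    (True,  lambda: "read_182"),
]

result = run_program(program)
assert result == ["saw_mark_on_wall", "read_182"]
```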
(2) I would greatly appreciate any pointers to discussions of attention as a kind of executable sensorimotor action.
Re "Structure Learning in Predictive Processing Needs Revision": This paper appears to assume a (network-structured) generative model that generates predictions (of representations). My original query is somewhat different, and is perhaps most easily understood in a GOFAI context (i.e. forget about neural nets and pretend someone is writing a LISP program).
In that context a role:filler binding is treated like assigning a value to a variable, e.g. height:182. This can be implemented by the association of two representations, the role identifier (height) and the filler value (182). In GOFAI it's usual for the role value to be an atomic symbol, while the filler value could be a complex data structure (e.g. a tree). The role:filler binding is then a data structure associating the role value and filler value in such a way that the role value can be used as a key to retrieve the filler value.
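In Python rather than LISP, a minimal sketch of that kind of binding might look like the following (the names `bind` and `retrieve` are my own illustrative choices, not from any established library):

```python
# Sketch: a role:filler binding as a key-value association.
# The role (key) is an atomic symbol; the filler (value) may be composite.
# Function names here are illustrative assumptions, not an established API.

bindings = {}

def bind(role, filler):
    """Associate a role with a filler value."""
    bindings[role] = filler

def retrieve(role):
    """Use the role as a key to look up its filler."""
    return bindings[role]

bind("height", 182)                      # atomic filler
bind("address", {"street": "Main St",    # composite filler (a nested structure)
                 "number": 42})

assert retrieve("height") == 182
```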
My question was considering the case where the role value is a complex data structure rather than an atomic symbol. We can still create a data structure to represent the binding and use the (complex) role value to retrieve the filler value.
Then I was proposing a possible use for such a (perverse) data structure. What if the role value was a data structure representing a "sensorimotor program" and the filler value was a data structure representing the sensory input resulting (or predicted to result) from executing the sensorimotor program? In this framework, for the example height:182, the value 'height' would be a data structure representing a sensorimotor program expressing something like "stand the person next to a wall and mark the wall at the level of the top of the person's head; find the toolbox and open it; find the tape measure in the toolbox; take the tape measure to the wall and measure the distance from the mark to the floor; look at the number on the tape measure adjacent to the mark" and the filler value 182 is the sensory input resulting from all that.
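As a toy illustration of that scheme (in Python, with invented step names standing in for a real sensorimotor program), the composite role only needs to be an immutable, hashable structure to serve as a binding key:

```python
# Sketch: a composite role (a "sensorimotor program") used as a binding key.
# Representing the program as a tuple of steps makes it immutable and
# hashable, so it can key a dict just as an atomic symbol would.
# All step names and the value 182 are illustrative assumptions.

height_program = (
    "mark_wall_at_head_height",
    "open_toolbox_and_take_tape_measure",
    "measure_mark_to_floor",
    "read_number_adjacent_to_mark",
)

bindings = {}
bindings[height_program] = 182   # filler: the (predicted) sensory input

# Retrieval uses the whole composite role as the key.
assert bindings[height_program] == 182
```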
My question was whether people have discussed that type of representational scheme.
In the structure learning paper the generative model doesn't necessarily correspond to a sensorimotor program, but even if it does, it's the implementation of the program rather than a representation of the program, so it doesn't really make contact with the role:filler representation part of my question. (Or I may have misconstrued the paper - feedback welcome.)
Re "Downstream network transformations dissociate neural activity from causal functional contributions": I can't see any direct relevance (again, possibly my failure to understand). My question doesn't require any use of neural networks (although ultimately I am looking at vector space embedding implementations).
Of minor historical interest, DMN/PMML/Drools were developed for precisely the content of my former day-job. Never say never, but I would have struggled to implement something that looked like cognitive robotics in them.
It's quite a while since I read Brooks' "Intelligence without representation", so I'll be very interested to read Jordanous' historical perspective on it. My (dimly recollected) view of "Intelligence without representation" is that it was very much a reaction to the research bandwagons du jour, so I was inclined to interpret Brooks' views as corrections relative to the then-prevailing hype, rather than as absolutes.
I think you might be interested in Rich Sutton's Predictive State Representation (not the same thing as Predictive Coding). Here's a link to some old slides of his, only available on the Internet Archive: https://web.archive.org/web/20240224030716/https://incompleteideas.net/Talks/McGill_2005.pdf
The notion is that an agent's representation of its current state is encoded as a bundle of predicted future sensorimotor trajectories. This seems to me to be an implementation of the concept of affordances (What would I observe if I did x?).
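A toy rendering of that idea (my own sketch, not Sutton's formalism, which maintains a vector of test predictions updated by a learned model):

```python
# Sketch: state as a bundle of predicted sensorimotor trajectories.
# Each "test" is an (action-sequence, observation-sequence) pair, and the
# state assigns each test a predicted probability of success.
# All tests and probabilities here are invented for illustration.

state = {
    (("push_door",), ("door_opens",)): 0.9,
    (("grasp", "lift"), ("weight_felt", "object_rises")): 0.7,
}

def predict(state, actions, observations):
    """Affordance-style query: what would I observe if I did x?"""
    return state.get((actions, observations), 0.0)

assert predict(state, ("push_door",), ("door_opens",)) == 0.9
```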
Re Sutton (and others) and Predictive State Representations: There are a reasonable number of later technical papers, but I think they tend to narrow their focus to concentrate on approaches that are tractable in their chosen mathematical approach. I prefer to centre the big picture: What if all representations (including abstract conceptual ones) were sensorimotor trajectory predictions? Sutton used to have a bundle of presentations that collectively provided a reasonable coverage at that high level, but they've succumbed to bit rot (hence the link to the Internet Archive).
I presume that representing abstract conceptual content in terms of sensorimotor trajectories will rely on analogical mapping (e.g. addition mapping onto experiences manipulating lengths of objects). This also suggests that representations are constructed on the fly (from analogical mappings of historical fragments) in response to current task demands. This is a long way from Brooks' conception of representation in "Intelligence without representation".
You might be interested in "High-level perception ...":
DOI: 10.1080/09528139208953747