(New direction for this blog.)
“Modules (carefully defined) are important in biological brains for efficiency reasons”.
Older models of learning demonstrated their effects on random networks, but real brain networks are small-world networks (to varying degrees), with highly-interconnected hubs, for reasons including the metabolic cost of long-range connectivity (‘wiring cost’). Demonstrating learning on random networks is therefore unrealistic.
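As a rough illustration (my own sketch in plain Python, not from any of the papers below): a Watts–Strogatz-style ring lattice has the high clustering characteristic of small-world networks, and rewiring every edge at random drives clustering down toward random-graph levels. The function names (`ring_lattice`, `rewire`, `clustering`) are mine.

```python
import random

def ring_lattice(n, k):
    """Each node links to its k nearest neighbours on each side."""
    return {tuple(sorted((i, (i + j) % n)))
            for i in range(n) for j in range(1, k + 1)}

def rewire(edges, n, p, seed=0):
    """Watts-Strogatz-style rewiring: with probability p, move one
    endpoint of an edge to a uniformly random node (keeping the
    graph simple, so the edge count stays constant)."""
    rng = random.Random(seed)
    edges = set(edges)
    for e in list(edges):
        if rng.random() < p:
            u, _ = e
            w = rng.randrange(n)
            new = tuple(sorted((u, w)))
            if w != u and new not in edges:
                edges.discard(e)
                edges.add(new)
    return edges

def clustering(edges, n):
    """Average local clustering coefficient."""
    nbrs = {i: set() for i in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    total = 0.0
    for i in range(n):
        ns = list(nbrs[i])
        d = len(ns)
        if d < 2:
            continue
        links = sum(1 for a in range(d) for b in range(a + 1, d)
                    if ns[b] in nbrs[ns[a]])
        total += 2 * links / (d * (d - 1))
    return total / n

n, k = 100, 4
lattice = ring_lattice(n, k)
print(clustering(lattice, n))                   # high (~0.64 for k=4)
print(clustering(rewire(lattice, n, 1.0), n))   # much lower after full rewiring
```

The interesting small-world regime is in between: a little rewiring (small `p`) collapses path lengths while clustering stays almost lattice-like.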
“Random” has multiple definitions — a random network model holds one aspect of graph generation constant (such as the number of nodes and the average number of edges) while randomizing another (e.g. which pairs of nodes the edges connect)
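A minimal sketch of that idea (again my own illustration): hold the node count and edge count fixed, and randomize only which pairs of nodes are connected. The function name `random_graph_fixed_nm` is mine.

```python
import itertools
import random

def random_graph_fixed_nm(n, m, seed=None):
    """Sample a simple undirected graph with exactly n nodes and
    m edges: the node and edge counts are held constant, while the
    endpoints of the edges are randomized."""
    rng = random.Random(seed)
    all_pairs = list(itertools.combinations(range(n), 2))
    return rng.sample(all_pairs, m)

edges = random_graph_fixed_nm(20, 40, seed=1)
print(len(edges))  # always 40 edges, but a different wiring per seed
```

Other null models hold more constant — e.g. degree-preserving rewiring keeps every node’s degree fixed and shuffles only which nodes are paired.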
“Module” is overloaded. Modules definitely aren’t repeated neural circuits — there is a lot of variability in the actual wiring. Instead modules may be “characteristic patterns of ‘average connectivity’ that can inform dynamic models of local or large-scale cortical dynamics”.
Highly-connected hubs are association regions?
- Review of modularity: Modular and Hierarchically Modular Organization of Brain Networks
- Spatial interleaving of subnetworks: Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex
- Assortativity: of graphs, positive if similar nodes (e.g. of similar degree) are more likely to be connected to each other than to dissimilar nodes; negative if the opposite is true.
- Sonic hedgehog
- “A large repertoire of diverse states may be beneficial to an organism as it contributes to its capacity to process signals from an environment that can only be partially predicted”. Pure but interesting speculation on rapid variation of functional connectivity in the default network of human brains.
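To make the assortativity note above concrete (a sketch, using node degree as the similarity measure, which is the usual choice): degree assortativity is the Pearson correlation of the degrees at either end of each edge. A star graph is maximally disassortative, since its high-degree hub connects only to low-degree leaves.

```python
def degree_assortativity(edges):
    """Pearson correlation of the degrees at either end of each
    undirected edge (each edge counted in both directions)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((x - mean) * (y - mean) for x, y in zip(xs, ys)) / n
    return cov / var

print(degree_assortativity([(0, 1), (0, 2), (0, 3)]))  # -1.0 (star graph)
print(degree_assortativity([(0, 1), (1, 2), (2, 3)]))  # ≈ -0.5 (path graph)
```

A hub-dominated (hub-to-leaf) wiring like the star gives negative assortativity; networks where hubs preferentially link to other hubs give positive values.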