Dim(N) Week 2 - Donggyu Kim (07/11/24) A deep learning framework for neuroscience
[Paper] Richards, B. A., Lillicrap, T. P., Beaudoin, P., Bengio, Y., Bogacz, R., Christensen, A., ... & Kording, K. P. (2019). A deep learning framework for neuroscience. Nature neuroscience, 22(11), 1761-1770. https://www.nature.com/articles/s41593-019-0520-2#citeas
[Abstract] Systems neuroscience seeks to explain how the brain performs various perceptual, cognitive, and motor tasks. In contrast, artificial intelligence designs computational systems based on tasks to be solved. In artificial neural networks, three elements—objective functions, learning rules, and architectures—are determined by design. With the success of deep learning, brain-inspired architectures are becoming increasingly important, and these three elements play a central role in how we model, design, and optimize complex artificial learning systems. We argue that a greater focus on this optimization-based framework will also greatly benefit systems neuroscience. We provide examples of how this framework can lead to theoretical and experimental advances. We believe that this principled approach to systems neuroscience will contribute to achieving more rapid progress.
[Summary] This paper proposes a framework for applying deep learning principles to systems neuroscience. It argues that three key elements are crucial for understanding the brain: objective functions, learning rules, and architectures, analogous to how artificial neural networks (ANNs) are designed. The paper emphasizes that these elements play a central role in modeling brain function and shaping neural computations.
The authors suggest that traditional neuroscience approaches, which focus on the specific computations performed by individual neurons and their circuits, may not scale to the complexity of the whole brain. A deep learning-inspired framework, which emphasizes learning from data and optimizing objective functions, offers a more scalable and comprehensive alternative. This framework views brain function and behavior as the outcome of an evolutionary optimization process.
Furthermore, the paper discusses how deep neural networks can mimic some of the representational transformations and behaviors of the brain, and how these networks can serve as models for understanding neural processes. The authors emphasize the importance of identifying appropriate objective functions that the brain might be optimizing, while acknowledging the difficulty of this task. (They propose moving away from detailed bottom-up descriptions of individual neural circuits towards a more abstract, top-down approach that identifies the optimization principles governing brain function.)
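One standard way the literature tests whether an ANN "mimics" the brain's representations is representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) over stimuli for each system and correlate the two RDMs. The sketch below is my own illustration with synthetic data standing in for both the model activations and the neural recordings; the "brain" data are a noisy linear readout of the model activity, so the two RDMs should correlate.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stimuli, n_units, n_neurons = 10, 50, 40

# Synthetic model activations (stimuli x units).
model_acts = rng.normal(size=(n_stimuli, n_units))
# Synthetic "neural" data: a noisy random linear readout of the model,
# a stand-in for real recordings.
brain_acts = (model_acts @ rng.normal(size=(n_units, n_neurons))
              + 0.1 * rng.normal(size=(n_stimuli, n_neurons)))

def rdm(acts):
    """Pairwise correlation-distance matrix across stimuli."""
    return 1.0 - np.corrcoef(acts)

def rdm_similarity(a, b):
    """Pearson correlation of the upper triangles of two RDMs."""
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

print(round(rdm_similarity(rdm(model_acts), rdm(brain_acts)), 2))
```

Comparisons like this operate at the level of representational geometry rather than individual neurons, which fits the paper's argument for more abstract levels of description.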
However, the paper also acknowledges the complexity of real neural systems and the potential limitations of their framework. For example, the brain's architecture and learning rules are influenced by evolutionary constraints, which may not always align with deep learning principles.
Therefore, they emphasize that this framework is not meant to replace traditional methods but rather to serve as a complementary approach that can enhance our understanding of the brain.
Comments