
Dim(N) Week 2 - A deep learning framework for neuroscience

Donggyu Kim (07/11/24)


[Paper] Richards, B. A., Lillicrap, T. P., Beaudoin, P., Bengio, Y., Bogacz, R., Christensen, A., ... & Kording, K. P. (2019). A deep learning framework for neuroscience. Nature neuroscience, 22(11), 1761-1770. https://www.nature.com/articles/s41593-019-0520-2#citeas


[Abstract] Systems neuroscience seeks to explain how the brain performs a wide variety of perceptual, cognitive, and motor tasks. Conversely, artificial intelligence designs computational systems based on the tasks they have to solve. In artificial neural networks, three elements are specified by design: the objective functions, the learning rules, and the architectures. With the growing success of deep learning, which builds on brain-inspired architectures, these three designed elements have become increasingly central to how we model, engineer, and optimize complex artificial learning systems. We argue that a greater focus on this optimization-based framework would also greatly benefit systems neuroscience. We provide examples of how this framework can drive theoretical and experimental progress in neuroscience. We believe that this principled approach to systems neuroscience will help generate more rapid progress.


[Summary] This paper proposes a framework for applying deep learning principles to systems neuroscience. It argues that three key elements are crucial for understanding the brain: objective functions, learning rules, and architectures, mirroring the way artificial neural networks (ANNs) are designed. The paper emphasizes that these elements play a central role in modeling brain function and in shaping neural computations.

The authors suggest that traditional neuroscience approaches, which focus on the specific computations performed by individual neurons and circuits, may not scale well to the complexity of the whole brain. A deep learning-inspired framework, which emphasizes learning from data and optimizing objective functions, offers a more scalable and comprehensive alternative. This framework views brain function and behavior as the outcome of an evolutionary optimization process.
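
To make the three designed elements concrete, here is a minimal sketch (not from the paper; the layer sizes, the toy task of fitting y = sin x, and all hyperparameters are illustrative assumptions) showing how an architecture, an objective function, and a learning rule appear in a small ANN:

```python
# Minimal sketch (not from the paper): the three designed components of an ANN.
import numpy as np

rng = np.random.default_rng(0)

# --- Architecture: a two-layer network with tanh hidden units ---
W1 = rng.normal(scale=0.5, size=(16, 1))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(1, 16))   # hidden -> output weights

def forward(x):
    h = np.tanh(W1 @ x)   # hidden representation
    y = W2 @ h            # network output
    return h, y

# --- Objective function: mean squared error on the toy targets ---
def objective(y_pred, y_true):
    return 0.5 * np.mean((y_pred - y_true) ** 2)

# --- Learning rule: gradient descent on the objective (backpropagation) ---
lr = 0.05
for step in range(2000):
    x = rng.uniform(-np.pi, np.pi, size=(1, 32))   # batch of 32 scalar inputs
    t = np.sin(x)                                  # targets
    h, y = forward(x)

    dy = (y - t) / x.shape[1]        # dL/dy
    dW2 = dy @ h.T                   # dL/dW2
    dh = (W2.T @ dy) * (1 - h ** 2)  # dL/dh through the tanh nonlinearity
    dW1 = dh @ x.T                   # dL/dW1

    W2 -= lr * dW2                   # weight updates prescribed by the rule
    W1 -= lr * dW1

print("final loss:", objective(forward(x)[1], t))
```

Here the architecture is the layer layout, the objective is the mean squared error, and the learning rule is gradient descent; the paper's argument is that specifying these three components, rather than hand-specifying every computation, is what allows the approach to scale.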

Furthermore, the paper discusses how deep neural networks can mimic some of the representational transformations and behaviors of the brain, and how such networks can serve as models for understanding neural processes. The authors stress the importance of identifying the objective functions the brain might be optimizing, while acknowledging how difficult that task is. They propose moving away from detailed, circuit-level descriptions of neural computation toward a more abstract approach that identifies the optimization principles governing brain function.
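
One standard way such model-brain comparisons are made in this literature is representational similarity analysis (RSA); the summary above does not name a specific method, so the sketch below, including the simulated activations and all variable names, is an illustrative assumption only:

```python
# Minimal sketch (illustrative assumption): comparing a network's internal
# representations with neural recordings via representational similarity
# analysis (RSA). Both datasets here are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_stimuli, n_units, n_neurons = 20, 50, 30

# Simulated hidden-layer activations for 20 stimuli (e.g., from an ANN)
model_acts = rng.normal(size=(n_stimuli, n_units))

# Simulated neural population responses to the same stimuli; they share some
# structure with the model activations plus noise (a toy assumption)
neural_resp = (model_acts @ rng.normal(size=(n_units, n_neurons))
               + 0.5 * rng.normal(size=(n_stimuli, n_neurons)))

def rdm(responses):
    """Representational dissimilarity matrix: 1 minus the Pearson correlation
    between the response patterns evoked by each pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

model_rdm = rdm(model_acts)
neural_rdm = rdm(neural_resp)

# Compare the two RDMs on their upper triangles (excluding the diagonal)
iu = np.triu_indices(n_stimuli, k=1)
similarity = np.corrcoef(model_rdm[iu], neural_rdm[iu])[0, 1]
print(f"model-brain RDM correlation: {similarity:.2f}")
```

A high RDM correlation is taken as evidence that the network and the recorded population organize stimuli in a similar way, which is the sense in which such networks are used as models of neural processing.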

However, the paper also acknowledges the complexity of real neural systems and the potential limitations of their framework. For example, the brain's architecture and learning rules are influenced by evolutionary constraints, which may not always align with deep learning principles.

Therefore, they emphasize that this framework is not meant to replace traditional methods, but rather to serve as a complementary approach that can enhance our understanding of the brain.
