ORCID

http://orcid.org/0000-0001-6778-6392

Date of Award

Winter 1-15-2021

Author's School

McKelvey School of Engineering

Author's Department

Electrical & Systems Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

The creation of quantitative models of neural function and the discovery of the principles by which neural circuits learn and compute are long-standing challenges in neuroscience. In this work, we blend ideas from computational neuroscience, information theory, and control theory with machine learning to shed light on how certain key functions are encoded through the dynamics of neural circuits. To this end, we pursue the ‘top-down’ modeling approach of engineering neuroscience, relating brain functions to basic generative dynamical mechanisms. Our approach encapsulates two distinct paradigms in which ‘function’ is understood.

In the first part of this research, we explore the synthesis of neural dynamics for a task-independent, well-defined objective function: the information processing capacity of neural circuits and networks. We devise a strategy to optimize the dynamics of a given network using information maximization as the objective. Our principal contributions here are the mathematical formulation of the optimization problem and a simplification method that reduces the computational burden associated with mutual information optimization. We then illustrate these ideas on well-understood dynamical systems. The resulting dynamics generically act as encoders of the afferent input distribution and facilitate information propagation.

However, a well-defined mathematical objective function may not be straightforward to specify in all cases, e.g., for complex cognitive functions. To address this issue, in the second part of this research we consider top-down synthesis on the basis of a surrogate task. In particular, we optimize ‘artificial’ recurrent networks to perform a computational task that embodies the function of interest, namely working memory. We propose a realistic training paradigm for recurrent neural networks and elucidate how the dynamics of the optimized artificial networks can support the computations underlying memory function. We discuss the theoretical and technical steps involved in our interpretations, as well as remaining open questions and future directions.
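
As a concrete illustration of the second paradigm only, the following minimal sketch trains a small recurrent network on a surrogate working-memory task (one-bit delayed recall). This is not code from the dissertation; the architecture (a PyTorch nn.RNN), the task design, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the dissertation's code):
# train a small recurrent network on a surrogate working-memory task,
# here one-bit delayed recall -- remember a +/-1 cue across a delay.
import torch
import torch.nn as nn

torch.manual_seed(0)

class MemoryRNN(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 1)

    def forward(self, x):
        h, _ = self.rnn(x)               # hidden-state trajectory over the trial
        return self.readout(h[:, -1])    # report the remembered cue at trial end

def make_trials(batch=128, delay=20):
    """Trials with a +/-1 cue at t=0 followed by `delay` steps of zero input."""
    cue = torch.randint(0, 2, (batch, 1)).float() * 2 - 1
    x = torch.zeros(batch, delay + 1, 1)
    x[:, 0, 0] = cue.squeeze(1)
    return x, cue

model = MemoryRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    x, target = make_trials()
    loss = loss_fn(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, the hidden-state trajectories can be analyzed to ask how the network's dynamics retain the cue across the delay; this is the kind of question the dissertation addresses with its own, more realistic training paradigm.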

Language

English (en)

Chair

ShiNung S. Ching

Committee Members

ShiNung S. Ching, Jr-Shin J. Li, Joseph J. O'Sullivan, Barani B. Raman
