ORCID

https://orcid.org/0000-0003-3736-9787

Date of Award

Summer 8-15-2018

Author's School

McKelvey School of Engineering

Author's Department

Electrical & Systems Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

The brain produces complex patterns of activity that unfold across multiple spatio-temporal scales. A fundamental question in neuroscience is how these dynamics relate to brain function, for example our ability to extract and process information from the sensory periphery. This dissertation presents two distinct lines of inquiry into different aspects of this question. In the first part, we study burst suppression, a phenomenon in which the brain's electrical activity exhibits bistable dynamics, alternating between bursts of activity and periods of near-quiescence. Burst suppression is frequently encountered in individuals rendered unconscious by general anesthesia and is thus a brain state associated with profound reductions in awareness and, presumably, information processing. Our primary contribution in this part is a new type of dynamical systems model whose analysis provides insight into the mechanistic underpinnings of burst suppression. In particular, the model explains the emergence of the two characteristic time-scales of burst suppression and its synchronization across wide regions of the brain.
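To make the fast-slow, bistable structure concrete, the following is a minimal illustrative sketch in Python. It is not the model developed in the dissertation: the FitzHugh-Nagumo fast subsystem, the slowly recovering resource variable, and all names and parameter values are assumptions chosen only to reproduce the qualitative phenomenon of a fast oscillator gated on and off by a slow variable.

```python
import numpy as np

# Minimal fast-slow sketch of burst suppression (illustrative only; NOT the
# dissertation's model). A fast oscillator generates within-burst activity,
# while a slow "resource" variable gates it, producing burst/suppression
# alternation on a much longer time-scale.

def simulate_burst_suppression(T=60.0, dt=1e-3, tau_s=6.0,
                               s_on=0.9, s_off=0.3, deplete=0.5, seed=0):
    """Simulate alternating bursts and suppressions; returns (x, s)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.zeros(n)        # fast activity trace (EEG-like observable)
    s = np.ones(n)         # slow resource variable, confined to [0, 1]
    v, w = 0.1, 0.0        # FitzHugh-Nagumo-like fast state
    bursting = True        # hysteretic switch between the two states
    for k in range(1, n):
        # Hysteresis is what makes the dynamics bistable: enter suppression
        # when the resource is depleted, resume bursting only after recovery.
        if bursting and s[k - 1] < s_off:
            bursting = False
        elif not bursting and s[k - 1] > s_on:
            bursting = True
        if bursting:
            # Fast time-scale: relaxation oscillations within a burst.
            dv = (v - v ** 3 / 3.0 - w + 0.8) / 0.05
            dw = (v + 0.7 - 0.8 * w) / 0.25
            v += dt * dv + np.sqrt(dt) * 0.1 * rng.standard_normal()
            w += dt * dw
            x[k] = v
        else:
            v, w, x[k] = 0.1, 0.0, 0.0   # near-isoelectric suppression
        # Slow time-scale: bursting consumes the resource, quiescence restores it.
        ds = (1.0 - s[k - 1]) / tau_s - deplete * abs(x[k]) * s[k - 1]
        s[k] = min(max(s[k - 1] + dt * ds, 0.0), 1.0)
    return x, s
```

The hysteretic switch captures the sense in which such dynamics are bistable: over an intermediate range of the slow variable, both bursting and suppression are possible, and which one occurs depends on the recent history of the system.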

The second part of the dissertation takes a different, more abstract approach to the question of multiple time-scale brain dynamics. Here, we consider how such dynamics might contribute to learning in brain and brain-like networks, so as to enable neural information processing and subsequent computation. In particular, we consider the problem of optimizing information-theoretic quantities in recurrent neural networks via synaptic plasticity. In a recurrent network, this problem is challenging because the optimal modification of any one synapse (connection) depends nontrivially on the state of the entire network. Such global learning is computationally expensive and, moreover, biologically implausible. We overcome these issues by deriving a local learning rule, one that modifies synapses based only on the activity of neighboring neurons. To do this, we augment from first principles the dynamics of each neuron with several auxiliary variables, each evolving at a different time-scale. These variables support the estimation of global information-based quantities from local neuronal activity. Although the synthesized dynamics provide only an approximation of the true solution, they nonetheless prove highly effective in enabling the learning of representations of afferent input. We then generalize this framework in two ways: first to allow goal-directed reinforcement learning, and then to allow information-based neurogenesis, the creation of new neurons within a network according to task needs. Finally, the proposed learning dynamics are demonstrated on a range of canonical tasks, as well as on a new application domain: the exogenous control of neural activity.
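The following sketch illustrates the general idea of a local plasticity rule supported by auxiliary variables at different time-scales. It is an assumption-laden stand-in, not the rule derived in the dissertation: the class name, the rate dynamics, the choice of running mean and variance as auxiliary variables, and the Oja-style stabilization are all illustrative, and the information-based objective is only crudely approximated.

```python
import numpy as np

# Generic sketch (illustrative only) of a local, multi-time-scale plasticity
# rule for a recurrent rate network. Each neuron carries auxiliary variables,
# each a running statistic with its own time constant, so that the weight
# update needs only pre- and postsynaptic signals.

class LocalInfoLearner:
    def __init__(self, n, eta=1e-3, tau_fast=10.0, tau_slow=1000.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((n, n))   # recurrent weights
        np.fill_diagonal(self.W, 0.0)                # no self-connections
        self.r = np.zeros(n)     # firing rates
        self.mu = np.zeros(n)    # auxiliary variable 1: running mean (fast)
        self.var = np.ones(n)    # auxiliary variable 2: running variance (slow)
        self.eta = eta
        self.a_fast, self.a_slow = 1.0 / tau_fast, 1.0 / tau_slow

    def step(self, u):
        # Rate dynamics: leaky integration of recurrent plus afferent input u.
        self.r += 0.1 * (-self.r + np.tanh(self.W @ self.r + u))
        # Each neuron tracks its own statistics at two different time-scales;
        # these local estimates stand in for quantities that would otherwise
        # require global knowledge of the network state.
        self.mu += self.a_fast * (self.r - self.mu)
        self.var += self.a_slow * ((self.r - self.mu) ** 2 - self.var)
        # Local update: Hebbian term on mean-centered, variance-normalized
        # postsynaptic activity, with an Oja-style decay for stability. Each
        # synapse W[i, j] sees only its presynaptic rate r[j] and neuron i's
        # own signals, so the rule is local by construction.
        post = (self.r - self.mu) / np.sqrt(self.var + 1e-6)
        dW = np.outer(post, self.r) - (post ** 2)[:, None] * self.W
        self.W += self.eta * dW
        np.fill_diagonal(self.W, 0.0)
        return self.r

# Hypothetical usage: drive the network with a structured input stream.
net = LocalInfoLearner(n=50)
rng = np.random.default_rng(1)
for t in range(5000):
    u = 0.5 * np.sin(0.01 * t) + 0.1 * rng.standard_normal(50)
    net.step(u)
```

The two time constants mirror the abstract's description: fast auxiliary variables track moment-to-moment statistics, while slow ones accumulate the longer-horizon estimates needed to approximate global, information-based quantities from purely local activity.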

Language

English (en)

Chair

ShiNung Ching

Committee Members

Ayan Chakrabarti, Zachary Feinstein, Jr-Shin Li, Shen Zeng

Comments

Permanent URL: https://doi.org/10.7936/0mvk-m175

Available for download on Monday, August 15, 2118
