Date of Award
Doctor of Philosophy (PhD)
In the brain, neurons (brain cells) produce electrical impulses, or spikes, that are thought to be the substrate of information processing and computation. Through processes that remain enigmatic, these spikes are ultimately decoded into perceptions and actions. The nature of this encoding and decoding is one of the most enduring questions in theoretical neuroscience. In other words, what are the specific functions enacted by neural circuits through their biophysics and dynamics? This thesis examines the dynamics of neural networks from the perspective of control theory and engineering. The pivotal concept is that of the normative synthesis of neural circuits, wherein neural dynamics are derived from fundamental mathematical objectives. Emergent properties of the synthesized circuits thus constitute a hypothesis for how actual networks in the brain might achieve the functions in question. Several sub-problems are considered. First, we pose an optimization problem to find the dynamics of neural networks whose spike trains, generated over a period of time, drive a linear dynamical system along a prescribed trajectory, a canonical problem in control engineering. It turns out that this problem can be solved by a recurrent spiking network with integrate-and-fire dynamics. The network amounts to an efficient, event-based controller in which each neuron (node) contributes spikes only when doing so reduces the overall cost. We then turn our attention to another classical control problem: state transfer. We introduce a distributed, multi-layer spiking network that enacts a minimum-energy control policy for this problem. We then generalize this approach to enact a classical linear quadratic regulator (LQR) control policy with a Kalman filter, thus building networks that achieve another highly prevalent control-theoretic construct.
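For orientation, the centralized policy that such networks distribute can be stated compactly. The following is a minimal textbook sketch of a discrete-time LQR gain computed by backward Riccati iteration; the double-integrator plant and cost weights are illustrative assumptions, and this is not the thesis's spiking-network realization.

```python
import numpy as np

def lqr_gain(A, B, Q, R, horizon=200):
    """Iterate the Riccati difference equation toward steady state and
    return the feedback gain K, so that u = -K x."""
    P = Q.copy()
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy discretized double-integrator plant (position, velocity), one input.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)          # penalize state deviation
R = np.array([[1.0]])  # penalize control effort

K = lqr_gain(A, B, Q, R)

# Closed-loop simulation from an initial offset: the state is driven
# toward the origin under u = -K x.
x = np.array([[1.0], [0.0]])
for _ in range(100):
    x = (A - B @ K) @ x
print(np.linalg.norm(x))  # small residual norm
```

In the event-based networks described above, this continuous feedback signal would instead be approximated by sparse spikes, each emitted only when it lowers the control cost.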
While the above results are restricted to linear systems, we also show how an offline nonlinear control policy can be distributed onto a spiking neural network. Finally, we study iterative learning control, an adaptive control method, and show how it can be integrated into a spiking network through the introduction of a slow time-scale. For each of the above synthesis problems, the networks display emergent properties that are compatible with known features of motor cortical networks. Hence, our results show that classical control architectures, including planning, feedback control, estimation, and learning, could indeed be enacted by a single recurrent spiking network with biophysically plausible dynamics. Further, from an engineering perspective, these network realizations provide a way to achieve distributed control over networks, which may confer robustness properties beyond those of traditional centralized control designs.
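The trial-to-trial structure of iterative learning control can be illustrated independently of any neural implementation. Below is a minimal generic ILC sketch on an assumed scalar plant: a feedforward input is refined across repeated trials using the previous trial's tracking error. The plant, learning gain, and reference are illustrative choices, not the thesis's network model.

```python
import numpy as np

T = 50                                    # samples per trial
a, b = 0.3, 0.5                           # plant: x[t+1] = a*x[t] + b*u[t], y = x
ref = np.sin(np.linspace(0.0, np.pi, T))  # reference trajectory (starts at 0)

def run_trial(u):
    """Simulate one trial from rest and return the output trajectory."""
    x = 0.0
    y = np.zeros(T)
    for t in range(T):
        y[t] = x
        x = a * x + b * u[t]
    return y

u = np.zeros(T)      # feedforward input, refined across trials
gain = 1.0 / b       # learning gain: inverts the first Markov parameter
errors = []
for trial in range(20):
    e = ref - run_trial(u)
    errors.append(np.linalg.norm(e))
    # y[t+1] responds to u[t], so apply the error shifted one step ahead
    u = u + gain * np.append(e[1:], 0.0)

print(errors[0], errors[-1])  # tracking error shrinks across trials
```

The slow time-scale mentioned above plays the role of the trial index here: the fast dynamics execute a movement, while the slow update adjusts the feedforward drive between repetitions.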
Jr-Shin Li, Shen Zeng, Shantanu Chakrabartty, Carlos Ponce,