ORCID

http://orcid.org/0000-0001-8029-267X

Date of Award

Summer 8-15-2021

Author's School

McKelvey School of Engineering

Author's Department

Electrical & Systems Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

As computation increasingly moves from the cloud to the source of data collection, there is a growing demand for specialized machine learning algorithms that can perform learning and inference at the edge in energy- and resource-constrained environments. In this regard, we can take inspiration from small biological systems like insect brains, which exhibit high energy efficiency within a small form factor and show superior cognitive performance using fewer, coarser neural operations (action potentials, or spikes) than the high-precision floating-point operations used in deep learning platforms. Attempts at bridging this gap using neuromorphic hardware have produced silicon brains that are orders of magnitude less efficient in both energy dissipation and performance. This is because neuromorphic machine learning (ML) algorithms are traditionally built bottom-up, starting with neuron models that mimic the response of biological neurons and connecting them together to form a network. Neural responses and weight parameters are therefore not optimized with respect to any system objective, and it is not evident how individual spikes and the associated population dynamics relate to a network objective. Conventional ML algorithms, on the other hand, follow a top-down synthesis approach, starting from a system objective (which usually models only task efficiency) and reducing the problem to the model of a non-spiking neuron with non-local updates and little or no control over the population dynamics. I propose that a reconciliation of the two approaches may be key to designing scalable spiking neural networks that optimize for both energy and task efficiency under realistic physical constraints, while enabling spike-based encoding and learning based on local updates within an energy-based framework, as in traditional ML models.

To this end, I first present a neuron model implementing a mapping based on polynomial growth transforms, which allows independent control over spike shapes and transient firing statistics. I show how spike responses are generated as a result of constraint violation while minimizing a physically plausible energy functional involving a continuous-valued neural variable that represents the local power dissipation in a neuron. I then show how the framework can be extended to coupled neurons in a network by remapping the synaptic interactions of a standard spiking network. I show how the network can be designed to perform a limited amount of learning in an energy-efficient manner, even without synaptic adaptation, through appropriate choices of network structure and parameters: through spiking SVMs that learn to allocate switching energy to neurons that are more important for classification, and through spiking associative memory networks that learn to modulate their responses based on global activity. Lastly, I describe a backpropagation-less learning framework for synaptic adaptation in which weight parameters are optimized with respect to a network-level loss function that represents spiking activity across the network, but which produces updates that are local. I show how the approach can be used for unsupervised and supervised learning such that minimizing the training error is equivalent to minimizing the network-level spiking activity. I build on this framework to introduce end-to-end spiking neural network (SNN) architectures and demonstrate their applicability to energy- and resource-efficient learning on a benchmark dataset.
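The abstract does not reproduce the growth-transform dynamics themselves; the Python sketch below only illustrates the general idea of a polynomial growth-transform update that minimizes an energy functional while keeping each neural variable bounded. The quadratic energy functional H(v) = ½vᵀQv − bᵀv, the coupling matrix Q, the input b, and the constant lam are hypothetical stand-ins chosen for this example, not the dissertation's actual model; the bound |v_i| ≤ 1 only hints at the constraint whose violation would trigger a spike in the full framework.

```python
import numpy as np

# Sketch of a polynomial growth-transform update for bounded neural variables
# v_i in [-1, 1], minimizing an assumed quadratic energy functional
#     H(v) = 0.5 * v^T Q v - b^T v.
# The update  v_i <- (lam * v_i - dH/dv_i) / (lam - v_i * dH/dv_i)  decreases H
# while keeping each v_i inside its bounds; its fixed points satisfy either
# dH/dv_i = 0 (interior optimum) or |v_i| = 1 (constraint boundary).

def grad_H(v, Q, b):
    """Gradient of the assumed quadratic energy functional."""
    return Q @ v - b

def growth_transform_step(v, Q, b, lam=10.0):
    """One growth-transform update; lam is chosen large enough that the
    denominator stays positive for all |v_i| <= 1."""
    g = grad_H(v, Q, b)
    return (lam * v - g) / (lam - v * g)

# Toy example: two coupled variables driven by an external input b.
Q = np.array([[1.0, 0.3],
              [0.3, 1.0]])   # symmetric coupling (stand-in for synaptic terms)
b = np.array([0.5, -0.3])    # external input / bias
v = np.zeros(2)

for _ in range(200):
    v = growth_transform_step(v, Q, b)

print("fixed point:", v)                          # stays within [-1, 1]
print("unconstrained optimum:", np.linalg.solve(Q, b))
```

If b is made large enough that the unconstrained optimum falls outside [-1, 1], the corresponding variable converges to the boundary instead; in the dissertation's framework it is this kind of constraint violation that gives rise to a spike.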

Language

English (en)

Chair

Shantanu Chakrabartty

Committee Members

ShiNung Ching, Carlos Ponce, Baranidharan Raman, Shen Zeng
