Abstract

Synthesizing control policies that preserve the safety of autonomous systems remains an open challenge. Toward that goal, control barrier functions (CBFs) have been developed as mathematical constructs that can be used in real time to correct safety-violating nominal actions into ones that preserve the safety of control systems. However, synthesizing CBFs with correct-by-construction methods has not been scalable. Instead, recent research has proposed data-driven approaches for learning CBFs in the form of neural networks. Such approaches face two main challenges: (1) labeling states as safe or unsafe requires knowing which states lie in the backward reachable set of the failure set, the true dynamics-dependent unsafe set, and (2) for systems with high-dimensional observations, such as images and point clouds, training neural observation-based CBFs requires enormous amounts of data, which are expensive to obtain in robotic domains. We tackle the first challenge by using inverse constraint learning to infer, from expert trajectories, a neural classifier that defines the backward reachable set, and we use it to label sampled states. This method outperforms baselines and performs comparably to a CBF trained with ground-truth labels in four environments. We tackle the second challenge by using existing vision models pre-trained on large and diverse datasets as frozen perception backbones, on top of which latent dynamics models and neural observation-based CBFs are trained. Our experimental results indicate that the resulting filters are competitive with those that have access to the ground-truth state.
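The abstract describes CBFs as run-time filters that minimally adjust safety-violating nominal actions. A toy sketch of this filtering idea, assuming a hypothetical one-dimensional single-integrator system with a hand-written barrier (not the learned neural CBFs studied in this thesis):

```python
# Minimal CBF safety-filter sketch (illustrative assumption: a 1-D
# single integrator x' = u with safe set {x <= x_max} and barrier
# h(x) = x_max - x; names and gains are hypothetical).
def cbf_filter(x, u_nom, x_max=1.0, alpha=2.0):
    """Return the control closest to u_nom satisfying the CBF condition
    dh/dt + alpha * h(x) >= 0.  Here dh/dt = -u, so the condition
    reduces to the bound u <= alpha * (x_max - x)."""
    u_bound = alpha * (x_max - x)
    # Only intervene when the nominal action violates the bound.
    return min(u_nom, u_bound)
```

For higher-dimensional systems the same minimal-modification step is typically posed as a quadratic program over the control input rather than a scalar clamp.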

Committee Chair

Hussein Sibai

Committee Members

Andrew Clark, Nathan Jacobs

Degree

Master of Science (MS)

Author's Department

Computer Science & Engineering

Author's School

McKelvey School of Engineering

Document Type

Thesis

Date of Award

Spring 5-7-2025

Language

English (en)

Author's ORCID

https://orcid.org/0009-0006-7647-575X
