Abstract

AI is being deployed broadly, from conventional computing systems such as IoT devices to more advanced agentic systems. It is shifting from being a specialized component responsible for specific functions to becoming the core of agentic systems, where it drives autonomous decision-making and task execution. These AI-enabled systems bring tremendous benefits. For example, large language models can interpret user intent, select appropriate tools, and access data to complete tasks with minimal human guidance. However, they also introduce new security, privacy, and safety risks. These risks arise not only from the models themselves but also from the broader system design and integration. Because they can span multiple system layers, such risks are difficult to contain and may affect the entire system. This complexity makes securing such systems particularly challenging, especially without compromising performance or functionality. We argue that addressing these challenges requires a holistic view of the system and an adaptation of system security principles to the layered nature of AI-enabled systems, reducing risks at each layer and limiting their propagation. In this dissertation, we focus on securing both conventional and agentic AI-enabled systems by first measuring risks and then mitigating them through the design of secure system architectures. For conventional IoT systems, we identify the lack of adequate security protections and the prevalence of logic vulnerabilities in firmware that can expose on-device AI assets. We address this by developing a system-level intellectual property protection mechanism that enables secure model execution. Agentic systems, on the other hand, pose unique challenges due to their dynamic and often unpredictable behavior. Through measurements of real-world platforms such as OpenAI’s GPTs, we identify security and privacy issues, including insecure tool usage and data exposure. To mitigate these risks, we introduce execution isolation and access control mechanisms that support safe tool use and data sharing. We also propose an automated permission management framework that learns user preferences to assist users in making safer permission decisions. Although new challenges continue to emerge as AI grows increasingly widespread, autonomous, and interconnected, we believe this work provides a strong foundation for securely developing and deploying AI-enabled systems.

Degree

Doctor of Philosophy (PhD)

Author's Department

Computer Science & Engineering

Author's School

McKelvey School of Engineering

Document Type

Dissertation

Date of Award

5-9-2025

Language

English (en)
