Abstract

Graph Neural Networks (GNNs) are becoming increasingly popular, with their applications expanding across diverse domains. As graph data continues to grow in scale, with more nodes and edges and higher embedding dimensions, standardized libraries such as DGL and PyG have been developed to facilitate GNN computation. However, with the rapid increase in the number of processor cores and the evolution of multi-core architectures, these libraries often show poor scalability and fail to execute GNN inference efficiently on the latest multi-core systems, particularly those with upwards of a hundred cores. To address this limitation, we present FGI, a Fast GNN Inference system for large-scale graph data (over 100,000 vertices). FGI employs different parallelization strategies optimized for diverse graph structures, maximizing the utilization of multi-level cache hierarchies in multi-core systems. We evaluate a number of GNN models with FGI on a 128-core AMD EPYC system. Compared to state-of-the-art libraries, FGI achieves up to 3.35× and 3.20× speedups over DGL for GCN and GraphSAGE respectively, and even greater improvements over PyG, across a range of large-scale, high-dimensional graph datasets with diverse structural properties.
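To illustrate the workload the abstract describes, the following is a minimal sketch of a single GCN layer in NumPy. It is not the FGI implementation or the DGL/PyG API; it only shows the sparse neighbor-aggregation step (here written as a dense matrix product, A_hat @ H, for clarity) that dominates GNN inference time and that parallel GNN systems must optimize. The graph, feature dimensions, and function names are illustrative assumptions.

```python
import numpy as np

def normalize_adjacency(A: np.ndarray) -> np.ndarray:
    """Return D^{-1/2} (A + I) D^{-1/2}: symmetric normalization
    with self-loops, as used by GCN (dense here for illustration)."""
    A_self = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_self.sum(axis=1))
    return A_self * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_hat: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN layer: H' = ReLU(A_hat @ H @ W).
    The aggregation (A_hat @ H) is the memory-bound, irregular-access
    step whose parallelization determines multi-core scalability."""
    return np.maximum(A_hat @ H @ W, 0.0)

# Tiny 4-node path graph as a stand-in for a large-scale input.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))   # node features, embedding dim 8
W = rng.standard_normal((8, 4))   # layer weights
H_next = gcn_layer(normalize_adjacency(A), H, W)
print(H_next.shape)  # (4, 4)
```

In production systems the adjacency matrix is stored in a sparse format (e.g. CSR), so the aggregation becomes a sparse-dense matrix multiply whose memory-access pattern depends on the graph structure, which is why a single parallelization strategy does not fit all graphs.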

Committee Chair

Roger Chamberlain

Committee Members

Roger Chamberlain, Chris Gill, Michael Hall, Chenfeng Zhao

Degree

Master of Science (MS)

Author's Department

Electrical & Systems Engineering

Author's School

McKelvey School of Engineering

Document Type

Thesis

Date of Award

Summer 8-11-2025

Language

English (en)

Available for download on Wednesday, February 11, 2026
