Date of Award
Summer 8-11-2025
Degree Name
Master of Science (MS)
Degree Type
Thesis
Abstract
Graph Neural Networks (GNNs) are becoming increasingly popular, with their applications expanding across diverse domains. As the scale of graph data continues to grow, including larger numbers of nodes, edges, and higher embedding dimensions, standardized libraries such as DGL and PyG have been developed to facilitate GNN computation. However, with the rapid increase in the number of processor cores and the evolution of multi-core architectures, these libraries often show poor scalability and fail to execute GNN inference efficiently on the latest multi-core systems, particularly those with upwards of a hundred cores. To address this limitation, we present FGI, a Fast GNN Inference system for large-scale graph data (over 100,000 vertices). FGI employs different parallelization strategies optimized for diverse graph structures, maximizing the utilization of multi-level cache hierarchies in multi-core systems. We evaluate a number of GNN models with FGI on a 128-core AMD EPYC system. Compared to state-of-the-art libraries, FGI achieves up to 3.35× and 3.20× speedups over DGL for GCN and GraphSAGE, respectively, and even greater improvements over PyG, across a range of large-scale, high-dimensional graph datasets with diverse structural properties.
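For context, the GCN inference workload that the abstract refers to computes, per layer, a normalized neighborhood aggregation followed by a linear transform and a nonlinearity. The following is a minimal dense NumPy sketch of one such layer for illustration only; it is not FGI's implementation, and the function name `gcn_layer` is our own.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    adj:    (n, n) binary adjacency matrix
    feats:  (n, d_in) node feature matrix
    weight: (d_in, d_out) layer weight matrix
    """
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                    # add self-loops
    deg = a_hat.sum(axis=1)                    # degrees of A + I
    d_inv_sqrt = np.diag(deg ** -0.5)          # D^-1/2
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt   # symmetric normalization
    return np.maximum(a_norm @ feats @ weight, 0.0)  # ReLU activation

# Tiny 3-node path graph, 4-dim input features, 2-dim output.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 4))
weight = rng.standard_normal((4, 2))
out = gcn_layer(adj, feats, weight)
print(out.shape)  # (3, 2)
```

At the scales the thesis targets (over 100,000 vertices, high embedding dimensions), the aggregation step becomes a large sparse-matrix product whose memory-access pattern depends on graph structure, which is why cache-aware parallelization matters on many-core systems.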
Language
English (en)
Chair
Roger Chamberlain
Committee Members
Roger Chamberlain, Chris Gill, Michael Hall, Chenfeng Zhao
Included in
Computer and Systems Architecture Commons, Electrical and Electronics Commons, Hardware Systems Commons, Other Electrical and Computer Engineering Commons