Abstract
Graph Neural Networks (GNNs) are becoming increasingly popular, with their applications expanding across diverse domains. As the scale of graph data continues to grow, including larger numbers of nodes and edges and higher embedding dimensions, standardized libraries such as DGL and PyG have been developed to facilitate GNN computation. However, with the rapid increase in the number of processor cores and the evolution of multi-core architectures, these libraries often show poor scalability and fail to execute GNN inference efficiently on the latest multi-core systems, particularly those with upwards of a hundred cores. To address this limitation, we present FGI, a Fast GNN Inference system for large-scale graph data (over 100,000 vertices). FGI employs different parallelization strategies optimized for diverse graph structures, maximizing the utilization of multi-level cache hierarchies in multi-core systems. We evaluate a number of GNN models with FGI on a 128-core AMD EPYC system. Compared to state-of-the-art libraries, FGI achieves up to 3.35× and 3.20× speedups over DGL for GCN and GraphSAGE, respectively, and even greater improvements over PyG, across a range of large-scale, high-dimensional graph datasets with diverse structural properties.
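To make the workload concrete, a single GCN inference layer reduces to a sparse-matrix–dense-matrix multiply (SpMM) over the normalized adjacency, followed by a dense feature transform. The sketch below is an illustration only, not the FGI implementation; the toy graph, features, and weights are made up.

```python
# Minimal sketch of one GCN layer's inference step:
# H = ReLU( D^{-1/2} (A + I) D^{-1/2} X W )
import numpy as np
import scipy.sparse as sp

def gcn_layer(adj: sp.csr_matrix, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN layer: symmetric normalization, SpMM, dense transform, ReLU."""
    n = adj.shape[0]
    a_hat = adj + sp.identity(n, format="csr")     # add self-loops
    deg = np.asarray(a_hat.sum(axis=1)).ravel()    # degrees of A + I
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # normalized adjacency
    return np.maximum(a_norm @ (X @ W), 0.0)       # SpMM + GEMM + ReLU

# Toy 3-node path graph with 2-dim input features and 2 output channels.
adj = sp.csr_matrix(np.array([[0, 1, 0],
                              [1, 0, 1],
                              [0, 1, 0]], dtype=np.float64))
X = np.eye(3)[:, :2]     # (3, 2) feature matrix
W = np.ones((2, 2))      # (2, 2) weight matrix
H = gcn_layer(adj, X, W)
```

The SpMM step (`a_norm @ ...`) dominates inference cost on large graphs and is the part whose parallelization and cache behavior the thesis targets.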
Committee Chair
Roger Chamberlain
Committee Members
Roger Chamberlain, Chris Gill, Michael Hall, Chenfeng Zhao
Degree
Master of Science (MS)
Author's Department
Electrical & Systems Engineering
Document Type
Thesis
Date of Award
Summer 8-11-2025
Language
English (en)
DOI
https://doi.org/10.7936/gsh8-bb23
Recommended Citation
Ji, Binglin, "Accelerating GNN Inference on Multi-Core Systems" (2025). McKelvey School of Engineering Theses & Dissertations. 1260.
The definitive version is available at https://doi.org/10.7936/gsh8-bb23
Included in
Computer and Systems Architecture Commons, Electrical and Electronics Commons, Hardware Systems Commons, Other Electrical and Computer Engineering Commons