High-speed packet classification is crucial to the implementation of several advanced network services and protocols; many QoS implementations, active networking platforms, and security devices (such as firewalls and intrusion-detection systems) require it. But performing classification on multiple fields, at the speed of modern networks, is known to be a difficult problem. The Recursive Flow Classification (RFC) algorithm described by Gupta and McKeown performs classification very quickly, but can require excessive storage when using thousands of rules. This paper studies a compressed representation for the tables used in RFC, trading some memory accesses for space. The compression's efficiency can be improved by rearranging rows and columns of the tables. Finding a near-optimal rearrangement can be transformed into a Traveling Salesman Problem, to which certain approximation algorithms can be applied. In evaluating the compressed representation of tables, we also study the effects of choosing different reduction trees in RFC. We evaluate these methods using a real-world filter database with 159 rules. Results show a reduction in the size of the cross-product tables of 61.6% in the median case; in some cases their size is reduced by 87% or more. Furthermore, experimental evidence suggests that larger databases may be more compressible.
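To make the space/time trade-off concrete, the following is a minimal sketch of one way a cross-product table could be compressed by storing each distinct row only once behind a small indirection array, so that a lookup costs one extra memory access. This is an illustration of the general idea only, not the report's actual scheme; the names `compress_table` and `lookup` are hypothetical.

```python
def compress_table(table):
    """Deduplicate identical rows; return (row_index, unique_rows).

    Illustrative sketch only: RFC cross-product tables often repeat
    rows, so storing each distinct row once can shrink the table.
    """
    unique_rows = []
    seen = {}          # row contents -> position in unique_rows
    row_index = []
    for row in table:
        key = tuple(row)
        if key not in seen:
            seen[key] = len(unique_rows)
            unique_rows.append(list(row))
        row_index.append(seen[key])
    return row_index, unique_rows

def lookup(row_index, unique_rows, r, c):
    """Two memory accesses instead of one: indirect through row_index."""
    return unique_rows[row_index[r]][c]

# A toy table with repeated rows compresses from 4 rows to 2:
table = [[0, 1, 1], [0, 1, 1], [2, 2, 0], [0, 1, 1]]
ri, ur = compress_table(table)
# ri == [0, 0, 1, 0]; ur == [[0, 1, 1], [2, 2, 0]]
assert lookup(ri, ur, 3, 2) == table[3][2]
```

Rearranging rows so that identical or similar rows group together (the step the report maps to a Traveling Salesman Problem) would further improve how well such a representation compresses.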
Spitznagel, Edward W., "Compressed Data Structures for Recursive Flow Classification," Report Number WUCSE-2003-65 (2003).