Grant/Award Number and Agency
This work was supported by NSF grant CNS-1527510.
National Science Foundation
Analyzing big data is a task encountered across disciplines. Addressing the challenges inherent in dealing with big data necessitates solutions that cover its three defining properties: volume, variety, and velocity. What is less well understood, however, is the treatment the data must undergo even before any analysis can begin. Specifically, a non-trivial amount of time and resources is often spent retrieving and preprocessing big data. This problem is known collectively as data integration, a term frequently used for the general problem of taking data in some initial form and transforming it into a desired form. Examples include rearranging fields, changing the form of expression of one or more fields, altering the boundary notation of records and/or fields, encrypting or decrypting records and/or fields, and parsing non-record data and organizing it into a record-oriented form. In this work, we present our progress in creating a benchmarking suite that characterizes a diverse set of data integration applications.
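To make the kinds of transformations listed above concrete, the following is a minimal sketch (not taken from the benchmark suite itself) of one such data integration task: reordering fields and altering the field delimiter of record-oriented text. The function name and input data are illustrative assumptions.

```python
# Hypothetical illustration of a data integration task: rearrange fields
# and change the field boundary notation (semicolon -> comma).
import csv
import io

def integrate(raw: str, field_order: list) -> str:
    """Read semicolon-delimited records, reorder fields per field_order,
    and emit comma-delimited records."""
    out = io.StringIO()
    reader = csv.reader(io.StringIO(raw), delimiter=";")
    writer = csv.writer(out, delimiter=",", lineterminator="\n")
    for record in reader:
        writer.writerow([record[i] for i in field_order])
    return out.getvalue()

raw = "Smith;John;1970\nDoe;Jane;1985\n"
print(integrate(raw, [1, 0, 2]))
```

Even this simple example shows why such preprocessing can consume non-trivial time at scale: every record must be parsed, restructured, and re-serialized before analysis proper can begin.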
Creative Commons License
This work is licensed under a Creative Commons CC0 1.0 Universal Public Domain Dedication.
Cabrera, Anthony M.; Faber, Clayton; Cepeda, Kyle; Deber, Robert; Epstein, Cooper; Zheng, Jason; Cytron, Ron K.; and Chamberlain, Roger, "Data Integration Benchmark Suite v1" (2018). Digital Research Materials (Data & Supplemental files). 9.