TY - GEN
T1 - Benchmarking SciDB data import on HPC systems
AU - Samsi, Siddharth
AU - Brattain, Laura
AU - Arcand, William
AU - Bestor, David
AU - Bergeron, Bill
AU - Byun, Chansup
AU - Gadepally, Vijay
AU - Hubbell, Matthew
AU - Jones, Michael
AU - Klein, Anna
AU - Michaleas, Peter
AU - Milechin, Lauren
AU - Mullen, Julie
AU - Prout, Andrew
AU - Rosa, Antonio
AU - Yee, Charles
AU - Kepner, Jeremy
AU - Reuther, Albert
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/11/28
Y1 - 2016/11/28
N2 - SciDB is a scalable, computational database management system that uses an array model for data storage. The array data model of SciDB makes it ideally suited for storing and managing large amounts of imaging data. SciDB is designed to support advanced in-database analytics, thus reducing the need to extract data for analysis. It is designed to be massively parallel and can run on commodity hardware in a high performance computing (HPC) environment. In this paper, we present the performance of SciDB using simulated image data. The Dynamic Distributed Dimensional Data Model (D4M) software is used to implement the benchmark on a cluster running the MIT SuperCloud software stack. A peak performance of 2.2M database inserts per second was achieved on a single node of this system. We also show that SciDB and the D4M toolbox provide more efficient ways to access random sub-volumes of massive datasets compared to the traditional approach of reading volumetric data from individual files. This work describes the D4M and SciDB tools we developed and presents the initial performance results. This performance was achieved by using parallel inserts, in-database merging of arrays, and supercomputing techniques such as distributed arrays and single-program-multiple-data programming.
AB - SciDB is a scalable, computational database management system that uses an array model for data storage. The array data model of SciDB makes it ideally suited for storing and managing large amounts of imaging data. SciDB is designed to support advanced in-database analytics, thus reducing the need to extract data for analysis. It is designed to be massively parallel and can run on commodity hardware in a high performance computing (HPC) environment. In this paper, we present the performance of SciDB using simulated image data. The Dynamic Distributed Dimensional Data Model (D4M) software is used to implement the benchmark on a cluster running the MIT SuperCloud software stack. A peak performance of 2.2M database inserts per second was achieved on a single node of this system. We also show that SciDB and the D4M toolbox provide more efficient ways to access random sub-volumes of massive datasets compared to the traditional approach of reading volumetric data from individual files. This work describes the D4M and SciDB tools we developed and presents the initial performance results. This performance was achieved by using parallel inserts, in-database merging of arrays, and supercomputing techniques such as distributed arrays and single-program-multiple-data programming.
UR - http://www.scopus.com/inward/record.url?scp=85007125165&partnerID=8YFLogxK
U2 - 10.1109/HPEC.2016.7761617
DO - 10.1109/HPEC.2016.7761617
M3 - Conference contribution
AN - SCOPUS:85007125165
T3 - 2016 IEEE High Performance Extreme Computing Conference, HPEC 2016
BT - 2016 IEEE High Performance Extreme Computing Conference, HPEC 2016
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 13 September 2016 through 15 September 2016
ER -