Spectral effects of large matrices from oil reservoir simulators on performance of scalable direct solvers

Duran A. , Tunçel M.

SPE Large Scale Computing and Big Data Challenges in Reservoir Simulation Conference and Exhibition 2014, İstanbul, Türkiye, 15-17 September 2014, pp. 140-149



It is important to estimate the elapsed time to solve large sparse linear systems in time-restricted, real-life decision-making applications such as oil and gas reservoir simulators. Challenging matrices should be identified and handled separately because they may lead to performance bottlenecks. Therefore, we examine the spectral effects of large matrices on the performance of scalable direct solvers by using eigenvalues. In this work, we check whether there is a relationship between the eigenvalue distribution of a matrix and the performance of the solver, and we examine the eigenvalue distribution of various sparse matrices. Ideally, one would compute all eigenvalues in order to obtain the distribution graph of the eigenvalues; however, computing the full spectrum is very expensive. Therefore, Gerschgorin's theorem may be used to bound the spectrum of square matrices. The behavior of the Gerschgorin circles, such as being disjoint, overlapping, or clustered, may give clues about the distribution of the eigenvalues and the performance of the solver for that matrix. In this paper, we consider a portfolio of test matrices that includes randomly populated sparse matrices and various patterned matrices coming from reservoir modeling, ranging from single-porosity single-permeability to dual-porosity dual-permeability models (see [10]). We examined our modified HELM2D03LOWER_20K matrix and the EMILIA_923 matrix from the University of Florida sparse matrix collection (see [17]), in addition to the patterned matrices from a 3-phase black-oil model and a 7-component EOS model.
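The Gerschgorin bound mentioned above is cheap to compute even for large sparse matrices: each disc is centered at a diagonal entry with radius equal to the sum of the absolute off-diagonal entries in that row, and every eigenvalue lies in the union of the discs. A minimal sketch (not from the paper; numpy-based, shown on a small dense matrix for clarity):

```python
import numpy as np

def gerschgorin_discs(A):
    """Return (center, radius) pairs for each Gerschgorin disc of A.

    Each eigenvalue of A lies in at least one disc
    |z - A[i, i]| <= sum_{j != i} |A[i, j]|.
    """
    A = np.asarray(A, dtype=float)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

# Small illustrative symmetric matrix (not one of the paper's test matrices)
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.5],
     [0.0, 0.5, 10.0]]

discs = gerschgorin_discs(A)
# Discs: center 4 radius 1, center 3 radius 1.5, center 10 radius 0.5.
# The third disc is disjoint from the first two, which overlap.

# Sanity check: every eigenvalue lies in the union of the discs.
for lam in np.linalg.eigvalsh(np.asarray(A)):
    assert any(abs(lam - c) <= r + 1e-12 for c, r in discs)
```

For a sparse matrix stored in CSR format, the same radii can be obtained with one pass over the nonzeros, so the disc layout (disjoint, overlapping, or clustered) can be inspected without any eigenvalue computation.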

We define the optimal minimum number of cores as the number of cores that provides the minimum wall-clock time for a given problem size, where a good match occurs between the problem size, the spectral effects of the matrix, and the available resources such as memory, in the presence of communication overhead. We find that the optimal minimum number of cores required depends on the sparsity level and the size of the matrix. As the sparsity level of the matrix decreases and the order of the matrix increases, we expect the optimal minimum number of cores to increase slightly.