Optimizing parallel computation

April 24, 2016 · Case Studies

In computational physics and chemistry, there is often a need to perform linear algebra operations on large matrices. As researchers move to more realistic models, the matrices grow with the model size, slowing the computations down. Parallelizing the work over many processors lets researchers tackle more realistic simulations in a reasonable time frame.

Working with researchers from the Department of Chemistry studying photoinduced reaction dynamics, HPC consultant Alexander Gaenko used memory profiling and a debugger to identify the cause of a crash in a parallel C++/Fortran program. The root cause turned out to be anomalously high memory consumption in a matrix diagonalization procedure, likely due to a bug in an Intel-supplied linear algebra library. Switching to a different implementation of the diagonalization procedure eliminated the crash. The researchers' work is a promising avenue toward next-generation high-efficiency solar cells.