
Skew-Symmetric Matrix Decompositions on Shared-Memory Architectures

Ishna Satyarth*
Department of Computer Science,
Southern Methodist University,
Dallas, USA
[email protected]
  
Chao Yin*
Department of Chemistry,
Southern Methodist University,
Dallas, USA
[email protected]
  
Ruqing G. Xu
NVIDIA,
Santa Clara, USA
[email protected]
  
Devin A. Matthews
Department of Chemistry,
Southern Methodist University,
Dallas, USA
[email protected]

* These authors contributed equally. This work was funded in part by the National Science Foundation (grant CHE-2143725) and the US Department of Energy (grant DE-SC0022893). Computational resources for this research were provided by SMU's O'Donnell Data Science and Research Computing Institute.
Abstract

The factorization of skew-symmetric matrices is a critically understudied area of dense linear algebra (DLA), particularly in comparison to that of symmetric matrices. While some algorithms can be adapted from the symmetric case, their cost can be reduced by exploiting skew-symmetry. A motivating example is the factorization $X = LTL^T$ of a skew-symmetric matrix $X$, which is used in practical applications as a means of determining the determinant of $X$ as the square of the (cheaply computed) Pfaffian of the skew-symmetric tridiagonal matrix $T$, for example in fields such as quantum electronic structure and machine learning. Such applications also often require pivoting in order to improve numerical stability. In this work we explore a combination of known literature algorithms and new algorithms recently derived using formal methods. High-performance parallel CPU implementations are created, leveraging the concept of fusion at multiple levels in order to reduce memory traffic overhead, as well as the BLIS framework which provides high-performance gemm kernels, hierarchical parallelism, and cache blocking. We find that operation fusion and improved use of available bandwidth via parallelization of bandwidth-bound (level-2 BLAS) operations are essential for obtaining high performance, while a concise C++ implementation provides a clear and close connection to the formal derivation process without sacrificing performance.

Index Terms:
Dense linear algebra, matrix factorization, skew-symmetric matrices, tridiagonal matrices, Gauss transforms, Pfaffian

I Introduction

The reduction of a skew-symmetric matrix (i.e. $X^T = -X$) to tridiagonal form is an important technique in the manipulation of such matrices in dense linear algebra (DLA), for example enabling the rapid computation of $\det(X)$, solution of systems of equations $XY = B$, and computation of the matrix inverse $X^{-1}$, among others. As the eigenvalues of a skew-symmetric matrix are purely imaginary, reduction to diagonal form is not possible in real arithmetic, and while reduction to a $2 \times 2$ diagonal-blocked form is possible, tridiagonalization provides a convenient middle ground. Such a factorization can be computed by applying and accumulating Gauss transforms, $X = LTL^T$, or with column pivoting, $PXP^T = LTL^T$, with lower unit-triangular matrix $L$ and permutation matrix $P$, or via Householder reduction $X = QTQ^T$ with orthogonal $Q$. As the former decomposition can be obtained in approximately $n^3/3$ FLOPs for an $n \times n$ matrix $X$ and can utilize efficient level-3 BLAS operations for the majority of the computation [13, 22, 25], we do not further consider the Householder approach. The Pfaffian of the tridiagonal factor $T$ satisfies $\operatorname{Pf}(T)^2 = \det(X)$, with $\operatorname{Pf}(T) = \prod_{i=0}^{\lceil n/2 \rceil - 1} \tau_{2i+1,2i}$. Note that we use uppercase roman letters to denote matrices and submatrices, lowercase roman letters to denote column (sub)vectors, and lowercase Greek letters to refer to scalars. Thus, the computation of the Pfaffian via tridiagonal factorization is a critical kernel in scientific applications working with skew-symmetric matrices, such as in machine learning (Markov random fields [10]), physics (partition functions of Ising spin models [15]), and quantum chemistry/materials science (electronic structure quantum Monte Carlo [3]).
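To make the final step concrete, the following small routine (a minimal sketch of our own, with a hypothetical name and no connection to any library mentioned here) evaluates $\operatorname{Pf}(T)$ from the sub-diagonal of the tridiagonal factor in $\mathcal{O}(n)$ time. Note also that for the pivoted factorization $\operatorname{Pf}(X) = \det(P)\operatorname{Pf}(T)$, although the sign is immaterial when only $\det(X) = \operatorname{Pf}(T)^2$ is required.

#include <vector>
#include <cstddef>

// Pfaffian of a skew-symmetric tridiagonal matrix T of dimension n, given its
// sub-diagonal entries t[i] = tau_{i+1,i} (0-based), i = 0..n-2.
// Pf(T) = prod_i tau_{2i+1,2i}; it vanishes when n is odd.
double pfaffian_tridiag(const std::vector<double>& t)
{
    const std::size_t n = t.size() + 1;
    if (n % 2 != 0) return 0.0;

    double pf = 1.0;
    for (std::size_t i = 0; i + 1 < n; i += 2)
        pf *= t[i]; // tau_{i+1,i} for even i
    return pf;
}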

Previously, the FLAME (Formal Linear Algebra Methods Environment) methodology [9, 6, 16] was used to derive a family of algorithms for $X = LTL^T$ factorization [17]. The basic idea is a goal-oriented approach, where one represents the post-condition $X = LTL^T$ (and other conditions as required) in a partitioned form: the partitioned matrix expression (PME),

\[
\left(\begin{array}{c|c} X_{TL} & \star \\ \hline X_{BL} & X_{BR} \end{array}\right)
=
\left(\begin{array}{c|c} L_{TL} & 0 \\ \hline L_{BL} & L_{BR} \end{array}\right)
\left(\begin{array}{c|c} T_{TL} & -\tau_{BL} e_l e_f^T \\ \hline \tau_{BL} e_f e_l^T & T_{BR} \end{array}\right)
\left(\begin{array}{c|c} L_{TL}^T & L_{BL}^T \\ \hline 0 & L_{BR}^T \end{array}\right)
\]

where $\star$ indicates redundant data (due to the skew-symmetry of $X$), and $e_f$/$e_l$ are column Euclidean unit vectors with a non-zero entry in the first and last position, respectively. For later use, we also define $\bar{E}_f$ ($\bar{E}_l$) as the collection of all column unit vectors except $e_f$ ($e_l$), i.e. the identity matrix with the first (last) column dropped. Essentially, inspection of the PME gives rise to a family of algorithmic invariants, each of which then leads mechanically to a concrete algorithm.

In this paper, we present high-performance parallel shared-memory CPU implementations of this family of algorithms leveraging the concept of fusion at multiple levels in order to reduce memory traffic overhead, as well as the BLIS framework [19, 18] which provides high-performance gemm kernels, hierarchical parallelism, and cache blocking. As in [25], we add basic skew-symmetric BLAS operations as modifications of existing operations, while also re-implementing certain key operations (e.g. skr2) using in-house code. The specific contributions of this paper are:

  • High-performance shared-memory parallel implementation of six skew-symmetric $LTL^T$ factorization algorithms, of which only two have been previously implemented (skew-symmetric Parlett-Reid and Aasen's algorithms). All algorithms include optional partial pivoting, except the "two-step" and blocked left-looking algorithms.

  • Implementation of key skew-symmetric level-2 and level-3 BLAS operations in the BLIS framework.

  • Additional hand-optimized level-2 BLAS implementations, including shared-memory parallelization and operation fusion.

  • Implementation of fused “sandwich product” level-3 operations using the BLIS framework.

  • Algorithmic modifications which expose additional opportunities for operation fusion.

  • An expressive C++ interface for implementing both blocked and unblocked DLA operations which closely mirrors the FLAME notation while enabling extensive compiler optimization.

  • Benchmarking of our own and related implementations on AMD “Milan” EPYC 7763 CPUs.

These contributions impact not only the efficient implementation of $X = LTL^T$ factorization, but also showcase the benefits of our approach, which leverages formal derivation together with expressive APIs and flexible frameworks, for DLA in general.

Algorithm: $[L, T, p] := \text{ltlt\_\{blk,unb\}\_var}(X)$
[Figure body: initialize $L = I$, $T = 0$, $p = 0$ and partition $X$, $L$, $T$ (and $p$) so that $X_{TL}$, $L_{TL}$, and $T_{TL}$ are $0 \times 0$. Some algorithms require extra initialization where noted. While $m(X_{TL}) < m(X) - 1$: re-partition the operands from the $3 \times 3$ (for $p$, $3 \times 1$) form into a finer $5 \times 5$ form exposing the current row/column block; perform the algorithm-specific update steps; then merge the partitioning so that the thick line advances. For most unblocked algorithms the central partition is empty, leading to an effective $4 \times 4$ partitioning; for the unblocked "two-step" algorithm it contains exactly one row/column, so that e.g. $X_{22}$ is a single element $\chi_{22}$.]
Figure 1: Structure of the derived algorithms. Matrices $X$, $L$, and $T$ are repeatedly re-partitioned with progress indicated by movement of the thick line. Individual algorithms share the same structure but differ in the specific initialization and update steps and blocking factor, which determines the size of the central partition and submatrices $X_{22}$, $X_{20}$, etc. The pivot vector $p$ (optional pivoting indicated by brown text) is partitioned conformally to the other matrices, and indicates that in iteration $k$, rows/columns $k$ and $\pi_k + k$ were exchanged. Note that $\pi_0 \equiv 0$.

II Related Work

The first algorithm for (skew-)symmetric matrix tridiagonalization, due to Parlett and Reid [12], can be considered a modification of $LU$ factorization and applies Gauss transforms from both the left and right. It is a right-looking algorithm and, for skew-symmetric matrices, makes use of skew-symmetric rank-2 updates (denoted skr2 [22]), $A := A + (wy^T - yw^T)$. The leading-order asymptotic cost for Parlett-Reid is the same as $LU$ factorization at $2n^3/3$ floating point operations (FLOPs) for an $n \times n$ matrix $X$.
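For concreteness, a single-pass rendering of this update might look like the sketch below (an illustration of ours, not the BLIS interface; the routine name, signature, and absence of blocking or vectorization are our simplifications). Performing the same update as two separate rank-1 calls would instead read and write A twice, which motivates the fused level-2 implementations discussed later in Section IV-B.

// Naive skr2: A := A + (w*y^T - y*w^T), updating only the lower triangle
// of the n x n skew-symmetric matrix A (column-major, leading dimension lda).
void skr2_lower(int n, const double* w, const double* y, double* A, int lda)
{
    for (int j = 0; j < n; j++)
        for (int i = j + 1; i < n; i++)
            A[i + j*lda] += w[i]*y[j] - y[i]*w[j];
}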

Shortly thereafter, Aasen derived a left-looking algorithm [1] which leverages level-2 matrix operations (gemv), while also reducing the asymptotic cost to $n^3/3$ FLOPs. Aasen also considered partial pivoting for improving numerical stability.

Bunch and Kaufman derived an algorithm for complete $LDL^T$ factorization which shares some similarities with Aasen's left-looking algorithm. A blocked version was described by Dongarra et al. [7] and implemented in LAPACK as sytrf. However, care must be taken with the Bunch-Kaufman method to avoid numerical instability [2].

Rozložník et al. [13] proposed a blocked right-looking algorithm which extends Aasen's column-pivoted algorithm to the case of accumulated panel updates and which leverages more efficient level-3 BLAS operations. Their decomposition exploits the lower-Hessenberg structure of $H = LT$. Efficient implementations of this algorithm rely on the gemmt BLAS extension, which updates only the upper or lower half of a general matrix product. If only the unique portions of the trailing matrix are updated in this way, then their algorithm also achieves $n^3/3$ FLOPs.

The Rozložník algorithm is implemented for symmetric tridiagonalization in LAPACK as sytrf_aa [26]. That implementation does not use gemmt but rather performs a blocked trailing update which mixes gemm for off-diagonal blocks and a loop over gemv for diagonal portions. Ballard et al. [4] also implemented a tiled version of the blocked (right-looking) Aasen algorithm and a left-looking blocked variant. Their implementation leverages a high degree of parallelism via dynamic task DAG scheduling, at the cost of a significantly higher total FLOP count due to several factors such as the use of gemm for constructing $H = LT$, tile $LU$ decomposition for pivoting, use of trsm to back out $T$ from $H$, etc.

Wimmer [22] extended the Parlett-Reid algorithm to the case of blocked updates, and also considered the case of a partial tridiagonalization (in which even-numbered columns of $X$ are non-zero below the sub-diagonal), which is sufficient for computation of the Pfaffian. The blocked version uses a rank-$2k$ skew-symmetric update (skr2k), $A := A + (WY^T - YW^T)$. When applied in the partial tridiagonalization case with rank $k$ equal to half the block size, it requires $n^3/3$ FLOPs. In the more general case of full tridiagonalization, $2n^3/3$ FLOPs are required. Notably, only Wimmer has explicitly considered the case of skew-symmetric rather than symmetric matrices.

Wimmer implemented the blocked right-looking algorithm in PFAPACK [23], although it makes use of an unoptimized version of skr2k. Later, Xu et al. [25] re-implemented this algorithm with an optimized skr2k based on gemmt in the Pfaffine library [24].

Right-Looking Algorithm: ltlt_unb_right($X$, first_col)
[Initialization (only if first_col): apply the trailing update $X_{BR} := X_{BR} + (l_{BM} x_{BM}^T - x_{BM} l_{BM}^T)$ (skr2/ger2). Update (per iteration): optionally choose the pivot $\pi_3 := \text{iamax}(\chi_{31}; x_{41})$ and apply $P(\pi_3)$ to the current column, the previously computed columns of $L$, and symmetrically to the trailing part of $X$; set $\tau_{31} := \chi_{31}$ and $l_{43} := x_{41}/\tau_{31}$; apply the trailing rank-2 update $X_{44} := X_{44} + (l_{43} x_{43}^T - x_{43} l_{43}^T)$ (skr2/ger2).]

Left-Looking Algorithm: ltlt_unb_left($X$, first_col)
[Update (per iteration): if first_col or $m(X_{00}) > 0$, subtract from the current column $(\chi_{31}; x_{41})$ the product of the previously computed columns of $L$, the leading part of $T$, and $(l_{10}; 1)$ (gemv-sktri); then optionally pivot as in the right-looking algorithm, set $\tau_{31} := \chi_{31}$, and compute $l_{43} := x_{41}/\tau_{31}$.]
Figure 2: Unblocked algorithms. Brown text indicates optional pivoting. The operator $P$ constructs a permutation matrix from the indicated pivot or column of pivots. The variable first_col indicates whether or not the first column of $L$ is assumed non-zero; when assumed zero, updates from that column are excluded. Execution of these algorithms for the entire factorization uses first_col = false. When executed inside of a blocked algorithm, the value is determined by the caller. In this case, the algorithm may also exit early and updates may be restricted to the leading portion of $X$ (see Fig. 3). Important computational kernels are noted.

III Algorithms

In this section we review the algorithms which were previously derived [17] and are implemented here. We particularly note how the specific features of the various algorithms impact implementation efficiency and optimizations. We also briefly discuss how the initial problem definition, in terms of the PME and choice of algorithmic invariant, affects the resulting algorithm. All algorithms share a common structure, depicted in Fig. 1. The $3 \times 3$ partitioning of the matrices $X$, $L$, and $T$ reflects the initial partitioning of the operands (for most algorithms), while the $5 \times 5$ structure used within the loop update reflects the overlay of two $3 \times 3$ partitionings ("before" and "after") in which the same invariant is required to hold, but which are shifted by some number of rows and columns in order to make progress through the matrix. Importantly, comparison of the conditions implied by the invariant at the beginning ("before") and at the end ("after") of the loop body uniquely determines the update step(s).

Right-Looking Algorithm: ltlt_blk_right($X$)
[Per iteration: factorize the current panel with an unblocked variant, ltlt_unb_var(·, false), stopping at the grayed trailing region and returning the panel columns of $L$ and $T$ (and optionally the pivots $p_2$, $\pi_3$); apply the accumulated pivots to the previously computed columns of $L$; apply the trailing update $(\chi_{33}; x_{43}, X_{44}) \mathrel{-}= L_{\text{panel}} T_{\text{panel}} L_{\text{panel}}^T$ (gemmt-sktri); finally apply the remaining rank-2 update $X_{44} := X_{44} + (l_{43} x_{43}^T - x_{43} l_{43}^T)$ (skr2).]

Left-Looking Algorithm: ltlt_blk_left($X$)
[Per iteration: update the next panel of $X$ by subtracting the product of the previously computed columns of $L$, the leading part of $T$, and those columns transposed (gemm-sktri); then factorize the panel with an unblocked variant, ltlt_unb_var(·, true).]
Figure 3: Blocked algorithms. Brown text indicates optional pivoting. When invoking the unblocked algorithm for the sub-problem, the grayed region indicates where execution should stop, and that updates to this region should not be performed (e.g. in the right-looking unblocked algorithm). The right-looking algorithm allows for pivoting but must use an unblocked left-looking algorithm for the sub-problem, as the updates needed to prepare columns which might be pivoted in from the grayed region are missing. The left-looking algorithm cannot be pivoted as the panel update (gemm-sktri) is applied before the pivots which determine the composition of the panel are selected. Important computational kernels are noted.

III-A Unblocked Algorithms

Fig. 2 depicts the "basic" right- and left-looking unblocked algorithms, which are equivalent to the algorithms of Parlett and Reid, and of Aasen, respectively. The algorithms as written contain some extra provision for the case in which the first column of $L$ (below the diagonal) is not assumed to be zero, as it typically is when performing a full factorization. In fact, as Rozložník et al. pointed out [13], the first column of $L$ may be initialized arbitrarily, leading to distinct but functionally equivalent factorizations. When used to solve the panel sub-problem in a blocked algorithm, this first column will typically contain the last (and usually non-zero) elimination vector from the previous panel. However, whether or not this column should be considered in the sub-problem depends on the structure of the blocked algorithm (see Fig. 3).

The left-looking algorithm performs a matrix-vector multiply each iteration, which is the leading-order cost in terms of FLOPs. While the intermediate product (a portion of $TL^T$) can be computed separately in $\mathcal{O}(n)$ time, we denote the entire $LTL^T$ matrix-tridiagonal-vector multiplication as a single fused operation, gemv-sktri. At iteration $i$, this incurs $2(n-i-1)(i+1)$ FLOPs and hence a total leading-order cost of $n^3/3$ FLOPs. In contrast, the right-looking algorithm performs a skew-symmetric rank-2 update (skr2) every iteration $i$ at a cost of $2(n-i-2)^2$ FLOPs, for a total leading-order cost of $2n^3/3$ FLOPs. Because the unblocked algorithm deals only with a "tall-and-skinny" trapezoidal region of $X$ for a blocked sub-problem, the trailing region of the skr2 operation becomes a general rank-2 update (denoted here as ger2).
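Summing these per-iteration costs confirms the quoted leading-order counts:
\[
\sum_{i=0}^{n-2} 2(n-i-1)(i+1) = 2\sum_{j=1}^{n-1} j(n-j) = \frac{(n-1)\,n\,(n+1)}{3} \approx \frac{n^3}{3},
\qquad
\sum_{i=0}^{n-3} 2(n-i-2)^2 = 2\sum_{j=1}^{n-2} j^2 \approx \frac{2n^3}{3}.
\]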

"Two-step" Right-Looking Algorithm
[Update (per iteration, advancing two rows/columns): $\tau_{21} := \chi_{21}$; $(\lambda_{32}; l_{42}) := (\chi_{31}; x_{41})/\tau_{21}$; $\tau_{32} := \chi_{32}$; $l_{43} := x_{42}/\tau_{32}$; $x_{43} := x_{43} + \tau_{32} l_{42} - \tau_{32}\lambda_{32} l_{43}$; then the single combined trailing update $X_{44} := X_{44} + l_{43}(x_{43} - \tau_{32} l_{42})^T - (x_{43} - \tau_{32} l_{42}) l_{43}^T$ (skr2).]

Fused Right-Looking Algorithm: ltlt_blk_fused($X$)
[Initialization: optionally pivot $x_{BM}$, set $\tau_{BM} := e_f^T x_{BM}$, and set the first column of $L_{BR}$ (below its first entry) to $\bar{E}_f^T x_{BM}/\tau_{BM}$. Update (per iteration): factorize the panel, shifted by one row/column, with an unblocked variant, ltlt_unb_var(·, true), which also produces $L_{44}e_f$ and $\tau_{43}$; apply the accumulated pivots to the previously computed columns of $L$; then apply the entire trailing update $X_{44} \mathrel{-}= (L_{42}\,|\,l_{43}\,|\,L_{44}e_f)\, T_{\text{panel}}\, (L_{42}\,|\,l_{43}\,|\,L_{44}e_f)^T$ as a single gemmt-sktri.]
Figure 4: Fused algorithms. Brown text indicates optional pivoting. The first algorithm is similar to Wimmer's unblocked algorithm [22], which produces a partial tridiagonalization of $X$; this algorithm instead achieves the same number of FLOPs at leading order but produces a full tridiagonalization. Pivoting is possible but not shown for brevity. The second algorithm extends the blocked right-looking algorithm of Fig. 3 by shifting the computation of $L$ by one row and column. This allows the gemmt-sktri and skr2 operations to be combined. As with that algorithm, a left-looking unblocked algorithm must be used for the sub-problem. Important computational kernels are noted.

Wimmer [22] halved the cost of the right-looking algorithm by skipping every other column of $L$, leading to a partial tridiagonal structure of $T$. This structure is sufficient for computing the Pfaffian but not for inversion or solving systems of equations. We derive a roughly similar algorithm, the "two-step" right-looking algorithm of Fig. 4. This algorithm performs a single rank-2 update per iteration (i.e. per two rows/columns), thus yielding the same $n^3/3$ leading-order cost as Wimmer's partial factorization. However, the two-step algorithm does in fact compute every column of $L$. This example illustrates the potential for reducing computation through operation fusion (in this algorithm, the updates from the two computed columns of $L$ are naturally combined), and, importantly, for enabling and identifying such opportunities through an alternative arrangement of the problem definition (the PME, invariant, and iteration conditions) and subsequent formal analysis, rather than through manual post-optimization.

III-B Blocked Algorithms

A blocked right-looking algorithm was first proposed by Rozložník et al. [13] and was implemented in LAPACK [26]. This algorithm is quite similar to our algorithm, depicted in Fig. 3, with the most significant difference being the need to maintain a partial block of $H = LT$ in the former. Instead, our algorithm directly exposes the skew-symmetry of the block update step as an operation with the form of a symmetric rank-$k$ update, but with a skew-symmetric tridiagonal matrix inserted in the middle of the product. We denote this operation as gemmt-sktri due to the fact that pre-combining $H = LT$ or $\tilde{H} = TL^T$ leads to a gemmt (general matrix-matrix multiplication updating only the upper or lower half), but with implicit generation of the Hessenberg factor from a tridiagonal-times-matrix multiplication. As $H$ is also not required in the unblocked sub-problem, it can be omitted entirely, reducing workspace and FLOPs. The leading-order cost for both our algorithm and Rozložník's is only $n^3/3$ FLOPs; this is in contrast to the blocked Parlett-Reid algorithm introduced by Wimmer, which uses a skew-symmetric rank-$2k$ update (skr2k) and hence requires twice as many FLOPs. All blocked algorithms derived to date utilize the left-looking unblocked (Aasen) algorithm to factorize each panel. When pivoting, this is necessary, as right-looking updates are only applied within the panel; columns pivoted in from outside the panel would then not satisfy the necessary pre-condition. For completeness, we also implement blocked unpivoted algorithms which use a right-looking unblocked algorithm for the sub-problem. However, we opt not to use our new two-step algorithm for blocked sub-problems.
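Schematically, the fused operation has the form below (our notational summary of the operation described in the text, with $\alpha$ and $\beta$ scaling factors and only the lower or upper triangle of $C$ updated; this is not a formal API definition):
\[
C := \beta C + \alpha\, A\, T\, A^T,
\qquad
T = \begin{pmatrix}
0 & -\tau_1 & & \\
\tau_1 & 0 & \ddots & \\
& \ddots & \ddots & -\tau_{k-1} \\
& & \tau_{k-1} & 0
\end{pmatrix},
\]
so that explicitly forming $H = AT$ (or $\tilde{H} = TA^T$) would reduce the operation to a standard gemmt, at the cost of extra workspace and memory traffic.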

void ltlt_pivot_blockRL_var0(const matrix_view<double>& X,
                             const row_view<double>& t,
                             const row_view<int>& p,
                             int block_size)
{
    // Store L, with explicit '1's, below the diagonal of X,
    // which requires a 1-column index shift
    matrix_view<double> L = X.rebased(1, 1);

    p[0] = 0;

    // Initialize T = [], m = 0, B = [1,n)
    auto [T, m, B] = partition_rows<DYNAMIC,1,DYNAMIC>(X);

    while (B)
    {
        // ( T  ||  m | B )
        // ( R0 || r1 | R2 | r3 | R4 )
        auto [R0, r1, R2, r3, R4] =
            repartition<DYNAMIC,1>(T, m, B, block_size);

        // Factorize the current panel with the unblocked
        // left-looking algorithm (first_col = false)
        ltlt_pivot_unblockLL(X[r1|R2|r3|R4][r1|R2|r3|R4],
                             t[r1|R2],
                             p[r1|R2|r3],
                             false);

        // Apply the panel pivots to previously computed columns of L
        pivot_rows(X[R2|r3|R4][R0], p[R2|r3]);

        // Trailing update: X -= L T L^T (lower triangle only)
        gemmt_sktri('L',
                    -1.0, L    [r3|R4][R2|r3],
                          t    [R2],
                          L.T()[R2|r3][r3|R4],
                     1.0, X    [r3|R4][r3|R4]);

        // Remaining rank-2 update of the trailing block
        skr2('L', 1.0, L[R4][r3],
                       X[R4][r3],
                  1.0, X[R4][R4]);

        // ( R0 | r1 | R2 || r3 | R4 )
        // ( T          ||  m | B )
        tie(T, m, B) = continue_with<2>(R0, r1, R2, r3, R4);
    }
}
Figure 5: C++ implementation of the blocked right-looking algorithm of Fig. 3. The use of ranges and range partitioning allows for strongly-typed identification of sub-matrices, vectors, and scalars. The tridiagonal factor $T$ is stored as an external vector t containing the sub-diagonal elements, while the triangular factor $L$ is stored in the lower part of $X$ with an implicit column shift. The diagonal elements of $L$ are stored as explicit "1"s in order to enable combined updates such as the gemmt-sktri, which involves $\lambda_{33}$.

A blocked left-looking algorithm was also derived in [17] and is depicted in Fig. 3. While Yamazaki et al. hint at the possibility of a left-looking blocked Aasen algorithm [26], they do not provide any algorithm nor, to our knowledge, has any implementation been made. Like the right-looking algorithm, it requires $n^3/3$ FLOPs, but features a general matrix-matrix product (gemm) to update the next panel of $X$. While computing an intermediate Hessenberg product does not perturb the skew-symmetry of the matrix or the efficiency of the gemm, it does require additional storage and memory accesses to perform the tridiagonal-matrix product. We denote the combined general matrix-tridiagonal-matrix product as gemm-sktri. The left-looking algorithm is conventionally considered lower-performing than the right-looking algorithm due to the reduced level of parallelism available in the matrix-matrix product compared to the rank-$k$ update of the right-looking algorithm. Additionally, the left-looking blocked algorithm cannot incorporate partial pivoting, as the panel update cannot be applied until pivots are chosen and vice versa.

Finally, we implement the novel blocked right-looking algorithm which starts with a more fine-grained partitioning of the PME and invariant [17]. This algorithm "shifts" the computation by one row/column, resulting in the factorization of column $x_{43}$ (via $L_{44}e_f$) during the panel update, and hence allows the trailing update to be computed entirely in the form $X \mathrel{-}= LTL^T$ (a fused gemmt-sktri operation). As with the two-step algorithm, an alternative setup of the problem followed by formal analysis exposes additional opportunities for fusion and optimization.

IV Implementation

Most of the blocked algorithms mentioned above, as well as the unblocked left-looking and "two-step" right-looking algorithms, achieve a leading-order FLOP count of $n^3/3$, with the main exception being the blocked right-looking algorithm of Wimmer. In analogy to the Cholesky and $LU$ decompositions, it is highly unlikely that a full tridiagonalization can be accomplished in fewer than $n^3/3$ FLOPs (but this remains to be proven). Thus, optimizing the implementation of these algorithms must focus on reducing memory traffic overhead and increasing parallel scaling, rather than reducing FLOPs.

We focus on optimization through operation fusion, which allows more FLOPs to be performed per memory read and write. Beyond raising the average arithmetic intensity, such fusion may also combine a compute-bound operation (high arithmetic intensity, such as gemm) with a memory-bound operation (low arithmetic intensity, such as ger), which can asymptotically eliminate the entire cost of the latter operation.
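As a rough back-of-the-envelope illustration (our own numbers, not measurements): for $n \times n$ operands,
\[
I_{\text{ger}} \approx \frac{2n^2\ \text{FLOPs}}{n^2\ \text{elements}} = 2,
\qquad
I_{\text{gemm}} \approx \frac{2n^3\ \text{FLOPs}}{3n^2\ \text{elements}} = \frac{2n}{3},
\]
so a rank-1 or rank-2 update fused into a gemm-like operation has its memory traffic hidden behind work that is already compute-bound.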

Many of these fusion opportunities are enabled by defining new combined BLAS-like operations, which we implement either via in-house code or as extensions to the BLIS framework, which additionally provides pre-optimized computational kernels, tuned cache blocking, and parallelization.

IV-A Storage of the $L$ and $T$ matrices

Similarly to other DLA routines, such as those implemented in LAPACK, it is often desirable to reuse the storage of the input matrix for the decomposed form. For an $X = LTL^T$ decomposition, the strictly lower or upper $n(n-1)/2$ elements are available, which can exactly hold the unique non-unit/non-zero elements of $T$ and $L$. For example, the lower part of $L$ (except the leading and presumably zero column) may be stored, "shifted" left by one column, below the sub-diagonal of $X$, with elements of $T$ stored on the sub-diagonal. Because the algorithms here do not require any additional arrays or matrices, they could then be implemented without any external workspace (apart from the pivot vector $p$). However, we note that the updates often explicitly reference the diagonal of $L$, which is implicitly one. For example, in the blocked right-looking algorithm, the update appears as

\[
\left(\begin{array}{c|c} \chi_{33} & \star \\ \hline x_{43} & X_{44} \end{array}\right)
\mathrel{-}=
\left(\begin{array}{c|c} l_{32}^T & 1 \\ \hline L_{42} & l_{43} \end{array}\right)
\left(\begin{array}{c|c} T_{22} & \star \\ \hline \tau_{32} e_l^T & 0 \end{array}\right)
\left(\begin{array}{c|c} l_{32} & L_{42}^T \\ \hline 1 & l_{43}^T \end{array}\right)
\]

When stored as described above, the "1" which is in fact $\lambda_{33}$ would refer to the same storage as $x_{32}^T e_l$, which in turn physically stores $\tau_{32}$. This value cannot simply be temporarily overwritten with an explicit 1, as it is also referenced in the central tridiagonal part of the update. One option is to split the update into separate steps, for example,

\begin{align*}
x_{43} &\mathrel{-}= L_{42} T_{22} l_{32} + \tau_{32} l_{43} e_l^T l_{32} - \tau_{32} L_{42} e_l \\
X_{44} &\mathrel{-}= \left(\begin{array}{c|c} L_{42} & l_{43} \end{array}\right)
\left(\begin{array}{c|c} T_{22} & \star \\ \hline \tau_{32} e_l^T & 0 \end{array}\right)
\left(\begin{array}{c} L_{42}^T \\ \hline l_{43}^T \end{array}\right)
\end{align*}

Instead, we opt to store the sub-diagonal elements of $T$ in an external vector, and to store explicit 1's on the sub-diagonal of $X$ as columns of $L$ are computed. This allows the entire update to be computed using a single, fused gemmt-sktri operation. Interestingly, the choice of the storage location of the $L$ and $T$ matrices is expressible as a part of the PME and invariants by imposing additional conditions. Dependency analysis of the resulting logical expressions at the "before" and "after" points can then illuminate when such fusion optimizations can and cannot be performed. Storing $T$ as an external array also allows elements to be accessed contiguously.
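The index mapping implied by this layout is simple; the accessors below are our own illustration of the convention described above (column-major $X$, 0-based indices, $L$ stored with a one-column shift and an explicit unit diagonal on the sub-diagonal of $X$, $T$ in an external vector), not code from our library.

// lambda_{i,j} for i >= j >= 1 lives at X(i, j-1): shifted left by one column.
// The unit diagonal lambda_{j,j} occupies the sub-diagonal entry X(j, j-1).
double get_L(const double* X, int ldx, int i, int j)
{
    if (i == j) return 1.0; // also stored explicitly at X(j, j-1)
    return X[i + (j - 1)*ldx];
}

// tau_{i+1,i} is stored contiguously in the external vector t.
double get_T(const double* t, int i)
{
    return t[i];
}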

IV-B Level 2 BLAS and BLAS-like operations

While the leading-order cubic cost of the blocked algorithms arises mostly from level-3 BLAS or BLAS-like operations, the unblocked algorithm which operates within blocks works entirely with level-1 and level-2 operations and, as can be seen in our results, accounts for a considerable portion of the running time. We apply two optimizations to these operations. First, we include OpenMP parallelization of all level-2 operations (including the application of blocks of pivots, see e.g. Fig. 3). As most BLAS libraries, including BLIS, do not parallelize level-2 operations, we implement in-house versions, although we use BLIS kernels where applicable. Second, we have implemented "fused" versions of those level-2 operations not appearing in standard BLAS (we count skr2 as a standard operation for the present purposes since it is a minor modification of syr2). These new operations include gemv-sktri and ger2. In the latter case, a speedup approaching $2\times$ may be expected due to accessing the output matrix only once, while in the former case we can only expect a very slight speedup over a naive implementation which first computes $z = Tx$ (with $\mathcal{O}(n)$ memory accesses) and then $y = Az$ ($\mathcal{O}(n^2)$ accesses), rather than computing $y = ATx$ (also $\mathcal{O}(n^2)$ accesses) all at once. Where possible, we apply loop unrolling and jamming (e.g. computing multiple axpys with a single loop) in order to maximize register and prefetch pipeline usage.
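A minimal sketch of the fused ger2 idea follows (illustrative only; the name, signature, and absence of parallelization, unrolling, and jamming are our simplifications of what the production version does):

// Fused general rank-2 update: A := A + alpha*(x1*y1^T + x2*y2^T),
// touching each element of the m x n column-major matrix A only once.
// An unfused version would call ger twice, doubling the traffic on A.
void ger2(int m, int n, double alpha,
          const double* x1, const double* y1,
          const double* x2, const double* y2,
          double* A, int lda)
{
    for (int j = 0; j < n; j++)
    {
        const double a1 = alpha*y1[j];
        const double a2 = alpha*y2[j];
        for (int i = 0; i < m; i++)
            A[i + j*lda] += x1[i]*a1 + x2[i]*a2;
    }
}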

IV-C Level 3 BLAS-like operations

Level-3 BLAS operations are typically compute-bound, performing $\mathcal{O}(n^3)$ FLOPs for only $\mathcal{O}(n^2)$ memory accesses. (Actually, any level-3 implementation must perform $\mathcal{O}(n^3)$ accesses on one matrix, usually the output matrix, although in practice these accesses are carefully prefetched and amortized such that the actual overhead is dominated by accessing the other two matrices in $\mathcal{O}(n^2)$ time.) Thus, adding $\mathcal{O}(n^2)$ overhead, for example to compute $X \mathrel{-}= LTL^T$ as $H = TL^T;\ X \mathrel{-}= LH$, does not seem like a limitation. However, the skew-symmetric tridiagonal-times-matrix product $H = TL^T$ is, as with level-2 operations, entirely memory-bound. In order to fuse these operations together, we take advantage of the flexibility of the BLIS framework, where user-defined "packing" operations can be specified, to create a custom BLAS-like operation. Packing is a typical part of gemm (and other level-3) implementations, and serves to load blocks of the input matrices into a special storage format which provides linear, vectorized access in the inner kernel as well as better TLB reuse. Because packing already necessitates accessing memory in the input matrices, we compute the $TL^T$ product on-the-fly and store it into the packed buffer. This custom packing operation has negligible increase in cost over standard packing because, although each element of $TL^T$ requires reading two elements of $L$ (due to the skew-symmetric tridiagonal structure), the accesses can take advantage of high temporal locality. The rest of the operation is handled without modification by BLIS, yielding the same overall performance as standard level-3 operations.
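The sketch below conveys the idea of the custom pack routine: while copying a panel of $B = L^T$ into the packed buffer, each packed element is replaced by the corresponding element of $H = TB$, which for a skew-symmetric tridiagonal $T$ is a two-term combination of adjacent rows of $B$. The routine name, the simple column-major packed layout (rather than BLIS's micro-panel format), and the sign convention for the sub-diagonal are our illustrative assumptions.

// Pack a k x n panel of B = L^T while forming H = T*B on the fly, where T
// is skew-symmetric tridiagonal with sub-diagonal t[0..k-2] (T(i+1,i) = t[i]):
//   H(i,j) = t[i-1]*B(i-1,j) - t[i]*B(i+1,j)   (boundary terms dropped).
// B is column-major with leading dimension ldb; packed holds the panel
// contiguously, column by column.
void pack_sktri_panel(int k, int n, const double* t,
                      const double* B, int ldb, double* packed)
{
    for (int j = 0; j < n; j++)
        for (int i = 0; i < k; i++)
        {
            double h = 0.0;
            if (i > 0)     h += t[i-1]*B[(i-1) + j*ldb];
            if (i < k - 1) h -= t[i]  *B[(i+1) + j*ldb];
            packed[i + j*k] = h;
        }
}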

IV-D C++ interface

A sample implementation of the column-pivoted blocked right-looking algorithm is depicted in Fig. 5. We make use of MArray [11], which is a header-only DLA library developed by us, similar in spirit to Eigen [8], Armadillo [14], Blitz++ [20, 21], and others. We have extended this library with a range-based partitioning API which expresses the partition–repartition pattern of FLAME. Ranges are strongly typed, with singleton ranges (corresponding to partitions containing only one column or row) typed as scalar integers, and others typed as a C++ class representing the half-open range $[\text{from}, \text{to})$. Indexing matrix or vector types with such ranges produces a result which depends on the range types, yielding a sub-matrix, sub-vector, or single scalar entry. Finally, BLAS and BLAS-like operations (besides simple scalar-vector operations, which use the expression template functionality of MArray) use custom wrapper functions such that matrix lengths, strides, and other properties can be provided by the relevant objects. Even for unblocked algorithms, which are sensitive to overhead in terms of branches, function calls, and additional memory accesses (register spills, updating structs, etc.), typical compilers (in our case, gcc 11.4.0) are capable of optimizing the combination of while loop and range partition–repartition pattern into the same assembly as a plain for loop, with the exception of a small number of spurious conditional moves. As these instructions are spurious (referring to conditions already known), they can be perfectly predicted. This observation matches our profiling data, which shows negligible overhead in the ltlt_unb_var function bodies.

Figure 6: Performance of blocked right-looking algorithms with the unblocked left-looking algorithm applied within blocks. Fusion and other optimizations are applied in steps (see text). a, c, e) without pivoting; b, d, f) with pivoting. The algorithmic block size is a, b) 128, c, d) 256, and e, f) 512. All calculations use 64 cores (1 socket).

V Results

We performed a series of performance experiments using a system with $2\times$ AMD EPYC 7763 processors (64 cores per socket) and 512 GiB of 8-channel (per socket) DDR4-3200 RAM. At the base frequency of 2.45 GHz, this provides 5.02 TFLOPs of peak double-precision performance (2.51 TFLOPs per socket), although the actual frequency depends on thermal load and available boost up to 3.5 GHz.
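For reference, these peak figures follow from assuming 16 double-precision FLOPs per core per cycle (two 256-bit FMA pipes on Zen 3); this assumption is ours, but it is consistent with the single-core numbers quoted in Section V-B:
\[
2.45\ \text{GHz} \times 16\ \tfrac{\text{FLOPs}}{\text{cycle}} \times 64\ \text{cores} \approx 2.51\ \text{TFLOPs per socket},
\qquad
2.45 \times 16 = 39.2\ \text{GFLOPs per core}\ (3.5 \times 16 = 56\ \text{at maximum boost}).
\]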

We compare our implementations to equivalent or computationally similar operations provided by several packages. We use standard LAPACK routines from Intel MKL 2022.2.1, OpenBLAS 0.3.26, and AMD AOCL 4.2. We also use $X = LTL^T$ factorizations from PFAPACK and Pfaffine, compiled from source using the same gcc 11.2.0 compiler as used for our code. We use a pre-release version of BLIS 2.0 for implementing the skew-symmetric and fused BLAS operations. We use OpenMP for parallelization, and pin threads to cores using the OMP_PLACES=cores and OMP_PROC_BIND=close environment variables. In all cases we measure performance for double-precision matrices, and the default schedutil frequency governor is used due to technical limitations.

V-A Impact of optimizations

In order to investigate the impact of the several optimizations described in Section IV, we measured the performance of a sequence of algorithms in which the optimizations are applied in “steps”:

  1. Step 0:

    Only existing BLAS functions (plus skr2) are used from BLIS. Algorithms as in Figs. 2 and 3 are used. Both $L$ and $T$ are stored in $X$ as they are computed, and updates involving the diagonal of $L$ are unfused; see Section IV-A.

  2. Step 1:

    Fused level-2 operations gemv-sktri and ger2 are introduced, without parallelization.

  3. Step 2:

    All level-2 operations as well as block pivoting are parallelized.

  4. Step 3:

    $T$ is stored in an external vector and fused updates are used which reference (unit) diagonal elements of $L$.

  5. Step 4:

    Fused level-3 operations gemm-sktri and gemmt-sktri are introduced.

  6. Step 5:

    The fused blocked right-looking algorithm of Fig. 4 is used.

Optimizations at each step are applied cumulatively. The performance of the blocked right-looking/unblocked left-looking algorithm (with and without pivoting) at each step is presented in Fig. 6, using one socket (64 threads).

Three different algorithmic block sizes are tested: 128 (top row), 256 (middle row), and 512 (bottom row). Successively larger block sizes imply more time spent in the unblocked panel factorization, but also more aggregation into level-3 trailing updates. Step 1 has a very small positive impact, as might be expected from the fusion of an essentially level-1 step into the gemv operation. Step 2 has a much larger effect, particularly at large core counts, as approximately 10 cores are required to fully saturate the memory bandwidth. This effect grows with algorithmic block size due to the larger amount of work in level-2 operations. The impact of Step 3 is also very small. At this step, essentially a gemv-sktri is fused with an "adjacent" gemmt-sktri (at the blocked level; fusion at the unblocked level is less important). As the input sub-matrices for the gemv-sktri come only from the current block, they typically reside in L2 or L3 cache. The vector to update is typically in main memory, but is only $\mathcal{O}(n)$ in size. This is in contrast to Steps 4 and 5, where the operations which are fused operate on $\mathcal{O}(n^2)$ data. The effect of these optimizations is much larger. The effect of fusing the trailing skr2 update (Step 5) is particularly pronounced for small block sizes. Cumulatively, these optimizations increase performance by more than $5\times$ in the unpivoted case, and $3\times$ in the pivoted case.

V-B Parallel scalability

We measure the parallel scalability of our algorithms by performing a strong scaling experiment from one to all 128 cores of the system for a single $20{,}000 \times 20{,}000$ skew-symmetric $X = LTL^T$ factorization. The results are presented in Fig. 7, with unpivoted and pivoted blocked algorithms in the top and bottom panels, respectively. All algorithms involve non-trivial memory-bound work, primarily in the unblocked sub-problem and in pivoting. Additionally, the unfused blocked right-looking algorithm performs a level-2 skr2 operation which adds considerable overhead (see Section V-A). The impact of these operations can be clearly seen in the parallel scaling, as memory bandwidth only increases with core count up to $\sim 10$ cores, and latency is significantly impacted for inter-CCD and especially inter-socket memory traffic. The unblocked algorithms experience a drop of $\sim 25\%$ in parallel efficiency (which is proportional to per-core performance) from 1 to 16 or 32 threads. Some of this is attributable to a drop in processor frequency due to thermal management. The single-core results certainly show an effect of frequency boosting, as the base-clock single-core peak is only 39.2 GFLOPs, while the peak at max boost frequency is 56 GFLOPs. Assuming max boost, the fused right-looking algorithm (with or without pivoting) achieves $\sim 85\%$ of peak.

Figure 7: Strong scaling. Per-core performance of $X = LTL^T$ decomposition on a $20{,}000 \times 20{,}000$ matrix across varying numbers of cores. a) Unpivoted blocked algorithms; b) pivoted blocked algorithms; c) all algorithms, in comparison to MKL dgemm. In the labels, BXX refers to the blocked algorithm and UBXX to the unblocked algorithm used within blocks, with XX=RL for right-looking and LL for left-looking. VAR0 denotes the blocked right-looking algorithm of Fig. 3, and VAR1 denotes the fused blocked algorithm of Fig. 4.

In order to measure this effect, we consider the MKL implementation of gemm as the "speed of light" and present relative performance of our algorithms in Fig. 7c. Indeed, the relative performance for 16+ threads is higher than indicated by panel a), although only slightly. The difference grows with core count, showing a progressive drop in processor frequency, although significant boosts are evident even for 32 or more cores. From panel c), the biggest parallel efficiency drops are at 16 cores, likely due to exceeding a single 8-core cluster (CCD), and then at 64 threads and beyond. Based on profiling data, the main effect is due to increased time in level-2 operations, indicating main memory bandwidth as the limiting factor (inter-cache bandwidth would already be affected at core counts greater than 8 due to inter-CCD communication). In the two-socket case, the impact of NUMA remote accesses is likely an additional factor. The scalability of the pivoted algorithms is even more severely affected. In particular, we measure the application of block pivots (see Fig. 3 for example) as the largest overhead. This is primarily because the columns of $L$ to pivot are stored in column-major layout, while rows are swapped. Thus, accesses "waste" most of each cache line loaded and TLB reuse is poor, especially for very large matrices. We attempt to optimize stores by blocking across the row dimension to increase temporal locality. Overall, the best unpivoted algorithm achieves 41% parallel efficiency (53% of gemm) at 64 cores, while the best pivoted algorithm achieves only 24% (29% of gemm). We observe similarly poor scalability of existing pivoted factorizations, e.g. pivoted Cholesky (pstrf) and Bunch-Kaufman symmetric tridiagonalization (sytrf), although data is not shown here.

Figure 8: Performance of our skew-symmetric $X = LTL^T$ implementations (labels as in Fig. 7) a) without pivoting, compared to Cholesky factorization (potrf), and with pivoting, compared to b) column-pivoted $LU$ factorization (getrf) and pivoted Cholesky (pstrf), c) Bunch-Kaufman (sytrf) and Rozložník/Aasen (sytrf_aa) symmetric tridiagonalization, and d) skew-symmetric $X = LTL^T$ factorization from PFAPACK and Pfaffine. All calculations use 64 cores (1 socket) and an optimal block size determined from the experiments in Section V-A.

V-C Comparison to other algorithms

In Fig. 8 we compare the performance of our implementation to related algorithms as well as existing implementations of skew-symmetric tridiagonal factorization. In Fig. 8ab, we compare to common unpivoted and pivoted operations which share computational and algorithmic similarities with $X = LTL^T$, but which are expected to be highly optimized given their ubiquity in scientific computing. Panel a) shows a comparison to Cholesky decomposition (potrf). Cholesky decomposition incurs the same number of FLOPs as our algorithms, but is expected to be more efficient due to several factors which we do not take advantage of. The first is recursive decomposition (more than the two levels employed by us), which is necessary to achieve the communication/IO lower bound [5]. The second is the use of level-3 BLAS operations (trsm in the case of Cholesky) to update the trailing portion of blocks of the triangular factor. While the MKL implementation does exceed our performance comfortably, we find that the fused right-looking algorithm with left-looking unblocked updates is competitive with the OpenBLAS Cholesky implementation and much more efficient than that in AOCL. The unfused right-looking algorithm, as well as algorithms which use unblocked right-looking updates, experience a significant drop in performance. This is almost exclusively due to the poor parallel performance of skr2. We use a non-uniform thread partitioning scheme, but load imbalance and cache conflicts still significantly impact performance. The left-looking blocked algorithm, even though it does not perform an additional level-2 operation at the blocked level, falls behind the right-looking algorithm. As mentioned above, this is at least in part due to reduced parallelism in block updates, depending on the stage of the algorithm. In panel b), we compare to pivoted Cholesky (pstrf) and column-pivoted $LU$ decomposition (getrf). Both algorithms exhibit similar pivoting requirements and other algorithmic properties, although $LU$ decomposition performs twice as many FLOPs. We see that both of our pivoted algorithms achieve significantly better performance compared to pstrf, but are in turn outperformed by getrf. As getrf is one of the most-used DLA operations, it is unsurprisingly highly optimized in all major libraries. Additionally, getrf can make use of recursive blocking and trsm for updates to $U$ (as with Cholesky), and can aggregate most of the row swaps due to pivoting while also not requiring complementary column swaps.

In comparison to symmetric tridiagonal factorizations (sytrf and sytrf_aa, Fig. 8c), our algorithms achieve uniformly and significantly higher performance than all implementations, with the exception of MKL's sytrf for matrix sizes less than $6000 \times 6000$. In large part, this difference is not so much attributable to algorithmic deficiencies in these algorithms (for example, the Rozložník algorithm implemented in sytrf_aa should have similar intrinsic performance to our fused blocked right-looking algorithm), but rather due to lack of optimization in the implementation. The difference between the MKL and other sytrf implementations highlights this effect. While we applied several optimizations in our work which might not be leveraged in the lower-performing implementations, the use of a flexible, expressive, and performant C++ interface, along with the ability to enable and expose optimization opportunities through formal derivation methods, is immediately applicable to many (if not almost all) other DLA routines in LAPACK and beyond. Thus, the use of our techniques should significantly lower the barrier to producing highly-efficient DLA implementations. Finally, in Fig. 8d we compare our algorithms to implementations of Wimmer's blocked right-looking algorithm from PFAPACK and Pfaffine. These implementations are very similar, with the latter switching from Fortran to C++ and replacing the reference, unoptimized skr2k of PFAPACK with calls to an optimized version, presumably implemented in terms of gemmt. This difference is highly evident from the results, where PFAPACK does not exceed 5.8 GFLOPs, compared to a peak of 135 GFLOPs for Pfaffine. Both implementations also perform twice as many FLOPs as our algorithms; thus a reduction in FLOPs plus the other optimizations applied here (parallelization of level-2 operations and fusion) accounts for the observed performance difference.

Overall, these results validate the optimizations and implementation strategy that we have applied, as the observed performance is roughly comparable to existing optimized implementations of related operations, and much higher than existing implementations of the specific X=LTL^{T} factorization (symmetric or skew-symmetric).

VI Conclusions

We have implemented a family of algorithms for computing the tridiagonal factorization of a skew-symmetric matrix, X=LTL^{T}. We employed optimizations, primarily operation fusion, at multiple levels, along with an expressive yet performant C++ interface which closely follows the notation and philosophy of the FLAME methodology. The optimizations employed are shown to have a significant impact on performance, increasing efficiency by 3–5×, depending on whether partial pivoting is performed. Across an entire AMD EPYC 7763 processor (64 cores), the unpivoted algorithms attain performance similar to that of highly tuned implementations of related factorizations such as the Cholesky decomposition. The pivoted algorithms significantly improve on the performance of existing symmetric and skew-symmetric tridiagonalization routines. While some work remains on increasing parallel scalability, particularly beyond a single socket (and, to some extent, beyond the 8-core core complex die (CCD) featured in AMD “Milan” processors), the observed performance validates the current approach, which focuses on expressive APIs that clearly connect to mathematical notation and formal derivation methods, and on extended BLAS-like operations implemented with the flexible BLIS framework. Beyond providing a best-in-class implementation of the specific skew-symmetric X=LTL^{T} factorization, this approach is immediately extensible to a wide range of DLA operations, where it promises to significantly lower the barrier to creating highly efficient yet easily understandable implementations.

References

  • [1] Jan Ole Aasen. On the reduction of a symmetric matrix to tridiagonal form. BIT Numer. Math., 11(3):233–242, September 1971.
  • [2] Cleve Ashcraft, Roger G. Grimes, and John G. Lewis. Accurate symmetric indefinite linear equation solvers. SIAM Journal on Matrix Analysis and Applications, 20(2):513–561, 1998.
  • [3] Michal Bajdich and Lubos Mitas. Electronic structure quantum Monte Carlo, August 2010. arXiv:1008.2369 [cond-mat, physics:physics].
  • [4] Grey Ballard, Dulceneia Becker, James Demmel, Jack Dongarra, Alex Druinsky, Inon Peled, Oded Schwartz, Sivan Toledo, and Ichitaro Yamazaki. Implementing a Blocked Aasen’s Algorithm with a Dynamic Scheduler on Multicore Architectures. In 2013 IEEE 27th International Symposium on Parallel and Distributed Processing, pages 895–907, May 2013. ISSN: 1530-2075.
  • [5] Grey Ballard, James Demmel, Olga Holtz, and Oded Schwartz. Communication-optimal parallel and sequential Cholesky decomposition: extended abstract. In Proceedings of the Twenty-First Annual Symposium on Parallelism in Algorithms and Architectures, SPAA ’09, page 245–252, New York, NY, USA, 2009. Association for Computing Machinery.
  • [6] Paolo Bientinesi, Enrique S. Quintana-Ortí, and Robert A. van de Geijn. Representing linear algebra algorithms in code: the FLAME application program interfaces. ACM Trans. Math. Softw., 31(1):27–59, March 2005.
  • [7] Jack J Dongarra, Iain S Duff, Danny C Sorensen, and Henk A Van der Vorst. Numerical linear algebra for high-performance computers. SIAM, 1998.
  • [8] Gaël Guennebaud, Benoît Jacob, et al. Eigen v3. http://eigen.tuxfamily.org, 2010.
  • [9] John A. Gunnels, Fred G. Gustavson, Greg M. Henry, and Robert A. van de Geijn. FLAME: Formal Linear Algebra Methods Environment. ACM Trans. Math. Softw., 27(4):422–455, December 2001.
  • [10] Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, and Xiaoou Tang. Deep Learning Markov Random Field for Semantic Segmentation, August 2017. arXiv:1606.07230 [cs].
  • [11] Devin A. Matthews. MArray. http://github.com/devinamatthews/marray, 2024.
  • [12] Beresford N Parlett and JK Reid. On the solution of a system of linear equations whose matrix is symmetric but not definite. BIT Numer. Math., 10(3):386–397, 1970.
  • [13] Miroslav Rozložník, Gil Shklarski, and Sivan Toledo. Partitioned Triangular Tridiagonalization. ACM Trans. Math. Softw., 37(4):38:1–38:16, February 2011.
  • [14] Conrad Sanderson and Ryan Curtin. Armadillo: a template-based C++ library for linear algebra. Journal of Open Source Software, 1(2):26, 2016.
  • [15] Creighton K. Thomas and A. Alan Middleton. Exact Algorithm for Sampling the 2D Ising Spin Glass. Physical Review E, 80(4):046708, October 2009. arXiv:0906.5519 [cond-mat].
  • [16] Robert van de Geijn and Maggie Myers. Formal Derivation of LU Factorization with Pivoting, April 2023. arXiv:2304.03068 [cs].
  • [17] Robert van de Geijn, Maggie Myers, RuQing G. Xu, and Devin A. Matthews. Deriving algorithms for triangular tridiagonalization of a (skew-)symmetric matrix, November 2023. arXiv:2311.10700 [cs].
  • [18] Field G. Van Zee, Tyler M. Smith, Bryan Marker, Tze Meng Low, Robert A. Van De Geijn, Francisco D. Igual, Mikhail Smelyanskiy, Xianyi Zhang, Michael Kistler, Vernon Austel, John A. Gunnels, and Lee Killough. The BLIS Framework: Experiments in Portability. ACM Trans. Math. Softw., 42(2):12:1–12:19, June 2016.
  • [19] Field G. Van Zee and Robert A. van de Geijn. BLIS: A Framework for Rapidly Instantiating BLAS Functionality. ACM Trans. Math. Softw., 41(3):14:1–14:33, June 2015.
  • [20] Todd L. Veldhuizen. Arrays in Blitz++. In Denis Caromel, Rodney R. Oldehoeft, and Marydell Tholburn, editors, Computing in Object-Oriented Parallel Environments, number 1505 in Lecture Notes in Computer Science, pages 223–230. Springer Berlin Heidelberg, December 1998.
  • [21] Todd L. Veldhuizen. Blitz++: The Library that Thinks it is a Compiler. In Hans Petter Langtangen, Are Magnus Bruaset, and Ewald Quak, editors, Advances in Software Tools for Scientific Computing, pages 57–87, Berlin, Heidelberg, 2000. Springer Berlin Heidelberg.
  • [22] M. Wimmer. Efficient numerical computation of the Pfaffian for dense and banded skew-symmetric matrices. ACM Transactions on Mathematical Software, 38(4):1–17, August 2012. arXiv:1102.3440 [cond-mat, physics:physics].
  • [23] Michael Wimmer. PFAPACK. https://michaelwimmer.org/downloads.html, 2012.
  • [24] RuQing G. Xu. Pfaffine. https://github.com/xrq-phys/Pfaffine, 2022.
  • [25] RuQing G. Xu, Tsuyoshi Okubo, Synge Todo, and Masatoshi Imada. Optimized implementation for calculation and fast-update of Pfaffians installed to the open-source fermionic variational solver mVMC. Computer Physics Communications, 277:108375, August 2022.
  • [26] Ichitaro Yamazaki and Jack Dongarra. Aasen’s Symmetric Indefinite Linear Solvers in LAPACK, 2017. LAPACK Working Note #294.