Skew-Symmetric Matrix Decompositions on Shared-Memory Architectures
Abstract
The factorization of skew-symmetric matrices is a critically understudied area of dense linear algebra (DLA), particularly in comparison to that of symmetric matrices. While some algorithms can be adapted from the symmetric case, the cost of algorithms can be reduced by exploiting skew-symmetry. A motivating example is the factorization of a skew-symmetric matrix $A = LTL^T$, which is used in practical applications as a means of determining the determinant of $A$ as the square of the (cheaply-computed) Pfaffian of the skew-symmetric tridiagonal matrix $T$, for example in fields such as quantum electronic structure and machine learning. Such applications also often require pivoting in order to improve numerical stability. In this work we explore a combination of known literature algorithms and new algorithms recently derived using formal methods. High-performance parallel CPU implementations are created, leveraging the concept of fusion at multiple levels in order to reduce memory traffic overhead, as well as the BLIS framework which provides high-performance gemm kernels, hierarchical parallelism, and cache blocking. We find that operation fusion and improved use of available bandwidth via parallelization of bandwidth-bound (level-2 BLAS) operations are essential for obtaining high performance, while a concise C++ implementation provides a clear and close connection to the formal derivation process without sacrificing performance.
Index Terms:
Dense linear algebra, matrix factorization, skew-symmetric matrices, tridiagonal matrices, Gauss transforms, Pfaffian
I Introduction
The reduction of a skew-symmetric matrix $A$ (i.e. $A^T = -A$) to tridiagonal form is an important technique in the manipulation of such matrices in dense linear algebra (DLA), for example enabling the rapid computation of the Pfaffian and determinant, solution of systems of equations $Ax = b$, and computation of the matrix inverse $A^{-1}$, among others. As the eigenvalues of a skew-symmetric matrix are purely imaginary, reduction to diagonal form is not possible in real arithmetic, and while reduction to a 2x2 diagonal-blocked form is possible, tridiagonalization provides a convenient middle ground. Such a factorization can be computed by applying and accumulating Gauss transforms, $A = LTL^T$ (or $PAP^T = LTL^T$ with column pivoting) with unit lower-triangular matrix $L$ and permutation matrix $P$, or via Householder reduction $A = QTQ^T$ with orthogonal $Q$. As the former decomposition can be obtained in approximately $n^3/3$ FLOPs for an $n \times n$ matrix and can utilize efficient level-3 BLAS operations for the majority of the computation [13, 22, 25], we do not further consider the Householder approach. The Pfaffian of the tridiagonal factor $T$ satisfies $\mathrm{Pf}(A) = \det(L)\,\mathrm{Pf}(T) = \mathrm{Pf}(T)$, with $\det(A) = \mathrm{Pf}(A)^2$. Note that we use uppercase roman letters to denote matrices and submatrices, lowercase roman letters to denote column (sub)vectors, and lowercase Greek letters to refer to scalars. Thus, the computation of the Pfaffian via tridiagonal factorization is a critical kernel in scientific applications working with skew-symmetric matrices, such as in machine learning (Markov random fields [10]), physics (partition functions of Ising spin models [15]), and quantum chemistry/materials science (electronic structure quantum Monte Carlo [3]).
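For context (a standard fact; the entry notation $\tau_{i,j}$ for $T$ is ours), the Pfaffian of a skew-symmetric tridiagonal matrix of even order $n$ reduces to an $O(n)$ product of every other super-diagonal entry, which is what makes this route to the Pfaffian and determinant efficient once $T$ is available:

$\mathrm{Pf}(T) = \tau_{1,2}\,\tau_{3,4}\cdots\tau_{n-1,n} = \prod_{i=1}^{n/2} \tau_{2i-1,2i}.$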
Previously, the FLAME (Formal Linear Algebra Methods Environment) methodology [9, 6, 16] was used to derive a family of algorithms for the $LTL^T$ factorization [17]. The basic idea is a goal-oriented approach, where one represents the post-condition (and other conditions as required) in a partitioned form: the partitioned matrix expression (PME):
[Eqs. (5)–(10): the PME for the $LTL^T$ factorization, expressed on a conformal partitioning of $A$, $L$, and $T$.]
where redundant entries (determined by the skew-symmetry of $A$) are marked, and the distinguished column Euclidean unit vectors have a non-zero entry in the first and last position, respectively. For later use, we also define the collection of all column unit vectors except the first (last) one, i.e. the identity matrix with the first (last) column dropped. Essentially, inspection of the PME gives rise to a family of algorithmic invariants, each of which then leads mechanically to a concrete algorithm.
In this paper, we present high-performance parallel shared-memory CPU implementations of this family of algorithms leveraging the concept of fusion at multiple levels in order to reduce memory traffic overhead, as well as the BLIS framework [19, 18] which provides high-performance gemm kernels, hierarchical parallelism, and cache blocking. As in [25], we add basic skew-symmetric BLAS operations as modifications of existing operations, while also re-implementing certain key operations (e.g. skr2) using in-house code. The specific contributions of this paper are:
•
High-performance shared-memory parallel implementation of six skew-symmetric $LTL^T$ factorization algorithms, of which only two have been previously implemented (the skew-symmetric Parlett-Reid and Aasen algorithms). All algorithms include optional partial pivoting, except the “two-step” and blocked left-looking algorithms.
•
Implementation of key skew-symmetric level-2 and level-3 BLAS operations in the BLIS framework.
•
Additional hand-optimized level-2 BLAS implementations, including shared-memory parallelization and operation fusion.
•
Implementation of fused “sandwich product” level-3 operations using the BLIS framework.
•
Algorithmic modifications which expose additional opportunities for operation fusion.
•
An expressive C++ interface for implementing both blocked and unblocked DLA operations which closely mirrors the FLAME notation while enabling extensive compiler optimization.
•
Benchmarking of our own and related implementations on AMD “Milan” EPYC 7763 CPUs.
These contributions not only impact the efficient implementation of the $LTL^T$ factorization, but also showcase the benefits of our approach, leveraging formal derivation together with expressive APIs and flexible frameworks, for DLA in general.
Figure 1: Common structure of all $LTL^T$ algorithms (FLAME notation). The operands $A$, $L$, and $T$ are partitioned conformally; some algorithms require extra initialization where noted; and a while loop repartitions the operands, applies the algorithm-specific update steps, and continues with the next iteration until the matrix is exhausted. For most unblocked algorithms the central partition is empty, leading to an effective 2x2 partitioning; for the unblocked “two-step” algorithm the central partition contains exactly one row/column, so that e.g. the central block of $A$ is a single element.
II Related Work
The first algorithm for (skew-)symmetric matrix tridiagonalization, due to Parlett and Reid [12], can be considered a modification of $LU$ factorization and applies Gauss transforms from both the left and right. It is a right-looking algorithm and, for skew-symmetric matrices, makes use of a skew-symmetric rank-2 update (denoted skr2 [22]), $A \leftarrow A + \alpha(xy^T - yx^T)$. The leading-order asymptotic cost for Parlett-Reid is the same as that of $LU$ factorization, at $2n^3/3$ floating point operations (FLOPs) for an $n \times n$ matrix $A$.
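For reference, the skr2 update amounts to the following, restricted to the stored triangle; this is only an illustrative sketch (names, layout, and interface are ours for exposition), not the kernel of [22] or of our implementation:

#include <cstddef>

// Sketch of a skew-symmetric rank-2 update (skr2):
//   A := A + alpha*(x*y^T - y*x^T),
// updating only the strictly lower triangle of the column-major n x n matrix A
// (the diagonal of a skew-symmetric matrix is zero and the upper triangle is redundant).
void skr2_lower(std::size_t n, double alpha,
                const double* x, const double* y,
                double* A, std::size_t ldA)
{
    for (std::size_t j = 0; j < n; ++j)
    {
        // Contributions to column j: alpha*(x[i]*y[j] - y[i]*x[j]) for i > j.
        double ay = alpha * y[j];
        double ax = alpha * x[j];
        for (std::size_t i = j + 1; i < n; ++i)
            A[i + j*ldA] += x[i]*ay - y[i]*ax;
    }
}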
Shortly thereafter, Aasen derived a left-looking algorithm [1] which leverages level-2 matrix operations (gemv), while also reducing the asymptotic cost to $n^3/3$ FLOPs. Aasen also considered partial pivoting for improving numerical stability.
Bunch and Kaufman derived an algorithm for a complete $LDL^T$ factorization which shares some similarities with Aasen’s left-looking algorithm. A blocked version was described by Dongarra et al. [7] and implemented in LAPACK as sytrf. However, care must be taken with the Bunch-Kaufman method to avoid numerical instability [2].
Rozložník et al. [13] proposed a blocked right-looking algorithm which extends Aasen’s column-pivoted algorithm to the case of accumulated panel updates and which leverages more efficient level-3 BLAS operations. Their decomposition exploits the lower-Hessenberg structure of the intermediate $H = LT$. Efficient implementations of this algorithm rely on the gemmt BLAS extension, which updates only the upper or lower half of a general matrix product. If only the unique portions of the trailing matrix are updated in this way, then their algorithm also achieves $n^3/3$ FLOPs.
The Rozložník algorithm is implemented for symmetric tridiagonalization in LAPACK as sytrf_aa [26]. That implementation does not use gemmt but rather performs a blocked trailing update which mixes gemm for off-diagonal blocks and a loop over gemv for diagonal portions. Ballard et al. [4] also implemented a tiled version of the blocked (right-looking) Aasen algorithm and a left-looking blocked variant. Their implementation leverages a high degree of parallelism via dynamic task-DAG scheduling at the cost of a significantly higher total FLOP count due to several factors, such as the use of gemm for constructing the Hessenberg intermediate $H$, tile decomposition for pivoting, use of trsm to back out factors from $H$, etc.
Wimmer [22] extended the Parlett-Reid algorithm to the case of blocked updates, and also considered the case of a partial tridiagonalization (in which even-numbered columns of $T$ are non-zero below the sub-diagonal) which is sufficient for computation of the Pfaffian. The blocked version uses a rank-$2k$ skew-symmetric update (skr2k), $A \leftarrow A + \alpha(XY^T - YX^T)$. When applied in the partial tridiagonalization case with rank equal to half the block size, it requires $n^3/3$ FLOPs. In the more general case of full tridiagonalization, $2n^3/3$ FLOPs are required. Notably, only Wimmer has explicitly considered the case of skew-symmetric rather than symmetric matrices.
Wimmer implemented the blocked right-looking algorithm in PFAPACK [23], although it makes use of an unoptimized version of skr2k. Later, Xu et al. [25] re-implemented this algorithm with an optimized skr2k based on gemmt in the Pfaffine library [24].
Figure 2: Unblocked right-looking algorithm (ltlt_unb_right) and unblocked left-looking algorithm (ltlt_unb_left). The right-looking algorithm performs an extra initialization step when the first column is included (first_col); each variant applies its algorithm-specific update within the skeleton of Fig. 1.
III Algorithms
In this section we review the algorithms which were previously derived [17] and are implemented here. We particularly note how the specific features of the various algorithms impact implementation efficiency and optimizations. We also briefly discuss how the initial problem definition, in terms of the PME and choice of algorithmic invariant, affects the resulting algorithm. All algorithms share a common structure, depicted in Fig. 1. The partitioning of the matrices $A$, $L$, and $T$ reflects the initial partitioning of the operands (for most algorithms), while the structure used within the loop update reflects the overlay of two partitionings (“before” and “after”) where the same invariant is required to hold, but which are shifted by some number of rows and columns in order to make progress through the matrix. Importantly, comparison of the conditions implied by the invariant at the beginning (“before”) and at the end (“after”) of the loop body uniquely determines the update step(s).
Figure 3: Blocked right-looking algorithm (ltlt_blk_right) and blocked left-looking algorithm (ltlt_blk_left).
III-A Unblocked Algorithms
Fig. 2 depicts the “basic” right- and left-looking unblocked algorithms, which are equivalent to the algorithms of Parlett and Reid, and of Aasen, respectively. The algorithms as written contain some extra provision for the case in which the first column of $L$ (below the diagonal) is not assumed to be zero, as it typically is when performing a full factorization. In fact, as Rozložník et al. pointed out [13], the first column of $L$ may be initialized arbitrarily, leading to distinct but functionally equivalent factorizations. When used to solve the panel sub-problem in a blocked algorithm, this first column will typically contain the last (and usually non-zero) elimination vector from the previous panel. However, whether or not this column should be considered in the sub-problem depends on the structure of the blocked algorithm (see Fig. 3).
The left-looking algorithm performs a matrix-vector multiply each iteration, which is the leading-order cost in terms of FLOPs. While the intermediate tridiagonal-vector product can be computed separately in $O(n)$ time, we denote the entire matrix-tridiagonal-vector multiplication as a single fused operation gemv-sktri. At iteration $i$, this then incurs approximately $2i(n-i)$ FLOPs and hence a total leading-order cost of $n^3/3$ FLOPs. Instead, the right-looking algorithm performs a skew-symmetric rank-2 update (skr2) every iteration at a cost of approximately $2(n-i)^2$ FLOPs, for a total leading-order cost of $2n^3/3$ FLOPs. Because the unblocked algorithm deals only with a “tall-and-skinny” trapezoidal region of $A$ for a blocked sub-problem, the trailing region of the skr2 operation becomes a general rank-2 update (denoted here as ger2).
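To make the fused operation concrete, a minimal (scalar, sequential) sketch of gemv-sktri is given below; the interface, the convention t[k] = T(k+1,k), and the function name are illustrative assumptions rather than our actual kernel:

#include <cstddef>

// Sketch of gemv-sktri: y := y + alpha * A * (T*x), where T is skew-symmetric
// tridiagonal with sub-diagonal t (t[k] = T(k+1,k), so T(k,k+1) = -t[k]).
// Each element of the intermediate w = T*x is formed on the fly from at most
// two elements of x and t, so no separate pass or temporary vector is needed.
void gemv_sktri(std::size_t m, std::size_t n, double alpha,
                const double* A, std::size_t ldA,
                const double* t, const double* x, double* y)
{
    for (std::size_t j = 0; j < n; ++j)
    {
        double wj = 0.0;
        if (j > 0)     wj += t[j-1] * x[j-1];  // T(j,j-1)*x(j-1)
        if (j + 1 < n) wj -= t[j]   * x[j+1];  // T(j,j+1)*x(j+1) = -t[j]*x(j+1)
        double awj = alpha * wj;

        // axpy: y += awj * A(:,j)
        for (std::size_t i = 0; i < m; ++i)
            y[i] += awj * A[i + j*ldA];
    }
}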
Figure 4: “Two-step” right-looking unblocked algorithm and fused blocked right-looking algorithm (ltlt_blk_fused).
Wimmer [22] halved the cost of the right-looking algorithm by skipping the elimination of every other column, leading to a partially tridiagonal structure of $T$. This structure is sufficient for computing the Pfaffian but not for inversion or solving systems of equations. We derive a roughly similar algorithm as the “two-step” right-looking algorithm in Fig. 4. This algorithm performs a single rank-2 update for every iteration (i.e. every two rows/columns), thus yielding the same leading-order cost as Wimmer’s partial factorization. However, the two-step algorithm does in fact compute every column of $L$ (and hence a fully tridiagonal $T$). This example illustrates the potential for reducing computation through operation fusion (in this algorithm, the updates from the two computed columns of $L$ are naturally combined), and importantly, for enabling and identifying such opportunities through an alternative arrangement of the problem definition (the PME, invariant, and iteration conditions) and subsequent formal analysis, rather than through manual post-optimization.
III-B Blocked Algorithms
A blocked right-looking algorithm was first proposed by Rozložník et al. [13] and was implemented in LAPACK [26]. This algorithm has significant similarity to our algorithm, depicted in Fig. 3, with the most significant difference being the need to maintain a partial block of the Hessenberg intermediate $H = LT$ in the former. Instead, our algorithm directly exposes the skew-symmetry of the block update step as an operation with the form of a symmetric rank-$k$ update, but with a skew-symmetric tridiagonal matrix inserted in the middle of the product. We denote this operation as gemmt-sktri due to the fact that pre-combining $LT$ or $TL^T$ leads to a gemmt (general matrix-matrix multiplication updating only the upper or lower half), but with implicit generation of the Hessenberg factor from a tridiagonal-times-matrix multiplication. As $H$ is also not required in the unblocked sub-problem, it can be omitted entirely, reducing workspace and FLOPs. The leading-order cost for both our algorithm and Rozložník’s is only $n^3/3$ FLOPs; this is in contrast to the blocked Parlett-Reid algorithm introduced by Wimmer, which uses a skew-symmetric rank-$2k$ update (skr2k) and hence requires twice as many FLOPs. All blocked algorithms derived to date utilize the left-looking unblocked (Aasen) algorithm to factorize each panel. When pivoting, this is necessary as right-looking updates are only applied within the panel: columns pivoted in from outside the panel would then not satisfy the necessary pre-condition. For completeness, we also implement blocked unpivoted algorithms which use a right-looking unblocked algorithm for the sub-problem. However, we opt not to use our new two-step algorithm for blocked sub-problems.
A blocked left-looking algorithm was also derived in [17] and is depicted in Fig. 3. While Yamazaki et al. hint at the possibility of a left-looking blocked Aasen algorithm [26], they do not provide any algorithm nor, to our knowledge, has any implementation been made. Like the right-looking algorithm, it requires $n^3/3$ FLOPs, but features a general matrix-matrix product (gemm) to update the next panel of $A$. While incorporating the intermediate Hessenberg product does not perturb the skew-symmetry of the matrix or the efficiency of the gemm, it does require additional storage and memory accesses to perform the tridiagonal-matrix product. We denote the combined general matrix-tridiagonal-matrix product as gemm-sktri. The left-looking algorithm is conventionally considered lower-performing than the right-looking algorithm due to the reduced level of parallelism available in the matrix-matrix product compared to the rank-$k$ update of the right-looking algorithm. Additionally, the left-looking blocked algorithm cannot incorporate partial pivoting, as the panel update cannot be applied until pivots are chosen and vice versa.
Finally, we implement the novel blocked right-looking algorithm which starts with a more fine-grained partitioning of the PME and invariant [17]. This algorithm “shifts” the computation by one row/column, resulting in the factorization of one additional column during the panel update, and hence allows the trailing update to be computed entirely as a single fused gemmt-sktri operation. As with the two-step algorithm, an alternative setup of the problem followed by formal analysis exposes additional opportunities for fusion and optimization.
IV Implementation
Most of the blocked algorithms mentioned above, as well as the unblocked left-looking and “two-step” right-looking algorithms, achieve a leading-order FLOP count of $n^3/3$, with the main exception being the blocked right-looking algorithm of Wimmer. In analogy to the Cholesky and $LDL^T$ decompositions, it is highly unlikely that a full tridiagonalization can be accomplished in fewer than $n^3/3$ FLOPs (but this remains to be proven). Thus, optimizing the implementation of these algorithms must focus on reducing memory traffic overhead and increasing parallel scaling, rather than reducing FLOPs.
We focus on optimization through operation fusion, which enables a greater number of FLOPs to be performed with fewer total reads and writes. Beyond raising the average arithmetic intensity, such fusion may also combine a compute-bound operation (high arithmetic intensity, such as gemm) with a memory-bound operation (low arithmetic intensity, such as ger), which can asymptotically eliminate the entire cost of the latter operation.
Many of these fusion opportunities are enabled by defining new combined BLAS-like operations, which we implement either via in-house code or as extensions to the BLIS framework, which additionally provides pre-optimized computational kernels, tuned cache blocking, and parallelization.
IV-A Storage of the $L$ and $T$ matrices
Similarly to other DLA routines, such as those implemented in LAPACK, it is often desirable to reuse the storage of the input matrix $A$ for the decomposed form. For an $LTL^T$ decomposition, the strictly lower or upper elements of $A$ are available, which can exactly hold the unique non-unit/non-zero elements of $L$ and $T$. For example, the lower part of $L$ (except the leading and presumably zero column) may be stored, “shifted” left by one column, below the sub-diagonal of $A$, with the elements of $T$ stored on the sub-diagonal. Because the algorithms here do not require any additional arrays or matrices, they could then be implemented without any external workspace (apart from the pivot vector). However, we note that the updates often explicitly reference the diagonal of $L$, which is implicitly one: for example, the trailing update in the blocked right-looking algorithm references the most recently computed column of $L$, including its unit diagonal element.
When stored as described above, this “1” (the unit diagonal element of a column of $L$) would refer to the same storage location as the corresponding sub-diagonal element of $A$, which in turn physically stores an element of $T$. This value cannot simply be temporarily overwritten with an explicit 1, as it is also referenced in the central tridiagonal part of the update. One option is to split the update into separate steps so that the unit diagonal element is never referenced explicitly.
Instead, we opt to store the sub-diagonal elements of $T$ in an external vector, and to store explicit 1’s on the sub-diagonal of $A$ as columns of $L$ are computed. This allows the entire update to be computed using a single, fused gemmt-sktri operation. Interestingly, the choice of the storage location of the $L$ and $T$ matrices is expressible as a part of the PME and invariants by imposing additional conditions. Dependency analysis of the resulting logical expressions at the “before” and “after” points can then illuminate when such fusion optimizations can and cannot be performed. Storing $T$ as an external array also allows its elements to be accessed contiguously.
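The resulting layout can be summarized with a pair of accessors; the 0-based, column-major convention and the names below are an illustrative sketch of the scheme described above, not the exact code we use:

#include <cstddef>
#include <vector>

// Sketch of the in-place storage convention (0-based, column-major).  The
// strictly sub-diagonal part of column j of L (j >= 1) is stored shifted one
// column to the left, i.e. at A(i, j-1) for i > j, so that the unit diagonal
// L(j,j) coincides with the sub-diagonal entry A(j, j-1).  The sub-diagonal
// of T is kept in an external vector t, and explicit 1's are written to
// A(j, j-1) as columns of L are computed.
struct ltl_storage
{
    double*             A;    // n x n input matrix, overwritten in place
    std::size_t         n, ldA;
    std::vector<double> t;    // t[k] = T(k+1, k), k = 0 .. n-2

    // Element L(i,j) for i > j >= 1 (the first column of L is implicitly zero).
    double& L(std::size_t i, std::size_t j) { return A[i + (j-1)*ldA]; }

    // Sub-diagonal element T(k+1, k); T(k, k+1) = -t[k] by skew-symmetry.
    double& T(std::size_t k) { return t[k]; }
};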
IV-B Level 2 BLAS and BLAS-like operations
While the leading-order cubic cost of the blocked algorithms arises mostly from level-3 BLAS or BLAS-like operations, the unblocked algorithm which operates within blocks works entirely within level-1 and level-2 operations and, as can be seen in our results, accounts for a considerable portion of the running time. We apply two optimizations to these operations. First, we include OpenMP parallelization of all level-2 operations (including the application of blocks of pivots, see e.g. Fig. 3). As most BLAS libraries, including BLIS, do not parallelize level-2 operations, we implement in-house versions, although we use BLIS kernels where applicable. Second, we have implemented “fused” versions of those level-2 operations not appearing in standard BLAS (we count skr2 as a standard operation for the present purposes since it is a minor modification of syr2). These new operations include gemv-sktri and ger2. In the latter case, a speedup approaching a factor of two may be expected due to accessing the output matrix only once, while in the former case we can only expect a very slight speedup over a naive implementation which first computes the tridiagonal-vector product (with $O(n)$ memory accesses) and then the general matrix-vector product ($O(n^2)$ accesses), rather than the entire fused product (also $O(n^2)$ accesses) all at once. Where possible, we apply loop unrolling and jamming (e.g. computing multiple axpys with a single loop) in order to maximize register and prefetch pipeline usage.
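As an illustration of the fused level-2 idea, a minimal sketch of ger2 with a simple column-wise OpenMP parallelization is shown below; the interface and scheduling are assumptions for exposition, while our actual kernels additionally employ the unrolling/jamming and BLIS kernels mentioned above:

#include <cstddef>

// Sketch of a fused general rank-2 update (ger2):
//   A := A + alpha*x1*y1^T + beta*x2*y2^T
// Fusing the two rank-1 updates means each element of A is read and written
// once instead of twice, which is the source of the ~2x speedup for this
// memory-bound operation.
void ger2(std::size_t m, std::size_t n,
          double alpha, const double* x1, const double* y1,
          double beta,  const double* x2, const double* y2,
          double* A, std::size_t ldA)
{
    #pragma omp parallel for schedule(static)
    for (std::size_t j = 0; j < n; ++j)
    {
        double a = alpha * y1[j];
        double b = beta  * y2[j];
        double* Aj = A + j*ldA;
        for (std::size_t i = 0; i < m; ++i)
            Aj[i] += a*x1[i] + b*x2[i];
    }
}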
IV-C Level 3 BLAS-like operations
Level-3 BLAS operations are typically compute-bound, performing $O(n^3)$ FLOPs for only $O(n^2)$ memory accesses (in fact, any level-3 implementation must perform more than $O(n^2)$ accesses on one matrix, usually the output matrix, although in practice these accesses are carefully prefetched and amortized such that the actual overhead is dominated by accessing the other two matrices in $O(n^2)$ time). Thus, adding $O(n^2)$ overhead, for example to compute the Hessenberg intermediate $H$ as $LT$ explicitly, does not seem like a limitation. However, the skew-symmetric tridiagonal-times-matrix product is, as with level-2 operations, entirely memory-bound. In order to fuse these operations together, we take advantage of the flexibility of the BLIS framework, where user-defined “packing” operations can be specified, to create a custom BLAS-like operation. Packing is a typical part of gemm (and other level-3) implementations, and serves to load blocks of the input matrices into a special storage format which provides linear, vectorized access in the inner kernel as well as better TLB reuse. Because packing already necessitates accessing memory in the input matrices, we compute the tridiagonal-times-matrix product on-the-fly and store it into the packed buffer. This custom packing operation has a negligible increase in cost over standard packing because, although each element of the product requires reading two elements of the input matrix (due to the skew-symmetric tridiagonal structure), the accesses can take advantage of high temporal locality. The rest of the operation is handled without modification by BLIS, yielding the same overall performance as standard level-3 operations.
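The essence of the custom packing can be sketched as follows; for clarity the output is written to a plain column-major buffer rather than BLIS’s packed micro-panel format, and all names and conventions are illustrative assumptions:

#include <cstddef>

// Sketch of forming W = T*X on the fly while a block is being packed, where T
// is skew-symmetric tridiagonal with sub-diagonal t (t[i] = T(i+1,i)).  Each
// element of W needs only two adjacent elements of the same column of X (and
// two of t), so the extra cost over a plain packing copy is negligible.
void pack_sktri_times(std::size_t k, std::size_t n,
                      const double* t,               // length k-1
                      const double* X, std::size_t ldX,
                      double* W, std::size_t ldW)
{
    for (std::size_t j = 0; j < n; ++j)
    {
        const double* Xj = X + j*ldX;
        double*       Wj = W + j*ldW;
        for (std::size_t i = 0; i < k; ++i)
        {
            double w = 0.0;
            if (i > 0)     w += t[i-1] * Xj[i-1];   // T(i,i-1)*X(i-1,j)
            if (i + 1 < k) w -= t[i]   * Xj[i+1];   // T(i,i+1)*X(i+1,j) = -t[i]*X(i+1,j)
            Wj[i] = w;
        }
    }
}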
IV-D C++ interface
A sample implementation of the column-pivoted blocked right-looking algorithm is depicted in Fig. 5. We make use of MArray [11], which is a header-only DLA library developed by us, similar in spirit to Eigen [8], Armadillo [14], Blitz++ [20, 21], and others. We have extended this library with a range-based partitioning API which expresses the partition–repartition pattern of FLAME. Ranges are strongly typed, with singleton ranges (corresponding to partitions containing only one column or row) typed as scalar integers, and others typed as a C++ class representing a half-open index range. Indexing matrix or vector types with such ranges produces a result which depends on the range types, yielding a sub-matrix, sub-vector, or single scalar entry. Finally, BLAS and BLAS-like operations (besides simple scalar-vector operations, which use the expression template functionality of MArray) use custom wrapper functions such that matrix lengths, strides, and other properties can be provided by the relevant objects. Even for unblocked algorithms, which are sensitive to overhead in terms of branches, function calls, and additional memory accesses (register spills, updating structs, etc.), typical compilers (in our case, gcc 11.4.0) are capable of optimizing the combination of while loop and range partition–repartition pattern into the same assembly as a plain for loop, with the exception of a small number of spurious conditional moves. As these instructions are spurious (referring to conditions already known), they can be perfectly predicted. This observation matches our profiling data, which shows negligible overhead in the ltlt_unb_var function bodies.
Figure 5: Sample C++ implementation of the column-pivoted blocked right-looking algorithm using the range-based partitioning interface of MArray.
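Since the listing of Fig. 5 cannot be reproduced here, the following sketch conveys the flavor of the range-based partition–repartition pattern; the tiny range class and loop skeleton are hypothetical stand-ins for the MArray API rather than the actual interface:

#include <algorithm>
#include <cstddef>

// Flavor of the range-based partition--repartition pattern described in the
// text.  Indexing a matrix with (range, range) would yield a sub-matrix view,
// with (integer, range) a sub-vector, and with (integer, integer) a scalar.
struct range
{
    std::size_t from, to;                    // half-open interval [from, to)
    std::size_t size() const { return to - from; }
};

template <typename Matrix>
void ltlt_blocked_sketch(Matrix& A, std::size_t n, std::size_t block_size)
{
    range done{0, 0}, rest{0, n};            // "processed" and "remaining" partitions

    while (rest.size() > 0)
    {
        std::size_t nb = std::min(block_size, rest.size());
        range R1{rest.from, rest.from + nb}; // repartition: expose the next nb rows/columns
        range R2{R1.to, rest.to};

        // Algorithm-specific update (omitted): factorize the panel spanning R1
        // with an unblocked algorithm, apply block pivots, then perform the
        // level-3 trailing update of A(R2, R2), e.g. via gemmt-sktri.

        done = range{0, R1.to};              // continue with the next iteration
        rest = R2;
    }

    (void)A; (void)done;                     // the real code indexes A with these ranges
}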
V Results
We performed a series of performance experiments using a system with AMD EPYC 7763 processors (64 cores per socket) and 512 GiB of 8-channel (per socket) DDR4-3200 RAM. At the base frequency of 2.45 GHz, this provides 5.02 TFLOPs of peak double precision performance (2.51 TFLOPs per socket), although actual frequency depends on thermal load and available boost up to 3.5 GHz.
We compare our implementations to equivalent or computationally similar operations provided by several packages. We use standard LAPACK routines from Intel MKL 2022.2.1, OpenBLAS 0.3.26, and AMD AOCL 4.2. We also use factorizations from PFAPACK and Pfaffine compiled from source using the same gcc 11.2.0 compiler as used for our code. We use a pre-release version of BLIS 2.0 for implementing the skew-symmetric and fused BLAS operations. We use OpenMP for parallelization, and pin threads to cores using the OMP_PLACES=cores and OMP_PROC_BIND=close environment variables. In all cases we measure performance for double precision matrices, and the default schedutil frequency governor is used due to technical limitations.
V-A Impact of optimizations
In order to investigate the impact of the several optimizations described in Section IV, we measured the performance of a sequence of algorithms in which the optimizations are applied in “steps”:
- Step 0: Baseline implementation, without any of the optimizations below.
- Step 1: Fused level-2 operations gemv-sktri and ger2 are introduced, without parallelization.
- Step 2: All level-2 operations as well as block pivoting are parallelized.
- Step 3: $T$ is stored in an external vector and fused updates are used which reference (unit) diagonal elements of $L$.
- Step 4: Fused level-3 operations gemm-sktri and gemmt-sktri are introduced.
- Step 5: The fused blocked right-looking algorithm of Fig. 4 is used.
Optimizations at each step are applied cumulatively. The performance of the blocked right-looking/unblocked left-looking algorithm (with and without pivoting) at each step is presented in Fig. 6, using one socket (64 threads).
Three different algorithmic block sizes are tested: 128 (top row), 256 (middle row), and 512 (bottom row). Successively larger block sizes imply more time spent in the unblocked panel factorization, but also more aggregation into level-3 trailing updates. Step 1 has a very small positive impact, as might be expected from the fusion of an essentially level-1 step into the gemv operation. Step 2 has a much larger effect, particularly at large core counts, as approximately 10 cores are required to fully saturate the memory bandwidth. This effect grows with algorithmic block size due to the larger amount of work in level-2 operations. The impact of Step 3 is also very small. At this step, essentially a gemv-sktri is fused with an “adjacent” gemmt-sktri (at the blocked level; fusion at the unblocked level is less important). As the input sub-matrices for the gemv-sktri come only from the current block, they typically reside in L2 or L3 cache. The vector to update is typically in main memory, but is only $O(n)$ in size. This is in contrast to Steps 4 and 5, where the operations which are fused operate on $O(n^2)$ data. The effect of these optimizations is much larger. The effect of fusing the trailing skr2 update (Step 5) is particularly pronounced for small block sizes. Cumulatively, these optimizations increase performance by a factor of roughly 3–5 in the unpivoted and pivoted cases.
V-B Parallel scalability
We measure the parallel scalability of our algorithms by performing a strong scaling experiment from one to all 128 cores of the system for a single skew-symmetric factorization. The results are presented in Fig. 7, with unpivoted and pivoted blocked algorithms in the top and bottom panels, respectively. All algorithms involve non-trivial memory-bound work, primarily in the unblocked sub-problem and in pivoting. Additionally, the unfused blocked right-looking algorithm performs an $O(n^2)$ level-2 skr2 operation which adds considerable overhead (see Section V-A). The impact of these operations can be clearly seen in the parallel scaling, as memory bandwidth only increases with core count up to approximately 10 cores, and latency is significantly impacted for inter-CCD and especially inter-socket memory traffic. The unblocked algorithms experience a noticeable drop in parallel efficiency (which is proportional to per-core performance) from 1 to 16 or 32 threads. Some of this is attributable to a drop in processor frequency due to thermal management. The single-core results certainly show an effect of frequency boosting, as the base-clock single-core peak is only 39.2 GFLOPs, while the peak at max boost frequency is 56 GFLOPs. Assuming max boost, the fused right-looking algorithm (with or without pivoting) achieves a large fraction of single-core peak.
Figure 7: Strong scaling from 1 to 128 cores for the unpivoted (a) and pivoted (b) blocked algorithms, and performance relative to MKL gemm (c).
In order to measure this effect, we consider the MKL implementation of gemm as the “speed of light” and present the relative performance of our algorithms in Fig. 7c. Indeed, the relative performance for 16+ threads is higher than indicated by panel a), although only slightly. The difference grows with core count, showing a progressive drop in processor frequency, although significant boosts are evident even for 32 or more cores. From panel c), the biggest parallel efficiency drops are at 16 cores, likely due to exceeding a single 8-core cluster (CCD), and then at 64 threads and beyond. Based on profiling data, the main effect is due to increased time in level-2 operations, indicating main memory bandwidth as the limiting factor (inter-cache bandwidth would already be affected at core counts greater than 8 due to inter-CCD communication). In the two-socket case, the impact of NUMA remote accesses is likely an additional factor. The scalability of the pivoted algorithms is even more severely affected. In particular, we measure the application of block pivots (see Fig. 3 for example) as the largest overhead. This is primarily because the columns of $A$ to pivot are stored in column-major layout, while rows are swapped. Thus, accesses “waste” most of each cache line loaded and TLB reuse is poor, especially for very large matrices. We attempt to optimize stores by blocking across the row dimension to increase temporal locality. Overall, the best unpivoted algorithm achieves 41% parallel efficiency (53% of gemm) at 64 cores, while the best pivoted algorithm achieves only 24% (29% of gemm). We observe similarly poor scalability for existing pivoted factorizations, e.g. pivoted Cholesky (pstrf) and Bunch-Kaufman symmetric indefinite factorization (sytrf), although data is not shown here.
Figure 8: Performance comparison to related operations and existing implementations: a) Cholesky (potrf), b) pivoted Cholesky (pstrf) and LU (getrf), c) symmetric tridiagonalization (sytrf and sytrf_aa), and d) Wimmer's blocked right-looking algorithm as implemented in PFAPACK and Pfaffine.
V-C Comparison to other algorithms
In Fig. 8 we compare the performance of our implementation to related algorithms as well as to existing implementations of skew-symmetric tridiagonal factorization. In Fig. 8ab, we compare to common unpivoted and pivoted operations which share computational and algorithmic similarities with the $LTL^T$ factorization, but which are expected to be highly optimized given their ubiquity in scientific computing. Panel a) shows a comparison to Cholesky decomposition (potrf). Cholesky decomposition incurs the same number of FLOPs as our algorithms, but is expected to be more efficient due to several factors which we do not take advantage of. The first is recursive decomposition (more than the two levels employed by us), which is necessary to achieve the communication/IO lower bound [5]. The second is the use of level-3 BLAS operations (trsm in the case of Cholesky) to update the trailing portion of blocks of the triangular factor. While the MKL implementation does exceed our performance comfortably, we find that the fused right-looking algorithm with left-looking unblocked updates is competitive with the OpenBLAS Cholesky implementation and much more efficient than that in AOCL. The unfused right-looking algorithm, as well as algorithms which use unblocked right-looking updates, experience a significant drop in performance. This is almost exclusively due to the poor parallel performance of skr2. We use a non-uniform thread partitioning scheme, but load imbalance and cache conflicts still significantly impact performance. The left-looking blocked algorithm, even though it does not perform an additional level-2 operation at the blocked level, falls behind the right-looking algorithm. As mentioned above, this is at least in part due to reduced parallelism in block updates, depending on the stage of the algorithm. In panel b), we compare to pivoted Cholesky (pstrf) and column-pivoted $LU$ decomposition (getrf). Both algorithms exhibit similar pivoting requirements and other algorithmic properties, although $LU$ decomposition performs twice as many FLOPs. We see that both of our pivoted algorithms achieve significantly better performance than pstrf, but are in turn outperformed by getrf. As getrf is one of the most-used DLA operations, it is unsurprisingly highly optimized in all major libraries. Additionally, getrf can make use of recursive blocking and trsm for updates to $U$ (as with Cholesky), and can aggregate most of the row swaps due to pivoting while also not requiring complementary column swaps.
In comparison to symmetric tridiagonal factorizations (sytrf and sytrf_aa, Fig. 8c), our algorithms achieve uniformly and significantly higher performance than all implementations, with the exception of MKL’s sytrf for small matrix sizes. In large part, this difference is not so much attributable to algorithmic deficiencies in these algorithms (for example, the Rozložník algorithm implemented in sytrf_aa should have similar intrinsic performance to our fused blocked right-looking algorithm), but rather to a lack of optimization in the implementation. The difference between the MKL and other sytrf implementations highlights this effect. While we applied several optimizations in our work which might not be leveraged in the lower-performing implementations, the use of a flexible, expressive, and performant C++ interface, along with the ability to enable and expose optimization opportunities through formal derivation methods, is immediately applicable to many (if not almost all) other DLA routines in LAPACK and beyond. Thus, the use of our techniques should significantly lower the barrier to producing highly-efficient DLA implementations. Finally, in Fig. 8d we compare our algorithms to implementations of Wimmer’s blocked right-looking algorithm from PFAPACK and Pfaffine. These implementations are very similar, with the latter switching from Fortran to C++ and replacing the reference, unoptimized skr2k of PFAPACK with calls to an optimized version, presumably implemented in terms of gemmt. This difference is highly evident from the results, where PFAPACK does not exceed 5.8 GFLOPs, compared to a peak of 135 GFLOPs for Pfaffine. Both implementations also perform twice as many FLOPs as our algorithms, thus a reduction in FLOPs plus the other optimizations applied here (parallelization of level-2 operations and fusion) accounts for the observed performance difference.
Overall, these results validate the optimizations and implementation strategy that we have applied, as the observed performance is roughly comparable to existing optimized implementations of related operations, and much higher than existing implementations of the specific $LTL^T$ factorization (symmetric or skew-symmetric).
VI Conclusions
We have implemented a family of algorithms for computing the tridiagonal factorization of a skew-symmetric matrix, $A = LTL^T$. We employed optimizations, primarily operation fusion, at multiple levels, along with an expressive yet performant C++ interface which closely follows the notation and philosophy of the FLAME methodology. The optimizations employed are shown to have a significant impact on performance, increasing efficiency by a factor of 3–5, depending on whether partial pivoting is performed or not. The unpivoted algorithms attain performance across an entire AMD EPYC 7763 processor (64 cores) similar to that of highly tuned implementations of related factorizations such as the Cholesky decomposition. The pivoted algorithms significantly improve on the performance of existing symmetric and skew-symmetric tridiagonalization routines. While some work remains on increasing parallel scalability, particularly beyond a single socket (and to some extent, beyond the 8-core cluster group or CCD featured in AMD “Milan” processors), the observed performance validates the current approach, which is focused on the use of expressive APIs that clearly connect to mathematical notation and formal derivation methods, and the use of extended BLAS-like operations implemented with the flexible BLIS framework. Beyond providing a best-in-class implementation of the specific skew-symmetric factorization, this approach is immediately extensible to a wide range of DLA operations, where it has the promise to significantly lower the barrier to creating highly efficient yet easily understandable implementations.
References
- [1] Jan Ole Aasen. On the reduction of a symmetric matrix to tridiagonal form. BIT Numer. Math., 11(3):233–242, September 1971.
- [2] Cleve Ashcraft, Roger G. Grimes, and John G. Lewis. Accurate symmetric indefinite linear equation solvers. SIAM Journal on Matrix Analysis and Applications, 20(2):513–561, 1998.
- [3] Michal Bajdich and Lubos Mitas. Electronic structure quantum Monte Carlo, August 2010. arXiv:1008.2369 [cond-mat, physics:physics].
- [4] Grey Ballard, Dulceneia Becker, James Demmel, Jack Dongarra, Alex Druinsky, Inon Peled, Oded Schwartz, Sivan Toledo, and Ichitaro Yamazaki. Implementing a Blocked Aasen’s Algorithm with a Dynamic Scheduler on Multicore Architectures. In 2013 IEEE 27th International Symposium on Parallel and Distributed Processing, pages 895–907, May 2013. ISSN: 1530-2075.
- [5] Grey Ballard, James Demmel, Olga Holtz, and Oded Schwartz. Communication-optimal parallel and sequential cholesky decomposition: extended abstract. In Proceedings of the Twenty-First Annual Symposium on Parallelism in Algorithms and Architectures, SPAA ’09, page 245–252, New York, NY, USA, 2009. Association for Computing Machinery.
- [6] Paolo Bientinesi, Enrique S. Quintana-Ortí, and Robert A. van de Geijn. Representing linear algebra algorithms in code: the FLAME application program interfaces. ACM Trans. Math. Softw., 31(1):27–59, March 2005.
- [7] Jack J Dongarra, Iain S Duff, Danny C Sorensen, and Henk A Van der Vorst. Numerical linear algebra for high-performance computers. SIAM, 1998.
- [8] Gaël Guennebaud, Benoît Jacob, et al. Eigen v3. http://eigen.tuxfamily.org, 2010.
- [9] John A. Gunnels, Fred G. Gustavson, Greg M. Henry, and Robert A. van de Geijn. FLAME: Formal Linear Algebra Methods Environment. ACM Trans. Math. Softw., 27(4):422–455, December 2001.
- [10] Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, and Xiaoou Tang. Deep Learning Markov Random Field for Semantic Segmentation, August 2017. arXiv:1606.07230 [cs].
- [11] Devin A. Matthews. MArray. http://github.com/devinamatthews/marray, 2024.
- [12] Beresford N Parlett and JK Reid. On the solution of a system of linear equations whose matrix is symmetric but not definite. BIT Numer. Math, 10(3):386–397, 1970.
- [13] Miroslav Rozložník, Gil Shklarski, and Sivan Toledo. Partitioned Triangular Tridiagonalization. ACM Trans. Math. Softw., 37(4):38:1–38:16, February 2011.
- [14] Conrad Sanderson and Ryan Curtin. Armadillo: a template-based c++ library for linear algebra. Journal of Open Source Software, 1(2):26, 2016.
- [15] Creighton K. Thomas and A. Alan Middleton. Exact Algorithm for Sampling the 2D Ising Spin Glass. Physical Review E, 80(4):046708, October 2009. arXiv:0906.5519 [cond-mat].
- [16] Robert van de Geijn and Maggie Myers. Formal Derivation of LU Factorization with Pivoting, April 2023. arXiv:2304.03068 [cs].
- [17] Robert van de Geijn, Maggie Myers, RuQing G. Xu, and Devin A. Matthews. Deriving algorithms for triangular tridiagonalization of a (skew-)symmetric matrix, November 2023. arXiv:2311.10700 [cs].
- [18] Field G. Van Zee, Tyler M. Smith, Bryan Marker, Tze Meng Low, Robert A. Van De Geijn, Francisco D. Igual, Mikhail Smelyanskiy, Xianyi Zhang, Michael Kistler, Vernon Austel, John A. Gunnels, and Lee Killough. The BLIS Framework: Experiments in Portability. ACM Trans. Math. Softw., 42(2):12:1–12:19, June 2016.
- [19] Field G. Van Zee and Robert A. van de Geijn. BLIS: A Framework for Rapidly Instantiating BLAS Functionality. ACM Trans. Math. Softw., 41(3):14:1–14:33, June 2015.
- [20] Todd L. Veldhuizen. Arrays in Blitz++. In Denis Caromel, Rodney R. Oldehoeft, and Marydell Tholburn, editors, Computing in Object-Oriented Parallel Environments, number 1505 in Lecture Notes in Computer Science, pages 223–230. Springer Berlin Heidelberg, December 1998.
- [21] Todd L. Veldhuizen. Blitz++: The Library that Thinks it is a Compiler. In Hans Petter Langtangen, Are Magnus Bruaset, and Ewald Quak, editors, Advances in Software Tools for Scientific Computing, pages 57–87, Berlin, Heidelberg, 2000. Springer Berlin Heidelberg.
- [22] M. Wimmer. Efficient numerical computation of the Pfaffian for dense and banded skew-symmetric matrices. ACM Transactions on Mathematical Software, 38(4):1–17, August 2012. arXiv:1102.3440 [cond-mat, physics:physics].
- [23] Michael Wimmer. PFAPACK. https://michaelwimmer.org/downloads.html, 2012.
- [24] RuQing G. Xu. Pfaffine. https://github.com/xrq-phys/Pfaffine, 2022.
- [25] RuQing G. Xu, Tsuyoshi Okubo, Synge Todo, and Masatoshi Imada. Optimized implementation for calculation and fast-update of Pfaffians installed to the open-source fermionic variational solver mVMC. Computer Physics Communications, 277:108375, August 2022.
- [26] Ichitaro Yamazaki and Jack Dongarra. Aasen’s Symmetric Indefinite Linear Solvers in LAPACK, 2017. LAPACK Working Note #294.