Towards large scale linear planning

In this post I will look at how large a linear program we can expect to be able to solve on a modern computer cluster. I will only consider linear programs related to economic planning, meaning programs that are very sparse. I will be using the local supercomputer center HPC2N and its Kebnekaise cluster as my reference. I assume the reader is familiar with the Message Passing Interface (MPI). I use the term work here rather than time, and it is to be understood as the total amount of computation done by all nodes. The amount of time necessary depends on the number of nodes and how they are connected. The same goes for memory: I don't always specify where everything is stored, only the total amount of memory used by the entire cluster.

In a previous post I laid down the rough shape of the linear programs in question. I will expand on it slightly. Each column in the constraint matrix now represents one production method. Each workplace could have many methods available, and each column could represent one particular machine. The average number of non-zeroes per column is bounded by some constant $q$.

Let $S$ denote the entire system matrix, including non-negativity constraints and the constraint for the objective function. Finding a solution $x \in \mathbb{R}^{n \times 1}$ that is within some given percentage of optimum involves repeatedly solving the following system

$$H^{(i)} h^{(i)} = S^T \Lambda_{(i)}^{-2} S \, h^{(i)} = -S^T \Lambda_{(i)}^{-1} \mathbf{1} = -g^{(i)}$$

where $\Lambda_{(i)}$ is diagonal and changes between steps. Sizes are as follows:

$$H^{(i)} \in \mathbb{R}^{n \times n},\quad S \in \mathbb{R}^{m \times n},\quad \Lambda_{(i)} \in \mathbb{R}^{m \times m},\quad m > n$$

The number of solves necessary for $L$ bits of precision and $m$ constraints is somewhere between $L/2$ and $O(L\sqrt{m})$. The lower bound is my own empirical result and the upper bound is due to Renegar.

Because the system is symmetric and positive definite, the conjugate gradient (CG) method can be used. The number of CG iterations necessary depends on how large the global steps are. With a conservative $\delta$, few CG iterations are necessary per solve. If a lot of extrapolation is used, like always stepping 75% of the way to the nearest constraint, then many more CG iterations are necessary per solve. A practical solver balances these two. A cluster implementation may choose to try different step sizes in parallel, assigning each one to a subset of machines and seeing which makes the most progress.
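To make the inner solve concrete, here is a minimal Octave sketch of a single Newton step solved with Octave's pcg and a diagonal (Jacobi) preconditioner. The random $S$, the slack vector lam and all sizes are placeholders of my own choosing, not anything resembling a real plan:

q = 30; n = 1000; m = 2*n;
S = sprand(m, n, q/m);                         % placeholder system matrix
lam = rand(m, 1) + 0.1;                        % placeholder slacks, strictly positive
H = S' * spdiags(1 ./ lam.^2, 0, m, m) * S;    % H = S'*Lambda^-2*S
g = S' * (1 ./ lam);                           % g = S'*Lambda^-1*1
d = full(diag(H));                             % diagonal (Jacobi) preconditioner
[h, flag, relres, iters] = pcg(H, -g, 1e-6, 2000, spdiags(d, 0, n, n));
printf("flag=%d relres=%g CG iterations=%d\n", flag, relres, iters);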

For an empirical lower bound using a diagonal preconditioner and large steps I have found 2000 CG iterations to be enough.

For an upper bound we first need to know the number of Newton steps per bit. Renegar tells us that each step with $\delta = 1/(13\sqrt{m})$ shrinks the objective function by a factor $(1 - 1/(28\sqrt{m}))$. Together with the fact that we're solving both the primal and dual programs, this means $56\ln(2)\sqrt{m}$ Newton steps per bit. Each Newton step in turn requires some number of CG iterations that depends on the condition number of $H^{(i)}$ and the quality of the preconditioner.

We can derive how many CG iterations are necessary to solve each Newton step to within machine precision, for single-precision floating point numbers, using the following:

$$\left(1 - \frac{2}{\sqrt{\kappa(M^{-1}H^{(i)})}}\right)^{j} = 2^{-23} \implies j \approx \frac{23}{2}\ln(2)\sqrt{\kappa(M^{-1}H^{(i)})} \approx 8\sqrt{\kappa(M^{-1}H^{(i)})}$$

Here $\kappa$ denotes the condition number of a matrix and $M$ is whatever preconditioner we're using. It is assumed that $\lVert h^{(i)} \rVert$ does not change too much between steps. This obviously does not hold for $h^{(0)}$, which therefore likely requires more CG iterations. Due to the nature of floating point numbers it cannot require more than 256 times as many CG iterations as the average $h^{(i)}$ does, and therefore it amortizes.

Taking all of this together, we expect the number of CG iterations $j_{\max}$ needed to solve our linear program to $L$ bits of accuracy to be bracketed by

$$2000 \le j_{\max} \le 309\, L \sqrt{\kappa(M^{-1}H)\, m}$$

where κ is understood as the average condition number across all steps.

Preconditioner

I mentioned preconditioners earlier. Preconditioning is an important way to speed up the conjugate gradient method. A preconditioner is an operator that approximates the inverse of the system to be solved. Maintaining a good preconditioner cheaply is where much of the algorithmic progress in interior point methods happens today.

Diagonal preconditioner

A diagonal preconditioner is one of the simplest preconditioners available. It is simply the diagonal of $H^{(i)}$, which can be computed with $O(qn)$ work and then trivially applied. Because it is so cheap we compute it before every Newton step.
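The diagonal can be formed directly from $S$ without building $H^{(i)}$, since $d_j = \sum_i S_{ij}^2/\lambda_i^2$. A short Octave sketch (placeholder $S$ and lam again):

q = 30; n = 1000; m = 2*n;
S = sprand(m, n, q/m);  lam = rand(m, 1) + 0.1;
d = (S .^ 2)' * (1 ./ lam.^2);   % one pass over the non-zeroes of S, O(qn) work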

Incomplete Cholesky factorization

The incomplete (sparse) Cholesky factorization is a much better preconditioner than the diagonal, but it is also more costly to compute. First note that thanks to the structure of $S$, forming $H^{(i)}$ explicitly is not too expensive. Assuming the non-zeroes of $S$ are distributed uniformly at random, then by the binomial distribution it can be shown that

$$\mathrm{nnz}(H^{(i)}) \approx n + (n^2 - n)\left(1 - (1 - q/m)^{q}\right) \approx n + (n^2 - n)\, q^2/m \approx n^2 q^2 / m$$

when $q \ll m$. The following Octave program demonstrates this:

q = 30;
for n=2.^(10:18)
 m = 2*n;
 S = sprand(m,n,q/m);
 B = S'*S;
 est = n^2*q^2/m;
 printf("n=%6i ratio=%f\n", n, nnz(B)/est);
endfor

Which outputs the following:

n=  1024 ratio=0.812704
n=  2048 ratio=0.897969
n=  4096 ratio=0.948767
n=  8192 ratio=0.974487
n= 16384 ratio=0.988381
n= 32768 ratio=0.995698
n= 65536 ratio=0.998564
n=131072 ratio=1.000623
n=262144 ratio=1.001329

As $n$ (and $m$) grows the estimate improves. This means $H^{(i)}$ is quite sparse, and memory use is $O(q^2 n)$ when $m \propto n$. Computing it involves $O(qn^2)$ work. The sparsity pattern of $H^{(i)}$ does not change, which means computing $H^{(0)}$ amortizes and subsequent $H^{(i)}$ require only $O(q^3 n)$ work ($O(q)$ per non-zero).

It may be possible to speed up the computation of $H^{(0)}$ by exploiting its symmetry and the structure of $S$. The columns and rows of $S$ may be ordered and indexed such that large swaths of $H^{(0)}$ can quickly be deduced to be zero. It might be possible to cluster $S$ so that most of the information in $H^{(0)}$ is close to the diagonal.

Once we have $H^{(i)}$ we can use it to compute the incomplete Cholesky factorization $L^{(i)} L^{(i)T} \approx H^{(i)}$ with $O(\mathrm{nnz}(H)\, n) = O(q^2 n^2)$ work. Such a preconditioner only remains useful for so long however, and $L^{(i)}$ should be recomputed as the solver progresses. Since we're doing all this in parallel, what we're really talking about is $L^{(i-\tau)}$, meaning Cholesky factorizations that arrive slightly "late". We can update these "late" preconditioners with low-rank adjustments using the Woodbury matrix identity, something that is often done in the literature on interior point methods.
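As an illustration, Octave ships an ichol routine that can be plugged straight into pcg. A minimal sketch on placeholder data; note that zero fill-in incomplete Cholesky can break down for some matrices, in which case a diagonal shift is a common fallback:

q = 30; n = 1000; m = 2*n;
S = sprand(m, n, q/m);  lam = rand(m, 1) + 0.1;
H = S' * spdiags(1 ./ lam.^2, 0, m, m) * S;
g = S' * (1 ./ lam);
try
  L = ichol(H);                      % zero fill-in incomplete Cholesky
catch
  L = ichol(H + 1e-3 * speye(n));    % shifted variant if IC(0) breaks down
end
[h, flag, relres, iters] = pcg(H, -g, 1e-6, 2000, L, L');
printf("flag=%d relres=%g CG iterations=%d\n", flag, relres, iters);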

I will only use the diagonal preconditioner for the rest of this post since it is much easier to analyze. I found the result for the density of H useful enough to leave this section intact.

Parallel CG iteration

Assume a cluster of P=M×N nodes. Each CG iteration involves computing

$$q^{(j)} = S^T \Lambda_{(i)}^{-2} S \, p^{(j)}$$

and applying the preconditioner

$$z^{(j)} = D_{(i)}^{-1} r^{(j)}$$

where $D_{(i)}$ is diagonal. There are many ways to parallelize this; what I show here is merely one of them. Note also that in addition to these computations there are global calls to MPI_Allreduce for scalars like $\alpha^{(j)}$ and $\beta^{(j)}$, plus some trivial node-local vector arithmetic. This gets absorbed in the big-O notation.

Splitting of matrices

$S$ is split into blocks $S_{k,l}$, one for each node, with $1 \le k \le M$ and $1 \le l \le N$. Each node also has $\Lambda_{(i),k}^{-2}$, the block on the diagonal of $\Lambda_{(i)}^{-2}$ corresponding to the horizontal strip $S_{k,*}$.

Communicators

In addition to the global communicator MPI_COMM_WORLD there are two more sets of communicators: rowcomm and colcomm. Each rowcomm groups the nodes belonging to one horizontal strip of $S$, meaning $\mathrm{rowcomm}_k \leftrightarrow S_{k,*}$. For colcomm it is similar, but with vertical strips: $\mathrm{colcomm}_l \leftrightarrow S_{*,l}$.

[Figure: the communicators and the blocks $S_{k,l}$]

Computing D(i)

Computing $D_{(i)}$ involves $O(qn/P)$ node-local work and a call to MPI_Allreduce along each colcomm. Because the amount of work and communication is trivial and amortizes, it makes no difference big-O wise.
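A serial Octave sketch of this, with the MPI_Allreduce along each colcomm replaced by an explicit sum over $k$ (grid shape and data are placeholders):

q = 30; n = 1024; m = 2*n;  M = 4; N = 2;     % placeholder grid
S = sprand(m, n, q/m);  lam = rand(m, 1) + 0.1;
rows = m/M;  cols = n/N;
d = zeros(n, 1);
for k = 1:M
  for l = 1:N
    ri = (k-1)*rows+1 : k*rows;  ci = (l-1)*cols+1 : l*cols;
    dl = (S(ri,ci) .^ 2)' * (1 ./ lam(ri).^2);  % node-local partial diagonal
    d(ci) += dl;                                % stands in for MPI_Allreduce over colcomm_l
  end
end
assert(norm(d - (S.^2)' * (1 ./ lam.^2)) < 1e-9 * norm(d));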

Computing q(j)

For practical reasons the multiplication must be broken up into two parts that look as follows:

$$t^{(j)} = \Lambda_{(i)}^{-2} S \, p^{(j)}$$

$$q^{(j)} = S^T t^{(j)}$$

Then each node performs the following series of operations:

  • $t_k = \Lambda_{(i),k}^{-2} S_{k,l} \, p_l$
  • $t_k = \mathrm{MPI\_Allreduce}(t_k, \mathrm{MPI\_SUM}, \mathrm{rowcomm}_k)$
  • $q_l = S_{k,l}^T \, t_k$
  • $q_l = \mathrm{MPI\_Allreduce}(q_l, \mathrm{MPI\_SUM}, \mathrm{colcomm}_l)$

Note how each communicator only has to deal with a subset of t and q . This turns out to be important for being able to scale the system up.
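To convince myself the blocking is correct, here is a serial Octave simulation of the four steps above, with the two MPI_Allreduce calls replaced by explicit sums over $l$ and $k$ respectively (grid shape and data are again placeholders):

qc = 30; n = 1024; m = 2*n;  M = 4; N = 2;      % placeholder grid
S = sprand(m, n, qc/m);  lam = rand(m, 1) + 0.1;  p = randn(n, 1);
rows = m/M;  cols = n/N;
t = zeros(m, 1);  qv = zeros(n, 1);
for k = 1:M                       % t_k = sum over l of Lambda_k^-2 * S_{k,l} * p_l
  ri = (k-1)*rows+1 : k*rows;
  for l = 1:N
    ci = (l-1)*cols+1 : l*cols;
    t(ri) += (S(ri,ci) * p(ci)) ./ lam(ri).^2;   % sum stands in for rowcomm_k allreduce
  end
end
for l = 1:N                       % q_l = sum over k of S_{k,l}' * t_k
  ci = (l-1)*cols+1 : l*cols;
  for k = 1:M
    ri = (k-1)*rows+1 : k*rows;
    qv(ci) += S(ri,ci)' * t(ri);                 % sum stands in for colcomm_l allreduce
  end
end
assert(norm(qv - S' * ((S*p) ./ lam.^2)) < 1e-9 * norm(qv));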

Complexity

The amount of computation per node is $O(\mathrm{nnz}(S)/P) = O(qn/P)$, assuming $S$ is split up perfectly among them. This can usually be accomplished by swapping rows and/or columns and adjusting the shape of the grid.

For communication we need to know how MPI_Allreduce works. A pessimistic estimate is that it is implemented by pairwise summation in a hypercube pattern. In each rowcomm, $t_k$ is summed in $\log_2 N$ exchanges, each one involving $m/M$ elements. Similarly for colcomm. Let $K$ be the time, in seconds, necessary to transmit one value between two nodes in the cluster. Then the time used by each pair of MPI_Allreduce calls is $K (m/M) \log_2 N + K (n/N) \log_2 M$.

If we take the above and let $M \approx N \approx \sqrt{P}$, then we get the following overall complexity:

$$O\!\left(\frac{qn}{P} + K\frac{n}{\sqrt{P}}\log P\right) = \tilde{O}\!\left(\frac{qn}{P} + K\frac{n}{\sqrt{P}}\right)$$

We therefore expect linear speedup for small clusters, transitioning into $\sqrt{P}$ speedup for large clusters.
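A small Octave helper for playing with this cost model. The per-node throughput F (non-zeroes processed per second) and the per-value transmission time K are hypothetical placeholders, not calibrated against real hardware; the example numbers are in the same ballpark as the Results section below:

1;  % script, not a function file
% time per CG iteration: two passes over S's non-zeroes plus two allreduces
function T = cg_iter_time (q, n, m, M, N, F, K)
  T_compute = 2 * q * n / (M * N) / F;
  T_comm    = K * (m / M) * log2(N) + K * (n / N) * log2(M);
  T = T_compute + T_comm;
endfunction

% example with made-up F and K on a 28x14 grid
printf("estimated CG iteration time: %.2f s\n", ...
       cg_iter_time(160, 23e9, 46e9, 28, 14, 1e10, 1e-9));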

Strip based splitting

Another possibility is to split $S$ into horizontal and vertical strips, $S_{k,*}$ and $S_{*,l}$ respectively. This turns out to be worse than the method presented above except for very small clusters, so I only present it briefly.

The per-node operations look as follows:

  • $t_k = \Lambda_{(i),k}^{-2} S_{k,*} \, p^{(j)}$
  • $t^{(j)} = \mathrm{MPI\_Allgather}(t_k)$
  • $q_l = S_{*,l}^T \, t^{(j)}$
  • $q^{(j)} = \mathrm{MPI\_Allgather}(q_l)$

This means $t_k$ and $q_l$ are computed at each node and then MPI_Allgather'd into $t^{(j)}$ and $q^{(j)}$ on all nodes. Allgather is assumed to be implemented by pairwise exchange of exponentially growing collections of $t_k$'s. The resulting complexity is as follows:

$$O\!\left(\frac{qn}{P} + K\sum_{k=0}^{\log_2 P} 2^k \frac{n}{P}\right) = O\!\left(\frac{qn}{P} + Kn\right)$$

This is clearly worse than the block-level splitting presented earlier, except when $P$ is very small and $\log_2 P$ dominates. Another way to view the communication bound is that every node must receive $n + m$ elements per iteration, which clearly takes $O(Kn)$ time.

Results

The following table summarizes my estimates of how large a system can be solved on Kebnekaise for three of the node types in the cluster: Compute, Compute-skylake and Large Memory. For those interested in the details, the LibreOffice spreadsheet used can be downloaded here. I have chosen the average number of non-zeros per column to be $q = 160$, just like in previous posts, meaning the number of inputs and outputs per production method.

                                  Compute    Compute-skylake    Large Memory
M (#horizontal strips)            28         10                 6
N (#vertical strips)              14         5                  3
P (#nodes, M*N)                   392        50                 18
m (#constraints)                  46*10⁹     12*10⁹             79*10⁹
n (#variables)                    23*10⁹     6.2*10⁹            39*10⁹
Estimated CG iteration time (s)   9.25       3.28               29.45

Keep in mind these are estimates. To truly test this I would need to apply for a small project allocation via SNIC.

The most fruitful appears to be the Compute nodes, which are capable of handling 23,000,000,000 variables in less than ten seconds per iteration. With the previously stated 2000 empirical iterations per solve this amounts to five hours of computation time. Very comfortable. Most of this is just sending data around; only ~5% of the time is spent actually computing. This suggests that a cluster optimized for this kind of problem should have a much faster interconnect.

The upper bound is trickier since we don't know the condition number $\kappa(D^{-1}H)$. Let's guess it is 1000. In that case the number of iterations for 7 bits of accuracy is 15 billion, or 4300 years (!). Clearly we have to hope that $S$ is not structured in a way that prevents predictor-corrector methods from being used, or else plans of this size cannot be computed even with exascale clusters.
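For reference, the arithmetic behind that figure, using only the bound derived earlier, the Compute column of the table and the guessed $\kappa$:

L = 7;  kappa = 1000;  m = 46e9;  t_iter = 9.25;   % 7 bits, guessed kappa, Compute nodes
j_max = 309 * L * sqrt(kappa * m);                 % upper bound from earlier
printf("j_max = %.3g iterations, %.0f years\n", ...
       j_max, j_max * t_iter / (3600*24*365));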

One thing to keep in mind is that we are likely to reuse previous solutions, only perturbing the system in ways like tightening/slackening constraints or adding/removing production methods. Solving such a perturbed system is much cheaper than what has been presented here.

For those curious how long a post like this takes to write, I started writing this one on 2022-01-25. So about a week and a half off and on.

Addendum (2022-02-07)

After going through the literature some more I came across the paper A lower bound on the number of iterations of long-step primal-dual linear programming algorithms (doi:10.1007/BF02206818) by Michael J. Todd and Yinyu Ye, which provides a worst-case bound on the number of predictor-corrector steps of $\Omega(\sqrt[3]{m})$. If I understand their results correctly, the duality gap shrinks by a factor of 3 every $K = \min\!\left(\delta\sqrt[3]{m},\ (1-\gamma)^2 n/(8\alpha)\right)$ steps, where $\delta = 25/(32\alpha)$, $\alpha \ge 4$ and $1/2 < \gamma < 1$. For sufficiently large linear programs this simplifies to $K \approx \tfrac{25}{128}\sqrt[3]{m}$. Combining this with previous results in here we get:

$$j_{\max} \approx \frac{25}{16\log_2 3}\, L\, \sqrt[3]{m}\, \sqrt{\kappa(M^{-1}H)}$$

Using the same κ=1000 guess and L=7 bits we get 780,000 iterations or 84 days. Not bad! Keep in mind this is an upper bound. In practice predictor-corrector does even better.
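The same arithmetic again, with the same guessed $\kappa$ and Compute-node iteration time:

L = 7;  kappa = 1000;  m = 46e9;  t_iter = 9.25;
j_max = 25 / (16 * log2(3)) * L * m^(1/3) * sqrt(kappa);
printf("j_max = %.3g iterations, %.0f days\n", j_max, j_max * t_iter / 86400);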

I've also been reading up on Katta Murty's sphere methods. The zero-radius version is identical to an idea I had that appeared to run in $O(n^{\omega+1})$ (strongly polynomial) time. I couldn't prove this, and by studying Murty's references I came across one paper that proves it takes exponential time for certain types of linear programs, just like the simplex method. Bummer.

Murty's more sophisticated sphere methods are based on a centering strategy that involves only projections and linear programs in two dimensions. They involve very few, if any, matrix inversions / linear system solves. But the number of steps appears to be up to $2m$, which may be prohibitive communication-wise even if the steps themselves are cheap.