Run time as a function of number of variables and number of constraints

Hi OSQP authors and the larger community,

Is there some complexity analysis out there that could relate the number of flops to the number of variables (N) and number of constraints (M) fed to the ADMM solver?

Looking for something simple, e.g. O(N*M).

I’m sure the real answer is a lot more sensitive to the problem specification.

Many thanks!

There is no useful general answer, because it depends very strongly on the density and location of the nonzeros in your problem.

You could make a worst-case estimate of the per iteration time, which would be something like O((m+n)^2), with a further one-time factorisation cost of roughly O((m+n)^3).
These estimates are highly likely to be far worse than the performance actually observed on a sparse problem, and they also don't account for the number of iterations.
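To make the worst-case bounds concrete, here is a minimal sketch (the function name and the example sizes are my own, purely illustrative) of what the dense estimates above work out to for a given problem size:

```python
# Crude dense worst-case flop estimates for an ADMM QP solver like OSQP.
# Order-of-magnitude only; sparse problems behave far better, and the
# number of ADMM iterations is not modelled at all.

def worst_case_flops(n: int, m: int) -> dict:
    """Dense worst-case estimates for n variables and m constraints."""
    k = n + m  # size of the KKT system the solver factorises
    return {
        "factorisation": k**3,   # one-time factorisation, O((m+n)^3)
        "per_iteration": k**2,   # triangular solves each iteration, O((m+n)^2)
    }

est = worst_case_flops(n=1000, m=500)
print(est["factorisation"])  # 3375000000
print(est["per_iteration"])  # 2250000
```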

Now if you know the sparsity pattern of your problem at least, then you could compute a symbolic factorisation of your problem data and work out exactly the number of operations for both numerical factorisation and per-iteration estimates. I don’t think that’s what you’re asking though.
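As a rough sketch of what such a symbolic factorisation looks like (this is my own toy implementation of the standard elimination-tree shortcut for symbolic Cholesky, not anything from the OSQP codebase), the factor's nonzero count can be computed from the sparsity pattern alone, without any numerical values:

```python
def symbolic_cholesky_nnz(pattern, n):
    """Count nonzeros of the Cholesky factor L from a sparsity pattern.

    pattern: set of (i, j) pairs with i >= j (lower triangle incl. diagonal).
    Uses the standard symbolic-elimination shortcut: when column j is
    eliminated, its below-diagonal pattern is merged into the column of its
    first below-diagonal row (its parent in the elimination tree).
    """
    cols = {j: set() for j in range(n)}  # below-diagonal rows per column
    for i, j in pattern:
        if i > j:
            cols[j].add(i)
    nnz = n  # diagonal entries of L
    for j in range(n):
        rows = sorted(cols[j])
        nnz += len(rows)
        if rows:
            # fill-in propagates to the parent column
            cols[rows[0]].update(rows[1:])
    return nnz

n = 6
# "Arrow" matrix with the dense row/column LAST: no fill-in at all.
good = {(i, i) for i in range(n)} | {(n - 1, j) for j in range(n - 1)}
# Same matrix reordered with the dense row/column FIRST: complete fill-in.
bad = {(i, i) for i in range(n)} | {(i, 0) for i in range(1, n)}
print(symbolic_cholesky_nnz(good, n))  # 11  (= 2n - 1, stays sparse)
print(symbolic_cholesky_nnz(bad, n))   # 21  (= n(n+1)/2, fully dense)
```

The arrow-matrix example illustrates the point made above: the same nonzeros, permuted differently, give wildly different operation counts, which is why no simple O(N*M)-style formula can capture real performance.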

Thanks a lot - I would love to learn more about complexity analysis under various sparsity patterns. Is there a resource out there you could point me to?

I don’t recall if either addresses complexity of factorisations explicitly, but the texts by either Iain Duff or Tim Davis are a good place to start.
