There isn’t a single estimate that is useful, because the answer depends very strongly on the density and location of the nonzeros in your problem.
You could make a worst-case estimate of the per-iteration time, which would be something like O((m+n)^2), plus a one-time factorisation cost of roughly O((m+n)^3).
These estimates are highly likely to be terrible and nothing like the actual performance observed on a sparse problem. They also say nothing about the number of iterations required.
Now, if you at least know the sparsity pattern of your problem, then you could compute a symbolic factorisation of the problem data and work out exactly the number of operations for both the numerical factorisation and the per-iteration solves. I don’t think that’s what you’re asking, though.
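To see how far the dense worst case can be from reality, here is a rough sketch (the matrix size, density, and diagonal shift are all made up for illustration) that factorises a random sparse matrix with SciPy’s SuperLU wrapper and compares the fill-in of the factors against the dense nonzero count:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in for a sparse (m+n) x (m+n) system; values are arbitrary.
N = 2000
A = sp.random(N, N, density=0.001, format="csc", random_state=0)
# Symmetrise and shift the diagonal so the matrix is nonsingular.
A = (A + A.T + 10.0 * sp.identity(N)).tocsc()

# Numerical LU factorisation; nnz of the factors is a rough proxy
# for the work a sparse direct solver actually does.
lu = spla.splu(A)
fill = lu.L.nnz + lu.U.nnz

print(f"nnz(A) = {A.nnz}")
print(f"nnz(L) + nnz(U) = {fill}")
print(f"dense worst case N^2 = {N * N}")
```

For a problem this sparse, the fill-in of the factors is typically orders of magnitude below N^2, which is why the O((m+n)^3) bound is so pessimistic; the gap depends entirely on where the nonzeros sit.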