How to select eps_abs and eps_rel

Hi,

As others have mentioned in the forum, OSQP is a great product - many congratulations and thank you. My question is generic, so please forgive me if the answer is already out there in a textbook.

Can the primal and dual residuals be connected to the distance (in units of the objective function) from the optimal point?

I have a problem which is rather large and time-sensitive, so I'd like to solve it in as few iterations as possible, as long as I have some guarantee that I won't end up more than a given amount (which can itself be a function of the residuals / eps_abs / eps_rel) away from the optimum. If such a bound exists, it would help me specify eps_abs and eps_rel with confidence. Currently I am taking the naive approach of incrementally tightening the tolerances and observing the effect on the optimal objective, but this does not generalize.
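(A minimal sketch of what I mean by this sweep, using the OSQP Python interface; the small QP below is just a stand-in for my real problem data.)

```python
import numpy as np
import scipy.sparse as sparse
import osqp

# Hypothetical small QP standing in for the real problem:
#   min 0.5 x'Px + q'x   s.t.   l <= Ax <= u
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

# Re-solve with progressively tighter tolerances and watch how the
# reported objective value moves.
for tol in [1e-2, 1e-3, 1e-4, 1e-5, 1e-6]:
    prob = osqp.OSQP()
    prob.setup(P, q, A, l, u, eps_abs=tol, eps_rel=tol, verbose=False)
    res = prob.solve()
    print(f"tol={tol:.0e}  iters={res.info.iter:4d}  obj={res.info.obj_val:.10f}")
```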

I see something in Section 3.3.1 of https://web.stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf that may provide some direction, but I'm not sure whether it applies equally to the OSQP algorithm?

Thanks a lot

If by “distance to the optimal point” you are referring to the distance between the solution returned and the optimiser, then I think the answer is no. The reason is that one can construct an LP for which an epsilon change to the problem data results in an arbitrarily large change to the optimiser, with a corresponding arbitrarily small change in the objective value.

Thanks for your reply. Can you explain what you mean by the optimiser?
By distance I meant the difference between the returned optimal objective and the true global minimum.

By ‘optimiser’ I mean an argument that minimises the objective function. The optimal objective value (the ‘optimum’) is then the value of the objective function evaluated at an optimiser.

Thanks for clarifying. My question is about bounding the solution returned by OSQP in terms of objective value, using eps_abs and eps_rel. Say I'm doing portfolio optimisation and my objective is in dollars: using 1e-3 for eps_abs and eps_rel, and knowing the solution OSQP returned, can I say something like "the returned objective is not more than 1 million dollars off the true minimum"? Maybe I'm missing something obvious, thanks for your patience :slight_smile:

Still no. You could, for example, construct a problem which is not feasible but where the constraints are very close to feasibility. Suppose for simplicity that in this case the objective function is always 0.

The solver may return a point that it believes to be feasible, with objective value zero. The true answer is that the problem is infeasible, and (by convention) has infinite objective value.

Variations of the above could be used to construct problems where the cost you get is extremely far from the true value.

Thanks for the clarification and the great example, Paul. So essentially, from the point of view of bounding the objective's suboptimality, there's nothing we can say about the quality of the solution OSQP returns?

Instead of ADMM, if I were using an interior point method (please correct me if I'm wrong), I would see a primal objective (the one I'm interested in solving) and a dual objective, and as the iterations progress I would see the duality gap going to 0, thus giving me a lower bound.

With OSQP, I only see a single objective and the primal/dual residuals, so I was hoping an analogue of the duality gap would exist; otherwise it is very difficult to have confidence in the stopping criterion.
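(To illustrate the kind of thing I'm after: OSQP does return dual variables y alongside x, so one could evaluate the Lagrangian dual objective of min 0.5 x'Px + q'x s.t. l <= Ax <= u by hand. This is just a sketch of my own, not an OSQP feature, and the "dual" value is only a valid lower bound when the stationarity residual Px + q + A'y is essentially zero.)

```python
import numpy as np

def gap_estimate(P, q, A, l, u, x, y):
    """Heuristic primal/dual objective values from an OSQP-style solution (x, y).

    Assumes finite l and u (infinite bounds would need separate handling,
    since inf * 0 gives nan). The dual value is exact only if
    P x + q + A' y = 0, so check the returned stationarity residual
    before trusting the gap.
    """
    primal = 0.5 * x @ (P @ x) + q @ x
    y_plus = np.maximum(y, 0.0)      # multipliers pushing against Ax <= u
    y_minus = np.minimum(y, 0.0)     # multipliers pushing against l <= Ax
    dual = -0.5 * x @ (P @ x) - u @ y_plus - l @ y_minus
    stationarity = np.linalg.norm(P @ x + q + A.T @ y, np.inf)
    return primal, dual, primal - dual, stationarity
```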

For an IP solver you would see the same behaviour. The issue has nothing to do with the algorithm, but rather with the difficulty of detecting cases in which constraints overlap very slightly, producing either no feasible points or a feasible set that is very pointy (e.g. the intersection of two nearly parallel halfspaces).

If I try to solve this problem:
min x
s.t. x <= 1
(1+eps) <= x

with eps = 2e-16, I would expect to get a solution x^* ~ 1 from most solvers, even though the problem is infeasible. I just tried this in OSQP, SeDuMi, MATLAB's quadprog and GLPK, and all of them got it wrong.
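(For reference, a sketch of this toy LP in OSQP's standard form via the Python interface; the exact behaviour will depend on solver version and settings.)

```python
import numpy as np
import scipy.sparse as sparse
import osqp

eps = 2e-16

# min x   s.t.   x <= 1  and  (1 + eps) <= x, written as two separate rows
# so the (empty) intersection is not pre-collapsed into a single interval.
P = sparse.csc_matrix((1, 1))     # no quadratic term: this is an LP
q = np.array([1.0])
A = sparse.csc_matrix([[1.0], [1.0]])
l = np.array([-np.inf, 1.0 + eps])
u = np.array([1.0, np.inf])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u, verbose=False)
res = prob.solve()
print(res.info.status, res.x)     # may well report "solved" with x close to 1
```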

Paul,

Thanks for your patience. I agree that floating point precision is paramount; these nasty problems that teeter on the edge of machine precision would trick any algorithm.

Separately, for a feasible example, one that we know is feasible without running into machine precision issues, my question remains the same. Can you point me towards alternative algorithms, if any, that would provide a bound on the objective's suboptimality?
I'm not an expert; I'm just looking for a way to solve a constrained convex quadratic optimization problem while being able to say that the objective returned by the algorithm is within some known amount of the true minimum. It may be that OSQP / SeDuMi / MATLAB's quadprog isn't the right tool for that.

p.s. The interior point approach I mentioned in the previous comment generates a log like this: https://docs.mosek.com/9.2/pythonapi/solving-conic.html#the-interior-point-log where both the primal and dual objectives are reported, giving a duality gap.