Inconsistencies w.r.t. adaptive_rho, scaling

Hello everyone,

we are currently trying to adapt osqp to our use case. However, configuring the solver correctly is proving difficult. I am attaching a problem instance that shows the following behaviour:

Rho     Adaptive_rho   Polish   Scaling   Iterations   Objective value
0.01    1              0        1         975          -2025.4739
0.01    0              0        0         1425         -2059.2177
0.01    1              0        0         4075         -1728.1199
0.01    1              0        1         5225         -1942.6292
0.01    1              1        1         5225         -1942.6292
0.01    0              1        1         1475         -2042.4353

As you can see, the objective value varies considerably depending on the settings. Also, in every instance where “polish” is turned on, the solver reports “polish unsuccessful”.

There have already been a few discussions here about sensitivity w.r.t. rho, but I wasn’t able to find a way to tune the parameters so that the solver returns the right objective value. Any help would be appreciated.

Thanks!

  #include "osqp.h"

  int main(void) {
    // Problem data in CSC format
    c_float P_x[5]  = {-0.0, -0.0, -0.0, -0.0, -0.0};
    c_int   P_i[5]  = {0, 1, 2, 3, 4};
    c_int   P_p[6]  = {0, 1, 2, 3, 4, 5};
    c_float A_x[13] = {1.0, 1.0, 1.0, -1.0, 1.0, -1.0, -13.820750661910893,
                       1.0, 1.0, -9.564788905220015, 1.0, 1.0, -1.0};
    c_int   A_i[13] = {0, 7, 1, 7, 2, 5, 7, 3, 6, 7, 4, 5, 6};
    c_int   A_p[6]  = {0, 2, 4, 7, 10, 13};
    c_float q[5]    = {-1.0, 1.0, 1.0, 0.2, 2.0};
    c_float l[8]    = {0.0, 0.0, 20.0, 0.0, 0.0, -OSQP_INFTY, 0.0, -OSQP_INFTY};
    c_float u[8]    = {OSQP_INFTY, OSQP_INFTY, 100.0, 100.0, OSQP_INFTY, 0.0, 0.0, 0.0};
    c_int P_nnz = 5;
    c_int A_nnz = 13;
    c_int n = 5;
    c_int m = 8;

    c_int exitflag = 0;
    OSQPWorkspace *work;
    OSQPSettings  *settings = (OSQPSettings *)c_malloc(sizeof(OSQPSettings));
    OSQPData      *data     = (OSQPData *)c_malloc(sizeof(OSQPData));

    // Populate data
    if (data) {
      data->n = n;
      data->m = m;
      data->P = csc_matrix(data->n, data->n, P_nnz, P_x, P_i, P_p);
      data->q = q;
      data->A = csc_matrix(data->m, data->n, A_nnz, A_x, A_i, A_p);
      data->l = l;
      data->u = u;
    }

    // Define solver settings as default, then override
    if (settings) osqp_set_default_settings(settings);
    //settings->polish_refine_iter = 100;
    settings->polish       = 1;
    settings->max_iter     = 10000;
    settings->scaling      = 1;
    settings->adaptive_rho = 0;
    settings->rho          = 0.01;

    // Setup workspace
    exitflag = osqp_setup(&work, data, settings);

    // Solve problem
    osqp_solve(work);

    return (int)exitflag;
  }

There are a few unrelated issues here.

Polishing: This is a post-processing step in which the solver tries to produce a higher-accuracy solution after terminating. It has no effect on the iteration count. Sometimes it is successful, sometimes not. It is more likely to succeed if you tighten the solver tolerances (i.e. set eps_rel to a smaller value).
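For example, something like this (just a sketch against the C API you are already using; the values are illustrative, and I believe the default for both tolerances is 1e-3):

  // Tighter tolerances give polishing a more accurate point to start from
  settings->polish  = 1;
  settings->eps_abs = 1e-5;  // illustrative; default is 1e-3
  settings->eps_rel = 1e-5;  // illustrative; default is 1e-3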

Scaling: This is a preprocessing step, and it is almost always better to have it enabled. Note that the scaling setting is an int, not a true/false flag, since it is the number of scaling iterations to use. You probably want something like 5-10 there (I think 10 is the default).

The usual configuration is your second-to-last row, i.e. with both scaling and adaptive_rho enabled. I think the issue is that you are using only 1 scaling iteration.
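In settings form that would be something like (a sketch against the same C API as your snippet):

  settings->scaling      = 10;  // number of scaling iterations, not a boolean
  settings->adaptive_rho = 1;
  settings->rho          = 0.01;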

Hi Paul,

thank you for your reply. We tried setting the number of scaling iterations to various values, including the default of 10. In all cases our main problem is the instability of the objective value: it can fluctuate quite substantially (up to 10%). At first we thought this was due to unsuccessful polishing, but it seems to be unrelated. Changing eps_rel didn’t help either. What is the best way to stabilise the objective value?

We are running the C implementation by the way, so the discrepancies cannot be due to bad interfacing with C or similar.

Any further advice is greatly appreciated.

Hi Paul, hi everyone,

any thoughts? We’ve played around with the settings quite a bit more but still couldn’t get the objective value to stabilise. Setting the primal/dual tolerances to 10^-8 helped a bit, but we still see deviations. We also benchmarked against other solvers, e.g. IPOPT and MOSEK; with MOSEK we get primal-dual convergence and a slightly different objective value. What is the best way to make sure that the objective value is stable and converges?

Thanks!

I don’t really understand what is going on, because whenever I try your problem (assuming reasonable solver settings) I get an objective value of about -2018, which appears to be the correct value. This is with, for example, (scaling = 10, eps_rel = 1e-4, adaptive_rho = true, rho = 1e-2), which converges in ~1000 iterations. I did this via both the Matlab and Julia interfaces, but I agree that if you call directly from C there shouldn’t be any weird interface problems.
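For reference, the C equivalents of those settings would be something like (a sketch; I ran via the Matlab and Julia interfaces, so treat this as approximate):

  osqp_set_default_settings(settings);
  settings->scaling      = 10;
  settings->eps_rel      = 1e-4;
  settings->adaptive_rho = 1;
  settings->rho          = 1e-2;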

There is no direct way to ‘stabilise’ the objective value, since we are really checking for (approximate) satisfaction of KKT-type conditions. Since your problem is strangely scaled (the first two components of the optimiser are quite large), it is perhaps not surprising to see some sensitivity in the objective value even when the KKT solution is quite good.
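One thing you can do is compare the KKT residuals across runs rather than only the objective. A sketch using the info struct from the C API (assumes #include <stdio.h>): two runs with comparably small pri_res/dua_res can still report noticeably different objectives on a badly scaled problem.

  // After osqp_solve(work): inspect the reported residuals, not just obj_val
  printf("status:  %s\n",   work->info->status);
  printf("iter:    %d\n",   (int)work->info->iter);
  printf("obj_val: %.6f\n", work->info->obj_val);
  printf("pri_res: %.3e\n", work->info->pri_res);
  printf("dua_res: %.3e\n", work->info->dua_res);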

You do not mention the platform. I suppose that you could be getting an initial rho update very early, since we choose the update point based on factorisation time. To make it happen at a fixed number of iterations, you could try (adaptive_rho_fraction = 0, adaptive_rho_interval = 50) to force the update at a deterministic iteration number.
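In settings form (a sketch; double-check the field names against your OSQP version):

  settings->adaptive_rho          = 1;
  settings->adaptive_rho_interval = 50;  // update rho every 50 iterations
  settings->adaptive_rho_fraction = 0;   // disable the time-based trigger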

Could you post the outputs you get with two different settings illustrating what you are seeing? Preferably two cases with scaling enabled (say 10 scaling iterations) that both report convergence but to different values. Please also include the platform and OSQP version. There is no need to set a very tight tolerance, I think; eps_rel = 1e-4 seems to be fine for me.