Help with performance/accuracy issues for MPC

I am having trouble getting good performance out of OSQP on an MPC problem, in terms of both accuracy and speed. I suspect it may have to do with how I am setting up the solver.

My first attempt with OSQP was to take the exact problem I was using with qpOASES, pass it to OSQP, and use the default settings. This uses the condensed formulation described in equations 3 and 4 here, which doesn't include the states as optimization variables. The accuracy of OSQP was acceptable with default settings, but it was around 2x slower than qpOASES, with occasional large spikes to 10x slower. I tried tuning alpha and rho, but nothing I tried helped. This worked well in simulation, but poorly on hardware because of the large spikes in solve time.
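For concreteness, this is roughly what I mean by the condensed formulation: the dynamics are stacked into prediction matrices so that the states are eliminated and only the inputs remain as decision variables. A minimal numpy sketch on a toy double-integrator (not my real model; the function name `condense` is my own):

```python
import numpy as np

def condense(A, B, N):
    """Stack x_{k+1} = A x_k + B u_k over horizon N so that
    X = Sx @ x0 + Su @ U, with X = (x_1, ..., x_N) and
    U = (u_0, ..., u_{N-1})."""
    nx, nu = B.shape
    Sx = np.zeros((N * nx, nx))
    Su = np.zeros((N * nx, N * nu))
    Ak = np.eye(nx)
    for k in range(N):
        Ak = A @ Ak  # A^{k+1}
        Sx[k * nx:(k + 1) * nx, :] = Ak
        for j in range(k + 1):
            # block (k, j) is A^{k-j} B: effect of u_j on x_{k+1}
            Su[k * nx:(k + 1) * nx, j * nu:(j + 1) * nu] = (
                np.linalg.matrix_power(A, k - j) @ B)
    return Sx, Su

# Toy double-integrator with dt = 0.1; the condensed QP Hessian would
# then be H = Su.T @ Qbar @ Su + Rbar (dense, inputs-only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Sx, Su = condense(A, B, N=3)
```

The resulting Hessian is small but fully dense, which is why this variant tends to favor dense active-set solvers like qpOASES.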

After checking the OSQP website here, I saw that the MPC example is set up with the sparse formulation, like equation 2 of the paper linked above, where the dynamics are part of the constraints. I set up my optimization exactly like that example.

With the sparse formulation, the accuracy got much worse, and it was even slower. The accuracy problem made the robot behave poorly in simulation, and it was too slow to try on hardware. I noticed that as my state variables grow (for instance, as the yaw angle increases from around 0 to around 3), the accuracy of OSQP suffers even more, and the values all become much smaller than they should be (roughly half the optimal values, as computed by qpOASES and MATLAB's quadprog). My optimization variables are not poorly scaled: they are meters (the robot moves tens of meters), radians (always within +/- 2 pi), meters per second (always less than 5), radians per second (less than 10), and Newtons (less than 100). My cost function is a least-squares error from a desired state, with weights between 0.2 and 20, so I think this is not poorly scaled either.
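To show exactly how I set up the sparse version, here is a hedged numpy sketch of the constraint structure from the OSQP MPC example: the decision vector keeps both states and inputs, z = (x_0, ..., x_N, u_0, ..., u_{N-1}), and the dynamics become equality constraints (the helper name `dynamics_constraints` is my own, and dense matrices are used here only for clarity; the real problem would use sparse ones):

```python
import numpy as np

def dynamics_constraints(Ad, Bd, N, x_init):
    """Build A_eq z = b_eq enforcing x_0 = x_init and
    x_{k+1} = Ad x_k + Bd u_k for k = 0..N-1, with
    z = (x_0, ..., x_N, u_0, ..., u_{N-1})."""
    nx, nu = Bd.shape
    nz = (N + 1) * nx + N * nu
    Aeq = np.zeros(((N + 1) * nx, nz))
    beq = np.zeros((N + 1) * nx)
    Aeq[:nx, :nx] = np.eye(nx)          # initial condition x_0 = x_init
    beq[:nx] = x_init
    for k in range(N):
        r = (k + 1) * nx
        Aeq[r:r + nx, k * nx:(k + 1) * nx] = Ad            # + Ad x_k
        Aeq[r:r + nx, (k + 1) * nx:(k + 2) * nx] = -np.eye(nx)  # - x_{k+1}
        u0 = (N + 1) * nx
        Aeq[r:r + nx, u0 + k * nu:u0 + (k + 1) * nu] = Bd  # + Bd u_k
    return Aeq, beq
```

The matrices stay very sparse and the Hessian is block-diagonal, but the problem has many more variables than the condensed version.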

This is what OSQP prints when solving this type of problem. I have decreased eps_abs and eps_rel for this example, but the accuracy still causes issues, and the speed is also slow compared to qpOASES.

problem:  variables n = 168, constraints m = 200
          nnz(P) + nnz(A) = 768
settings: linear system solver = qdldl,
          eps_abs = 1.0e-04, eps_rel = 1.0e-04,
          eps_prim_inf = 1.0e-04, eps_dual_inf = 1.0e-04,
          rho = 1.00e-01 (adaptive),
          sigma = 1.00e-06, alpha = 1.60, max_iter = 4000
          check_termination: on (interval 25),
          scaling: on, scaled_termination: off
          warm start: on, polish: off

iter   objective    pri res    dua res    rho        time
   1  -8.1495e+01   2.52e+00   1.20e+04   1.00e-01   5.97e-04s
 200  -3.4787e+02   1.37e-01   4.98e-03   1.28e-03   2.11e-03s
 350  -3.4819e+02   6.26e-03   1.70e-03   1.28e-03   3.61e-03s

status:               solved
number of iterations: 350
optimal objective:    -348.1899
run time:             3.63e-03s
optimal rho estimate: 7.70e-04

Time taken in osqp_solve(workspace);  -> 3.705 ms (6.23x slower than ref)

To verify I was setting up the problem correctly, I decreased eps_abs until the solution was accurate, but at that point OSQP was much slower than qpOASES (around 15x). However, the robot did work as expected, and the solution agreed with qpOASES and MATLAB.
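For reference, this is roughly the settings change involved, written as a Python-interface sketch (I actually use the C API, where the same fields are set on the settings struct; the exact values here are illustrative, not the ones from my runs):

```python
# Illustrative OSQP settings showing the accuracy/speed trade-off I hit:
# tightening eps_abs/eps_rel makes the solution match qpOASES and MATLAB,
# but the iteration count (and solve time) grows substantially.
settings = {
    "eps_abs": 1e-6,      # tightened from the 1.0e-04 in the log above
    "eps_rel": 1e-6,
    "warm_start": True,
    "polish": False,      # off in the log above
    "max_iter": 4000,
}
# These would be passed to the Python interface as
#   prob.setup(P, q, A, l, u, **settings)
# with P, q, A, l, u built from the MPC problem.
```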

My MPC problem has 12 states, 12 inputs, a 10-timestep horizon, and 20 input constraints per timestep. (The input constraints are not poorly scaled either; the bound values are between 0.4 and 1.0.)

Is there a recommended setting I should try tuning? Or a better way to formulate the problem?

If needed, I can provide example problems, simulation results, or code where I have issues, just let me know the format that is best.


You appear to be getting convergence in 350 iterations, which is not unusual for this solver. It can sometimes help to increase the interval before the first rho update, but I don't think that is the issue here, since you already reached a solution in a reasonable number of iterations. Three suggestions:

  1. You have polishing disabled. If you turn it on and it succeeds (it doesn't always), the solution accuracy will be much better.

  2. You could try running the solver with the PARDISO linear solver instead of the default QDLDL one. That might improve the per-iteration solve time.

  3. You are solving an MPC problem, so I don't think a single cold-start QP solve time is the best metric for you. It would be better to run your system closed-loop and warm start OSQP from the previous solution at each new solve. You are likely to observe much lower iteration counts before convergence if you do that.
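On point 3, a common warm-starting scheme is to shift the previous solution one stage forward and duplicate the last stage. A minimal numpy sketch for the sparse variable ordering z = (x_0, ..., x_N, u_0, ..., u_{N-1}) (the function name is illustrative; in the Python interface the result would be passed via `prob.warm_start(x=...)`, and the C API has an equivalent call):

```python
import numpy as np

def shift_solution(z, nx, nu, N):
    """Shift an MPC solution one timestep forward as a warm start:
    drop stage 0 and repeat the final stage for both states and inputs."""
    xs = z[:(N + 1) * nx].reshape(N + 1, nx)
    us = z[(N + 1) * nx:].reshape(N, nu)
    xs_shift = np.vstack([xs[1:], xs[-1]])  # drop x_0, repeat x_N
    us_shift = np.vstack([us[1:], us[-1]])  # drop u_0, repeat u_{N-1}
    return np.concatenate([xs_shift.ravel(), us_shift.ravel()])
```

With a short sampling period, the shifted solution is usually close to the new optimum, so the ADMM iterations needed per solve drop sharply.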