# Variable bounds and constraints violation

Hi, I have just tried to use the OSQP solver in Julia for an LP and noticed that it frequently violates variable bounds and constraints. I know this solver is intended for QPs, but this page suggests it can be used for LPs as well: https://www.juliaopt.org/JuMP.jl/stable/installation/

My code and output are given below. Could you please advise how to ensure that variable bounds and constraints are not violated?

```julia
using JuMP, OSQP, LinearAlgebra

function Optimizer(Cost, Demand, Production)
    model = Model(OSQP.Optimizer)
    @variable(model, x[1:5] >= 0)
    @objective(model, Min, sum(Cost[i] * x[i] for i in 1:5))
    @constraint(model, Production * x .>= Demand)
    optimize!(model)
    return value.(x)
end

Cost = [99.74, 91.22, 98.71, 103.75, 97.15]
Production = [4 4 4 4 4 4 4 104; 5 5 5 5 5 5 105 0; 2.5 2.5 2.5 2.5 102.5 0 0 0; 5 5 5 5 5 105 0 0; 4 4 4 4 4 104 0 0]'
Demand = [5000, 7000, 7000, 6000, 8000, 7000, 2000, 1000]
Results = Optimizer(Cost, Demand, Production)
```
```julia
Answer = [9.264372775061187, 1388.0376416427164, 9.73315660695938, -0.07465346435688373, -0.00297960504755646]
```

The solver has returned a solution only to within the tolerance asked of it, which means that some of the constraints may have small violations.

You can do two things:

1. Tighten the tolerances. Something like this

```julia
set_optimizer_attribute(model, "eps_rel", 1e-5)
```

returns a higher-accuracy solution with much smaller violations.

2. Turn polishing on. Once the solver reaches the specified tolerance, it tries to identify the active constraints and compute a solution that is exact to machine precision. Do this:

```julia
set_optimizer_attribute(model, "polish", true)
```

This tends to work better in combination with tighter tolerances set for `eps_rel`. The combination of polishing and `eps_rel = 1e-5` worked for me on your problem.
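For reference, the two settings can be combined when building the model. I have also shown OSQP's absolute tolerance `eps_abs`, which is not discussed above but can be tightened alongside `eps_rel`; treat this as a sketch rather than a prescription:

```julia
using JuMP, OSQP

model = Model(OSQP.Optimizer)
set_optimizer_attribute(model, "eps_rel", 1e-5)  # relative termination tolerance
set_optimizer_attribute(model, "eps_abs", 1e-5)  # absolute termination tolerance (not discussed above)
set_optimizer_attribute(model, "polish", true)   # refine the solution on the identified active set
```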

Thanks for looking into it. I have tried your suggestion, but I still get a negative value. Given that the variable is defined to be greater than or equal to zero, a negative value does not make any sense. On some other problems I get quite bad results, and I think the reason may be that this solver is not well integrated into JuMP, so one has to use the low-level interface.

If I use the Clp solver then I get the desired results.
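For context, switching solvers in JuMP is a one-line change (this assumes the Clp.jl package is installed):

```julia
using JuMP, Clp

# Clp is a simplex-based LP solver, so variable bounds hold exactly at the returned vertex.
model = Model(Clp.Optimizer)
```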

Maybe developers of this package will have a view on this.

I do not understand what the issue is. I did this:

```julia
using JuMP, OSQP, LinearAlgebra

function Optimizer(Cost, Demand, Production)
    model = Model(OSQP.Optimizer)
    set_optimizer_attribute(model, "eps_rel", 1e-5)
    set_optimizer_attribute(model, "polish", true)
    @variable(model, x[1:5] >= 0)
    @objective(model, Min, sum(Cost[i] * x[i] for i in 1:5))
    @constraint(model, Production * x .>= Demand)
    optimize!(model)
    return value.(x)
end

Cost = [99.74, 91.22, 98.71, 103.75, 97.15]
Production = [4 4 4 4 4 4 4 104; 5 5 5 5 5 5 105 0; 2.5 2.5 2.5 2.5 102.5 0 0 0; 5 5 5 5 5 105 0 0; 4 4 4 4 4 104 0 0]'
Demand = [5000, 7000, 7000, 6000, 8000, 7000, 2000, 1000]
Result = Optimizer(Cost, Demand, Production)
x = Result
```

When I do so, I get the solution:

```julia
x
5-element Array{Float64,1}:
    9.615384615384613
 1387.0576923076917
   10.000000000000032
    0.2500000000000262
   -2.9742967069920573e-21
```

The solution satisfies your constraints to machine precision. The only negative term in the solution is `-2e-21` in the last position, i.e. effectively zero. You could round this up to zero if it is really a problem. Are you seeing something different?
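If a roundoff-level negative entry is a problem downstream, one option is simply to clamp the returned solution at its lower bound. A minimal sketch using the numbers above:

```julia
# Solution returned by OSQP, with a ~1e-21 negative entry in the last position.
x = [9.615384615384613, 1387.0576923076917, 10.000000000000032,
     0.2500000000000262, -2.9742967069920573e-21]

# Element-wise projection onto the bound x >= 0.
x_clamped = max.(x, 0.0)
```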

I don’t think this example highlights any problem with the JuMP interface. If you have other examples in which you see substantial violations, please post them and we will have a look.

I had tried different solver settings before posting the question, and I do get results similar to yours. However, on larger problems the results are unreliable: constraints are violated by much more than the tolerance set (e.g. ~30-40%). I cannot post the other examples I have tested, but there appears to be something wrong with this solver.

On GitHub a similar issue is reported:

Did you try disabling the `adaptive_rho` feature as suggested in that GitHub issue? If so, what was the outcome?

Yes, I did, but it didn’t resolve the issue. I will see if I can prepare another example that better illustrates the problem.
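For anyone following along, `adaptive_rho` is a standard OSQP setting and can be turned off through JuMP like so (a sketch; as noted above, it did not resolve this particular issue):

```julia
using JuMP, OSQP

model = Model(OSQP.Optimizer)
set_optimizer_attribute(model, "adaptive_rho", false)  # disable automatic updates of the step-size parameter rho
```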

You could perhaps just post the text output from the solver during the solve, i.e. the settings, the per-iteration output, and the post-solution output (status, run time, etc.). Maybe we can tell something from that directly without seeing the problem data.
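To capture that log, OSQP's `verbose` setting can be enabled through JuMP (a minimal sketch):

```julia
using JuMP, OSQP

model = Model(OSQP.Optimizer)
set_optimizer_attribute(model, "verbose", true)  # print settings, per-iteration progress, and the final status
```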