
tulip.jl's Introduction

Tulip


Tulip is an open-source interior-point solver for linear optimization, written in pure Julia. It implements the homogeneous primal-dual interior-point algorithm with multiple centrality corrections, and therefore handles unbounded and infeasible problems. Tulip’s main feature is that its algorithmic framework is disentangled from the linear algebra implementations, which makes it possible to seamlessly integrate specialized routines for structured problems.

License

Tulip is licensed under the MPL 2.0 license.

Installation

Install Tulip using the Julia package manager:

import Pkg
Pkg.add("Tulip")

Usage

The recommended way of using Tulip is through JuMP or MathOptInterface (MOI).

The low-level interface is still under development and is likely to change in the future. The MOI interface is more stable.

Using with JuMP

Tulip follows the syntax convention PackageName.Optimizer:

using JuMP
import Tulip
model = Model(Tulip.Optimizer)

Linear objectives, linear constraints and lower/upper bounds on variables are supported.
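For illustration, a complete toy model might look like this (a minimal sketch; the variables, bounds, and constraint are arbitrary):

```julia
using JuMP
import Tulip

model = Model(Tulip.Optimizer)
@variable(model, 0 <= x <= 4)
@variable(model, 0 <= y <= 3)
@constraint(model, x + 2y <= 6)
@objective(model, Max, x + y)
optimize!(model)

termination_status(model)  # expected: OPTIMAL
value(x), value(y)
```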

Using with MOI

The type Tulip.Optimizer is parametrized by the model's arithmetic, for example, Float64 or BigFloat. This makes it possible to solve problems in higher numerical precision. See the documentation for more details.

import MathOptInterface as MOI
import Tulip
model = Tulip.Optimizer{Float64}()   # Create a model in Float64 precision
model = Tulip.Optimizer()            # Defaults to the above call
model = Tulip.Optimizer{BigFloat}()  # Create a model in BigFloat precision

Solver parameters

See the documentation for a full list of parameters.

To set parameters in JuMP, use:

using JuMP, Tulip
model = Model(Tulip.Optimizer)
set_attribute(model, "IPM_IterationsLimit", 200)

To set parameters in MathOptInterface, use:

using Tulip
import MathOptInterface as MOI
model = Tulip.Optimizer{Float64}()
MOI.set(model, MOI.RawOptimizerAttribute("IPM_IterationsLimit"), 200)

To set parameters in the Tulip API, use:

using Tulip
model = Tulip.Model{Float64}()
Tulip.set_parameter(model, "IPM_IterationsLimit", 200)

Command-line executable

See app building instructions.

Citing Tulip.jl

If you use Tulip in your work, we kindly ask that you cite the following reference (preprint available here).

@Article{Tulip.jl,
  author   = {Tanneau, Mathieu and Anjos, Miguel F. and Lodi, Andrea},
  journal  = {Mathematical Programming Computation},
  title    = {Design and implementation of a modular interior-point solver for linear optimization},
  year     = {2021},
  issn     = {1867-2957},
  month    = feb,
  doi      = {10.1007/s12532-020-00200-8},
  language = {en},
  url      = {https://doi.org/10.1007/s12532-020-00200-8},
  urldate  = {2021-03-07},
}

tulip.jl's People

Contributors

amontoison, blegat, devmotion, fredrikekre, github-actions[bot], juliatagbot, matbesancon, mtanneau, nsajko, odow, pratyai


tulip.jl's Issues

API for parameters

Very happy to see the BigFloat stuff is working!

I'm planning on making a Discourse post announcing Convex.jl 0.13, which supports high-precision problem formulation and MOI. I wanted to show an example with Tulip.jl, and was thinking about adapting the bounded one from examples.jl:

using Convex, Tulip, Test
x1 = Variable()
x2 = Variable()
constraints = [ x1 + x2 == big(1.0), x1 - x2 == big(0.0), 0 <= x1, 0 <= x2, x1 <= 1, x2 <= 1]
problem = minimize( x1 + 2*x2, constraints; numeric_type = BigFloat)
opt = Tulip.Optimizer{BigFloat}()
opt.inner.env.barrier_tol_pfeas = big(1e-30)
opt.inner.env.barrier_tol_dfeas = big(1e-30)
opt.inner.env.barrier_tol_conv = big(1e-30)
opt.inner.env.barrier_tol_infeas = big(1e-30)
solve!(problem, opt)
@test problem.optval ≈ 1.5 atol=1e-30

Is there a better way to set the parameters than I did here?

(the example requires Convex.jl master until 0.13 is tagged).
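For comparison, parameters can also be set through MOI's raw-attribute mechanism instead of reaching into opt.inner.env (a hedged sketch; the parameter names below are assumptions — check Tulip's parameter documentation for the exact strings):

```julia
using Tulip
import MathOptInterface as MOI

opt = Tulip.Optimizer{BigFloat}()
# Parameter names are illustrative, not guaranteed to match Tulip's list
MOI.set(opt, MOI.RawOptimizerAttribute("IPM_TolerancePFeas"), big"1e-30")
MOI.set(opt, MOI.RawOptimizerAttribute("IPM_ToleranceDFeas"), big"1e-30")
```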

Writing user extension for `Tulip.KKT.AbstractKKTBackend`, `Tulip.KKT.AbstractKKTSolver` etc.

Hi,

I am trying to write a custom KKT backend and solver for Tulip that could exploit the known structure of a certain class of problems. The problem details etc. are not particularly relevant, but I would like to write a module like:

module MyKKT

using Tulip
# Bring Tulip's KKT abstractions and API functions into scope so the
# methods below extend them rather than shadow them:
import Tulip.KKT: AbstractKKTBackend, AbstractKKTSolver, K1,
                  backend, linear_system, setup, update!, solve!
using SparseArrays   # for AbstractSparseMatrix

struct Backend <: AbstractKKTBackend    # custom backend class to pass to Tulip parameters
end

mutable struct Solver{Tv<:Number,Ti<:Integer} <: AbstractKKTSolver{Tv}   # custom solver class that knows the problem structure
  # problem specific data
end

backend(::Solver) = "CustomBackend"
linear_system(::Solver) = "Normal equations (K1)"

function setup(A::AbstractSparseMatrix{Tv,Ti}, ::K1, ::Backend) where {Tv<:Number,Ti<:Integer}
  # setup work
end

function update!(kkt::Solver{Tv,Ti}, θ::AbstractVector{Tv}, regP::AbstractVector{Tv}, regD::AbstractVector{Tv}) where {Tv<:Number,Ti<:Integer}
  # update work
end

function solve!(dx::AbstractVector{Tv}, dy::AbstractVector{Tv}, kkt::Solver{Tv,Ti}, ξp::AbstractVector{Tv}, ξd::AbstractVector{Tv}) where {Tv<:Number,Ti<:Integer}
  # solve work
end

end

The problem is that the rest of the Tulip framework relies heavily on the KKT module and its specific functions, e.g. this setup call.

I don't understand whether it is possible to define a setup() function (as shown above) completely outside Tulip and still reuse the rest of Tulip (i.e., without having to work on a fork). I couldn't find any example showing whether this is possible. If it is not possible right now, how difficult would it be to make the KKT module extendable? (I'm happy to make an attempt if the maintainers are open to it.)

WIP: v0.1.0 release

Todo:

  • Interface re-write
  • Documentation
    • Deploy script and gh-pages website
    • Document solver algorithms
    • Document solver parameters
  • MOI interface

Add iterative refinement

Iterative refinement is an algorithm for improving the accuracy of a computed solution.
The basic idea is that if you have a residual

r = b - A x

you can compute a correction c from that residual by solving

A c = r

and x + c is an improved solution.
However, the naive approach can run into floating-point trouble, so there are versions adapted for floating-point arithmetic (see Lecture note 13); this merits some investigation beyond the trivial implementation.
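The basic scheme can be sketched as follows (a minimal illustration of the trivial implementation, not Tulip's code; it assumes a precomputed factorization F of A):

```julia
using LinearAlgebra

# Naive iterative refinement for A*x = b.
# F is a factorization of A (e.g. lu(A)); in practice the residual
# should ideally be accumulated in higher precision than the solve.
function refine!(x, F, A, b; maxiter=5, tol=1e-12)
    for _ in 1:maxiter
        r = b - A * x                      # residual r = b - A*x
        norm(r) <= tol * norm(b) && break  # already accurate enough
        x .+= F \ r                        # correction c solves A*c = r
    end
    return x
end

# Toy usage
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
F = lu(A)
x = F \ b
refine!(x, F, A, b)
```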

Presolve causes infeasibility error?

When I tried to solve a trivial OT problem (10 sources with equal mass and a single target) with OptimalTransport.emd2 and Tulip, an error was thrown since the termination status was INFEASIBLE although the trivial solution is feasible and optimal. The following MWE seems to indicate that it is a problem with presolve: An optimal solution is found if presolve is disabled and otherwise the termination status is INFEASIBLE.

using Tulip
using JuMP

# Instantiate JuMP model
lp = Model(Tulip.Optimizer)

# Create variables
@variable(lp, x >= 0)
@variable(lp, y >= 0)
@variable(lp, z >= 0)

# Add constraints
@constraint(lp, row1, x + y + z == 1.0)
@constraint(lp, row2, x == 1/3)
@constraint(lp, row3, y == 1/3)
@constraint(lp, row4, z == 1/3)

# Set the objective
@objective(lp, Min, x + y + z)

# Set some parameters
# set_optimizer_attribute(lp, "Presolve_Level", 0)     # disable presolve

# Solve the problem
optimize!(lp)

# Check termination status
st = termination_status(lp)
println("Termination status: $st")

What is the recommended way of dealing with this issue? Should one disable presolve, or are there some tolerances that should be adjusted? It seems a bit unfortunate if the default settings in OptimalTransport cause errors in these trivial but possibly numerically challenging cases, so I would be happy about any suggestions.

Duplicate constraint name errors with JuMP

Hello,
recently, we saw Tulip errors in https://github.com/LCSB-BioCore/COBREXA.jl, likely caused by the update to JuMP 0.22.x. Since no error is triggered with other optimizers, I assume this might be a problem in Tulip. Basically, optimizing any LP with named constraints causes a Duplicate constraint name error. A demonstration with only JuMP (0.22.1) + Tulip (0.9.1) is below.

I hope this is a fixable issue (it looks like some kind of logic mistake in the name checking to me). Please let me know if I can supply any debugging information.

Thanks for any help!
-mk

julia> using Tulip, JuMP

julia> m = Model(Tulip.Optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: Tulip

julia> @variable(m, x[i=1:2])
2-element Vector{VariableRef}:
 x[1]
 x[2]

julia> @constraint(m, mb, x[1]+x[2]==1)
mb : x[1] + x[2] = 1.0

julia> @constraint(m, lbs, [0,0] .<= x)
2-element Vector{ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.LessThan{Float64}}, ScalarShape}}:
 lbs : -x[1] ≤ 0.0
 lbs : -x[2] ≤ 0.0

julia> @constraint(m, ubs, [1,1] .>= x)
2-element Vector{ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.GreaterThan{Float64}}, ScalarShape}}:
 ubs : -x[1] ≥ -1.0
 ubs : -x[2] ≥ -1.0

julia> @objective(m, Max, [1,0.5]' * x)
x[1] + 0.5 x[2]

julia> optimize!(m)
ERROR: Duplicate constraint name ubs
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:33
  [2] set(m::Tulip.Optimizer{Float64}, #unused#::MathOptInterface.ConstraintName, c::MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.GreaterThan{Float64}}, name::String)
    @ Tulip ~/.julia/packages/Tulip/VjKXM/src/Interfaces/MOI/constraints.jl:395
  [3] set
    @ ~/.julia/packages/MathOptInterface/QxT5e/src/Bridges/bridge_optimizer.jl:1362 [inlined]
  [4] _pass_attribute(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{Tulip.Optimizer{Float64}}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}, index_map::MathOptInterface.Utilities.IndexMap, cis_src::Vector{MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.GreaterThan{Float64}}}, attr::MathOptInterface.ConstraintName)
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/QxT5e/src/Utilities/copy.jl:137
  [5] pass_attributes(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{Tulip.Optimizer{Float64}}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}, index_map::MathOptInterface.Utilities.IndexMap, cis_src::Vector{MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.GreaterThan{Float64}}})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/QxT5e/src/Utilities/copy.jl:122
  [6] _pass_constraints(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{Tulip.Optimizer{Float64}}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}, index_map::MathOptInterface.Utilities.IndexMap, variable_constraints_not_added::Vector{Any})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/QxT5e/src/Utilities/copy.jl:316
  [7] default_copy_to(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{Tulip.Optimizer{Float64}}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/QxT5e/src/Utilities/copy.jl:498
  [8] #copy_to#7
    @ ~/.julia/packages/MathOptInterface/QxT5e/src/Bridges/bridge_optimizer.jl:421 [inlined]
  [9] copy_to
    @ ~/.julia/packages/MathOptInterface/QxT5e/src/Bridges/bridge_optimizer.jl:421 [inlined]
 [10] optimize!(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{Tulip.Optimizer{Float64}}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}})
    @ MathOptInterface ~/.julia/packages/MathOptInterface/QxT5e/src/MathOptInterface.jl:80
 [11] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{Tulip.Optimizer{Float64}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/QxT5e/src/Utilities/cachingoptimizer.jl:285
 [12] optimize!(model::Model, optimizer_factory::Nothing; bridge_constraints::Bool, ignore_optimize_hook::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ JuMP ~/.julia/packages/JuMP/lnUbA/src/optimizer_interface.jl:195
 [13] optimize! (repeats 2 times)
    @ ~/.julia/packages/JuMP/lnUbA/src/optimizer_interface.jl:167 [inlined]
 [14] top-level scope
    @ REPL[13]:1

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

Expose presolve code

@mtanneau mentioned at the last JuMP developers call that he was considering making the presolve code here in Tulip available for use in other packages. I'm starting work on a prototype solver for which this code would be useful. I'll eventually want to build out some MIP presolve routines on top.

What is your preferred way to do this? Would you prefer we depend on Tulip? Or would you consider splitting the code off into a separate package?

cc @BochuanBob and @Anhtu07

WIP: MathOptInterface

Provide interface to MathOptInterface

Minimum set of features for Coluna support:

  • Attributes
    • MOI.ResultCount
    • ObjectiveValue
    • PrimalStatus
    • VariablePrimal
    • DualStatus
    • ConstraintDual
    • VariableName
    • ConstraintName
    • ConstraintFunction
    • ConstraintSet
    • ListOfConstraints
    • ListOfConstraintIndices
    • ObjectiveBound
  • Variables
    • Adding and naming variables
    • Bounds: MOI.SingleVariable in MOI.Interval, MOI.LessThan, MOI.GreaterThan, MOI.EqualTo
  • Constraints
    • Adding and naming constraints
    • ScalarAffine in MOI.Interval, MOI.LessThan, MOI.GreaterThan, MOI.EqualTo
  • Objective
    • Min and Max senses
  • Problem modifications
    • MOI.ScalarCoefficientChange
    • Objective modification

Define error code for failed factorization

Currently, failed factorizations are handled in the IPM solver via a try ... catch block that intercepts PosDefExceptions and increases the regularizations.

It would be more flexible/general to have KKT.update! return a status code, which would be used to flag numerical issues, etc.
This would make it possible to do away with the try ... catch, which is not the most compiler-friendly approach.
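One way the status-code idea could look (a hypothetical sketch; the names KKTStatus, KKT_OK, etc. are illustrative, not Tulip's actual API):

```julia
# Hypothetical status codes returned by an update! that does not throw
@enum KKTStatus begin
    KKT_OK          # factorization succeeded
    KKT_NOT_POSDEF  # matrix not positive definite; bump regularizations
    KKT_SINGULAR    # numerically singular system
end

# Stand-in for a KKT.update! variant returning a status code
function update_status!(kkt, θ, regP, regD)
    return KKT_OK  # dummy; a real solver would attempt the factorization here
end

# The IPM-side caller then branches instead of catching exceptions:
function try_update!(kkt, θ, regP, regD; maxtries=3)
    for _ in 1:maxtries
        update_status!(kkt, θ, regP, regD) == KKT_OK && return true
        regP .*= 10   # increase primal regularization and retry
        regD .*= 10   # increase dual regularization and retry
    end
    return false
end
```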

New release?

I am getting version conflicts when trying to have the latest NOMAD and the latest Percival in one environment. NOMAD requires Tulip 0.9.3, which only supports Krylov 0.7, while Percival requires Krylov 0.8. I see that master was updated a while ago, so a new release is probably due?

PackageCompiler Incompatibility: InitError(mod=:Tulip, ...

I have opened an issue regarding a Tulip/PackageCompiler incompatibility, see:
JuliaLang/PackageCompiler.jl#789

The minimal example is as follows:

module MyAppHelloTulip
using Tulip
function julia_main()::Cint
    try
        if isempty(ARGS)
            real_main(string("Tulip Version: v", Tulip.__init__()))
        else
            real_main(ARGS[1])
        end
    catch
        Base.invokelatest(Base.display_error, Base.catch_stack())
        return 1
    end
return 0 # if things finished successfully
end

real_main(_s::AbstractString) = println(_s)

try
    if abspath(PROGRAM_FILE) == @__FILE__
        real_main("abspath(PROGRAM_FILE) == @__FILE__")
    end
catch
    Base.invokelatest(Base.display_error, Base.catch_stack())
    return 3
end

end # module

Do you have a proposal, how to handle this issue?

Automatic differentiation compatibility with Zygote

Hi,

For some applications I work on I need to combine an ODE with an optimization problem (the optimization adjusts the ODE equations). I have started playing around with automatic differentiation and I think it would be cool to be able to automatically differentiate such an ODE/LP system using, e.g. Zygote. I have tried implementing this but it seems that there are some compatibility issues, e.g. Tulip uses try/catch statements that are not allowed in Zygote.

I would be willing to fork the project and try to adjust the package so that one can use auto-diff on it. I was just wondering whether you have already considered this and/or whether there are any major obstacles you foresee (maybe it isn't a great idea to try?). What do you think?

Other than that, I think this package is really cool, especially that it is in pure Julia!

Add travis + codecov support

This requires membership in the ds4dm org. Once the PR is accepted, simply go to the Travis link to activate the project on Travis.

Error when calling `solution_summary()`

I'm using Tulip via JuMP. I'm able to set up and solve a model, but I'm not sure how to see the solution. When I call

solution_summary(model)

I get the following:

ArgumentError: ModelLike of type Tulip.Optimizer{Float64} does not support accessing the attribute MathOptInterface.RawStatusString()

Stacktrace:
  [1] get_fallback(::Tulip.Optimizer{Float64}, ::MathOptInterface.RawStatusString)
    @ MathOptInterface ~/.julia/packages/MathOptInterface/YDdD3/src/attributes.jl:296
  [2] get(::Tulip.Optimizer{Float64}, ::MathOptInterface.RawStatusString)
    @ MathOptInterface ~/.julia/packages/MathOptInterface/YDdD3/src/attributes.jl:293
  [3] get(b::MathOptInterface.Bridges.LazyBridgeOptimizer{Tulip.Optimizer{Float64}}, attr::MathOptInterface.RawStatusString)
    @ MathOptInterface.Bridges ~/.julia/packages/MathOptInterface/YDdD3/src/Bridges/bridge_optimizer.jl:804
  [4] get(model::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}}, attr::MathOptInterface.RawStatusString)
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/YDdD3/src/Utilities/cachingoptimizer.jl:716
  [5] _moi_get_result(model::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}}, args::MathOptInterface.RawStatusString)
    @ JuMP ~/.julia/packages/JuMP/klrjG/src/JuMP.jl:1199
  [6] get(model::Model, attr::MathOptInterface.RawStatusString)
    @ JuMP ~/.julia/packages/JuMP/klrjG/src/JuMP.jl:1212
  [7] raw_status
    @ ~/.julia/packages/JuMP/klrjG/src/JuMP.jl:765 [inlined]
  [8] solution_summary(model::Model; verbose::Bool)
    @ JuMP ~/.julia/packages/JuMP/klrjG/src/print.jl:463
  [9] solution_summary(model::Model)
    @ JuMP ~/.julia/packages/JuMP/klrjG/src/print.jl:463
 [10] top-level scope
    @ In[99]:1
 [11] eval
    @ ./boot.jl:360 [inlined]
 [12] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
    @ Base ./loading.jl:1116

MathOptInterface.BarrierIterations

The README suggests setting the iteration limit like so:

MOI.set(moi_model, MOI.RawOptimizerAttribute("IPM_IterationsLimit"), 200)

I think, however, that perhaps this is supposed to have the same effect:

MOI.set(moi_model, MOI.BarrierIterations(), 200)

The latter call fails like so:

julia> using Tulip, MathOptInterface

julia> const MOI = MathOptInterface
MathOptInterface

julia> lp = Tulip.Optimizer{BigFloat}()
Tulip.Optimizer{BigFloat}

julia> MOI.set(lp, MOI.BarrierIterations(), 100)
ERROR: MathOptInterface.SetAttributeNotAllowed{MathOptInterface.BarrierIterations}: Setting attribute MathOptInterface.BarrierIterations() cannot be performed. You may want to use a `CachingOptimizer` in `AUTOMATIC` mode or you may need to call `reset_optimizer` before doing this operation if the `CachingOptimizer` is in `MANUAL` mode.
Stacktrace:
 [1] throw_set_error_fallback(model::Tulip.Optimizer{BigFloat}, attr::MathOptInterface.BarrierIterations, value::Int64; error_if_supported::MathOptInterface.SetAttributeNotAllowed{MathOptInterface.BarrierIterations})
   @ MathOptInterface ~/.julia/packages/MathOptInterface/wx5Ea/src/attributes.jl:584
 [2] throw_set_error_fallback(model::Tulip.Optimizer{BigFloat}, attr::MathOptInterface.BarrierIterations, value::Int64)
   @ MathOptInterface ~/.julia/packages/MathOptInterface/wx5Ea/src/attributes.jl:577
 [3] set(model::Tulip.Optimizer{BigFloat}, attr::MathOptInterface.BarrierIterations, args::Int64)
   @ MathOptInterface ~/.julia/packages/MathOptInterface/wx5Ea/src/attributes.jl:550
 [4] top-level scope
   @ REPL[4]:1

Error when building problem with zero coefficients

@mtanneau I am testing the native API of the Tulip solver on several examples that are supposed to be feasible. Most are working, except this one.

On this particular example (sorry, I did not find a more minimal example), I trigger the formatting exception at line 164 of src/IMP/impdata.jl. It works for several other, similar examples, so the dimension formats seem correct.

I try to solve the following problem:

min  z5
s.t. lb <= A z <= ub

with A of dimensions 49 × 34, z of dimension 34, and lb and ub of dimension 49.

So, A has more rows than columns.

import Tulip

# data
lb = [10.000000000000181;
      -4.160250050215518e-14;
      -5.859572377763578e-14;
      -3.0206746014582044e-13;
      7.000000000000127;
      1.4999999999999585;
      2.0000000000000235;
      2.000000000000257;
      29.999999999999186;
      20.0;
      0.010000000000095792;
      5.2001536749292135e-14;
      -0.2;
      0.0999999999999949;
      0.0;
      0.0;
      100.0;
      500.0;
      500.0;
      1000.449469529165;
      2.0;
      1998.9181819464666;
      2994.4712103355896;
      4997.836363892933;
      0.2;
      2.5999380519889845;
      0.0;
      100.0;
      4.0;
      0.0;
      579.9867487235355;
      10.0;
      249.77526523541746;
      750.2247347645825;
      249.85179092666596;
      10.0;
      10.74936862843041;
      100.01932718975416;
      277.358639666949;
      196.75629112130164;
      779.9814155966956;
      100.0;
      2.0;
      0.0;
      0.0;
      500.0;
      0.3071248179551529;
      1.3867931983347423;
      1.6396357593441846]
ub = [150.00000000000017;
      9.99999999999996;
      9.999999999999941;
      4.999999999999698;
      120.00000000000013;
      7.999999999999958;
      20.000000000000025;
      30.000000000000256;
      499.9999999999992;
      200.0;
      20.000000000000096;
      10.000000000000052;
      -0.001;
      1.999999999999995;
      1.0;
      2.0;
      1000.0;
      5000.0;
      5000.0;
      20000.449469529165;
      30.0;
      19998.918181946465;
      29994.47121033559;
      49997.83636389293;
      0.8;
      6.5999380519889845;
      20.0;
      400.0;
      15.0;
      10.0;
      10079.986748723535;
      50.0;
      4999.7752652354175;
      15000.224734764583;
      2999.851790926666;
      5000.0;
      45.74936862843041;
      3000.019327189754;
      477.358639666949;
      316.7562911213016;
      1979.9814155966956;
      1000.0;
      20.0;
      1.0;
      2.0;
      5000.0;
      1.307124817955153;
      2.3867931983347423;
      2.6396357593441846]
A = [0.0 0.0 0.0 0.024495949415647516 0.2477654684642663 0.0 -0.011119658429724254 1.2228081648911448e-16 -0.02223931685944833 0.0 -1.5444999971558658e-18 0.0 0.0 0.0 0.0 -4.633367913734631e-16 0.0 0.2600827551833563 -0.2600827551833563 0.4058269499899332 0.0 0.005806029719914009 0.2864485283509324 -4.051468122850578e-16 -3.9190061124904336e-16 -4.63299639313543e-16 0.0 0.0 0.0 0.0 0.0 -0.2032110401969258 -2.0261494930936026e-18 -3.265833968102728e-18;
     0.0 0.0 0.0 -0.019865773474634207 0.2070920855092544 0.0 0.4499895688046492 -2.969927233278397e-17 -0.21805485114059647 0.0 3.475804001008223e-19 0.0 0.0 0.0 0.0 1.0450213846969295e-16 0.0 0.19219900830735728 -0.19219900830735728 -0.10131289433179796 0.0 -0.0014494495141722757 -0.3463506326022579 9.060323444968781e-17 8.715408023996434e-17 1.0459464184353835e-16 0.0 0.0 0.0 0.0 0.0 0.05073073299601369 4.514593846151991e-19 7.266142848492211e-19;
     0.0 0.0 0.0 -0.028094446274859377 0.29287243598731627 0.0 -0.6285297129373957 -4.223110773813765e-17 0.32407940420939807 0.0 4.891571448740823e-19 0.0 0.0 0.0 0.0 1.4656170678748429e-16 0.0 0.2718104442229239 -0.2718104442229239 -0.14327806920730107 0.0 -0.0020498311609175313 -0.48981376196261417 1.2629237085359786e-16 1.2202784755816392e-16 1.4669252631334175e-16 0.0 0.0 0.0 0.0 0.0 0.07174409063209088 6.336684426562909e-19 1.0180046923935259e-18;
     0.0 0.0 0.0 0.013329586870028262 0.19048154980934828 0.0 0.017686318086479736 -1.9687839174413231e-16 0.0353726361729593 0.0 2.61139139430658e-18 0.0 0.0 0.0 0.0 7.8349203899890905e-16 0.0 0.21583334762147532 -0.21583334762147532 -0.645486061550352 0.0 -0.009234752047010557 0.5895766933052783 6.899784706781798e-16 6.68626317913213e-16 7.836495760647963e-16 0.0 0.0 0.0 0.0 0.0 0.3232163216452669 3.4499975993835424e-18 5.5729038174830675e-18;
     0.0 0.0 0.0 0.017147164590953258 0.17343582792498646 0.0 -0.0077837609008068725 8.559657154238012e-17 -0.015567521801613879 0.0 -1.081149998009106e-18 0.0 0.0 0.0 0.0 -3.243357539614242e-16 0.0 0.18205792862834946 -0.18205792862834946 0.2840788649929533 0.0 0.0040642208039398066 0.20051396984565262 -2.8360276859954047e-16 -2.7433042787433036e-16 -3.243097475194801e-16 0.0 0.0 0.0 0.0 0.0 -0.14224772813784806 -1.418304645165522e-18 -2.28608377767191e-18;
     0.0 0.0 0.0 -0.019865773474634203 0.20709208550925434 0.0 0.4499895688046493 -2.9699272332783967e-17 -0.21805485114059647 0.0 3.4758040010082237e-19 0.0 0.0 0.0 0.0 1.0450213846969291e-16 0.0 0.19219900830735728 -0.19219900830735728 -0.10131289433179798 0.0 -0.001449449514172275 -0.34635063260225785 9.060323444968786e-17 8.715408023996433e-17 1.0459464184353833e-16 0.0 0.0 0.0 0.0 0.0 0.05073073299601368 4.514593846151991e-19 7.266142848492214e-19;
     0.0 0.0 0.0 0.0031844734240341786 0.032209510900354635 0.0 -0.0014455555958641545 1.5896506143584886e-17 -0.0028911111917282835 0.0 -2.0078499963026272e-19 0.0 0.0 0.0 0.0 -6.023378287855022e-17 0.0 0.03381075817383636 -0.03381075817383636 0.05275750349869137 0.0 0.0007547838635888217 0.037238308685621266 -5.266908559705755e-17 -5.0947079462375685e-17 -6.02289531107606e-17 0.0 0.0 0.0 0.0 0.0 -0.026417435225600378 -2.6339943410216863e-19 -4.245584158533549e-19;
     0.0 0.0 0.0 0.003682977379840559 -0.16440569563625282 0.0 0.012860422734150183 1.6303794767130798e-16 0.025720845468300498 0.0 -2.2439031706833463e-18 0.0 0.0 0.0 0.0 -6.732120296658963e-16 0.0 -0.17262314189704298 0.17262314189704298 -0.4693584939470894 0.0 0.007564932791720189 -0.19110340141372273 -5.958557646369564e-16 -5.781183869154841e-16 -6.732366187761362e-16 0.0 0.0 0.0 0.0 0.0 -0.26477264771012254 -2.9789281816317022e-18 -4.817463581403262e-18;
     0.0 0.0 0.0 0.0018414886899202794 -0.08220284781812655 0.0 -0.007244147269796774 -5.292561556386879e-16 -0.014488294539593476 0.0 7.053928058912055e-18 0.0 0.0 0.0 0.0 2.116238770089633e-15 0.0 -0.08631157094852142 0.08631157094852142 0.2643849368538975 0.0 -0.024777307503025956 -0.09555170070686136 1.864838611355606e-15 1.8073541272602294e-15 2.1162264755345127e-15 0.0 0.0 0.0 0.0 0.0 0.8672057626056332 9.324182030306174e-18 1.5060899229944685e-17;
     0.0 0.0 0.0 0.049870719479576275 0.0002764385600500723 0.0 1.8065205161585194e-5 -1.9410492414211895e-19 3.6130410323172576e-5 0.0 2.2517222666263698e-21 0.0 0.0 0.0 0.0 6.756370778918779e-19 0.0 0.0001795188617644395 -0.0001795188617644395 -0.0006593140569921849 0.0 -9.432584528329142e-6 -0.002253946471758722 5.8337368849753e-19 5.623387558497746e-19 6.758367217593351e-19 0.0 0.0 0.0 0.0 0.0 0.0003301404584913903 2.9175085073919777e-21 4.68768008763356e-21;
     0.0 0.0 0.0 0.0 1.3625371989532675e-17 0.0 -5.6276869015575715e-18 6.23009673364483e-17 -1.1255373803115143e-17 0.0 -8.2924525536196e-19 0.0 0.0 0.0 0.0 -2.487159783681747e-16 0.0 -6.813533027713592e-18 6.813533027713592e-18 -7.712234984750405e-19 0.0 9.255833946508751e-16 5.859350412631623e-19 -0.00499993750117207 -2.1228678537266177e-16 -2.487159783681747e-16 0.0 0.0 0.0 0.0 0.0 2.6445427404463324e-17 0.9999875002343702 -1.768604793866979e-18;
     0.0 0.0 0.0 0.0 7.392903563635533e-18 0.0 -3.0560948736935156e-18 3.382710778154774e-17 -6.0986372202309624e-18 0.0 -4.497744949920335e-19 0.0 0.0 0.0 0.0 -1.3530843112619095e-16 0.0 -3.699839913606784e-18 3.699839913606784e-18 -4.1928130889087867e-19 0.0 5.023650766211585e-16 3.1763735522036263e-19 -1.1882855810441129e-16 -0.008333043996551135 -1.3444106938820255e-16 0.0 0.0 0.0 0.0 0.0 1.435297329122412e-17 -5.929230630780102e-19 0.9999652795861224;
     0.0 0.0 0.0 0.0 -1.6940658945086007e-21 0.0 1.6940658945086007e-21 6.776263578034403e-21 3.3881317890172014e-21 0.0 0.9999944444907404 0.0 0.0 0.0 0.0 0.0 0.0 8.470329472543003e-22 -8.470329472543003e-22 2.117582368135751e-22 0.0 6.776263578034403e-21 0.0 0.0 -1.0842021724855044e-19 -0.0033333148149691347 0.0 0.0 0.0 0.0 0.0 0.0 -8.470329472543003e-22 -8.470329472543003e-22;
     0.0 0.0 0.0 0.0 -0.017614394442704878 0.0 0.010542091369171164 -0.05290832135876376 0.021084182738342318 0.0 4.5514786025117795e-20 0.0 0.0 0.0 0.0 1.7875783318854754e-17 0.0 0.008807197221352443 -0.008807197221352443 0.0014442665175764485 0.0 -0.9968859897124329 -0.0007574189610363099 -0.025130824374803403 -0.030861044331200297 1.7367563550502174e-17 0.0 0.0 0.0 0.0 0.0 -0.028482456848935616 -0.00012565412187310164 -0.00025717536942617233;
     0.0 0.0 0.0 0.0 -0.005093362018242666 0.0 0.008200038762387908 0.02830000804060586 0.016400077524775812 0.0 -0.0016898622062368812 0.0 0.0 0.0 0.0 -0.5069642947450851 0.0 0.0025466810091213346 -0.0025466810091213346 0.0011234053104472305 0.0 0.026671227304458357 -0.00021901456678434858 -0.49350945318946754 -0.4904218996450341 -0.5069586618710643 0.0 0.0 0.0 0.0 0.0 0.0007620350658464641 -0.002467547265947828 -0.004086849163708883;
     1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.9974143895915252 0.005528771201001569 0.0 0.00036130410323176223 -3.882098482842509e-18 0.0007226082064635208 0.0 4.503444533253034e-20 0.0 0.0 0.0 0.0 1.3512741557837343e-17 0.0 0.003590377235288976 -0.003590377235288976 -0.013186281139845267 0.0 -0.0001886516905665881 -0.04507892943517663 1.166747376995119e-17 1.1246775116996476e-17 1.3516734435188982e-17 0.0 0.0 0.0 0.0 0.0 0.006602809169828494 5.835017014783872e-20 9.375360175268435e-20;
     0.0 0.0 0.0 0.0 0.6655096913916039 0.0 0.0002274583216080798 -0.0007878058556446685 0.00045491664321615935 0.0 -8.607079977310684e-6 0.0 0.0 0.0 0.0 -0.002582152683459795 0.0 -0.33275484569580205 0.33275484569580205 3.1161790060307367e-5 0.0 -0.01742369682106859 0.028616916729838966 -0.0029562865577270696 -0.0030414948843299956 -0.002582123993193204 0.0 0.0 0.0 0.0 0.0 -0.0004978199091735217 -1.4781432788621721e-5 -2.534579070274257e-5;
     0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 0.00022745832160807968 0.0 0.1990736713373403 0.0003257031950447 0.39814734267468044 0.0 1.3856935594236776e-5 0.0 0.0 0.0 0.0 0.004157126868056346 0.0 -0.00011372916080403988 0.00011372916080403988 0.027273092973215604 0.0 0.01029055809045807 9.78070782914672e-6 0.004311728092500266 0.004346818536100747 0.004157080678271032 0.0 0.0 0.0 0.0 0.0 0.00029401594544171424 2.1558640462495703e-5 3.62234878008365e-5;
     0.0 0.0 0.0 0.0 -0.0007878058556446683 0.0 0.0003257031950447003 0.9963998190758995 0.0006514063900894001 0.0 4.78231140240197e-5 0.0 0.0 0.0 0.0 0.01434709361758599 0.0 0.00039390292782233427 -0.00039390292782233427 4.4621337721121425e-5 0.0 -0.05349836024892367 -3.3875651792723184e-5 0.01263669176134418 0.012246137652301464 0.014346934207205909 0.0 0.0 0.0 0.0 0.0 -0.001528524578541286 6.318345880678321e-5 0.00010205114710254602;
     0.0 0.0 0.0 0.0 0.00045491664321615935 0.0 0.3981473426746804 0.0006514063900894 0.7962946853493609 0.0 2.771387118847355e-5 0.0 0.0 0.0 0.0 0.008314253736112692 0.0 -0.00022745832160807978 0.00022745832160807978 0.05454618594643121 0.0 0.020581116180916136 1.9561415658293444e-5 0.00862345618500053 0.008693637072201494 0.008314161356542064 0.0 0.0 0.0 0.0 0.0 0.0005880318908834286 4.311728092499141e-5 7.2446975601673e-5;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 -8.607079977310684e-6 0.0 1.3856935594236772e-5 4.782311402401969e-5 2.771387118847354e-5 0.0 8.255353379522243e-6 0.0 0.0 0.0 0.0 -0.000856699801601254 0.0 4.3035399886553445e-6 -4.3035399886553445e-6 1.8984001764105851e-6 0.0 4.507069901575734e-5 -3.7010443902421405e-7 -0.0008339629733655104 -0.0008287454333210395 0.002476606013887481 0.0 0.0 0.0 0.0 0.0 1.2877342576011728e-6 -4.16981486682838e-6 -6.9062119443424454e-6;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 -0.0025821526834597965 0.0 0.004157126868056346 0.014347093617585988 0.00831425373611269 0.0 -0.000856699801601254 0.0 0.0 0.0 0.0 0.7429872038536185 0.0 0.001291076341729899 -0.001291076341729899 0.0005695263809237636 0.0 0.013521359940390587 -0.00011103256538872762 -0.250191671886231 -0.2486263924810896 -0.2570099404803762 0.0 0.0 0.0 0.0 0.0 0.00038632456972787714 -0.0012509583594314036 -0.002071886604009215;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 -0.33275484569580205 0.0 -0.00011372916080403989 0.0003939029278223342 -0.0002274583216080797 0.0 4.303539988655342e-6 0.0 0.0 0.0 0.0 0.0012910763417298978 0.0 0.6663774228479008 0.33362257715209914 -1.5580895030153683e-5 0.0 0.008711848410534295 -0.014308458364919485 0.0014781432788635348 0.0015207474421649978 0.001291061996596602 0.0 0.0 0.0 0.0 0.0 0.00024890995458676083 7.390716394310861e-6 1.2672895351371284e-5;
     0.0 0.0 0.0 0.0 0.33275484569580205 0.0 0.00011372916080403989 -0.0003939029278223342 0.0002274583216080797 0.0 -4.303539988655342e-6 0.0 0.0 0.0 0.0 -0.0012910763417298978 0.0 0.33362257715209914 0.6663774228479008 1.5580895030153683e-5 0.0 -0.008711848410534295 0.014308458364919485 -0.0014781432788635348 -0.0015207474421649978 -0.001291061996596602 0.0 0.0 0.0 0.0 0.0 -0.00024890995458676083 -7.390716394310861e-6 -1.2672895351371284e-5;
     0.0 0.0 0.0 0.0 3.116179006030691e-5 0.0 0.02727309297321561 4.462133772112389e-5 0.054546185946431215 0.0 1.8984001764104384e-6 0.0 0.0 0.0 0.0 0.0005695263809237195 0.0 -1.5580895030153463e-5 1.5580895030153463e-5 0.0037364137373307044 0.0 0.0014098064583927556 1.3399569725931008e-6 0.0005907067486725365 0.0005955141394458025 0.0005695200529231316 0.0 0.0 0.0 0.0 0.0 4.028018452551486e-5 2.9535337433619116e-6 4.962617828714602e-6;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 -0.01742369682106859 0.0 0.010290558090458077 -0.05349836024892369 0.020581116180916143 0.0 4.5070699015757624e-5 0.0 0.0 0.0 0.0 0.013521359940390673 0.0 0.008711848410534299 -0.008711848410534299 0.0014098064583927532 0.0 0.004691308463981791 -0.0007492189633059519 -0.011890063926249923 -0.017684788761150847 0.013521209704727288 0.0 0.0 0.0 0.0 0.0 0.0001340373846761278 -5.945031963032403e-5 -0.0001473732396757547;
     0.0 0.0 0.0 0.0 0.028616916729838963 0.0 9.78070782914743e-6 -3.387565179272074e-5 1.956141565829485e-5 0.0 -3.701044390243593e-7 0.0 0.0 0.0 0.0 -0.00011103256538877119 0.0 -0.014308458364919487 0.014308458364919487 1.3399569725932164e-6 0.0 -0.0007492189633059493 0.0012305274193829972 -0.000127120321982264 -0.00013078428002618982 -0.00011103133170730777 0.0 0.0 0.0 0.0 0.0 -2.1406256094461428e-5 -6.35601609910734e-7 -1.0898690002179303e-6;
     0.0 0.0 0.0 0.0 -0.0029562865577270705 0.0 0.004311728092500266 0.01263669176134418 0.00862345618500053 0.0 -0.0008339629733655104 0.0 0.0 0.0 0.0 -0.250191671886231 0.0 0.0014781432788635363 -0.0014781432788635363 0.0005907067486725794 0.0 -0.011890063926249996 -0.00012712032198222155 0.7557918619038599 -0.24280340701107112 -0.2501888920096531 0.0 0.0 0.0 0.0 0.0 -0.0003397161121764293 0.0037789593095190815 -0.002023361725092378;
     0.0 0.0 0.0 0.0 -0.0030414948843299964 0.0 0.004346818536100747 0.012246137652301464 0.008693637072201492 0.0 -0.0008287454333210396 0.0 0.0 0.0 0.0 -0.2486263924810896 0.0 0.0015207474421649993 -0.0015207474421649993 0.0005955141394458451 0.0 -0.01768478876115092 -0.00013078428002614764 -0.24280340701107112 0.7584645166690953 -0.24862362999631185 0.0 0.0 0.0 0.0 0.0 -0.0005052796788879503 -0.0012140170350555678 0.006320537638909014;
     0.0 0.0 0.0 0.0 -0.002582123993193205 0.0 0.004157080678271032 0.014346934207205907 0.008314161356542062 0.0 0.002476606013887481 0.0 0.0 0.0 0.0 -0.2570099404803762 0.0 0.0012910619965966035 -0.0012910619965966035 0.0005695200529231755 0.0 0.013521209704727201 -0.00011103133170726421 -0.2501888920096531 -0.24862362999631185 0.7429818041662442 0.0 0.0 0.0 0.0 0.0 0.00038632027728035184 -0.0012509444600485142 -0.0020718635833027335;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 -0.0004978199091733883 0.0 0.0002940159454416593 -0.0015285245785406768 0.0005880318908833183 0.0 1.2877342575930751e-6 0.0 0.0 0.0 0.0 0.0003863245697254478 0.0 0.00024890995458669426 -0.00024890995458669426 4.0280184525507234e-5 0.0 0.00013403738468518897 -2.140625609445577e-5 -0.0003397161121785693 -0.0005052796788900241 0.00038632027727792247 0.0 0.0 0.0 0.0 0.0 3.829639562358267e-6 -1.6985805608664007e-6 -4.210663990735849e-6;
     0.0 0.0 0.0 0.0 -1.4781432788635351e-5 0.0 2.1558640462501328e-5 6.318345880672091e-5 4.311728092500265e-5 0.0 -4.169814866827552e-6 0.0 0.0 0.0 0.0 -0.001250958359431155 0.0 7.390716394317682e-6 -7.390716394317682e-6 2.953533743362897e-6 0.0 -5.945031963124998e-5 -6.356016099111078e-7 0.003778959309519301 -0.0012140170350553555 -0.0012509444600482655 0.0 0.0 0.0 0.0 0.0 -1.698580560882146e-6 1.8894796547375137e-5 -1.011680862546189e-5;
     0.0 0.0 0.0 0.0 -2.5345790702749968e-5 0.0 3.622348780083956e-5 0.0001020511471025122 7.244697560167909e-5 0.0 -6.9062119443419965e-6 0.0 0.0 0.0 0.0 -0.0020718866040090795 0.0 1.2672895351374994e-5 -1.2672895351374994e-5 4.962617828715376e-6 0.0 -0.00014737323967625764 -1.0898690002178968e-6 -0.0020233617250922592 0.00632053763890913 -0.002071863583302599 0.0 0.0 0.0 0.0 0.0 -4.210663990732919e-6 -1.0116808625463065e-5 5.2671146990812545e-5]
ind = 5

# preliminary checks: all dimensions are correct
println("Dimensions of A: ", size(A))
println("Dimensions of ub: ", length(ub))
println("Dimensions of lb: ", length(lb))
println("All lb are strictly inferior to ub: ", all(lb .< ub))

# problem

# Initialize the model
linear_model = Tulip.Model{Float64}()
pb = linear_model.pbdata

# Create variables
variables = Int[]
for i in 1:ind-1
    push!(variables, Tulip.add_variable!(pb, Int[], Float64[], 0.0, -Inf, Inf, "z" * string(i)))
end
push!(variables, Tulip.add_variable!(pb, Int[], Float64[], 1.0, -Inf, Inf, "z" * string(ind)))
for i in ind+1:size(A, 2)
    push!(variables, Tulip.add_variable!(pb, Int[], Float64[], 0.0, -Inf, Inf, "z" * string(i)))
end

println("Number of variables: ", length(variables))

# Create constraints
rows = Int[]
for i in 1:size(A, 1)
    push!(rows, Tulip.add_constraint!(pb, variables, A[i, :], lb[i], ub[i], "row" * string(i)))
end

# Set some parameters
Tulip.set_parameter(linear_model, "Presolve_Level", 0) # disable presolve

# Solve the problem
Tulip.optimize!(linear_model)

Here is what I obtain:

> julia wrong_example.jl
Dimensions of A: (49, 34)
Dimensions of ub: 49
Dimensions of lb: 49
All lb are strictly inferior to ub: true
Number of variables: 34
ERROR: LoadError: Found 633 non-zero coeffs (expected 1715)

I use the Julia 1.5.3 version release.

It does not seem to be a normal formatting error. Do you have any idea?

JuMP interface problem with constraint modification after optimization

Hi, I think there is a problem with the interface between Tulip and JuMP when you modify constraints after an optimization has already occurred. This comes up when you want to modify the problem based on the outcome of the previous optimization. Below is an MWE of the issue. This seems to be an issue only for constraints that are vectors.

using JuMP
using Tulip

model = Model(Tulip.Optimizer)
@variable(model, -1.0 <= x[1:2] <= 2)
@objective(model, Max, 5x[1] + 3 * x[2])
@constraint(model, con1, 1x[1] + 5x[2] <= 3)
@constraint(model, con2, 0.0 .<= x)

optimize!(model)
objective_value(model)

set_normalized_rhs(con2[2], 0.2) # throws an error

The following error is thrown:

ERROR: Invalid constraint index 3
Stacktrace:
 [1] error(::String) at .\error.jl:33
 [2] set_attribute at C:\Users\St. Elmo\.julia\packages\Tulip\za0u9\src\Interfaces\tulip_julia_api.jl:138 [inlined]
 [3] set at C:\Users\St. Elmo\.julia\packages\Tulip\za0u9\src\Interfaces\MOI\constraints.jl:629 [inlined]
 [4] _set_substituted at C:\Users\St. Elmo\.julia\packages\MathOptInterface\5WwpK\src\Bridges\bridge_optimizer.jl:1119 [inlined]
 [5] set(::MathOptInterface.Bridges.LazyBridgeOptimizer{Tulip.Optimizer{Float64}}, ::MathOptInterface.ConstraintSet, ::MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.LessThan{Float64}}, ::MathOptInterface.LessThan{Float64}) at C:\Users\St. Elmo\.julia\packages\MathOptInterface\5WwpK\src\Bridges\bridge_optimizer.jl:1048
 [6] replace_constraint_function_or_set(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::MathOptInterface.ConstraintSet, ::MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.LessThan{Float64}}, ::MathOptInterface.LessThan{Float64}) at C:\Users\St. Elmo\.julia\packages\MathOptInterface\5WwpK\src\Utilities\cachingoptimizer.jl:472
 [7] set(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::MathOptInterface.ConstraintSet, ::MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.LessThan{Float64}}, ::MathOptInterface.LessThan{Float64}) at C:\Users\St. Elmo\.julia\packages\MathOptInterface\5WwpK\src\Utilities\cachingoptimizer.jl:503
 [8] set(::Model, ::MathOptInterface.ConstraintSet, ::ConstraintRef{Model,MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.LessThan{Float64}},ScalarShape}, ::MathOptInterface.LessThan{Float64}) at C:\Users\St. Elmo\.julia\packages\JuMP\y5vgk\src\JuMP.jl:1001
 [9] set_normalized_rhs(::ConstraintRef{Model,MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.LessThan{Float64}},ScalarShape}, ::Float64) at C:\Users\St. Elmo\.julia\packages\JuMP\y5vgk\src\constraints.jl:554
 [10] top-level scope at REPL[82]:1

Any idea what the issue is?

Help needed in a large-scale LP problem: always ITERATION_LIMIT

I am dealing with an LP problem arising in superconductor magnet design. It is essentially an L1-norm minimization problem, which is transformed into an LP. The design variables are collected in vector $I_L$, and auxiliary variables $t \geq |I_L|$ (element-wise) are introduced. The form of the final LP is shown below.
[Screenshot: the final LP formulation]
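For reference, the LP in the screenshot can be reconstructed from the JuMP constraints in the code below; a sketch in LaTeX (symbol names follow the code, so treat the exact notation as an assumption):

```latex
\begin{aligned}
\min_{I_L,\; t} \quad & a_L^\top t \\
\text{s.t.} \quad
& -t \le I_L \le t \\
& -I_{\max} \le I_L \le I_{\max} \\
& B_0 (1 - \epsilon) \le \bar{D}_z I_L \le B_0 (1 + \epsilon) \\
& -B_{rf} \le \bar{F}_r I_L \le B_{rf} \\
& -B_{zf} \le \bar{F}_z I_L \le B_{zf}
\end{aligned}
```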
I tried to solve this problem with JuMP and Tulip. The LP problem is a little high dimensional with the following info generated by Tulip:

Problem info
  Name        :
  Constraints : 11000
  Variables   : 5000
  Non-zeros   : 2509954

Reduced problem info
  Constraints : 5996  (removed 5004)
  Variables   : 5000  (removed 0)
  Non-zeros   : 2500000  (removed 9954)
Presolve time : 0.042s

Optimizer info
Linear solver options
  Arithmetic   : Float64
  Backend      : CHOLMOD
  System       : Augmented system

I have access to MATLAB and Gurobi and thus first tested them (both with default settings). The result is summarized below:

  • MATLAB: SUCCESS with optimal objective value 474048.424 in about 20 seconds.
  • Gurobi: SUCCESS with the same optimal objective value in about 6 seconds.

Next, I tried Tulip. With the default parameters, there was always Solver exited with status Trm_IterationLimit. I thus increased the BarrierIterationsLimit parameter by set_optimizer_attribute(model, "BarrierIterationsLimit", 500), though I don't know whether it is the proper fix. After a very very long time (around one hour), I got the following result in Tulip (the last two lines are added in my code):

 499  +4.7330511e+05  +4.7330511e+05  8.87e-10 1.91e-11 2.33e-10  1.4e-18  3453.77
 500  +4.7330527e+05  +4.7330527e+05  1.97e-09 4.85e-11 1.97e-11  2.1e-14  3460.47
Solver exited with status Trm_IterationLimit
- Termination status: ITERATION_LIMIT
- Obj value: NaN

I have even tested Tulip for 5000 iterations but still got the same result.

I have a few questions:

  1. Can Tulip handle problems of such a scale?
  2. If it can, how should I set the parameters to get a reasonable result in reasonable running time?
  3. Or is there any improvement I should make in problem formulation? I am mostly a user of optimization solvers and lack professional knowledge in this field.

Any suggestions are appreciated.

[Code for reproduction] (The Julia file is also available in Google Drive lp_tulip.jl)

using JuMP, Tulip
import LinearAlgebra
using JLD2

@load "lp_data.jld2" Imax D̄z F̄r F̄z B0 ϵ Brf Bzf aL

n = size(D̄z, 2)  # length of the design vector IL
model = Model(Tulip.Optimizer)
set_optimizer_attribute(model, "BarrierIterationsLimit", 500)
set_optimizer_attribute(model, "OutputLevel", 2)
@variable(model, I[1:n])
@variable(model, t[1:n]);
@constraints(model, begin
        I .<= t
        I .>= -t
        I .<= Imax
        I .>= -Imax
        D̄z*I .<= B0 * (1 + ϵ)
        D̄z*I .>= B0 * (1 - ϵ)
        F̄r*I .<= Brf
        F̄r*I .>= -Brf
        F̄z*I .<= Bzf
        F̄z*I .>= -Bzf
        end)
@objective(model, Min, LinearAlgebra.dot(aL, t))
optimize!(model)
st = termination_status(model)
println("- Termination status: ", st)
println("- Obj value: ", objective_value(model))

[Data: lp_data.jld2]
Please download from Google Drive lp_data.jld2.

[System info]

julia> versioninfo()
Julia Version 1.5.0
Commit 96786e22cc (2020-08-01 23:44 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-9.0.1 (ORCJIT, ivybridge)

and Tulip v0.6.0

(The Gurobi output is also listed below for your information)

Academic license - for non-commercial use only
Warning for adding constraints: zero or small (< 1e-13) coefficients, ignored
Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (win64)
Optimize a model with 11000 rows, 5000 columns and 2504998 nonzeros
Model fingerprint: 0x682571a4
Coefficient statistics:
  Matrix range     [1e-13, 1e+00]
  Objective range  [2e-01, 4e-01]
  Bounds range     [0e+00, 0e+00]
  RHS range        [3e-04, 5e+03]

Concurrent LP optimizer: dual simplex and barrier
Showing barrier log only...

Presolve removed 2500 rows and 2504 columns
Presolve time: 1.91s
Presolved: 2500 rows, 8496 columns, 2497498 nonzeros

Ordering time: 0.05s

Barrier statistics:
 AA' NZ     : 3.124e+06
 Factor NZ  : 3.126e+06 (roughly 30 MBytes of memory)
 Factor Ops : 5.211e+09 (less than 1 second per iteration)
 Threads    : 3

                  Objective                Residual
Iter       Primal          Dual         Primal    Dual     Compl     Time
   0  -9.11884571e+06  9.98372088e+07  1.00e-01 6.11e+03  2.32e+04     3s
   1  -5.96551043e+06  1.95735779e+07  4.08e-04 5.15e+03  2.57e+03     4s
   2  -1.19952925e+06  3.12413253e+06  1.35e-06 6.82e+02  4.03e+02     4s
   3   4.36598159e+04  1.92207121e+06  1.11e-07 4.10e+02  1.74e+02     5s
   4   2.47263228e+05  1.01981004e+06  4.09e-08 1.65e+02  7.21e+01     5s
   5   3.27506348e+05  8.65928047e+05  2.16e-08 1.23e+02  5.06e+01     6s

Barrier performed 5 iterations in 5.74 seconds
Barrier solve interrupted - model solved by another algorithm


Solved with dual simplex
Solved in 2001 iterations and 5.76 seconds
Optimal objective  4.740484236e+05
- Termination status: OPTIMAL
- Obj value: 474048.42359098565

Error on BigFloat inequalities

When I use JuMP to create two variables and try to place a constraint on them, I get an error.
Here is a minimal example and a prefix of the error message:

julia> lp = Model(Tulip.Optimizer{BigFloat})
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: Tulip

julia> @variable(lp, x)
x

julia> @variable(lp, y)
y

julia> @constraint(lp, x - y >= 0)
ERROR: `MOI.ScalarAffineFunction{Float64}`-in-`MOI.GreaterThan{Float64}` constraints are not supported and cannot be bridged into supported constrained variables and constraints. See details below:
 [1] constrained variables in `MOI.Nonnegatives` are not supported because no added bridge supports bridging it.
   Cannot add free variables and then constrain them because:
   (5) `MOI.VectorOfVariables`-in-`MOI.Nonnegatives` constraints are not supported
 [2] constrained variables in `MOI.Nonpositives` are not supported because:
   Cannot use `MOIB.Variable.NonposToNonnegBridge{Float64}` because:
   [1] constrained variables in `MOI.Nonnegatives` are not supported
   Cannot add free variables and then constrain them because:
   (9) `MOI.VectorOfVariables`-in-`MOI.Nonpositives` constraints are not supported
 [3] constrained variables in `MOI.LessThan{Float64}` are not supported because:
   Cannot use `MOIB.Variable.VectorizeBridge{Float64,MOI.Nonpositives}` because:
   [2] constrained variables in `MOI.Nonpositives` are not supported
   Cannot add free variables and then constrain them because:
   (10) `MOI.SingleVariable`-in-`MOI.LessThan{Float64}` constraints are not supported
 [4] constrained variables in `MOI.GreaterThan{Float64}` are not supported because:
   Cannot use `MOIB.Variable.VectorizeBridge{Float64,MOI.Nonnegatives}` because:

I am using Julia 1.5.2 on a Mac running OS version 10.15.6, JuMP version v0.21.4, Tulip version v0.6.2.

Tips on how to make this work would be very appreciated!

Extended Precision when called from JuMP

Firstly, thank you for making this available!

I tried the tutorial example with lp = Model(Tulip.Optimizer{Float32}) and got an error about not supporting GreaterThan{Float64}, so I suspect it's an interfacing issue.

Issue with getting ConstraintFunction

See https://ericphanson.github.io/ConvexTests.jl/dev/Tulip/:

Error in testset dsos_univariate_sum:
Error During Test at /home/runner/work/ConvexTests.jl/ConvexTests.jl/src/ConvexTests.jl:37
  Got exception outside of a @test
  BoundsError: attempt to access 7-element Array{MathOptInterface.VariableIndex,1} at index [8]
  Stacktrace:
   [1] getindex at ./array.jl:744 [inlined]
   [2] get(::Tulip.Optimizer{Float64}, ::MathOptInterface.ConstraintFunction, ::MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.GreaterThan{Float64}}) at /home/runner/.julia/packages/Tulip/w8T7U/src/Interfaces/MOI/constraints.jl:467
   [3] get at /home/runner/.julia/packages/MathOptInterface/bygN7/src/Bridges/bridge_optimizer.jl:812 [inlined]

Are models with Rational arithmetic supposed to be supported?

Trying to create an optimizer with Tulip.Optimizer{Rational{BigInt}}() fails because Rational{BigInt} doesn't have an eps method (which is needed for some parameter default values). Specifying parameters as keyword arguments doesn't help.

That said, I don't know if Rational arithmetic is even supposed to work for Tulip.

error `S` not defined

Tulip looks great! I ran into an error, though, when trying out Convex#MathOptInterface's tests via its ProblemDepot. Running the script

using Pkg
Pkg.add(PackageSpec(name="Convex", url="https://github.com/ericphanson/Convex.jl", rev="MathOptInterface"))

using Convex, Tulip

Convex.ProblemDepot.run_tests(exclude=[r"sdp", r"socp", r"exp", r"mip"]) do p
    Convex.solve!(p, Tulip.Optimizer())
end

gives many errors such as

affine_negate_atom: Error During Test at /Users/eh540/.julia/packages/Convex/r4w0m/src/problem_depot/problem_depot.jl:72
  Got exception outside of a @test
  UndefVarError: S not defined
  Stacktrace:
   [1] add_constraint(::Tulip.Optimizer{Float64}, ::MathOptInterface.ScalarAffineFunction{Float64}, ::S<:MathOptInterface.EqualTo{Float64}) at /Users/eh540/.julia/dev/Tulip/src/MOI_wrapper.jl:758
   [2] add_constraint(::MathOptInterface.Utilities.CachingOptimizer{Tulip.Optimizer{Float64},MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::MathOptInterface.ScalarAffineFunction{Float64}, ::MathOptInterface.EqualTo{Float64}) at /Users/eh540/.julia/packages/MathOptInterface/4hMCx/src/Utilities/cachingoptimizer.jl:250
...

The patch

diff --git a/src/MOI_wrapper.jl b/src/MOI_wrapper.jl
index 6dfaf40..675bc13 100644
--- a/src/MOI_wrapper.jl
+++ b/src/MOI_wrapper.jl
@@ -733,7 +733,7 @@ function MOI.add_constraint(
 ) where{Tv<:Real, S<:SCALAR_SETS{Tv}}
     # Check that constant term is zero
     if !iszero(f.constant)
-        throw(MOI.ScalarFunctionConstantNotZero{Tv, typeof(f), S}(f.constant))
+        throw(MOI.ScalarFunctionConstantNotZero{Tv, typeof(f), typeof(s)}(f.constant))
     end
 
     # Extract bound info
@@ -755,7 +755,7 @@ function MOI.add_constraint(
     cidx = add_constraint!(m.inner, "", lb, ub, colids, colvals)
 
     # Return MOI index
-    return MOI.ConstraintIndex{typeof(f), S}(cidx.uuid)
+    return MOI.ConstraintIndex{typeof(f), typeof(s)}(cidx.uuid)
 end

fixes the errors, and all the tests pass. However, I don't quite understand why the error occurs in the first place; it seems to me that S should be the same as typeof(s).
