## Wednesday, September 30, 2009

### Studentized residuals (the linear case)

I have developed a linear and nonlinear regression solver for GAMS (see http://www.amsterdamoptimization.com/pdf/regression.pdf and http://www.amsterdamoptimization.com/pdf/nlregression.pdf). A user asked how to get studentized residuals from these. I should probably add these to the solver so they are written to the GDX file with the other statistics. But first let's investigate whether we can do this at the GAMS level with the information currently available.

The residuals are returned in the ls.gdx file by the linear regression solver. However, to calculate the standard errors of the residuals we need the (diagonals of the) hat matrix:

H = X inv(X'X) X'

The evaluation of this matrix needs a little bit of attention. It is known from calculating the estimates

b = inv(X'X) X'y

that the evaluation of X'X causes lots of numerical problems. Here is an example of this for the famous Longley data set: http://yetanothermathprogrammingconsultant.blogspot.com/2008/06/demonstration-of-numerical-issues-in-ls.html. However, we also export the variance-covariance matrix. This matrix is defined by:

Covar = sigma^2 inv(X'X)

So a simple way to evaluate the hat matrix could be:

H = X [Covar/sigma^2] X'

Below is a complete GAMS example to calculate the (internally and externally) studentized residuals:

\$ontext

  Regression example

  Cross-section data: weekly household expenditure on food and
  weekly household income from Griffiths, Hill and Judge,
  1993, Table 5.2, p. 182.

  Erwin Kalvelagen, October 2000

\$offtext

option decimals=8;

* make sure elements are registered in this order:
set dummy /constant,income/;

set i /i1*i40/;

table data(i, *)

expenditure income
i1        9.46       25.83
i2       10.56       34.31
i3       14.81       42.50
i4       21.71       46.75
i5       22.79       48.29
i6       18.19       48.77
i7       22.00       49.65
i8       18.12       51.94
i9       23.13       54.33
i10      19.00       54.87
i11      19.46       56.46
i12      17.83       58.83
i13      32.81       59.13
i14      22.13       60.73
i15      23.46       61.12
i16      16.81       63.10
i17      21.35       65.96
i18      14.87       66.40
i19      33.00       70.42
i20      25.19       70.48
i21      17.77       71.98
i22      22.44       72.00
i23      22.87       72.23
i24      26.52       72.23
i25      21.00       73.44
i26      37.52       74.25
i27      21.69       74.77
i28      27.40       76.33
i29      30.69       81.02
i30      19.56       81.85
i31      30.58       82.56
i32      41.12       83.33
i33      15.38       83.40
i34      17.87       91.81
i35      25.54       91.81
i36      39.00       92.96
i37      20.44       95.17
i38      30.10      101.40
i39      20.90      114.13
i40      48.71      115.46
;

variables
   constant 'estimate constant term coefficient'
   income   'estimate income coefficient'
   sse      'sum of squared errors'
;

equations
   fit(i) 'the linear model'
   obj    'objective'
;

obj..     sse =n= 0;
fit(i)..  data(i,'expenditure') =e= constant + income*data(i,'income');

option lp=ls;
model ols1 /obj,fit/;
solve ols1 minimizing sse using lp;

display constant.l, income.l, sse.l;

*
* load covariance matrix, residuals, sigma, rss and df from ls.gdx
*
set k /constant,income/;
parameters
   covar(k,k) 'variance-covariance matrix'
   resid(i)   'residuals'
;
scalars
   sigma 'standard error'
   rss   'residual sum of squares'
   df    'degrees of freedom'
;
* (adjust the symbol names to those actually found in ls.gdx if needed)
execute_load 'ls.gdx',covar,resid,sigma,rss,df;

*
* calculate W= inv(X'X) = COVAR/sigma^2
*
parameter W(k,k) "inv(X'X)";
alias (k,k2);
W(k,k2) = covar(k,k2)/sqr(sigma);
display W;

*
* form X
*
parameter X(i,k);
x(i,'constant') = 1;
x(i,'income')   = data(i,'income');
display X;

*
* form H = X W X' (only diagonal)
*
parameter H(i);
h(i) = sum(k2, sum(k, x(i,k)*w(k,k2)) * x(i,k2));
display h;

*
* (internally) studentized residuals
* this corresponds to stdres in ls.diag in R
*
parameter stud_res1(i) '(internally) studentized residuals';
stud_res1(i) = resid(i)/[sigma*sqrt(1-h(i))];
display stud_res1;

*
* (externally) studentized residuals
* this follows the Wikipedia definition
*
parameter stud_res2(i) '(externally) studentized residuals';
* sigma(i): take out residual i without the 1/(1-h(i)) factor
stud_res2(i) = resid(i)/[sqrt((sqr(sigma)*df - sqr(resid(i)))/(df-1))*sqrt(1-h(i))];
display stud_res2;

*
* (externally) studentized residuals
* this follows the R implementation of
* studres in ls.diag
*
parameter stud_res3(i) '(externally) studentized residuals R version';
* sigma(i): take out residual i with the 1/(1-h(i)) factor
stud_res3(i) = resid(i)/[sqrt((sqr(sigma)*df - sqr(resid(i))/(1-h(i)))/(df-1))*sqrt(1-h(i))];
display stud_res3;

The way I read it, the R implementation differs a little from the Wikipedia definition. In the code above, stud_res2 computes what Wikipedia suggests and stud_res3 mimics what R returns as studres in ls.diag. The difference is a factor 1/(1-h(i)) when taking out the i-th residual. Looking at equation 8.1.17 of Draper and Smith, Applied Regression Analysis, it looks like R is correct and Wikipedia (or rather my reading of it) is not. So stud_res3 would be the correct version.
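
Since it is easy to get these formulas wrong, here is a quick sanity check, sketched in Python/numpy (not part of the GAMS tooling, and on made-up data): the closed-form externally studentized residual, computed R-style from rss, h(i) and the residuals, should match an explicit leave-one-out refit.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 12, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # constant + one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

# OLS fit, residuals and hat diagonal h(i) = diag(X inv(X'X) X')
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta
h = np.einsum('ij,jk,ik->i', X, np.linalg.inv(X.T @ X), X)
rss = e @ e
df = n - p
sigma = np.sqrt(rss / df)

# internally studentized (stdres in R's ls.diag)
r_int = e / (sigma * np.sqrt(1 - h))

# externally studentized, R-style: take out residual i with the 1/(1-h) factor
s2_out = (rss - e**2 / (1 - h)) / (df - 1)
t_ext = e / np.sqrt(s2_out * (1 - h))

# brute-force check: refit without observation i and standardize the
# prediction error of the left-out point
t_loo = np.empty(n)
for i in range(n):
    m = np.arange(n) != i
    bi = np.linalg.solve(X[m].T @ X[m], X[m].T @ y[m])
    ri = y[m] - X[m] @ bi
    s2 = (ri @ ri) / (n - 1 - p)
    lev = X[i] @ np.linalg.inv(X[m].T @ X[m]) @ X[i]
    t_loo[i] = (y[i] - X[i] @ bi) / np.sqrt(s2 * (1 + lev))

print(np.allclose(t_ext, t_loo))   # True
```

The identity holds exactly because rss_(i) = rss - e(i)^2/(1-h(i)) and 1/(1-h(i)) = 1 + x(i)' inv(X_(i)'X_(i)) x(i).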

For non-linear regression I assume we just need to replace X by J (the Jacobian evaluated at the minimum residual sum of squares).

## Monday, September 28, 2009

### VSTO+Solver Foundation

Microsoft Solver Foundation provides a simple plug-in to develop optimization applications with Excel as a front-end. For more complex models it may be necessary to use the APIs. Basically there are two alternatives: develop a .NET DLL and call it from VBA (an example is demonstrated here), or create a VSTO application. VSTO is a framework to develop .NET based tools for use in an Office environment. Actually, the Excel plug-in itself is an example of an application-level VSTO tool. Here I describe a document-level VSTO application.

A delayed column generation algorithm for the cutting stock problem is a good example of an algorithm that cannot be coded in OML. Here is a simple example in GAMS. It is not very difficult to code this against the solver API of MSF.

The Excel demo application has two sheets. One with the input and the progress log and one with the results.

The algorithm uses the duals of the master problem to form the subproblems. Access to duals is only available in the solver-level API (in the current version of MSF). This means a very low-level way of describing the model. In this case the models are simple and well structured, so the pain is limited. For more complex models this approach may become burdensome.
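
For readers without MSF or GAMS at hand, the loop itself is easy to prototype. Below is a minimal sketch in Python (using scipy's HiGHS interface, assumed available; the instance data is made up): the LP master is solved over the current set of patterns, its duals price an unbounded-knapsack subproblem, and new patterns are added until no column has negative reduced cost.

```python
import numpy as np
from scipy.optimize import linprog

def cutting_stock_lp(width, sizes, demand, tol=1e-9):
    """Gilmore-Gomory delayed column generation (LP relaxation)."""
    m = len(sizes)
    # start with one pattern per size: as many pieces as fit on a roll
    patterns = [[width // sizes[j] if j == k else 0 for j in range(m)]
                for k in range(m)]
    while True:
        A = np.array(patterns).T              # rows = sizes, cols = patterns
        c = np.ones(len(patterns))            # minimize number of rolls used
        # cover demand: A x >= d, written as -A x <= -d
        res = linprog(c, A_ub=-A, b_ub=-np.array(demand), method='highs')
        duals = -res.ineqlin.marginals        # nonnegative price per size
        # pricing: unbounded knapsack, max sum(duals[j]*a[j])
        # subject to sum(sizes[j]*a[j]) <= width
        best = np.zeros(width + 1)
        choice = [None] * (width + 1)
        for w in range(1, width + 1):
            best[w], choice[w] = best[w - 1], choice[w - 1]
            for j in range(m):
                if sizes[j] <= w and best[w - sizes[j]] + duals[j] > best[w] + tol:
                    best[w] = best[w - sizes[j]] + duals[j]
                    choice[w] = (j, w - sizes[j])
        if best[width] <= 1 + tol:            # no improving column left
            return res.fun, patterns, res.x
        # recover the new pattern from the DP choices and add the column
        new, w = [0] * m, width
        while choice[w] is not None:
            j, w = choice[w]
            new[j] += 1
        patterns.append(new)

obj, pats, x = cutting_stock_lp(10, [3, 4, 5], [4, 3, 2])
print(round(obj, 4))   # LP bound on the number of rolls
```

The duals come back non-positive from HiGHS for a minimization with <= rows, hence the sign flip; an integer solution would still require rounding or a final MIP over the generated columns.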

In order to get duals we needed to create the following options:

The reason for setting the presolve level to zero is a bug in the solver. This is fixed in the next version of MSF.

Note that the algorithm used is different from the example My Documents\Microsoft Solver Foundation\Samples\Solvers\MixedIntegerProgramming\CuttingStock from the Solver Foundation distribution. That algorithm will generate all patterns in advance. In our algorithm we generate new patterns as we go.

VSTO applications are quite fun to develop. Excel integrates nicely in Visual Studio and debugging is easy.

A few issues I encountered:

• Sometimes there is still an Excel.exe process after terminating Visual Studio. Kill it using the process manager.
• C# does not really do COM very nicely. That means lots of type casts and no proper handling of optional parameters (need to use a lot of “missing” for them in calls). I believe this is much better in the next version of C#/Visual Studio.
• Reading the documentation on deployment and security gave me a headache. This is not for the faint-hearted.

## Wednesday, September 23, 2009

### GAMS: compile time vs execution time

if(ElasticityChg_switch = 1,
\$CALL =GDXXRW.EXE IncomeElasticityAdjMatrix.xls …
)

This code works differently than expected: the \$ statements are carried out at compile time, while the if statement is executed at execution time. I.e. this is actually executed in the following order:

\$CALL =GDXXRW.EXE IncomeElasticityAdjMatrix.xls …

if(ElasticityChg_switch = 1,
)

This difference between compile time and execution time is not well understood by many users. (The fix here is to use an execute statement, which runs at execution time, instead of the compile-time \$call.) Unfortunately, many of the newer GAMS constructs such as \$call and \$load require a good understanding of these concepts. Originally GAMS had only a few constructs that required such knowledge, so users could build models without being bothered by the difference between compilation and execution. This is no longer the case: nowadays a good GAMS modeler had better understand this difference.

## Monday, September 21, 2009

### Job Shop Scheduling

Although standard MIP formulations do not work very well on this problem, the current crop of MIP solvers can solve instances that used to be very difficult. As an example consider ft10 with 100 operations (10 machines, 10 jobs). This problem remained unsolved for 26 years after its introduction in 1963. Now it can be solved to optimality in about 5 minutes using solvers like Gurobi and Cplex. (Some tabu search based methods have been able to find the optimal solution in seconds.)
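
The standard disjunctive ("big-M") MIP formulation is easy to sketch outside GAMS as well. Here is a toy 2-job, 2-machine instance (made up, nothing like ft10) in Python using scipy's MILP interface (scipy 1.9+ assumed available):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy instance: job 1 runs M1 (3h) then M2 (2h);
#               job 2 runs M2 (2h) then M1 (3h).
# Variables: s11, s12, s21, s22 (operation start times), Cmax,
#            y1, y2 (binary: job 1 precedes job 2 on M1 / on M2).
M = 100.0                              # big-M constant
c = np.array([0, 0, 0, 0, 1, 0, 0])    # minimize Cmax

A = np.array([
    [ 1, -1,  0,  0,  0,  0,  0],   # s11 + 3 <= s12          (job 1 precedence)
    [ 0,  0,  1, -1,  0,  0,  0],   # s21 + 2 <= s22          (job 2 precedence)
    [ 1,  0,  0, -1,  0,  M,  0],   # s11 + 3 <= s22 + M(1-y1)  (M1 disjunction)
    [-1,  0,  0,  1,  0, -M,  0],   # s22 + 3 <= s11 + M*y1
    [ 0,  1, -1,  0,  0,  0,  M],   # s12 + 2 <= s21 + M(1-y2)  (M2 disjunction)
    [ 0, -1,  1,  0,  0,  0, -M],   # s21 + 2 <= s12 + M*y2
    [ 0,  1,  0,  0, -1,  0,  0],   # s12 + 2 <= Cmax
    [ 0,  0,  0,  1, -1,  0,  0],   # s22 + 3 <= Cmax
])
b = np.array([-3, -2, M - 3, -3, M - 2, -2, -2, -3])

res = milp(c,
           constraints=LinearConstraint(A, -np.inf, b),
           integrality=np.array([0, 0, 0, 0, 0, 1, 1]),
           bounds=Bounds([0] * 7, [np.inf] * 5 + [1, 1]))
print(res.fun)   # optimal makespan (machine M1 carries 6 hours of work)
```

For each machine, one binary per job pair decides the order; this is exactly the formulation that scales poorly on instances like ft10, where 10 jobs per machine give 45 disjunctions per machine.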

I wanted to solve this from Excel, but as Excel has no native Gantt charts, I created some VBA code so I could at least inspect the results.

## Thursday, September 17, 2009

### Modeling question

I'm a new GAMS user. I am trying to model and solve a problem using
binary variables.

In the problem I have two sets:

set
t time periods /t1*t7/
l location /L1*L5/ ;

Currently, I do not have an objective function and I am just trying to
satisfy the constraints of my problem. I have a simplified case of my
problem working.

Simplified case equations and constraints:
- I have a binary variable x with 35 entries
binary variable x(t,l)

- "Equation1" Solves for the supply at time period for each location
(L1, L2, L3, L4, and L5). The supply depends on the binary variable x
(t,l). For example:
x(t1,L1) = 1, then the supply value would INCREASE
x(t1,L1) = 0, then the supply value would DECREASE

- "Equation2" The sum of all the binary variables x(t,l) over a time
period must equal 1 (only one location can have increasing supply
during a given time period). Example:
equation2(t)    ..      sum(l,x(t,l)) =e= 1 ;

- "Equation3" Provides the upper limit of the supply value that can be
reached at each location.

- "Equation4" Provides the lower limit of the supply value that can be
reached at each location.

This simplified case solves correctly (satisfies all constraints).

Current problem:
My problem is that currently (in the simplified case), the binary
variable can move to any location from one time period to the next.
So, if the supply gets too low at location 5 (L5), the binary variable
can jump from any location to L5 (example, from L1 during time period
t4 to L5 during time period t5).

What I need to tell GAMS is that the binary variable can only move 1
spot from time period to time period. Example:

At t1, x(t1,'L1') = 1 (so the binary variable is at L1).
The supply value is low at L5.
But the binary variable can only move in the following manner:
t1 -> x(t1,'L1') = 1
t2 -> x(t2,'L2') = 1
t3 -> x(t3,'L3') = 1
t4 -> x(t4,'L4') = 1
t5 -> x(t5,'L5') = 1 (finally arrives at L5)

Also, the binary variable can't move from L1 to L5 (imagine the
locations are in a line, unable to wrap around).

Can anyone provide an idea (or similar example problem implementation,
if possible) on how to model this in GAMS? I tried the following:

m1(t)\$((ord(t) lt card(t)) and (x(t,'b1') eq 1)) ..  x(t+1,'L1') + x(t+1,'L2') =e= 1 ;
m2(t)\$(ord(t) lt card(t) and x(t,'b2') = 1)      ..  x(t+1,'L2') + x(t+1,'L1') + x(t+1,'L3') =e= 1 ;
m3(t)\$(ord(t) lt card(t) and x(t,'b3') = 1)      ..  x(t+1,'L3') + x(t+1,'L2') + x(t+1,'L4') =e= 1 ;
m4(t)\$(ord(t) lt card(t) and x(t,'b4') = 1)      ..  x(t+1,'L4') + x(t+1,'L3') + x(t+1,'L5') =e= 1 ;
m5(t)\$(ord(t) lt card(t) and x(t,'b5') = 1)      ..  x(t+1,'L5') + x(t+1,'L4') =e= 1 ;

But I got the following error from GLPK (using mip):
**** The following MIP errors were detected in model first:
****  57 equation m1 .. VAR operands relational or boolean
****  57 equation m2 .. VAR operands relational or boolean
****  57 equation m3 .. VAR operands relational or boolean
****  57 equation m4 .. VAR operands relational or boolean
****  57 equation m5 .. VAR operands relational or boolean

I don't understand these error messages...

In summary:
I would appreciate any help (examples, books, etc.) on how to
constrain the movement on the binary variable to the position that is
either 1) the same (L2 to L2 is legal) or 2) adjacent (L2 can only
move to L1 or L3, if not staying at L2). I am stuck and getting
frustrated. Thank you!

You cannot have a variable in a dollar condition: that would make the model nonlinear, as you are essentially writing an if-then expression.

If I read the description correctly you could do something like:

positive variable pos(t) 'position at t';
equations defpos(t), bound1pos(t), bound2pos(t);
defpos(t)..       pos(t) =e= sum(L, ord(L)*x(t,L));
bound1pos(t-1)..  pos(t) =L= pos(t-1) + 1;
bound2pos(t-1)..  pos(t) =G= pos(t-1) - 1;
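
The same idea can be checked quickly outside GAMS. Here is a small sketch in Python with scipy's MILP solver (scipy 1.9+ assumed available): with x(t1,'L1') and x(t5,'L5') fixed to 1 and the adjacency bound |pos(t) - pos(t-1)| <= 1, the only feasible path is the diagonal L1, L2, L3, L4, L5.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

T, L = 5, 5
n = T * L
idx = lambda t, l: t * L + l          # x(t,l) flattened to one vector

cons = []
# exactly one location per period: sum_l x(t,l) = 1
for t in range(T):
    a = np.zeros(n)
    a[[idx(t, l) for l in range(L)]] = 1
    cons.append(LinearConstraint(a, 1, 1))
# adjacency: -1 <= pos(t) - pos(t-1) <= 1, with pos(t) = sum_l (l+1)*x(t,l)
for t in range(1, T):
    a = np.zeros(n)
    for l in range(L):
        a[idx(t, l)] = l + 1
        a[idx(t - 1, l)] = -(l + 1)
    cons.append(LinearConstraint(a, -1, 1))

# fix start at L1 and end at L5 via variable bounds
lb, ub = np.zeros(n), np.ones(n)
lb[idx(0, 0)] = 1
lb[idx(T - 1, L - 1)] = 1

res = milp(np.zeros(n), constraints=cons, integrality=np.ones(n),
           bounds=Bounds(lb, ub))
x = np.round(res.x).reshape(T, L)
print([int(x[t].argmax()) + 1 for t in range(T)])   # -> [1, 2, 3, 4, 5]
```

Going from position 1 to position 5 in four steps of at most +1 forces every step to be exactly +1, so the diagonal is the unique feasible solution.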

## Wednesday, September 16, 2009

### GAMS/Mosek error message

From a log file:

MOSEK Warning 700    : 8 zero elements are specified in sparse input data.
MOSEK Warning 700    : 8 zero elements are specified in sparse input data.
MOSEK Warning 700    : 8 zero elements are specified in sparse input data.
MOSEK Warning 700    : 8 zero elements are specified in sparse input data.
MOSEK Warning 700    : 8 zero elements are specified in sparse input data.
MOSEK Warning 700    : 8 zero elements are specified in sparse input data.
MOSEK Warning 700    : 8 zero elements are specified in sparse input data.
MOSEK Warning 700    : 8 zero elements are specified in sparse input data.
MOSEK Warning 700    : 8 zero elements are specified in sparse input data.
MOSEK Warning 700    : 8 zero elements are specified in sparse input data.

First: issuing this message one time is more than enough.

I suspect this is related to very small values in the GAMS model, below some a[i,j] tolerance inside MOSEK. As GAMS does not export zero coefficients, a strict reading of these messages would mean there is a bug somewhere. So let's hope these messages are just somewhat poorly formulated and this is indeed a matter of small values being present in the model.

Formulating correct and succinct error messages is very important (but often underestimated by software developers, myself included). They provide feedback to the user when it really matters: when something is wrong and the user is most likely confused.

### Sign problem

I'm trying to follow your implementation of Benders Decomposition for SP with GAMS (http://www.amsterdamoptimization.com/pdf/stochbenders.pdf) but after looking deeply I cannot discover the reason for a "-" sign. Sorry for this basic question!

I look at Eq. 7 (page 3), and it says theta > .... (no minus sign)

And during your implementation in GAMS you program:
cutconst(iter) = cutconst(iter) - prob(s)*sum(j,(-selmax.m(j))*demand(j,s));
cutcoeff(iter,j) = cutcoeff(iter,j) - prob(s)*(-selling.m(j));

I cannot discover why we should have the "-" sign here.
I know this is the right programming, but I do not understand why we need this sign.

The sign of duals is not always well defined. In this case we needed to add a minus sign to get the correct results, as GAMS uses a different sign convention than the text. It would have been better if the text and the code were in sync here, or at least to have added a remark about this.
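
Sign conventions indeed differ between systems, not just between GAMS and a textbook. A tiny illustration with scipy's HiGHS interface (assumed available, and unrelated to GAMS): for min x subject to x >= 1, the textbook dual of the constraint is +1, but when the constraint is fed in as -x <= -1 the reported marginal is -1, so a sign flip is needed to recover the textbook value.

```python
from scipy.optimize import linprog

# minimize x  subject to  x >= 1, written as -x <= -1
res = linprog(c=[1.0], A_ub=[[-1.0]], b_ub=[-1.0],
              bounds=[(None, None)], method='highs')

marginal = res.ineqlin.marginals[0]   # solver convention: <= 0 for a min problem
print(res.fun, marginal, -marginal)   # 1.0 -1.0 1.0  (textbook dual is +1)
```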

### GAMS/COINCBC: missing log

In releases 23.1 and 23.2 the log of the branch & bound algorithm has disappeared. After reporting on generated cuts no more information is displayed.

Log of 23.0:

GAMS/CoinCbc 2.2 LP/MIP Solver
written by J. Forrest

Problem statistics: 127 columns and 126 rows.
                    108 variables have integrality restrictions.
Calling CBC main solution routine...
Coin Cbc and Clp Solver version 2.20.00, build Sep 20 2008
command line - GAMS/CBC -solve -quit
Continuous objective value is 32.7059 - 0.00 seconds
Optimal - objective value 32.7059
6 fixed, 0 tightened bounds, 0 strengthened rows, 0 substitutions
processed model has 114 rows, 115 columns (98 integer) and 448 elements
Objective coefficients multiple of 1
Cutoff increment increased from 1e-005 to 0.999
Pass   1: suminf.   10.30302 (63) obj. -29.2323 iterations 18
Pass   2: suminf.   10.30302 (63) obj. -29.2323 iterations 0
Pass   3: suminf.    8.39496 (51) obj. -26.535 iterations 43
Pass   4: suminf.    7.85121 (55) obj. -26.7163 iterations 4
Pass   5: suminf.    7.85121 (55) obj. -26.7163 iterations 0
Pass   6: suminf.    5.84706 (39) obj. -22 iterations 33
Pass   7: suminf.    5.84706 (39) obj. -22 iterations 0
Pass   8: suminf.    5.84706 (39) obj. -22 iterations 0
Pass   9: suminf.    5.03167 (39) obj. -19.6561 iterations 71
Pass  10: suminf.    5.03167 (39) obj. -19.6561 iterations 0
Pass  11: suminf.    5.03167 (39) obj. -19.6561 iterations 0
Pass  12: suminf.    3.05294 (26) obj. -15.2353 iterations 50
Pass  13: suminf.    3.05294 (26) obj. -15.2353 iterations 0
Pass  14: suminf.    3.05294 (26) obj. -15.2353 iterations 0
Pass  15: suminf.    3.63051 (27) obj. -15.1232 iterations 26
Pass  16: suminf.    3.63051 (27) obj. -15.1232 iterations 0
Pass  17: suminf.    3.63051 (27) obj. -15.1232 iterations 0
Pass  18: suminf.    2.75294 (21) obj. -13.4235 iterations 26
Pass  19: suminf.    2.74866 (20) obj. -13.4171 iterations 6
Pass  20: suminf.    2.74866 (20) obj. -13.4171 iterations 0
No solution found this major pass
Before mini branch and bound, 7 integers at bound fixed and 0 continuous
Full problem 114 rows 115 columns, reduced to 113 rows 108 columns - too large
Mini branch and bound did not improve solution (0.11 seconds)
Full problem 115 rows 115 columns, reduced to 115 rows 115 columns - too large
After 0.11 seconds - Feasibility pump exiting - took 0.11 seconds
59 added rows had average density of 50.6441
At root node, 59 cuts changed objective from -29.8813 to -26.7458 in 100 passes
Cut generator 0 (Probing) - 1834 row cuts, 0 column cuts (53 active)  in 0.775 seconds - new frequency is 1
Cut generator 1 (Gomory) - 5566 row cuts, 0 column cuts (6 active)  in 0.226 seconds - new frequency is 1
Cut generator 2 (Knapsack) - 14 row cuts, 0 column cuts (0 active)  in 0.007 seconds - new frequency is -100
Cut generator 3 (Clique) - 0 row cuts, 0 column cuts (0 active)  in 0.009 seconds - new frequency is -100
Cut generator 4 (MixedIntegerRounding2) - 643 row cuts, 0 column cuts (0 active)  in 0.020 seconds - new frequency is 1
Cut generator 5 (FlowCover) - 90 row cuts, 0 column cuts (0 active)  in 0.144 seconds - new frequency is -100
Cut generator 6 (TwoMirCuts) - 990 row cuts, 0 column cuts (0 active)  in 0.044 seconds - new frequency is -100
Optimal - objective value 26.7458
Optimal - objective value 26.7458
After 0 nodes, 1 on tree, 1e+050 best solution, best possible -26.7458 (2.22 seconds)
Integer solution of -18 found after 11709 iterations and 84 nodes (4.46 seconds)
Full problem 114 rows 115 columns, reduced to 98 rows 94 columns
Integer solution of -21 found after 17199 iterations and 205 nodes (6.05 seconds)
Full problem 114 rows 115 columns, reduced to 103 rows 99 columns - too large
Search completed - best objective -21, took 25825 iterations and 367 nodes (7.34 seconds)
Strong branching done 5428 times (117137 iterations), fathomed 39 nodes and fixed 233 variables
Maximum depth 26, 1719 variables fixed on reduced cost
Cuts at root node changed objective from 29.8813 to -26.7458
Probing was tried 660 times and created 7528 cuts of which 1317 were active after adding rounds of cuts (1.204 seconds)
Gomory was tried 611 times and created 10689 cuts of which 626 were active after adding rounds of cuts (0.429 seconds)
Knapsack was tried 100 times and created 14 cuts of which 0 were active after adding rounds of cuts (0.007 seconds)
Clique was tried 100 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.009 seconds)
MixedIntegerRounding2 was tried 651 times and created 2491 cuts of which 123 were active after adding rounds of cuts (0.090 seconds)
FlowCover was tried 100 times and created 90 cuts of which 0 were active after adding rounds of cuts (0.144 seconds)
TwoMirCuts was tried 100 times and created 990 cuts of which 0 were active after adding rounds of cuts (0.044 seconds)
implication was tried 551 times and created 0 cuts of which 0 were active after adding rounds of cuts
5 bounds tightened after postprocessing
Result - Finished objective 21 after 367 nodes and 25825 iterations - took 7.35 seconds (total time 7.39)
Total time 7.39
Solved to optimality (within gap tolerances).
MIP solution:                    21   (367 nodes, 7.395 seconds)
Best possible:                   21
Absolute gap:                     0   (absolute tolerance optca: 0)
Relative gap:                     0   (relative tolerance optcr: 0.1)

In gams 23.2 we see:

GAMS/CoinCbc 2.3 LP/MIP Solver
written by J. Forrest

Problem statistics: 127 columns and 126 rows.
                    108 variables have integrality restrictions.
Calling CBC main solution routine...
Coin Cbc and Clp Solver version 2.3pre, build Apr 17 2009
command line - GAMS/CBC -solve -quit (default strategy 1)
Continuous objective value is 32.7059 - 0.00 seconds
0 fixed, 0 tightened bounds, 91 strengthened rows, 0 substitutions
0 fixed, 0 tightened bounds, 4 strengthened rows, 0 substitutions
0 fixed, 0 tightened bounds, 4 strengthened rows, 0 substitutions
0 fixed, 0 tightened bounds, 4 strengthened rows, 0 substitutions
0 fixed, 0 tightened bounds, 4 strengthened rows, 0 substitutions
0 fixed, 0 tightened bounds, 4 strengthened rows, 0 substitutions
0 fixed, 0 tightened bounds, 4 strengthened rows, 0 substitutions
0 fixed, 0 tightened bounds, 4 strengthened rows, 0 substitutions
0 fixed, 0 tightened bounds, 4 strengthened rows, 0 substitutions
processed model has 114 rows, 115 columns (98 integer) and 448 elements
Objective coefficients multiple of 1
Cutoff increment increased from 1e-005 to 0.999
Pass   1: suminf.   11.18775 (60) obj. -28.9374 iterations 12
Pass   2: suminf.   11.18775 (60) obj. -28.9374 iterations 0
Pass   3: suminf.    7.03271 (40) obj. -21.3003 iterations 77
Pass   4: suminf.    6.88899 (42) obj. -21.037 iterations 4
Pass   5: suminf.    6.88899 (42) obj. -21.037 iterations 0
Pass   6: suminf.    4.86728 (32) obj. -17.0442 iterations 43
Pass   7: suminf.    4.86728 (32) obj. -17.0442 iterations 0
Pass   8: suminf.    4.86728 (32) obj. -17.0442 iterations 0
Pass   9: suminf.    3.82450 (18) obj. -14.6103 iterations 32
Pass  10: suminf.    3.82450 (18) obj. -14.6103 iterations 0
Pass  11: suminf.    3.82450 (18) obj. -14.6103 iterations 0
Pass  12: suminf.    4.04428 (12) obj. -9.88475 iterations 35
Pass  13: suminf.    2.36084 (11) obj. -9.87972 iterations 4
Pass  14: suminf.    2.36084 (11) obj. -9.87972 iterations 0
Pass  15: suminf.    5.41816 (25) obj. -14.5273 iterations 42
Pass  16: suminf.    4.90619 (25) obj. -14.6979 iterations 3
Pass  17: suminf.    4.90619 (25) obj. -14.6979 iterations 0
Pass  18: suminf.    5.63976 (21) obj. -11.9286 iterations 43
Pass  19: suminf.    5.63976 (21) obj. -11.9286 iterations 0
Pass  20: suminf.    5.63976 (21) obj. -11.9286 iterations 0
Pass  21: suminf.    5.60993 (28) obj. -13.8754 iterations 45
Pass  22: suminf.    5.60993 (28) obj. -13.8754 iterations 0
Pass  23: suminf.    5.60993 (28) obj. -13.8754 iterations 0
Pass  24: suminf.    4.23841 (23) obj. -14.553 iterations 48
Pass  25: suminf.    4.23841 (23) obj. -14.553 iterations 0
Pass  26: suminf.    4.23841 (23) obj. -14.553 iterations 0
Pass  27: suminf.    2.62284 (12) obj. -13 iterations 37
Pass  28: suminf.    2.62284 (12) obj. -13 iterations 0
Pass  29: suminf.    2.62284 (12) obj. -13 iterations 0
Pass  30: suminf.    4.52424 (26) obj. -13.3305 iterations 39
No solution found this major pass
Before mini branch and bound, 1 integers at bound fixed and 0 continuous
Full problem 114 rows 115 columns, reduced to 114 rows 114 columns - too large
Mini branch and bound did not improve solution (0.10 seconds)
Full problem 115 rows 115 columns, reduced to 115 rows 115 columns - too large
After 0.11 seconds - Feasibility pump exiting - took 0.11 seconds
47 added rows had average density of 55.5957
At root node, 47 cuts changed objective from -29.7355 to -26.6652 in 100 passes
Cut generator 0 (Probing) - 1402 row cuts, 0 column cuts (46 active)  in 0.755 seconds - new frequency is 1
Cut generator 1 (Gomory) - 5871 row cuts, 0 column cuts (1 active)  in 0.228 seconds - new frequency is 1
Cut generator 2 (Knapsack) - 0 row cuts, 0 column cuts (0 active)  in 0.013 seconds - new frequency is -100
Cut generator 3 (Clique) - 0 row cuts, 0 column cuts (0 active)  in 0.003 seconds - new frequency is -100
Cut generator 4 (MixedIntegerRounding2) - 389 row cuts, 0 column cuts (0 active)  in 0.020 seconds - new frequency is 1
Cut generator 5 (FlowCover) - 33 row cuts, 0 column cuts (0 active)  in 0.149 seconds - new frequency is -100
Cut generator 6 (TwoMirCuts) - 743 row cuts, 0 column cuts (0 active)  in 0.039 seconds - new frequency is -100
Solved to optimality (within gap tolerances).
MIP solution:                    21   (98 nodes, 4.061 seconds)
Best possible:                   21
Absolute gap:                     0   (absolute tolerance optca: 0)
Relative gap:                     0   (relative tolerance optcr: 0.1)

With larger problems you have no clue how CBC is doing. The log is 100% of the “user interface” of a solver, so it is more important to a user than many developers think.

## Monday, September 14, 2009

### Scheduling question

>I thought it would be easy, and I could not find how to do it.
>Assume I have an ordered set of decisions x[100].
>I would like to express in OML these requirements:

> All these decisions should be equal to 1 (positive decisions), except for a subset where the
> decisions should be 0 (negative decisions).

>All the negative decisions should be adjacent in the ordered set of decisions.
>For example, all negative decisions could be from slots {13,14,15,16,17,18,19}.
>On the contrary, the set {5,6,7,13,14,15} could not be a valid set of negative
>decisions as they are not all adjacent (missing 8 to 12).

>Could such constraints be expressed in OML?

This structure is often seen in the scheduling of machines that cannot be turned on more than a given number of times. There are well-known standard formulations for this, e.g.:

binary variable turnon[t];
turnon[t] >= x[t] - x[t-1];
Sum[t, turnon[t]] <= 1;

The binary variable may be relaxed to a continuous variable in [0,1] to improve performance.
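
The reasoning behind this formulation is easy to verify in plain Python (a small sketch, no solver needed): count the 0-to-1 switches implied by turnon[t] >= x[t] - x[t-1]. A single contiguous block of zeros produces at most one switch-on, while a split block produces two and violates the constraint.

```python
def switch_ons(x):
    """Number of 0 -> 1 transitions, i.e. sum of max(x[t]-x[t-1], 0)."""
    return sum(max(x[t] - x[t - 1], 0) for t in range(1, len(x)))

n = 25
contiguous = [0 if 13 <= t <= 19 else 1 for t in range(n)]        # zeros at 13..19
split = [0 if t in (5, 6, 7, 13, 14, 15) else 1 for t in range(n)]

print(switch_ons(contiguous))   # 1  -> satisfies Sum[t, turnon[t]] <= 1
print(switch_ons(split))        # 2  -> violates it
```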

### Postscript viewer

http://www.rampantlogic.com/psview/psview.html

After installing this tool, clicking on a postscript file will show it inside your pdf viewer. Works like a charm.

## Thursday, September 10, 2009

### Cplex QCP Presolver

With respect to this post, it is interesting to see how Cplex presolves these split QCP constraints produced by the GAMS link. I suspected from the final model statistics (rows, columns, etc.) that Cplex could not presolve them away. The Cplex model before presolving looks like:

\Problem name: gamsmodel
Minimize
obj: z + 0 x
Subject To
_e: z - y - linear_part_of_e  = 0
QCP_row_for_e: linear_part_of_e + [ - x ^2 ] >= 0
Bounds
z Free
x Free
y >= 1
linear_part_of_e Free
End

After presolving it looks like:

\Problem name: x2.lp

Minimize
obj: 0 x + 2 QCV0
Subject To
QCR1: QCV0 - QCV1  = 1
QCP_row_for_e: [ x ^2 - QCV0 ^2 + QCV1 ^2 ] <= 0
Bounds
x Free
QCV1 Free
End

This raises some questions. It is not completely obvious to me why this is a good transformation. And what is the reason for printing terms like 0 × x in the objective?

## Wednesday, September 9, 2009

### GAMS Cplex link issues with QCPs

I am working on a very large problem that involves electricity transmission. The losses over the transmission lines are quadratic functions modeled by piecewise linear functions. These functions cause the model to have many constraints. Using a quadratic formulation we can reduce the size of the problem significantly: we go from 1,248,614 rows to 784,934. This looks very promising. Unfortunately the GAMS Cplex link generates a Cplex model with many extra rows. We go from:

`Reduced MIP has 943453 rows, 405564 columns, and 3247549 nonzeros.`

to

`Reduced MIQCP has 1035303 rows, 1047593 columns, and 3095839 nonzeros.`

So all our effort to reduce the size of the model is wasted somehow.

The reason is a poor implementation of the quadratic terms. If we look at the small GAMS model:

variables z,x,y;
equation e;
y.lo=1;
e.. z =g= x*x+y;
model m /e/;
solve m minimizing z using qcp;

then the Cplex model that is generated contains extra rows:
\Problem name: gamsmodel

Minimize
obj: z + 0 x
Subject To
_e: z - y - linear_part_of_e  = 0
QCP_row_for_e: linear_part_of_e + [ - x ^2 ] >= 0
Bounds
z Free
x Free
y >= 1
linear_part_of_e Free
End

AMPL is doing much better. The model:
var x; var y >= 1; var z;
minimize obj:z;
e: z >= x*x+y;

generates the following Cplex model:
\Problem name: c3v1i0o1

Minimize
obj: 0 x1 + 0 x2 + x3
Subject To
c1: - x2 + x3 + [ - x1 ^2 ] >= 0
Bounds
x1 Free
x2 >= 1
x3 Free
End

I reported this to the GAMS people, so hopefully it will be fixed quickly and I can really try my MIQCP formulation.
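
Independent of how the links pass the model to Cplex, the toy model itself is easy to check. A sketch with scipy (assumed available; any NLP solver would do): minimizing z subject to z >= x² + y and y >= 1 should give z = 1 at x = 0, y = 1.

```python
import numpy as np
from scipy.optimize import minimize

# variables v = [z, x, y]; minimize z  s.t.  z - x^2 - y >= 0,  y >= 1
res = minimize(lambda v: v[0], x0=[5.0, 1.0, 2.0],
               constraints=[{'type': 'ineq', 'fun': lambda v: v[0] - v[1]**2 - v[2]},
                            {'type': 'ineq', 'fun': lambda v: v[2] - 1.0}],
               method='SLSQP')
print(np.round(res.x, 4))   # approximately [1, 0, 1]
```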

## Friday, September 4, 2009

### Stochastic Programming in MSF

The next version of Microsoft Solver Foundation will contain facilities for Stochastic Programming: http://blogs.msdn.com/natbr/archive/2009/09/03/solver-foundation-2-0-preview-simulation-and-stochastic-programming.aspx.

What do other modeling systems have?

## Thursday, September 3, 2009

### Numerical issues with CONOPT

Here is a small linear model we solve as CNS (constrained nonlinear system) with CONOPT:

variable x;
equation e;

e.. x =e= exp(100);

model m /e/;
solve m using cns;

The resulting error message is somewhat difficult to comprehend:

** An equation in the pre-triangular part of the model cannot
be solved because the equation or its derivatives appear to
be discontinuous or because the function and derivative are
inconsistent.

x appearing in
e: Equation and/or derivative appear to be discontinuous or inconsistent

**** SOLVER STATUS     1 Normal Completion
**** MODEL STATUS      4 Infeasible

LOWER          LEVEL          UPPER

---- VAR x                 -INF      5.0000000E+9        +INF

I suspect CONOPT is using some internal upper bound on the variables, leading to this message.

GAMS generates the equation:

---- e  =E=

e..  x =E= 2.68811714181616E43 ; (LHS = 0, INFES = 2.68811714181616E43 ****)

which seems correct.

### Overflow in NLP

The expression log(sum(i,exp(x(i)))) can lead to numerical instability or even overflow. Sometimes it helps to “scale” this as follows:

C + log(sum(i,exp(x(i)-C)))

for some constant C. A reasonable value for C is the largest x(i) you expect; note that exp(.) overflows quickly. This functional form is used in many economic models, and when the x(i) differ by more than about 10 it effectively approximates the max function.
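
The scaling trick is language independent; here is a small Python sketch (standard library only): the naive form overflows for large x, while the shifted form is exact up to floating point.

```python
import math

def logsumexp(xs):
    """log(sum(exp(x))) computed as C + log(sum(exp(x - C))), C = max(x)."""
    c = max(xs)
    return c + math.log(sum(math.exp(x - c) for x in xs))

xs = [1000.0, 999.0]
# math.log(sum(math.exp(x) for x in xs)) would raise OverflowError here
print(logsumexp(xs))   # 1000 + log(1 + exp(-1)), about 1000.3133
```

With the shift, every exponent is at most 0, so no term can overflow; at worst small terms underflow harmlessly to 0.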

As a somewhat long aside:

When using recent versions of GAMS you need to be aware that exp(.) never overflows because it is truncated, i.e. exp actually returns min{exp(x),1e299}. This looks smart, but on the whole I think it does more harm than good, and I am not aware of any other numerical software that does such a truncation. As a result the function is no longer smooth (in theory one should then use a DNLP solver, but an exception is introduced for this case). Although f(x)=min{exp(x),1e299} is evaluated correctly, the derivative f’ is not adjusted accordingly (it should be 0 for x>log(1e299), but instead is calculated as 1e299).

Strangely an overflow message can appear in the gradient calculation:

1  scalar x /1000/;
2  scalar f, grad;
3
4  f = exp(x);
5  grad = exp.grad(x);
6

This will give a message about line 5:

**** Exec Error at line 5: exp: FUNC OVERFLOW: x too large

I.e. GAMS issues an overflow error in the gradient evaluation but not in the function evaluation. Another example of a somewhat unexpected result is:

scalar x /1e10/;
scalar y;

set i /i1*i50/;
loop(i,
x = x*1e6;
y = exp(x);
display x,y;
);

this actually does overflow, but not in the exp()! It overflows in the multiplication, which is not truncated. Note also that in normal math we have exp(x) > x, but with this truncation rule there exist x such that x > exp(x) (e.g. take x = 1.1e299). As you can guess, I am not a fan of this redefinition of exp. Call me old-fashioned, but I would prefer to be alerted to an overflow so I can fix it in the model, rather than have it truncated behind my back. I assume the truncation was introduced so as not to bother the user with trivialities like overflow; well, in this case I actually like to be bothered and would prefer a consistent application of overflow conditions.
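
Other systems indeed pick different conventions here, and none that I know of truncate to a finite value. A quick Python illustration (numpy assumed available): math.exp raises an overflow error, while numpy.exp returns inf with a warning; neither silently returns a finite result.

```python
import math
import numpy as np

try:
    math.exp(1000)                        # the math module raises
except OverflowError as err:
    print('math.exp:', err)

with np.errstate(over='ignore'):
    print('numpy.exp:', np.exp(1000.0))   # inf, not a truncated finite value
```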

## Wednesday, September 2, 2009

### New Robust Optimization Tool

See http://www.robustopt.com/. Robust optimization is a technique to deal with uncertainty. This tool is Matlab based.

## Tuesday, September 1, 2009

### Minimize instead of Maximize

Someone sent me this link: http://www.cs.clemson.edu/~steve/CPLSCMODS/stealth.note. I find this difficult to believe. I make lots of errors, but when developing a model I run it in all its variants so many times that I really should have some idea about the optimal objective, or at least a good objective value. It is very difficult to imagine being so clueless about the optimal objective value that I could not detect an error like this just by looking at the solution. E.g. a hint that something is wrong would be the observation that the final solution is worse than the initial point. The only exception would be if the model is so tight that the minimizer and the maximizer yield essentially the same objective value, in which case the problem is not that severe.