Wednesday, November 26, 2014

Example of use of Unit Loss Function

In a previous post I showed how to write and solve a unit loss function G(k). Here I show how to use it.

Here I solve some (s,Q) inventory models. These models have a decision variable Q, the order quantity. It is sometimes suggested to fix Q at the Economic Order Quantity (EOQ). Below I try to see how much difference it makes whether we plug in the EOQ or solve for Q in the (s,Q) models directly.

The results are:

----    155 PARAMETER results 

                            EOQ      Qendog        Qeoq

EOQ .Q                 5200.000
CSOE.Q                             5876.746    5200.000
CSOE.Total Cost                 6337360.618 6337934.428
CSOE.Inv+Short Cost              137360.618  137934.428
CSOE.s                             5658.711    5749.015
CSOE.k                                2.094       2.152
CIS .Q                             5846.467    5200.000
CIS .Total Cost                 6331413.493 6331942.157
CIS .Inv+Short Cost              131413.493  131942.157
CIS .s                             5292.515    5373.301
CIS .k                                1.860       1.912

For the two inventory models (Cost per Stockout Event – CSOE and Cost per Item Short – CIS) the differences in Q and s are significant: the EOQ is roughly 11% below the optimized order quantity. However, the effect on cost is much smaller: the inventory-plus-shortage cost increases by only about 0.4%, and total cost by less than 0.01%. The cost surface is pretty flat around the optimum.

In practice these inventory models are usually solved with formulas derived from the first-order conditions. However, I think there is a case to be made for optimizing the original cost functions directly: this stays closer to the original problem, and it gives more flexibility if we want to add a few more wrinkles. In some cases there are closed-form solutions, e.g. the well-known EOQ formula:

   Q* = sqrt( 2·cK·D / ci )

In the model below we again use the original cost function and minimize it over Q. Note that in other cases it may not be this easy to find a closed-form solution.
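As a quick sanity check, the closed-form EOQ can be computed directly from the data used below; it should match what the NLP solver finds. A minimal sketch in Python (same data as the GAMS model, not part of the original post):

# closed-form EOQ check (a sketch, not part of the original model)
from math import sqrt

D  = 62000        # mean demand (units/year)
cK = 3270.9678    # order cost ($/order)
ci = 0.15 * 100   # annual holding cost ($/item/year)

Qeoq = sqrt(2 * cK * D / ci)
print(Qeoq)       # about 5200, matching the EOQ row in the results table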

$ontext

   Some (s,Q) inventory models

   We evaluate two inventory models:
     (1) Cost per Stockout Event (CSOE) Model
     (2) Cost per Item Short (CIS) Model

   It is sometimes suggested to plug Q* = EOQ into these models
   as opposed to optimizing directly for Q. Here we try to
   see how much of a difference this makes.

$offtext

scalars
   D        'mean demand (units/year)'        /62000/
   sigma_D  'standard deviation of demand'     /8000/
   c        'cost per item ($/item)'            /100/
   cK       'order cost ($/order)'        /3270.9678/
   ci       'annual holding cost ($/item/year)'
   h        'holding charge (% of unit cost)'  /0.15/
   mu_dl    'mean demand over lead time'
   sigma_dl 'sigma of demand over lead time'
   L        'lead time (days)'                   /14/
   B1       'CSOE penalty'                    /50000/
   cs       'item short cost'                    /45/
   Qeoq     'EOQ'
;

* derived data: holding cost and lead-time demand statistics
ci = h*c;
mu_dl = D/(365/L);
sigma_dl = sigma_D/sqrt(365/L);

parameter results(*,*,*);

*------------------------------------------------------
* Deterministic EOQ model
*------------------------------------------------------

variables
    tc 'total cost'
    Q  'order quantity'
;

* prevent division by zero
Q.lo = 0.1;

equations
    costdef1 'total cost calculation for simple deterministic case'
;

* total cost: purchase + ordering + holding
costdef1..
   tc =e= c*D + cK*(D/Q) + ci*(Q/2);

model eoq /costdef1/;
solve eoq minimizing tc using nlp;

Qeoq = Q.l;
results('EOQ','Q','EOQ') = Qeoq;


*------------------------------------------------------
*  Cost per Stockout Event Model
*------------------------------------------------------

positive variables
   k         'safety factor'
   PStockout 'P[x>=k]'
   s         'order point'
;
equations
    costdef2 'CSOE total cost function'
    cdf      'this implements P[x>=k]'
    sdef     'calculation of order point s'
;

costdef2..
   tc =e=  c*D + cK*(D/Q) + ci*(Q/2+k*sigma_dl) + B1*(D/Q)*PStockOut;

cdf..
   Pstockout =e= 1-errorf(k);

sdef..
   s =e= mu_dl + k*sigma_dl;

model csoe /costdef2,cdf,sdef/;



*------------------------------------------------------
*  Cost per Item Short (CIS)  Model
*------------------------------------------------------

variables
    G  'unit loss function'
;
equations
    costdef3 'CIS total cost function'
    Gdef     'this implements G(k)'
;

costdef3..
   tc =e=  c*D + cK*(D/Q) + ci*(Q/2+k*sigma_dl) + cs*sigma_dl*G*(D/Q);

Gdef..
   G =e= 1/sqrt(2*pi)*exp(-0.5*sqr(k)) - k * (1-errorf(k));

model cis /costdef3,sdef,Gdef/;



*------------------------------------------------------
*  Results with Q endogenous
*------------------------------------------------------

solve csoe minimizing tc using nlp;

results('CSOE','Total Cost','Qendog') = TC.L;
results('CSOE','Inv+Short Cost','Qendog') = TC.L-c*D;
results('CSOE','Q','Qendog') = Q.L;
results('CSOE','s','Qendog') = s.L;
results('CSOE','k','Qendog') = k.L;

solve cis minimizing tc using nlp;

results('CIS','Total Cost','Qendog') = TC.L;
results('CIS','Inv+Short Cost','Qendog') = TC.L-c*D;
results('CIS','Q','Qendog') = Q.L;
results('CIS','k','Qendog') = k.L;
results('CIS','s','Qendog') = s.L;


*------------------------------------------------------
*  Results with Q fixed to EOQ
*------------------------------------------------------

Q.fx = Qeoq;

solve csoe minimizing tc using nlp;

results('CSOE','Total Cost','Qeoq') = TC.L;
results('CSOE','Inv+Short Cost','Qeoq') = TC.L-c*D;
results('CSOE','Q','Qeoq') = Q.L;
results('CSOE','k','Qeoq') = k.L;
results('CSOE','s','Qeoq') = s.L;

solve cis minimizing tc using nlp;

results('CIS','Total Cost','Qeoq') = TC.L;
results('CIS','Inv+Short Cost','Qeoq') = TC.L-c*D;
results('CIS','Q','Qeoq') = Q.L;
results('CIS','k','Qeoq') = k.L;
results('CIS','s','Qeoq') = s.L;

display results;
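For a cross-check outside GAMS, the same CIS cost function can be handed to a general-purpose optimizer. A minimal sketch in Python, assuming scipy is available (not part of the original model, just the same cost function):

# cross-check of the CIS model (a sketch, assuming scipy)
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

D, sigma_D, c, cK, h, L, cs = 62000, 8000, 100, 3270.9678, 0.15, 14, 45
ci = h * c
mu_dl = D / (365 / L)
sigma_dl = sigma_D / np.sqrt(365 / L)

def G(k):
    # standard normal unit loss function
    return norm.pdf(k) - k * (1 - norm.cdf(k))

def cis_cost(v):
    Q, k = v
    return c*D + cK*D/Q + ci*(Q/2 + k*sigma_dl) + cs*sigma_dl*G(k)*(D/Q)

res = minimize(cis_cost, x0=[5200.0, 2.0], bounds=[(0.1, None), (0.0, None)])
Q, k = res.x
# Q, k, s and total cost should be close to the CIS 'Qendog' column above
print(Q, k, mu_dl + k*sigma_dl, res.fun)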

Thursday, November 20, 2014

Unit Loss Function

A standard normal unit loss function can be described as:

   UNL(x) = f(x) − x·(1 − F(x))

where f(x) is the standard normal density function and F(x) is the standard normal CDF. See: http://planetmath.org/unitnormallossfunction.

Often we want to solve UNL(x)=c for x.

GAMS Version

$ontext

   Standard Normal Unit Loss Function (UNL)

   Solve UNL(x)=c for a given c.

   Method: solve f(x) - x * (1-F(x)) = c
   where f is the standard normal density function and F is the
   standard normal CDF.

$offtext


scalar c /0.0331/;

variable x;
equation e;

e..  1/sqrt(2*pi)*exp(-0.5*sqr(x)) - x * (1-errorf(x)) =e= c;

* initial value
x.l = 1;

model m /e/;
solve m using cns;
display x.l;

This solves very quickly with CONOPT; we get:

image

Note: the GAMS math expressions can be inspected with a new tool. This is a nice test example:

image

Wolfram Alpha

The input is easy:

image

Unfortunately, the output is:

image

Mathematica

The above approach is not really suited for Mathematica. This looks better:

image

R

The input is super easy, thanks to the predefined functions for the normal pdf and cdf. We give the interval [0,100] to search:

image

Excel

Its root finder is called Goal Seek.

image

image

It is a little bit sloppy by default. We can make Goal Seek more precise by multiplying the UNL−c cell by 1000:

image  

Note: the tolerance can also be set in the Excel options (under Formulas, Maximum Change). It looks like Excel uses an absolute tolerance on Δx and Δf (i.e., it could stop prematurely if the algorithm chooses a very small Δx).

Python

Quite nice:

image
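The code itself is in the screenshot; a minimal sketch of what such a Python solution could look like, assuming scipy:

# solve UNL(x) = c (a sketch, assuming scipy)
from scipy.stats import norm
from scipy.optimize import brentq

c = 0.0331

def unl(x):
    # standard normal unit loss function: f(x) - x*(1 - F(x))
    return norm.pdf(x) - x * (1 - norm.cdf(x))

# unl is decreasing, so a bracketing root finder on [0,100] works
print(brentq(lambda t: unl(t) - c, 0.0, 100.0))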

Sunday, November 16, 2014

Quartiles in R

In this Wikipedia article it looks like the R functions summary and fivenum use the same algorithm to calculate q1 and q3:
image
However that is not the case, as can be seen here (notice the difference in q1 and q3):
image
Of course, quartiles are not uniquely defined. In fact, the function quantile offers nine(!) different ways to calculate them: http://stat.ethz.ch/R-manual/R-patched/library/stats/html/quantile.html. The function summary actually calls quantile with its default algorithm (type 7), while fivenum computes Tukey's hinges, which in general differ from type 7 quartiles.
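The same point can be illustrated outside R. A minimal sketch in Python, assuming numpy >= 1.22, which implements the same nine quantile definitions as R's quantile():

# different quantile definitions give different quartiles (a sketch)
import numpy as np

x = np.array([1, 2, 3, 4])
print(np.quantile(x, 0.25, method="linear"))   # R type 7 (the default): 1.75
print(np.quantile(x, 0.25, method="weibull"))  # R type 6: 1.25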

Wednesday, November 12, 2014

Progressive Hedging (2)

The second example in Pallson and Ravn (http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=6738) is a little bit more challenging.

We have:

   min  Σt [ 0.5·(x(t) − x(t−1) − x0(t))² + (x(t) − d(t))² ]
   s.t. 3 ≤ x(t) ≤ 6

with the following tree:

image

A GAMS representation of the progressive hedging algorithm could be:

$ontext

   Olafur P. Pallson, Hans F. Ravn
   Scenario Analysis and the Progressive Hedging Algorithm
     - Simple Numerical Examples
   The Technical University of Denmark, 1993

$offtext

set
   iter   'iteration number' /it0*it20/
   n      'nodes' /I,II,III,IV, s1*s6/
   s(n)   'scenarios: leaf nodes' /s1*s6/
   t      'levels: dimension of x' /t1*t3/
   dp(n)  'decision points in tree' /I,II,III,IV/
   tt(t)  "subset of t's used in current scenario"
;

alias(n,nn,nnn);
table tree(n,nn,*)
        value  prob
I.II        5    0.3
II.s1       6    0.2
II.s2       5    0.5
II.s3       2    0.3
I.III       2    0.7
III.s4      4    0.6
III.IV      7    0.4
IV.s5       6    0.1
IV.s6       4    0.9
;

display tree;

*
* do some tree manipulations here
*

set paths(s,n,nn) 'links on path to leaf node';
paths(s,n,s)$tree(n,s,'prob') = yes;
scalar cnt;
* repeat augmenting paths until no more change
repeat
    cnt = card(paths);
    paths(s,nnn,n)$(sum(nn,paths(s,n,nn)) and tree(nnn,n,'prob')) = yes;
until cnt=card(paths);
option paths:0:0:1;
display paths;

set top(n) 'top of tree';
top(n)$(sum((s,nn)$paths(s,nn,n),1)=0) = yes;
display top;

parameter level(n) 'level of each node in tree';
level(top)=1;
repeat
   cnt = card(level);
   loop(paths(s,n,nn)$level(n),
      level(nn) = level(n)+1;
   );
until cnt=card(level);
display level;

alias(t,tdown,tup);
parameters
   sprob0(s,t) 'single probabilities'
   sprob1(s,t) 'accumulated probabilities'
   sval(s,t)   'values'
;
sval(s,t)   = sum(paths(s,n,nn)$(level(n)=ord(t)), tree(n,nn,'value'));
display sval;
sprob0(s,t) = sum(paths(s,n,nn)$(level(n)=ord(t)), tree(n,nn,'prob'));
display sprob0;
* accumulate probabilities backwards to get conditional probabilities
sprob1(s,t)$sprob0(s,t) =
   prod(tdown$(ord(tdown)>=ord(t) and sprob0(s,tdown)), sprob0(s,tdown));
display "table 4a",sprob1;

set stdp(s,t,dp) 'mapping to decision points';
stdp(s,t,dp)$sum(paths(s,dp,nn)$(level(dp)=ord(t)), 1) = yes;
option stdp:0:1:2;
display stdp;

set tdp(t,dp) 'extracted from stdp for a given scenario';

parameter
   Xs(s,t)
   Ws(s,t)          'price'
   W(t)
   d(t)
   xhat(dp)         'conditional expectation'
   report(iter,*,*) 'table 1: results from iterations'
   x0(t)            /t1 4/
;
Ws(s,t) = 0;
w(t) = 0;
xhat(dp) = 0;

scalars
   iterno
   continue /1/
   r        'penalty parameter' /0/
;


variables
   x(t) 'for submodel'
   z    'for submodel'
;

equations
   obj
;


obj.. z =e= sum(tt(t), 0.5*sqr(x(t)-x(t-1)-x0(t)) + sqr(x(t)-d(t)))
          + sum(tt(t), w(t)*x(t))
          + 0.5*r*sum(tt(t), sqr(x(t)-sum(tdp(t,dp),xhat(dp))));
x.lo(t)=3;
x.up(t)=6;
model m /obj/;

* reduce output and increase speed in loop
m.solprint = 2;
m.solvelink = %Solvelink.LoadLibrary%;

option xhat:2;
display xhat;

set report_iter(iter) /it0*it5,it10,it15,it20/;
parameter report_xhat(iter,dp) 'table 7';

loop(iter$continue,

  iterno = ord(iter);
  display '***** iteration ******',iterno;

* r = 0 in first iteration (it0)
  r = 1$(iterno>1);

* step 1
* solve each scenario
  Xs(s,t) = 0;
  loop(s,
     d(t) = sval(s,t);
     tdp(t,dp) = stdp(s,t,dp);
     tt(t) = sum(tdp(t,dp),1);
     w(t) = ws(s,t);
     solve m minimizing z using nlp;
     Xs(s,tt(t)) = x.l(t);
  );
  display Xs;

* step 2
* Calculate policy Xhat
  xhat(dp) = sum(stdp(s,t,dp), Xs(s,t)*sprob1(s,t));
  display Xhat;

* step 3
* Update W
  Ws(s,t) = Ws(s,t) + r*(xs(s,t) - sum(stdp(s,t,dp),xhat(dp)));
  display Ws;

  report_xhat(iter,dp)$report_iter(iter) = xhat(dp);

);

display report_xhat;

Much of the “ugly” code deals with the tree: we need to manipulate it to produce the following data:

image

This is reproduced in GAMS:

image
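To make the tree bookkeeping concrete, here is a minimal sketch of the same computation in Python (my own sketch, not the paper's code): walking from each leaf back to the root recovers the per-level values (sval) and the backward-accumulated probabilities (sprob1).

# tree bookkeeping for the PHA example (a sketch, not the paper's code)
edges = {  # child: (parent, value, prob), one entry per link in the tree
    "II": ("I", 5, 0.3), "s1": ("II", 6, 0.2), "s2": ("II", 5, 0.5),
    "s3": ("II", 2, 0.3), "III": ("I", 2, 0.7), "s4": ("III", 4, 0.6),
    "IV": ("III", 7, 0.4), "s5": ("IV", 6, 0.1), "s6": ("IV", 4, 0.9),
}
for leaf in ["s1", "s2", "s3", "s4", "s5", "s6"]:
    # walk from the leaf back to the root, collecting (value, prob) per link
    path, node = [], leaf
    while node in edges:
        parent, value, prob = edges[node]
        path.append((value, prob))
        node = parent
    path.reverse()                 # now in root-to-leaf (level) order
    # sprob1-style accumulation: product of the probabilities from level t on
    acc, running = [], 1.0
    for value, prob in reversed(path):
        running *= prob
        acc.append(running)
    acc.reverse()
    print(leaf, [v for v, p in path], [round(a, 3) for a in acc])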

The PHA algorithm itself is largely the same as for the smaller first problem shown here: http://yetanothermathprogrammingconsultant.blogspot.com/2014/11/progressive-hedging-1.html. Interestingly, there are some differences in the behavior of the algorithm. The paper shows the following iteration results:

image

while I get:

image

Apart from iteration 0, our iterates do not match the paper's. But looking at the optimal solution for xhat:

image

I am actually a bit closer to this optimal solution.

We cannot tell where our calculations diverge from the ones in the paper. It would have been easier if the code the authors used to produce their results had been included as an appendix.

Tuesday, November 11, 2014

Progressive Hedging (1)

In Pallson and Ravn (http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=6738) two small, basic examples of the progressive hedging algorithm are given. It is always a good exercise to see if we can reproduce the results. The first example is actually not very difficult to reproduce:

Algorithm:

image

Problem:

   min  Σs prob(s)·(x − d(s))²
   s.t. 3 ≤ x ≤ 6
   with d(s1) = 5 (prob 0.6) and d(s2) = 2 (prob 0.4)

 

$ontext

   Olafur P. Pallson, Hans F. Ravn
   Scenario Analysis and the Progressive Hedging Algorithm
     - Simple Numerical Examples
   The Technical University of Denmark, 1993

$offtext

set
   iter 'iteration number' /it0*it9/
   s    'scenarios' /s1*s2/
;

table ds(s,*) 'single stochastic parameter'
       value   prob
s1         5    0.6
s2         2    0.4
;

parameter
   Xs(s)
   Ws(s) 'price'
   report(iter,*,*) 'table 1: results from iterations'
;
Ws(s) = 0;

scalars
   iterno
   continue /1/
   d        /0/
   xhat 'conditional expectation' /0/
   r    'penalty parameter' /0/
   w    /0/
;


variables
   x 'for submodel'
   z 'for submodel'
;

equations
   obj
;


obj.. z =e= sqr(x-d) + w*x + 0.5*r*sqr(x-xhat);
x.lo=3;
x.up=6;
model m /obj/;

m.solprint = 2;
m.solvelink = %Solvelink.LoadLibrary%;

loop(iter$continue,

  iterno = ord(iter);
  display '***** iteration ******',iterno;

* r = 0 in first iteration (it0)
  r = 1$(iterno>1);

* step 1
* solve each scenario
  loop(s,
     d = ds(s,'value');
     w = ws(s);
     solve m minimizing z using nlp;
     Xs(s) = x.l;
  );
  display Xs;

* step 2
* Calculate policy Xhat
  Xhat = sum(s, Xs(s)*ds(s,'prob'));
  display Xhat;

* step 3
* Update W
  Ws(s) = Ws(s) + r*(xs(s) - xhat);
  display Xhat,Ws;

  report(iter,'x',s) = Xs(s);
  report(iter,'xhat','-') = xhat;
  report(iter,'w',s) = Ws(s);
  report(iter,'z','-') = sum(s,ds(s,'prob')*sqr(Xs(s)-ds(s,'value')));
  report(iter,'zhat','-') = sum(s,ds(s,'prob')*sqr(xhat-ds(s,'value')));

);

option report:2:1:2;
display report;



*
* comparison
* this should give the same solution
*

equation obj2;
obj2.. z =e= sum(s,ds(s,'prob')*sqr(x-ds(s,'value')));
model m2 /obj2/;
solve m2 minimizing z using nlp;
display x.l;

 

Results in paper:

image

Results from GAMS model:

image

xhat converges quite quickly to the optimal solution xhat = 3.8. (Indeed, the unconstrained minimizer of Σs prob(s)·(x − d(s))² is the probability-weighted mean 0.6·5 + 0.4·2 = 3.8, which lies inside the bounds [3,6].)
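As a further check, the whole iteration fits in a few lines of Python, using the fact that the one-dimensional scenario subproblem min sqr(x−d) + w·x + 0.5·r·sqr(x−xhat) has the closed-form solution x = (2d − w + r·xhat)/(2+r), clipped to the bounds [3,6]. A sketch (not the paper's code):

# PHA for the first example (a sketch, not the paper's code)
scen = [(5.0, 0.6), (2.0, 0.4)]   # (d, prob)
w = [0.0, 0.0]
xhat = 0.0
for it in range(10):
    r = 0.0 if it == 0 else 1.0   # r = 0 in iteration 0, as in the GAMS model
    # step 1: scenario subproblems, solved in closed form and clipped to [3,6]
    xs = [min(6.0, max(3.0, (2*d - wi + r*xhat) / (2 + r)))
          for (d, p), wi in zip(scen, w)]
    # step 2: policy xhat = probability-weighted average
    xhat = sum(p * x for (d, p), x in zip(scen, xs))
    # step 3: price update
    w = [wi + r * (x - xhat) for wi, x in zip(w, xs)]
    print(it, [round(x, 4) for x in xs], round(xhat, 4))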