Introduction
The linear complementarity problem (LCP) can be stated as [1]:
LCP Problem 
\[\begin{align} &\text{Find $\color{darkred}x\ge 0,\color{darkred}w\ge 0$ such that:}\\ & \color{darkred}x^T\color{darkred}w = 0 \\ &\color{darkred}w =\color{darkblue}M \color{darkred}x + \color{darkblue}q\end{align}\] 
In many cases we require that \(\color{darkblue}M\) is positive definite.
The LCP has many applications, including in finance, physics, and economics [2]. Off-the-shelf LCP solvers are not widely available, so here I will show how to solve LCPs with other types of solvers. This question comes up now and then, so here is a short recipe post.
Tip
To test my formulations, I generated some random data. A way to generate a random positive definite matrix in GAMS is as follows:
set
i /i1*i10/
;
alias (i,j,k);
parameter M(i,j) 'positive definite matrix';
M(i,j) = uniform(-10,10);
M(i,j) = sum(k, M(k,i)*M(k,j));
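For readers outside GAMS, here is a sketch of the same construction in Python/NumPy (the seed and dimensions are arbitrary choices of mine, not from the original post):

```python
import numpy as np

# Same idea as the GAMS snippet: draw a random matrix A with entries
# uniform on [-10,10], then form M = A'A, which is positive semidefinite
# by construction and, for a random A, almost surely positive definite.
rng = np.random.default_rng(123)  # arbitrary seed
n = 10
A = rng.uniform(-10.0, 10.0, size=(n, n))
M = A.T @ A

# sanity check: M is symmetric and all its eigenvalues are positive
assert np.allclose(M, M.T)
assert np.all(np.linalg.eigvalsh(M) > 0.0)
```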

The complementarity conditions can be recognized immediately in the solution. They look like:
----    177 PARAMETER results
x w
i1 0.276
i2 0.256
i3 28.638
i4 0.152
i5 108.229
i6 32.643
i7 0.135
i8 0.536
i9 0.485
i10 27.609
Each row has either \(\color{darkred}x_i=0\) or \(\color{darkred}w_i=0\).
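A candidate solution can be checked for complementarity numerically: for nonnegative vectors, \(x^Tw=0\) is equivalent to \(\min(x_i,w_i)=0\) in every row. A small sketch with made-up vectors (the values are purely illustrative):

```python
import numpy as np

# hypothetical (x, w) pair: in each row at least one of the two is zero
x = np.array([0.0, 0.0, 28.638, 0.0, 108.229])
w = np.array([0.276, 0.256, 0.0, 0.152, 0.0])

# x'w = 0 for nonnegative vectors <=> min(x_i, w_i) = 0 per row
assert np.isclose(x @ w, 0.0)
assert np.allclose(np.minimum(x, w), 0.0)
```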
1. Solve as MCP
A Mixed Complementarity Problem (MCP) is a more general problem than an LCP. It can be defined as: \[\begin{aligned} &\text{find $\color{darkred}x_i$ such that one of the following conditions holds:}\\ & \begin{cases} F_i(\color{darkred}x)=0 \text{ and } \color{darkblue}\ell_i \le \color{darkred}x_i \le \color{darkblue}u_i \\ F_i(\color{darkred}x)\gt 0 \text{ and } \color{darkred}x_i =\color{darkblue}\ell_i\\ F_i(\color{darkred}x)\lt 0 \text{ and } \color{darkred}x_i =\color{darkblue}u_i \end{cases}\end{aligned}\] This is often written more compactly as: \[F(\color{darkred}x) \perp \color{darkblue}\ell\le \color{darkred}x\le \color{darkblue}u\] For our LCP, this becomes:
MCP representation 
\[\color{darkblue}M \color{darkred}x + \color{darkblue}q \ge 0 \perp\color{darkred}x \ge 0\] 
MCP solvers include PATH, MILES, and Knitro. This is probably the easiest way to solve LCPs; I have seen quite a few linear models implemented and solved this way.
In GAMS this can look like:
positive variable x(i);
equation e(i);
e(i).. sum(j, M(i,j)*x(j)) + q(i) =g= 0;
model lcp1 /e.x/;
option mcp=path;
solve lcp1 using mcp;

Notes
 MCP models don't have an objective.
 The notation e.x in the model statement means \(e \perp x\).
 The GAMS interface for Knitro does not allow for MCP models.
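If no MCP solver is at hand, a simple iterative scheme can sometimes do the job. Below is a sketch (my own, not from the original post) of projected Gauss-Seidel, which converges for symmetric positive definite \(M\); the random instance mirrors the data above, with an identity shift added to keep the matrix safely conditioned:

```python
import numpy as np

def lcp_pgs(M, q, sweeps=50000, tol=1e-10):
    """Projected Gauss-Seidel for the LCP: w = Mx + q, x,w >= 0, x'w = 0.
    A simple iterative alternative to a real MCP solver such as PATH;
    converges for symmetric positive definite M."""
    n = len(q)
    x = np.zeros(n)
    for _ in range(sweeps):
        x_old = x.copy()
        for i in range(n):
            # minimize over x_i with the other variables fixed,
            # then project onto x_i >= 0
            r = q[i] + M[i] @ x - M[i, i] * x[i]
            x[i] = max(0.0, -r / M[i, i])
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x, M @ x + q

# random positive definite instance (identity added for safe conditioning)
rng = np.random.default_rng(0)
n = 6
A = rng.uniform(-10.0, 10.0, (n, n))
M = A.T @ A + np.eye(n)
q = rng.uniform(-100.0, 100.0, n)

x, w = lcp_pgs(M, q)
# solution check: feasibility and complementarity (up to tolerances)
assert x.min() >= 0.0 and w.min() > -1e-5
assert np.max(np.abs(np.minimum(x, w))) < 1e-5
```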
2. Solve as QP
There is a direct connection between Quadratic Programming (QP) models and LCPs, so it should not come as a surprise that we can solve LCPs as QPs:
QP representation 
\[\begin{align}\min\>&\color{darkred}x^T\left( \color{darkblue}M \color{darkred}x + \color{darkblue}q\right)\\ &\color{darkblue}M \color{darkred}x + \color{darkblue}q \ge 0 \\ & \color{darkred}x \ge 0 \end{align}\]

Notes:
 QP solvers are more readily available than LCP and MCP solvers.
 A disadvantage of the QP formulation is that we have the matrix \(\color{darkblue}M\) twice in the model.
 It may look like a good idea to introduce the equality \(\color{darkred}w =\color{darkblue}M \color{darkred}x + \color{darkblue}q\) and use \(\color{darkred}w\) in the model. However, this makes the model nonconvex: the objective will contain the bilinear term \(\color{darkred}x^T\color{darkred}w\).
 The optimal objective value is zero.
 If \(\color{darkblue}M\) happens to be nonsymmetric (this is a bit unusual), we can use the following trick. In the quadratic objective \(\color{darkred}x^T\color{darkblue}M \color{darkred}x + \color{darkred}x^T\color{darkblue}q\) replace the matrix \(\color{darkblue}M\) by \[\frac{\color{darkblue}M+\color{darkblue}M^T}{2}\] which is symmetric. This will yield the same objective value as the original, nonsymmetric version.
 Solving LCPs as QPs is a fairly standard and well-documented approach.
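As a sketch of the QP route outside GAMS (scipy's general-purpose SLSQP here stands in for a dedicated QP solver; instance, seed, and tolerances are my own choices), including a quick numerical check of the symmetrization trick from the notes:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 6
A = rng.uniform(-10.0, 10.0, (n, n))
M = A.T @ A + np.eye(n)          # symmetric positive definite
q = rng.uniform(-10.0, 10.0, n)

# min x'(Mx + q)  s.t.  Mx + q >= 0,  x >= 0
res = minimize(
    fun=lambda x: x @ (M @ x + q),
    x0=np.zeros(n),
    jac=lambda x: 2.0 * M @ x + q,   # gradient, valid for symmetric M
    bounds=[(0.0, None)] * n,
    constraints=[{"type": "ineq", "fun": lambda x: M @ x + q}],
    method="SLSQP",
)
x = res.x
w = M @ x + q
# at an LCP solution the objective x'w is (near) zero, and w is feasible
assert res.fun < 1e-3 and w.min() > -1e-5

# note on nonsymmetric M: x'Bx equals x'((B+B')/2)x for any B
B = rng.uniform(-1.0, 1.0, (n, n))   # some nonsymmetric matrix
v = rng.uniform(0.0, 1.0, n)
assert np.isclose(v @ B @ v, v @ ((B + B.T) / 2.0) @ v)
```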
3. Solve as QCP
The QP problem can easily be rewritten as a quadratically constrained problem (QCP):
QCP representation 
\[\begin{align}& \color{darkred}x^T\color{darkblue}M \color{darkred}x + \color{darkred}x^T\color{darkblue}q\le 0 \\&\color{darkblue}M \color{darkred}x + \color{darkblue}q \ge 0\\ & \color{darkred}x \ge 0\end{align}\]

Notes:
 You may need to add a dummy objective to this problem to be able to feed it to a QCP solver.
 This is convex if \(\color{darkblue}M\) is positive semidefinite.
 We still have two copies of \(\color{darkblue}M\).
 This is SOCP representable [4].
4. Solve as MIP
The constraint \( \color{darkred}x^T\color{darkred}w = 0\) can be written pairwise: \(\color{darkred}x_i\color{darkred}w_i = 0\). This can be interpreted as \(\color{darkred}x_i=0 \textbf{ or } \color{darkred}w_i = 0\). Assuming we have reasonable upper bounds on \(\color{darkred}x_i\) and \(\color{darkred}w_i\), we can formulate the "or" condition using a binary variable \(\color{darkred}\delta_i \in \{0,1\}\):
MIP big-M representation 
\[\begin{align} &\color{darkred}x_i \le \color{darkred}\delta_i \color{darkblue}x^{UP}_i \\ & \color{darkred}w_i \le (1-\color{darkred}\delta_i) \color{darkblue}w^{UP}_i \\ & \color{darkred}w =\color{darkblue}M \color{darkred}x + \color{darkblue}q \\ & \color{darkred}x\ge 0,\color{darkred}w\ge 0\\ & \color{darkred}\delta_i \in \{0,1\} \end{align}\] 
Notes:
 You may need to add a dummy objective.
 This approach can work even if \(\color{darkblue}M\) is not positive definite and/or not symmetric.
 The main disadvantage of this so-called big-M approach is the need for relatively small upper bounds on \(\color{darkred}x\) and \(\color{darkred}w\). There are some alternative approaches that can help in this respect.
 We can make the formulation more compact by substituting out \(\color{darkred}w_i\).
 A first alternative is to use indicator constraints. An indicator constraint has the form \[\color{darkred}\delta=0 \text{ (or $1$)} \Rightarrow \text{linear constraint}\] You can interpret this as "if \(\color{darkred}\delta=0\) then the linear constraint must hold". The indicator variable \(\color{darkred}\delta\) must be a binary variable. Indicator constraints are available in many solvers (but not all). This would allow you to write:
MIP indicator constraint representation 
\[\begin{align} &\color{darkred}\delta_i=0 \Rightarrow \color{darkred}x_i = 0\\ &\color{darkred}\delta_i=1 \Rightarrow \color{darkred}w_i=0 \\ & \color{darkred}w =\color{darkblue}M \color{darkred}x + \color{darkblue}q \\ & \color{darkred}x\ge 0,\color{darkred}w\ge 0\\ & \color{darkred}\delta_i \in \{0,1\} \end{align}\] 
Unfortunately, GAMS does not really support indicator constraints very well. Specifications for indicator constraints need to be placed in a solver option file. Basically, the GAMS language does not have any syntax for indicator constraints, and the option file approach is really just a hack. (In my opinion, a solver option file should not change the meaning of a model.) In addition, in our example, we need to make sure the variable \(\color{darkred}\delta\) appears in the model equations. So we need another hack to achieve this: just add a constraint \(\sum_i \color{darkred}\delta_i \ge 0\).
 A second alternative is to use SOS1 sets (SOS1 = Special Ordered Sets of Type 1) [5]. We can write:
MIP SOS1 constraint representation 
\[\begin{align} & (\color{darkred}x_i,\color{darkred}w_i) \in SOS1 \\ & \color{darkred}w =\color{darkblue}M \color{darkred}x + \color{darkblue}q \\ & \color{darkred}x\ge 0,\color{darkred}w\ge 0\end{align}\] 
Having \(\color{darkred}x_i,\color{darkred}w_i\) in a SOS1 set will enforce that at most one of them is nonzero. To be precise: we generate \(n=|I|\) sets with 2 members each here. Some solvers may generate SOS1 constraints from indicator constraints under the hood.
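The big-M model is linear, so any MILP solver can handle it. Here is a sketch using scipy's `milp` (HiGHS under the hood, scipy >= 1.9); the bound U and the random instance are my own arbitrary choices:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(2)
n = 5
A = rng.uniform(-10.0, 10.0, (n, n))
M = A.T @ A + np.eye(n)          # positive definite
q = rng.uniform(-10.0, 10.0, n)
U = 1000.0                        # big-M upper bound on x and w

I, Z = np.eye(n), np.zeros((n, n))
# variable order: [x, w, delta]
cons = [
    LinearConstraint(np.hstack([-M, I, Z]), q, q),            # w - Mx = q
    LinearConstraint(np.hstack([I, Z, -U * I]), -np.inf, 0),  # x <= U*delta
    LinearConstraint(np.hstack([Z, I, U * I]), -np.inf, U),   # w <= U*(1-delta)
]
integrality = np.concatenate([np.zeros(2 * n), np.ones(n)])   # delta binary
bounds = Bounds(0.0, np.concatenate([np.full(2 * n, U), np.ones(n)]))

# dummy (zero) objective: we only need a feasible point
res = milp(c=np.zeros(3 * n), constraints=cons,
           integrality=integrality, bounds=bounds)
x, w = res.x[:n], res.x[n:2 * n]
assert res.status == 0            # solved to (trivial) optimality
assert x @ w < 1e-3               # complementarity, up to solver tolerances
```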
5. Solve as LP
In [6] an algorithm is proposed to solve an LCP as a series of LPs.
Unfortunately, the algorithm as stated is a bit difficult to decipher. For one, the parentheses are not balanced in equation (8), arguably the most important equation in the whole paper! This gives rise to some interesting questions. Peer review does not always mean that things are carefully read. Also: should LaTeX check the math for obvious errors? Maybe a machine-learning application could spot problems in math notation like this.
But there is more I don't understand. If in the first iteration we set \(z^0=0\), then the diagonal matrix \(\mathbf{diag}(z^0)\) is an all-zero matrix. That would mean we stop immediately. I am probably misinterpreting something here. Journals (and authors) should put way more emphasis on reproducibility; some working code would show how to interpret this correctly.
It is always interesting: initially, this looks like a quite detailed description of the algorithm, but when trying to reproduce the results by implementing it, things look a bit shakier. Reproducing results is, in my opinion, one of the best ways to read a paper.
In the end, I produced a version that worked for some data sets but failed for others (it seemed to get stuck in a local optimum). I may not have interpreted the author's intentions correctly, or it may be that this method just does not work as reliably as the author indicates. In any case, I had to admit defeat: I was unable to reproduce the results in this paper.
I am well aware the author is very reputable.
It is too bad we cannot "edit" and update papers to fix obvious errors and other shortcomings, more or less like Wikipedia. And maybe with an extra comment section for additional questions and notes, like on this blog.
Conclusions
 There are many ways to solve LCP problems without using a specialized LCP solver.
 The way GAMS handles indicator constraints is a somewhat embarrassing hack. Indicator constraints should be part of the language.
 Some OR papers can benefit from a little bit of extra quality control and some more emphasis on reproducibility.
 There is more to say about this problem, including some critical remarks, than I initially expected.
References
 [1] Linear complementarity problem, https://en.wikipedia.org/wiki/Linear_complementarity_problem
 [2] Richard W. Cottle, Jong-Shi Pang, Richard E. Stone, The Linear Complementarity Problem, Academic Press, 1992.
 [3] Katta G. Murty, Linear Complementarity, Linear and Nonlinear Programming, http://www-personal.umich.edu/~murty/books/linear_complementarity_webbook/lcp-complete.pdf
 [4] Second-order cone programming, https://en.wikipedia.org/wiki/Second-order_cone_programming
 [5] Special ordered set, https://en.wikipedia.org/wiki/Special_ordered_set
 [6] Olvi L. Mangasarian, Linear complementarity as absolute value equation solution, Optimization Letters 8(4), April 2014.
Appendix: GAMS models
$ontext

   LCP: linear complementarity problem

   Find w, x such that
      x, w >= 0
      x'w  = 0
      w    = Mx + q

   Assumption: M is positive definite.

   https://en.wikipedia.org/wiki/Linear_complementarity_problem

$offtext
*
* random data
*
set
i /i1*i10/
;
alias (i,j,k);
parameter M(i,j) 'positive definite matrix';
M(i,j) = uniform(-10,10);
M(i,j) = sum(k, M(k,i)*M(k,j));
parameter q(i);
q(i) = uniform(-100,100);
*
* solve with MCP solver
*
* find x such that
* x >= 0  perp  Mx + q >= 0
*
positive variable x(i);
equation e(i);
e(i).. sum(j, M(i,j)*x(j)) + q(i) =g= 0;
model lcp1 /e.x/;
option mcp=path;
solve lcp1 using mcp;
parameter results(i,*);
results(i,'x') = x.l(i);
results(i,'w') = sum(j, M(i,j)*x.l(j)) + q(i);
display results;
*
* solve with QP solver
*
* min x'(Mx+q)
* Mx + q >= 0
* x >= 0
*
variable z 'objective';
equations
qpobj
qpcon(i)
;
qpobj.. z =e= sum((i,j),M(i,j)*x(i)*x(j)) + sum(i,x(i)*q(i));
qpcon(i).. sum(j,M(i,j)*x(j)) + q(i) =g= 0;
model lcp2 /qpobj,qpcon/;
option qcp=cplex;
solve lcp2 using qcp minimizing z;
results(i,'x') = x.l(i);
results(i,'w') = sum(j, M(i,j)*x.l(j)) + q(i);
display results;
*
* solve with a conic solver
*
* x'Mx <= -x'q
* Mx + q >= 0
* x >= 0
*
equations
edummy
qcpcon
;
edummy.. z =e= 0;
qcpcon.. sum((i,j),x(i)*M(i,j)*x(j)) + sum(i,x(i)*q(i)) =l= 0;
model lcp3 /edummy,qcpcon,qpcon/;
option qcp=cplex;
solve lcp3 using qcp minimizing z;
results(i,'x') = x.l(i);
results(i,'w') = sum(j, M(i,j)*x.l(j)) + q(i);
display results;
*
* solve with MIP solver
*
* x(i) <= d(i) * x.up(i)
* w(i) <= (1-d(i)) * w.up(i)
* w = Mx + q
* x,w >= 0
*
binary variable d(i);
positive variable w(i);
equation
wdef(i)
xbnd(i)
wbnd(i)
;
x.up(i) = 1000;
w.up(i) = 1000;
xbnd(i).. x(i) =l= d(i)*x.up(i);
wbnd(i).. w(i) =l= (1-d(i))*w.up(i);
wdef(i).. w(i) =e= sum(j, M(i,j)*x(j)) + q(i);
model lcp4 /edummy,xbnd,wbnd,wdef/;
solve lcp4 using mip minimizing z;
results(i,'x') = x.l(i);
results(i,'w') = w.l(i);
display results;
* remove bounds
x.up(i) = inf;
w.up(i) = inf;
*
* solve with MIP solver using indicator constraints
* Embarrassingly, this is very awkward to do in GAMS.
*
* d(i) = 0  =>  x(i) = 0
* d(i) = 1  =>  w(i) = 0
* w = Mx + q
* x,w >= 0
*
equations
xzero(i)
wzero(i)
edummy2 'need to have d in the model somewhere'
;
xzero(i).. x(i) =e= 0;
wzero(i).. w(i) =e= 0;
edummy2.. sum(i, d(i)) =g= 0;
$onecho > cplex.opt
indic xzero(i)$d(i) 0
indic wzero(i)$d(i) 1
$offecho
model lcp5 /edummy,xzero,wzero,wdef,edummy2/;
lcp5.optfile=1;
option mip=cplex;
solve lcp5 using mip minimizing z;
results(i,'x') = x.l(i);
results(i,'w') = w.l(i);
display results;
*
* solve with MIP solver using SOS1 sets
*
* x(i),w(i) in SOS1
* w = Mx + q
* x,w >= 0
*
set s /x,w/;
sos1 variable xw(i,s);
equation wdef2(i);
wdef2(i).. xw(i,'w') =e= sum(j, M(i,j)*xw(j,'x')) + q(i);
model lcp6 /edummy,wdef2/;
solve lcp6 using mip minimizing z;
display xw.l;
