Saturday, June 12, 2021

Median, quantiles and quantile regression as linear programming problems

Quantile regression is a bit of an exotic type of regression [1,2,3]. It can be seen as a generalization of \(\ell_1\) or LAD regression [4], and, just like LAD regression, it can be formulated and solved as an LP.

First I want to discuss some preliminaries: how to find the median and the quantiles of a data vector \(\color{darkblue}y\). That will give us the tools to formulate the quantile regression problem as an LP. The reason for adding these preliminary steps is to develop some intuition about how quantile regression problems are defined. I found that most papers just "define" the underlying optimization problem, without much justification. I hope to show with these small auxiliary models how we arrive at the quantile regression model. Along the way, we encounter some interesting tidbits. I'll discuss a few details that papers typically gloss over or even skip, but that I find fascinating.
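As a taste of the tools involved, here is a minimal sketch of my own (not code from the post; cvxpy and the synthetic data are purely illustrative assumptions): the \(\tau\)-quantile of \(y\) minimizes an asymmetrically weighted sum of absolute deviations, which we can linearize by splitting the residuals into positive and negative parts.

```python
# Sketch: the tau-quantile of y as an LP (my illustration, assuming cvxpy).
# Split the residuals y[i] - q into u[i] - v[i] with u, v >= 0 and weight
# them asymmetrically: tau on u (positive side), 1 - tau on v (negative side).
import cvxpy as cp
import numpy as np

np.random.seed(42)
y = np.random.normal(size=101)   # odd n makes the median unique
tau = 0.5                        # tau = 0.5 yields the median

n = len(y)
q = cp.Variable()                # the quantile we are after
u = cp.Variable(n, nonneg=True)  # positive part of y - q
v = cp.Variable(n, nonneg=True)  # negative part of y - q

prob = cp.Problem(cp.Minimize(tau * cp.sum(u) + (1 - tau) * cp.sum(v)),
                  [y - q == u - v])
prob.solve()
print(q.value, np.quantile(y, tau))  # should agree for odd n
```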

Tuesday, June 8, 2021

Portfolio optimization with risk as constraint?

The standard mean-variance portfolio optimization models have the form:


Model M1
\[\begin{align}\min\>&\color{darkred}{\mathit{Risk}}=\color{darkred}x^T\color{darkblue}Q\color{darkred}x\\ & \color{darkblue}r^T\color{darkred}x \ge \color{darkblue}{\mathit{MinimumReturn}} \\  &\sum_i \color{darkred}x_i = 1\\ & \color{darkred}x \ge 0\end{align}\]


or maybe:
Model M2
\[\begin{align}\min\>&\color{darkred}z=\color{darkred}x^T\color{darkblue}Q\color{darkred}x - \color{darkblue}\lambda \cdot\color{darkblue}r^T\color{darkred}x\\ &\sum_i \color{darkred}x_i = 1\\ & \color{darkred}x \ge 0\end{align}\]


where
  • \(\color{darkblue}Q\) is a variance-covariance matrix (we assume here it is positive semi-definite [1]),
  • \(\color{darkblue}\lambda\) is an exogenous constant that sometimes is varied to draw an efficient frontier, and
  • \(\color{darkblue}r\) are the returns.
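For concreteness, here is a minimal sketch of model M1 (my own illustration, assuming cvxpy; the data \(\color{darkblue}Q\), \(\color{darkblue}r\), and the return target are made up):

```python
# Sketch of model M1 with synthetic data (all numbers are illustrative).
import cvxpy as cp
import numpy as np

np.random.seed(1)
n = 5
A = np.random.normal(size=(20, n))
Q = A.T @ A / 20                 # a positive semi-definite covariance matrix
Q = 0.5 * (Q + Q.T)              # symmetrize against floating-point noise
r = np.random.uniform(0.01, 0.10, size=n)   # expected returns
min_return = 0.05                # MinimumReturn

x = cp.Variable(n, nonneg=True)  # portfolio weights, x >= 0
prob = cp.Problem(cp.Minimize(cp.quad_form(x, Q)),      # Risk = x'Qx
                  [r @ x >= min_return,                 # return target
                   cp.sum(x) == 1])                     # budget constraint
prob.solve()
print(prob.value, x.value)
```

Model M2 would instead minimize `cp.quad_form(x, Q) - lam * (r @ x)` for a given \(\color{darkblue}\lambda\) and drop the return constraint.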

Wednesday, June 2, 2021

Total Least Squares: nonconvex optimization

Total Least Squares (TLS) is an alternative to OLS (Ordinary Least Squares). It is a form of orthogonal regression and also addresses the problem of EIV (Errors-in-Variables).


The standard OLS model is \[\color{darkblue}y = \color{darkblue}X\color{darkred}\beta + \color{darkred}\varepsilon\] where we minimize the sum-of-squares of the residuals \[\min ||\color{darkred}\varepsilon||_2^2\] We can interpret \(\color{darkred}\varepsilon\) as the error in \(\color{darkblue}y\).


In TLS, we also allow for errors in \(\color{darkblue}X\). The model becomes \[\color{darkblue}y+\color{darkred}\varepsilon=(\color{darkblue}X+\color{darkred}E)\color{darkred}\beta\] Note that we made a sign change in \(\color{darkred}\varepsilon\). This is purely aesthetic: it makes the equation look more symmetric. The objective is specified as \[\min \> ||\left(\color{darkred}\varepsilon \> \color{darkred}E\right)||_F\] i.e., the Frobenius norm of the matrix formed by \(\color{darkred}\varepsilon\) and \(\color{darkred}E\). The Frobenius norm is just \[||A||_F=\sqrt{\sum_{i,j}a_{i,j}^2}\] We can drop the square root from the objective (the solution will remain the same, and we get rid of a nonlinear function with a possible problem near zero: the gradient is not defined there). The remaining problem is a non-convex quadratic problem that can be solved with global MINLP solvers such as Baron or with a global quadratic solver like Gurobi.
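As a cross-check on whatever the global solver returns: for a single right-hand side, classical TLS also has a well-known closed-form solution via the SVD of the augmented matrix \(\left(\color{darkblue}X \> \color{darkblue}y\right)\). A small sketch of mine, with synthetic data (this is a different technique than the nonconvex formulation above):

```python
# Sketch: classical single-output TLS via the SVD (my illustration).
# The TLS estimate comes from the right singular vector of [X y] that
# belongs to the smallest singular value.
import numpy as np

np.random.seed(3)
n, p = 50, 2
X_true = np.random.normal(size=(n, p))
beta_true = np.array([1.0, -2.0])
X = X_true + 0.05 * np.random.normal(size=(n, p))  # errors in X
y = X_true @ beta_true + 0.05 * np.random.normal(size=n)  # errors in y

Z = np.column_stack([X, y])
_, _, Vt = np.linalg.svd(Z)
v = Vt[-1]                   # singular vector for the smallest singular value
beta_tls = -v[:p] / v[p]
print(beta_tls)              # should be close to beta_true
```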

Sunday, May 23, 2021

Solving linear complementarity problems without an LCP solver

Introduction


The linear complementarity problem (LCP) can be stated as [1]: 


LCP Problem
\[\begin{align} &\text{Find $\color{darkred}x\ge 0,\color{darkred}w\ge 0$ such that:}\\ & \color{darkred}x^T\color{darkred}w = 0 \\ &\color{darkred}w =\color{darkblue}M  \color{darkred}x + \color{darkblue}q\end{align}\]


In many cases we require that \(\color{darkblue}M\) is positive definite.

The LCP has applications in several areas, including finance, physics, and economics [2]. Off-the-shelf LCP solvers are not widely available. Here I will show how to solve LCPs with other solvers. This question comes up now and then, so here is a short recipe post.
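One such recipe (a sketch of my own; other reformulations exist, and the post may use different ones) uses the fact that for positive semi-definite \(\color{darkblue}M\) the LCP can be written as the convex QP \(\min\> \color{darkred}x^T(\color{darkblue}M\color{darkred}x+\color{darkblue}q)\) subject to \(\color{darkblue}M\color{darkred}x+\color{darkblue}q\ge 0,\ \color{darkred}x\ge 0\): any feasible point with objective value zero solves the LCP.

```python
# Sketch: solve an LCP with PSD M as a convex QP (my illustration, with
# random test data). An objective value of ~0 certifies complementarity.
import cvxpy as cp
import numpy as np

np.random.seed(7)
n = 4
A = np.random.normal(size=(n, n))
M = A @ A.T + np.eye(n)          # a positive definite test matrix (made up)
M = 0.5 * (M + M.T)              # symmetrize against floating-point noise
q = np.random.normal(size=n)

x = cp.Variable(n, nonneg=True)
prob = cp.Problem(cp.Minimize(cp.quad_form(x, M) + q @ x),  # x'(Mx+q)
                  [M @ x + q >= 0])                         # w >= 0
prob.solve()
print("x'w =", x.value @ (M @ x.value + q))  # should be ~0
```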

Monday, May 10, 2021

Clustering models

In Statistics (nowadays called Data Science or A.I. for public relations reasons), clustering is one of the most popular techniques available. Of course, nothing beats linear regression in the popularity contest. Here, I'd like to discuss two clustering models: \(k\)-means and \(k\)-medoids. For these models, there exist well-defined equivalent Mixed-Integer Programming models. In practice, they don't work very well except for small data sets. I think they are still useful to discuss, for several reasons:

  • The formulations are interesting. They have some angles that may not be obvious at first.
  • They define the problem in a succinct way. Verbal descriptions are always fraught with imprecision and vagueness. A reference model can help to make things explicit. I.e. use a model as a documentation tool.
  • Not all data sets are large. For small data sets, we can prove optimality, where the usual heuristics can only deliver good solutions without much information about the quality of the solution. Obviously, ultimate optimality may not matter much in practice, as clustering is frequently used as an exploratory tool.
  • We can adapt the model to special cases. Adding constraints such as a minimum and maximum number of points per cluster comes to mind [3].
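To make this concrete, below is a sketch of a standard \(k\)-medoids MIP formulation (my own illustration with random data, assuming cvxpy plus a MIP-capable solver; this is not necessarily the exact formulation from the post):

```python
# Sketch: a k-medoids MIP on random 2D points (illustrative only).
import cvxpy as cp
import numpy as np

np.random.seed(0)
n, k = 20, 3
pts = np.random.normal(size=(n, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)  # distances

x = cp.Variable((n, n), boolean=True)  # x[i,j]=1: point i assigned to medoid j
y = cp.Variable(n, boolean=True)       # y[j]=1: point j is a medoid

constraints = [cp.sum(x, axis=1) == 1,  # every point gets exactly one medoid
               cp.sum(y) == k]          # exactly k medoids
constraints += [x[:, j] <= y[j] for j in range(n)]  # assign only to medoids

prob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(d, x))), constraints)
prob.solve()  # pick a MIP-capable solver explicitly if needed, e.g. cp.GLPK_MI
print("medoids:", np.where(y.value > 0.5)[0])
```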

Thursday, April 29, 2021

What is this BRATIO option in GAMS?

This is a note about BRATIO, a very exotic GAMS option related to providing an advanced basis to a Simplex-based LP (or NLP) solver. 

Tuesday, April 20, 2021

Inverting a matrix by LP

 The questions for this post are:

  • Can we model the inversion of a matrix as an LP problem? (A: yes)
  • Should we do that? (A: no)
  • If we insist, can we make it a bit quicker? (A: yes)
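As a sketch of the "yes" answers (my own illustration; the post's actual models may differ): inversion amounts to the feasibility system \(\color{darkblue}A\color{darkred}X=I\), which an LP solver will happily solve with a zero objective. Doing it one column at a time is one plausible way to speed things up compared to a single big LP over all \(n^2\) variables.

```python
# Sketch: invert A by solving one feasibility LP per column of the inverse
# (my illustration, using scipy's linprog with free variables).
import numpy as np
from scipy.optimize import linprog

np.random.seed(5)
n = 4
A = np.random.normal(size=(n, n))

X = np.zeros((n, n))
I = np.eye(n)
for j in range(n):
    # Find x with A x = e_j; the zero objective makes this pure feasibility.
    res = linprog(c=np.zeros(n), A_eq=A, b_eq=I[:, j],
                  bounds=[(None, None)] * n)
    X[:, j] = res.x

print(np.allclose(A @ X, I))  # True: X is the inverse of A
```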

Monday, April 19, 2021

Parallel machine scheduling II, three more formulations

In [1] two formulations were discussed for a scheduling problem with multiple machines. Here we add a few more. Some of them are a bit strange. None of them really works better than the ones in [1].

So this is a collection of formulations that may sound reasonable at first, but are really not suited for this problem. If you want to read about good models, skip this post.

Monday, April 5, 2021

This is really a modeling issue

 In [1], the following scenario is described:


  1. A MIP model is solved, delivering a feasible solution.
  2. All integer variables are fixed to their levels.
  3. The resulting LP model is solved in order to produce duals. This model turns out to be infeasible.