Wednesday, June 30, 2021

Arranging points on a line

The problem stated in [1] is:

In the original problem, the poster talked about the distance between neighboring points. But we don't know in advance which points will end up as neighbors. Of course, we can just generalize and talk about the distance between any two points.

This problem looks deceptively simple. The modeling has some interesting angles.

We will attack this problem in different ways:
  • An MINLP model. The main idea is to get rid of the abs() function, as it is non-differentiable (a sketch of this reformulation follows this list).
  • A MIP model using binary variables or SOS1 variables. Again, we reformulate the abs() function.
  • A fixing strategy to reduce the model complexity. Good solvers actually don't need this.
  • A genetic algorithm (ga) approach.
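
To give a flavor of the reformulation (a sketch of the binary-variable variant, under the assumption that the model contains constraints like \(|\color{darkred}x_i - \color{darkred}x_j| \ge \color{darkblue}d\) and that the points live in a known interval): with a binary variable \(\color{darkred}\delta_{i,j}\) and a sufficiently large constant \(\color{darkblue}M\) (the width of the interval plus \(\color{darkblue}d\) will do), we can write \[\begin{align}&\color{darkred}x_i - \color{darkred}x_j \ge \color{darkblue}d - \color{darkblue}M\cdot \color{darkred}\delta_{i,j}\\ &\color{darkred}x_j - \color{darkred}x_i \ge \color{darkblue}d - \color{darkblue}M\cdot (1-\color{darkred}\delta_{i,j})\\ &\color{darkred}\delta_{i,j}\in\{0,1\}\end{align}\] Depending on the value of \(\color{darkred}\delta_{i,j}\), one of the two constraints is enforced and the other is relaxed.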

Saturday, June 12, 2021

Median, quantiles and quantile regression as linear programming problems

Quantile regression is a bit of an exotic type of regression [1,2,3]. It can be seen as a generalization of \(\ell_1\) or LAD regression [4], and just as with LAD regression, we can formulate and solve it as an LP.

First I want to discuss some preliminaries: how to find the median and the quantiles of a data vector \(\color{darkblue}y\). That will give us the tools to formulate the quantile regression problem as an LP. The reason for adding these preliminary steps is to develop some intuition about how quantile regression problems are defined. I found that most papers just "define" the underlying optimization problem, without much justification. I hope to show with these small auxiliary models how we arrive at the quantile regression model. Along the way, we encounter some interesting tidbits. I'll discuss a few details that papers typically gloss over or even skip, but that I find fascinating.
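
As a taste of the LP machinery (a minimal sketch of my own, not code from the post): the \(\color{darkblue}\tau\)-quantile of \(\color{darkblue}y\) solves \[\min\>\sum_i \color{darkblue}\tau\cdot \color{darkred}u_i + (1-\color{darkblue}\tau)\cdot\color{darkred}v_i\quad\text{subject to}\quad \color{darkblue}y_i - \color{darkred}q = \color{darkred}u_i - \color{darkred}v_i,\ \color{darkred}u_i,\color{darkred}v_i\ge 0\] which we can hand to scipy's linprog:

import numpy as np
from scipy.optimize import linprog

# tau-quantile of y as an LP:  min sum_i tau*u_i + (1-tau)*v_i
# subject to  q + u_i - v_i = y_i,  u_i, v_i >= 0,  q free.
rng = np.random.default_rng(123)
y = rng.normal(size=50)
tau = 0.25
n = len(y)

# variable order: [q, u_1..u_n, v_1..v_n]
c = np.concatenate(([0.0], np.full(n, tau), np.full(n, 1.0 - tau)))
A_eq = np.hstack([np.ones((n, 1)), np.eye(n), -np.eye(n)])
bounds = [(None, None)] + [(0.0, None)] * (2 * n)

res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
print(res.x[0])             # LP quantile
print(np.quantile(y, tau))  # numpy's quantile (may differ slightly due to
                            # interpolation conventions)

At the optimum, \(\color{darkred}u_i\) and \(\color{darkred}v_i\) pick up the positive and negative parts of the residual \(\color{darkblue}y_i-\color{darkred}q\), so the objective is exactly the asymmetrically weighted absolute deviation that quantile regression generalizes.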

Tuesday, June 8, 2021

Portfolio optimization with risk as constraint?

The standard mean-variance portfolio optimization models have the form:


Model M1
\[\begin{align}\min\>&\color{darkred}{\mathit{Risk}}=\color{darkred}x^T\color{darkblue}Q\color{darkred}x\\ & \color{darkblue}r^T\color{darkred}x \ge \color{darkblue}{\mathit{MinimumReturn}} \\  &\sum_i \color{darkred}x_i = 1\\ & \color{darkred}x \ge 0\end{align}\]


or maybe:
Model M2
\[\begin{align}\min\>&\color{darkred}z=\color{darkred}x^T\color{darkblue}Q\color{darkred}x - \color{darkblue}\lambda \cdot\color{darkblue}r^T\color{darkred}x\\ &\sum_i \color{darkred}x_i = 1\\ & \color{darkred}x \ge 0\end{align}\]


where
  • \(\color{darkblue}Q\) is a variance-covariance matrix (we assume here it is positive semi-definite [1]),
  • \(\color{darkblue}\lambda\) is an exogenous constant that sometimes is varied to draw an efficient frontier, and
  • \(\color{darkblue}r\) is the vector of returns.
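
To make model M1 concrete, here is a small cvxpy sketch with made-up data (the matrix \(\color{darkblue}Q\), the returns \(\color{darkblue}r\), and the minimum return below are placeholders, not data from the post):

import numpy as np
import cvxpy as cp

# Made-up data for 4 assets (placeholders, not from the post)
rng = np.random.default_rng(7)
A = rng.normal(size=(10, 4))
Q = A.T @ A / 10                        # PSD variance-covariance matrix
r = np.array([0.05, 0.07, 0.06, 0.10])  # returns
min_return = 0.065

# Model M1: minimize risk subject to a minimum-return constraint
x = cp.Variable(4, nonneg=True)
prob = cp.Problem(
    cp.Minimize(cp.quad_form(x, Q)),
    [r @ x >= min_return, cp.sum(x) == 1],
)
prob.solve()
print("weights:", x.value)
print("risk:", prob.value)

Model M2 is obtained by dropping the return constraint and subtracting \(\color{darkblue}\lambda \cdot\color{darkblue}r^T\color{darkred}x\) from the objective.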

Wednesday, June 2, 2021

Total Least Squares: nonconvex optimization

Total Least Squares (TLS) is an alternative to OLS (Ordinary Least Squares). It is a form of orthogonal regression and also deals with the problem of EIV (Errors-in-Variables).


The standard OLS model is \[\color{darkblue}y = \color{darkblue}X\color{darkred}\beta + \color{darkred}\varepsilon\] where we minimize the sum-of-squares of the residuals \[\min ||\color{darkred}\varepsilon||_2^2\] We can interpret \(\color{darkred}\varepsilon\) as the error in \(\color{darkblue}y\).
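
For reference: assuming \(\color{darkblue}X\) has full column rank, this OLS problem has the familiar closed-form solution \[\hat{\color{darkred}\beta} = \left(\color{darkblue}X^T\color{darkblue}X\right)^{-1}\color{darkblue}X^T\color{darkblue}y\] TLS does not reduce to such normal equations, which is why we formulate it as an optimization problem below.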


In TLS, we also allow for errors in \(\color{darkblue}X\). The model becomes \[\color{darkblue}y+\color{darkred}\varepsilon=(\color{darkblue}X+\color{darkred}E)\color{darkred}\beta\] Note that we made a sign change in \(\color{darkred}\varepsilon\). This is purely aesthetic: it makes the equation look more symmetric. The objective is specified as \[\min \> ||\left(\color{darkred}\varepsilon \; \color{darkred}E\right)||_F\] i.e., the Frobenius norm of the matrix formed by \(\color{darkred}\varepsilon\) and \(\color{darkred}E\). The Frobenius norm is just \[||A||_F=\sqrt{\sum_{i,j}a_{i,j}^2}\] We can drop the square root from the objective: the solution remains the same, and we get rid of a non-linear function with a possible problem near zero (the gradient is not defined there). The remaining problem is a non-convex quadratic problem, which can be solved with global MINLP solvers such as Baron, or with a global quadratic solver like Gurobi.
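
As an aside (not the route taken in the post, but useful as a check on the global solver): TLS also has a classical closed-form solution via the SVD of the augmented matrix \(\left(\color{darkblue}X\;\color{darkblue}y\right)\). A minimal Python sketch with made-up data:

import numpy as np

def tls(X, y):
    # Classical TLS via the SVD of [X y]: beta = -v[:-1]/v[-1], where v is
    # the right singular vector for the smallest singular value. Assumes that
    # singular value is simple and that v[-1] != 0.
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]
    return -v[:-1] / v[-1]

# Made-up errors-in-variables data
rng = np.random.default_rng(42)
beta_true = np.array([1.0, -2.0])
X0 = rng.normal(size=(100, 2))                   # true regressors
y = X0 @ beta_true + 0.1 * rng.normal(size=100)  # noisy response
X = X0 + 0.1 * rng.normal(size=(100, 2))         # observed, noisy regressors
print(tls(X, y))                                 # should be close to beta_true

For well-behaved data, the global optimization formulation above should reproduce this answer.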