Nice Pyomo introduction: https://www.youtube.com/watch?v=cjMkVHjhSBI
A few small mistakes wrt GAMS:
GAMS does not have a complements keyword. Instead, it uses a simple notation in the model statement to match equations to variables:
|MODEL TRNSP / PROFIT.X, SUPPLY.W, DEMAND.P/ ;|
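Each EQUATION.VARIABLE pair in the model statement above declares which variable is complementary to which equation. As a rough illustration (a hypothetical sketch, not GAMS or Pyomo code; the pair values are made up), checking a candidate solution of such an MCP amounts to verifying the complementarity conditions for each matched pair:

```python
# Hypothetical sketch: in an MCP, each equation is paired with a variable,
# mirroring the "EQUATION.VARIABLE" pairs in the GAMS MODEL statement.

def check_complementarity(pairs, tol=1e-8):
    """pairs: list of (equation residual F(x), matched variable value x).
    Complementarity requires x >= 0, F(x) >= 0 and x * F(x) = 0."""
    for f, x in pairs:
        if x < -tol or f < -tol or abs(x * f) > tol:
            return False
    return True

# Two toy pairs: an active equation (F = 0, x > 0) and an inactive one (F > 0, x = 0).
pairs = [(0.0, 2.5),   # e.g. a PROFIT-type equation matched with a shipment level
         (1.3, 0.0)]   # e.g. a SUPPLY-type equation matched with a price
print(check_complementarity(pairs))  # True
```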
Secondly, it is mentioned that GAMS has a presolve facility. Amazingly, it does not: it leaves presolving to the solver. For LP and MIP problems this may not be so bad, although a built-in presolver has some advantages: smaller problems and solution files to exchange with the solver, and most likely better error messages, since more context is available at the modeling-language level (the importance of this point is often underestimated IMHO).

For NLP/MINLP models a built-in presolver can be more important: many nonlinear solvers do not have a presolver built in. In general a nonlinear presolve is difficult to do inside the solver, as the solver does not know the algebraic form (the math) of the nonlinear functions it is evaluating (of course it can still work on the linear part of the model). The most popular NLP solver with GAMS is CONOPT, which actually has very good nonlinear presolving facilities (tailored for use with GAMS) and provides excellent diagnostics. But models to be solved with, say, MINOS or SNOPT can benefit hugely from a presolver (I have seen models solved much more easily by AMPL with these solvers thanks to AMPL's presolving capabilities; it really can make a huge difference).

Having said this, careful modeling can lead to models where there is little room for the presolver to make a large difference. I try to implement my models this way: I always feel guilty of not doing a good job when the presolver is taking out large chunks of the model. Actually, cleaning up a model to get rid of parts that will be presolved away is often a good exercise.
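To give an idea of the kind of reductions a presolver makes, here is a minimal sketch (an assumption for illustration, not any solver's actual code) of one classic step: fixing variables that appear alone in an equality row and substituting them out, which can cascade and shrink the problem considerably:

```python
# Minimal presolve sketch: repeatedly eliminate "singleton" equality rows.
# rows is a list of (coeffs, rhs) pairs, each encoding sum(coef * var) = rhs.

def presolve(rows):
    """Returns (reduced rows, fixed: dict var -> value)."""
    fixed = {}
    changed = True
    while changed:
        changed = False
        # a row with a single variable fixes that variable outright
        for coeffs, rhs in list(rows):
            if len(coeffs) == 1:
                (var, coef), = coeffs.items()
                fixed[var] = rhs / coef
                rows.remove((coeffs, rhs))
                changed = True
        # substitute known values into the remaining rows;
        # this may create new singleton rows, hence the outer loop
        new_rows = []
        for coeffs, rhs in rows:
            coeffs = dict(coeffs)
            for var in list(coeffs):
                if var in fixed:
                    rhs -= coeffs.pop(var) * fixed[var]
                    changed = True
            new_rows.append((coeffs, rhs))
        rows = new_rows
    return rows, fixed

# 2x = 4 fixes x = 2; substituting into x + y = 5 then fixes y = 3.
reduced, fixed = presolve([({"x": 2.0}, 4.0), ({"x": 1.0, "y": 1.0}, 5.0)])
print(fixed)  # {'x': 2.0, 'y': 3.0}
```

Both rows are eliminated here; the problem handed to the solver is empty. Real presolvers apply many more reductions (bound tightening, redundant rows, duplicate columns), but the flavor is the same.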
Sometimes small presolve-like operations need to be performed by GAMS. Often this has to do with the fact that GAMS has an objective variable while solvers have an objective function. For an LP/MIP we would not care (just add an objective row with a single nonzero element at the location of the objective variable). But in the case of a QP it is important to substitute out the objective variable (for NLPs this can also be important: linearly constrained problems may be much easier to solve). Even these simple transformation steps are only done in a somewhat half-baked fashion. Here is an LP file for a small QP problem fed into Cplex:
Obviously some junk is left behind by what one could call an incomplete attempt to remove the objective variable. Of course, in practice the Cplex presolver will take these parts out.
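The substitution step discussed above can be sketched in a few lines (a hypothetical illustration of the idea, not GAMS internals): the modeling language has an objective variable z defined by a row z =e= f(x), while the solver wants f(x) itself as the objective, so z is substituted out and its defining row dropped:

```python
# Sketch of objective-variable substitution. The defining row is encoded as
# a term -> coefficient dict for  f(x) - z = 0 , i.e. z has coefficient -1
# and the remaining entries are the terms of f(x).

def substitute_objective(obj_var, defining_row):
    """Return the objective function as a term -> coefficient dict,
    with the objective variable removed."""
    assert defining_row[obj_var] == -1.0, "expected a plain defining row"
    return {t: c for t, c in defining_row.items() if t != obj_var}

# z = 3x + 0.5 x^2  becomes the objective  3x + 0.5 x^2 , and z disappears.
obj = substitute_objective("z", {"z": -1.0, "x": 3.0, "x*x": 0.5})
print(obj)  # {'x': 3.0, 'x*x': 0.5}
```

A clean implementation would remove z and its defining row entirely; the point of the LP file above is that this cleanup is only done partially, leaving leftovers for the solver's presolver.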