Monday, March 9, 2026

Experience with NLP solvers on a simple economic growth model

Growth Models

In this post, we formulate and solve a relatively simple economic growth model where we want to find an optimal savings rate. Consumers can either use their income for direct consumption or they can save. Savings lead to investment and to more income down the road. The resulting NLP (nonlinear programming) model is not too complicated, and it is interesting to see how we can implement this model using different tools.

I often get the question: why not use Python or R for my (economic) models? This note tries to give an answer using a rather small model. Although the model is small and simple, it illustrates some of the problems we can face when using, say, Python or R. Some of the code I present here is not very obvious or self-explanatory. One reason is that some algorithm developers look at NLP problems as something like: \[\begin{align}\min\>&f(x) \\ &g(x)=0 \\& h(x)\ge 0\end{align}\] Unfortunately, such a representation is close to the solver, but far away from practical models. It is just the wrong abstraction level. The model implementation impedance becomes even larger if we need to provide analytic gradients. Time we could have spent on developing better models is instead wasted on error-prone, low-level programming. This problem becomes even more costly when we think about maintenance, such as bug fixing and adding new features to the model. We'll see what this looks like for this model.

The underlying economic model is often called a Ramsey growth model, after Frank Plumpton Ramsey [1].




Mathematical Model


A theoretical model can look like [2]:

Continuous time model
\[\begin{align} \max\>&\color{darkred}W = \int_{t=0}^\infty u(\color{darkred}C_t) e^{-\color{darkblue}\rho t} dt \\ & f(\color{darkred}K_t) = \color{darkred}C_t + \frac{d\color{darkred}K}{dt} \end{align}\]

The objective maximizes a welfare function characterized by a utility function \(u\) of the instantaneous consumption \(\color{darkred}C_t\). A discount factor \(\color{darkblue}\rho\) is applied in order to value early consumption more than consumption (far) in the future. The budget constraint says that output (income) can be consumed or saved (capital accumulation). The production function \(f\) indicates how much output is generated for any level of capital.

To be able to run numerical simulations with standard solvers, it is easier to work with a discretization [3,4]:

Discrete time model
\[\begin{align} \max\>&\color{darkred}W = \sum_{t=0}^\infty \left(\frac{1}{1+\color{darkblue}\rho}\right)^t u(\color{darkred}C_t)  \\ & \color{darkred}Y_t = f(\color{darkred}K_t) \\ & \color{darkred}Y_t = \color{darkred}C_t + \color{darkred}I_t \\ & \color{darkred}K_{t+1} = \color{darkred}K_t + \color{darkred}I_t \end{align}\]

The variable \(\color{darkred}Y_t\) (output, income) is now explicitly added to the model. (Of course, we could also have recovered it during postprocessing.)

Standard choices for the utility and production function are: \[u(\color{darkred}C_t) = \log(\color{darkred}C_t)\] and \[f(\color{darkred}K_t)= \color{darkblue}a\color{darkred}K_t^{\color{darkblue}b} \color{darkblue}L_t^{1-\color{darkblue}b} \] The constant returns to scale Cobb-Douglas production function uses an exogenous labor component which is generated by the process \[\color{darkblue}L_{t+1} = (1+\color{darkblue}g)\color{darkblue}L_t\]
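As a quick numerical illustration (using the data values that appear in the GAMS implementation below), the labor path and the calibration of \(\color{darkblue}a\) can be sketched in a few lines:

```python
import numpy as np

# Illustration only (data as in the GAMS model below):
# the exogenous labor path L_t = (1+g)^t * L0, and the calibration of
# the Cobb-Douglas constant a from Y0 = C0 + I0 and Y0 = a*K0^b*L0^(1-b).
g, b = 0.03, 0.25
K0, I0, C0, L0 = 3.00, 0.05, 0.95, 1.00

t = np.arange(26)                      # periods t0..t25
L = (1 + g) ** t * L0                  # exponential labor growth
a = (C0 + I0) / (K0**b * L0**(1 - b))  # calibrated constant
print(a)                               # about 0.7598
```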

We can add some depreciation to the capital accumulation constraint: \[\color{darkred}K_{t+1} = (1-\color{darkblue}\delta) \color{darkred}K_t + \color{darkred}I_t\]

The infinite sum is, of course, a bit of an issue for a numerical approach. We can just use a very large horizon. But we can also use a bit of math to attack this. For this we make the (somewhat heroic) assumption that from \(t \ge T\) (i.e., beyond the planning window), \(\color{darkred}C_t\) is constant. Then we can write: \[\begin{align}\max \color{darkred}W = & \sum_{t=0}^{T-1} \left(\frac{1}{1+\color{darkblue}\rho}\right)^t \log(\color{darkred}C_t) + \sum_{t=T}^{\infty} \left(\frac{1}{1+\color{darkblue}\rho}\right)^t \log(\color{darkred}C_T) \\  = & \sum_{t=0}^{T-1} \left(\frac{1}{1+\color{darkblue}\rho}\right)^t \log(\color{darkred}C_t) +\frac{1}{\color{darkblue}\rho(1+\color{darkblue}\rho)^{T-1}}\log(\color{darkred}C_T) \\ = & \sum_{t=0}^T \color{darkblue}\beta_t \log{\color{darkred}C_t}\end{align}\] where \[\color{darkblue}\beta_t = \begin{cases}(1+\color{darkblue}\rho)^{-t} & \text{for $t\lt T$} \\ \color{darkblue}\rho^{-1} (1+\color{darkblue}\rho)^{1-T} & \text{for $t=T$}\end{cases}\]
  
We have used here the well-known geometric series \[\sum_{t=0}^{T-1}a^t = \frac{1-a^T}{1-a}\] This means, assuming \(0\lt a \lt 1\): \[\begin{align} \sum_{t=T}^{\infty}a^t =& \sum_{t=0}^{\infty}a^t - \sum_{t=0}^{T-1}a^t \\ =&\frac{1}{1-a}-\frac{1-a^T}{1-a} \\ =& \frac{a^T}{1-a}\end{align}\]
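A quick numerical sanity check of this tail weight (illustrative only, not part of the model code):

```python
# Verify that the closed-form terminal weight rho^(-1) (1+rho)^(1-T)
# equals the truncated infinite sum  sum_{t=T..inf} (1+rho)^(-t).
rho = 0.05   # discount factor, as in the model
T = 25       # last period in the planning window

a = 1.0 / (1.0 + rho)
closed_form = (1.0 / rho) * (1.0 + rho) ** (1 - T)   # a^T/(1-a)
brute_force = sum(a**t for t in range(T, 10_000))    # long partial sum
print(closed_form, brute_force)                      # both about 6.2014
```

The value 6.2014 is exactly the terminal weight \(\color{darkblue}\beta_{25}\) shown in the GAMS output below.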

This approach puts extra weight on the terminal utility \(u(\color{darkred}C_T)\). As we see below, this extra weight is actually quite substantial.

----     65 PARAMETER beta  weight factor for future utilities

t0  1.000,    t1  0.952,    t2  0.907,    t3  0.864,    t4  0.823,    t5  0.784,    t6  0.746,    t7  0.711,    t8  0.677
t9  0.645,    t10 0.614,    t11 0.585,    t12 0.557,    t13 0.530,    t14 0.505,    t15 0.481,    t16 0.458,    t17 0.436
t18 0.416,    t19 0.396,    t20 0.377,    t21 0.359,    t22 0.342,    t23 0.326,    t24 0.310,    t25 6.201

Finally, we need something to ensure a steady state, i.e., that we don't consume everything near the end of the planning horizon. This is accomplished by adding the constraint: \[\color{darkred}I_T \ge (\color{darkblue}g + \color{darkblue}\delta) \color{darkred}K_T\] With investment at this level, capital keeps growing at (at least) the labor growth rate: \(\color{darkred}K_{T+1} = (1-\color{darkblue}\delta)\color{darkred}K_T + (\color{darkblue}g+\color{darkblue}\delta)\color{darkred}K_T = (1+\color{darkblue}g)\color{darkred}K_T\).

The complete model looks like:



Complete model
\[\begin{align} \max\>&\color{darkred}W = \sum_{t=0}^T \color{darkblue}\beta_t  \log \color{darkred}C_t  \\ & \color{darkred}Y_t = \color{darkblue}a\color{darkred}K_t^{\color{darkblue}b} \color{darkblue}L_t^{1-\color{darkblue}b} \\ & \color{darkred}Y_t = \color{darkred}C_t + \color{darkred}I_t \\ & \color{darkred}K_{t+1} = (1-\color{darkblue}\delta)\color{darkred}K_t + \color{darkred}I_t \\ & \color{darkred}I_T \ge (\color{darkblue}g + \color{darkblue}\delta) \color{darkred}K_T \\ & \color{darkred}K_0 =\color{darkblue}K^0 \\ &\color{darkred}I_0 =\color{darkblue}I^0 \\ &\color{darkred}C_0 =\color{darkblue}C^0 \\ &\color{darkred}C_t \ge 0.001\\ &\color{darkred}K_t \ge 0.001 \end{align}\]






Implementation 1: GAMS NLP Model



This is our base case. 

GAMS NLP Model

$ontext

   Ramsey Growth Model,
   an example of a dynamic general equilibrium model.

   December 2001, Erwin Kalvelagen

   References:
      F.P. Ramsey: A Mathematical Theory of Saving,
      Economic Journal, December 1928.

      A. Manne: GAMS/MINOS: Three examples, manuscript,
      Department of Operations Research, Stanford University,
      1986

      M.I. Lau, A. Pahlke, and T.F. Rutherford,
      Modeling Economic Adjustment: A Primer in Dynamic General
      Equilibrium Analysis, Report, University of Colorado, 1997

$offtext

*
* data for the model
*

set t 'time periods' / t0*t25 /;

scalars
   rho    'discount factor'             / 0.05 /
   g      'labor growth rate'           / 0.03 /
   delta  'capital depreciation factor' / 0.02 /
   K0     'initial capital'             / 3.00 /
   I0     'initial investment'          / 0.05 /
   C0     'initial consumption'         / 0.95 /
   L0     'initial labor'               / 1.00 /
   b      'Cobb-Douglas coefficient'    / 0.25 /
   a      'Cobb-Douglas coefficient'
;

*
* derived sets
*

sets
   tfirst(t)   'first period'
   tlast(t)    'last period'
   tnotlast(t) 'all but last'
;

tfirst(t)$(ord(t)=1) = yes;
tlast(t)$(ord(t)=card(t)) = yes;
tnotlast(t) = not tlast(t);

parameters
   L(t)     'Labor (production input, exogenous)'
   beta(t)  'weight factor for future utilities'
   tval(t)  'numerical value of t'
;

tval(t) = ord(t)-1;

*
* the terminal weight beta(tlast) gets extra emphasis to
* compensate for future utilities beyond tlast
*
beta(tnotlast(t)) = (1+rho)**(-tval(t));
beta(tlast(t)) = (1/rho)*(1+rho)**(1-tval(t));
display beta;

*
* labor is exogenously determined using an exponential growth process
*
L(t) = (1+g)**tval(t) * L0;
display L;

*
* we can calculate a from Y0 = C0 + I0 and Y0 = f(K0,L0) = a*K0^b*L0^(1-b)
*
a = (C0 + I0) / (K0**b * L0**(1-b));
display a;

positive variables
   C(t) 'consumption'
   Y(t) 'production output'
   K(t) 'capital (production input, endogenous)'
   I(t) 'investment'
;
variable
   W    'total utility'
;

equation
   production(t)   'Cobb-Douglas production function'
   allocation(t)   'household chooses between consumption and saving'
   accumulation(t) 'capital accumulation'
   utility         'discounted utility'
   final(t)        'minimal investment in final period'
;

utility..                   W =e= sum(t, beta(t)*log(C(t)));
production(t)..             Y(t) =e= a * (K(t)**b) * (L(t)**(1-b));
allocation(t)..             Y(t) =e= C(t) + I(t);
accumulation(tnotlast(t)).. K(t+1) =e= (1-delta)*K(t) + I(t);
final(tlast)..              I(tlast) =g= (g+delta)*K(tlast);

*
* bounds so real power and log can be evaluated
*
K.lo(t) = 0.001; C.lo(t) = 0.001;

*
* initial conditions
*
K.fx(tfirst) = K0;
I.fx(tfirst) = I0;
C.fx(tfirst) = C0;

*
* initial point for nonlinear variables
*
K.l(t) = K0;
C.l(t) = C0;

model ramsey /all/;
option nlp=conopt;
solve ramsey maximizing W using nlp;

parameter results(t,*);
results(t,'C') = C.l(t);
results(t,'Y') = Y.l(t);
results(t,'K') = K.l(t);
results(t,'I') = I.l(t);
display results, W.l;


I follow very closely the mathematical formulation presented in the previous section. Some notes:
  • The time index runs from \(t_0\) to \(t_{25}\).
  • The variables \(\color{darkred}K_t\), \(\color{darkred}I_t\), and \(\color{darkred}C_t\) are fixed in the first period \(t_0\).
  • The constraint \[\color{darkred}K_{t+1} = (1-\color{darkblue}\delta)\color{darkred}K_t + \color{darkred}I_t\] is over \(t_0\) to \(t_{24}\).
  • The conditions \(\color{darkred}C_t, \color{darkred}K_t \ge 0.001\) are implemented as bounds.
  • I set initial values for all nonlinear variables. That is often a good idea. It is also a good idea to add bounds that protect the evaluation of the nonlinear functions, like we did here.
  • With a bit of effort we should be able to come up with an initial point that is feasible. I did not bother to do this.
  • GAMS uses automatic differentiation, so we don't have to worry about specifying gradients.
  • We can experiment with different NLP solvers by changing the line option nlp=conopt;
The results look like:

----    133 PARAMETER results  

              C           Y           K           I

t0     0.950000    1.000000    3.000000    0.050000
t1     0.950963    1.021564    2.990000    0.070601
t2     0.966444    1.045406    3.000801    0.078961
t3     0.983589    1.070523    3.019746    0.086934
t4     1.002344    1.096918    3.046285    0.094574
t5     1.022661    1.124591    3.079933    0.101930
t6     1.044501    1.153546    3.120264    0.109045
t7     1.067828    1.183788    3.166904    0.115959
t8     1.092613    1.215321    3.219525    0.122708
t9     1.118832    1.248153    3.277842    0.129321
t10    1.146466    1.282294    3.341606    0.135828
t11    1.175502    1.317755    3.410602    0.142253
t12    1.205931    1.354548    3.484642    0.148617
t13    1.237748    1.392688    3.563567    0.154940
t14    1.270954    1.432193    3.647235    0.161238
t15    1.305556    1.473081    3.735529    0.167525
t16    1.341561    1.515371    3.828343    0.173810
t17    1.378984    1.559087    3.925587    0.180103
t18    1.417844    1.604251    4.027179    0.186407
t19    1.458164    1.650888    4.133042    0.192724
t20    1.499971    1.699022    4.243106    0.199051
t21    1.543300    1.748679    4.357295    0.205379
t22    1.588188    1.799886    4.475528    0.211698
t23    1.634682    1.852667    4.597716    0.217985
t24    1.682833    1.907047    4.723746    0.224214
t25    1.720374    1.963049    4.853485    0.242674


----    133 VARIABLE W.L                   =        5.453  total utility

We can switch solvers without changing the model. All NLP solvers I tried had a very easy job on this: they solved it to optimality using very few iterations.  Below are two logs, one from CONOPT (an active set NLP solver) and one from IPOPT (an interior point solver). These two solvers are very different, so it is always a good idea to try them both. 
CONOPT and IPOPT Log

    C O N O P T   version 4.38.2
    Copyright (C) GAMS Software GmbH
                  GAMS Development Corporation

    Will use up to 4 threads.

    The user model has 79 constraints and 105 variables
    with 234 Jacobian elements, 52 of which are nonlinear.
    The Hessian of the Lagrangian has 52 elements on the diagonal,
    0 elements below the diagonal, and 52 nonlinear variables.

    Iter Phase   Ninf   Infeasibility   RGmax      NSB   Step  InItr MX OK
       0   0          6.2205498221E+01 (Input point)

    The pre-triangular part of the model has 4 constraints and 6 variables.
    The post-triangular part of the model has 3 constraints and variables.
    There are 44 definitional constraints and defined variables.

    Preprocessed model has 28 constraints and 52 variables
    with 163 Jacobian elements, 84 of which are nonlinear.

    Iter Phase   Ninf   Infeasibility   RGmax      NSB   Step  InItr MX OK
                      7.8716616597E+00 (Full preprocessed model)
                      7.8716616597E+00 (After scaling)
                      2.4402707201E+00 (After adjusting individual variables)

 ** Feasible solution. Value of objective =    5.09043086016

    Iter Phase   Ninf     Objective     RGmax      NSB   Step  InItr MX OK
       6   4          5.4534275368E+00 3.3E-04      24 1.0E+00     9 F  T
       8   4          5.4534275370E+00 7.1E-08      24

 ** Optimal solution. Reduced gradient less than tolerance.

 

COIN-OR Interior Point Optimizer (Ipopt Library 3.14.19)
written by A. Waechter.

Note: This is the free version IPOPT, but you could also use the commercially supported and potentially higher performance version IPOPTH.

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.14.19, running with linear solver MUMPS 5.8.1.

Number of nonzeros in equality constraint Jacobian...:      200
Number of nonzeros in inequality constraint Jacobian.:        2
Number of nonzeros in Lagrangian Hessian.............:       50

Total number of variables............................:      101
                     variables with only lower bounds:      101
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:       77
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -1.0771592e+00 1.73e+00 7.53e+00   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  8.4264020e-01 7.08e-01 6.54e+01  -0.0 2.14e+00    -  8.67e-03 5.78e-01f  1
   2  4.6922922e+00 3.18e-02 2.23e+00  -0.6 2.01e+00   0.0 6.98e-01 1.00e+00f  1
   3  5.4680979e+00 1.54e-02 3.71e+00  -1.4 4.68e-01    -  9.62e-01 5.15e-01f  1
   4  5.4371002e+00 3.11e-03 1.48e-02  -2.2 7.66e-01    -  1.00e+00 1.00e+00h  1
   5  5.4666638e+00 2.37e-03 3.15e-03  -3.5 6.00e-01    -  9.97e-01 1.00e+00h  1
   6  5.4536789e+00 6.28e-05 1.00e-04  -4.4 9.08e-02    -  1.00e+00 1.00e+00h  1
   7  5.4534326e+00 1.17e-06 8.04e-06  -6.4 1.22e-02    -  1.00e+00 1.00e+00h  1
   8  5.4534275e+00 2.13e-09 4.98e-08 -11.0 7.47e-04    -  9.99e-01 1.00e+00h  1
   9  5.4534275e+00 2.02e-14 4.81e-13 -11.0 2.72e-06    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 9

                                   (scaled)                 (unscaled)
Objective...............:  -5.4534275373287304e+00    5.4534275373287304e+00
Dual infeasibility......:   4.8122468765757561e-13    4.8122468765757561e-13
Constraint violation....:   2.0206059048177849e-14    2.0206059048177849e-14
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   1.0958077072654773e-11    1.0958077072654773e-11
Overall NLP error.......:   1.0958077072654773e-11    1.0958077072654773e-11

Number of objective function evaluations             = 10
Number of objective gradient evaluations             = 10
Number of equality constraint evaluations            = 10
Number of inequality constraint evaluations          = 10
Number of equality constraint Jacobian evaluations   = 10
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations             = 9
Total seconds in IPOPT                               = 0.009

EXIT: Optimal Solution Found.


We can see that we need about 10 iterations, and the number of function and gradient evaluations is also around 10. The total solution time is tiny: IPOPT reports 0.009 seconds.

Implementation 2: Python Scipy + SLSQP



This version uses the solver SLSQP from the scipy [5] library. This is a dense solver suited for smallish models. The modeling interface is very simplistic: we have a single vector \(x\). That's it. This is a very uncomfortable representation if the model has multiple variables, like our \(\color{darkred}C_t\), \(\color{darkred}Y_t\), \(\color{darkred}K_t\), \(\color{darkred}I_t\). It means we need to pack and unpack: from \(\color{darkred}C_t\), \(\color{darkred}Y_t\), \(\color{darkred}K_t\), \(\color{darkred}I_t\) to \(x\) and back. In this small model this is annoying but straightforward. For more complex models, it can become quite non-trivial. If not done efficiently, it can also become quite expensive, as we do this at every function evaluation. As we shall see, the abstraction level of this tool is just wrong for practical work.
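To make the bookkeeping concrete, here is a small sketch of such a pack/unpack layer. These helper functions are my own illustration, not something scipy provides:

```python
import numpy as np

# Sketch of a pack/unpack layer: keep a name -> offset map so model
# code can talk about C, Y, K, I while the solver only ever sees the
# flat vector x.
T = 26
names = ["C", "Y", "K", "I"]               # variable blocks, each of length T
offset = {name: i * T for i, name in enumerate(names)}

def unpack(x):
    """Split the flat solver vector into named per-period arrays."""
    return {name: x[off:off + T] for name, off in offset.items()}

def pack(blocks):
    """Concatenate named arrays into the flat vector scipy expects."""
    return np.concatenate([blocks[name] for name in names])

x = np.arange(4 * T, dtype=float)          # stand-in for a solver iterate
blocks = unpack(x)
assert blocks["K"][0] == 2 * T             # K block starts at offset 2*T
assert np.array_equal(pack(blocks), x)     # round trip recovers x
```

For this regular model a simple `reshape` does the same job (as in the code below); for models with differently-sized variable blocks, an explicit offset map like this becomes necessary.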

I really don't want to specify gradients. Providing correct derivatives often requires lots of attention; they are too error-prone. Without user-provided gradients, the solver will use finite differences. That should be OK for this small example; for more challenging problems such an approach may be less appealing. Finite differences have two issues. First, we lose quite a bit of precision (roughly half the significant digits). Solvers assume "exact" gradients, and finite differences just don't deliver that. The second problem is that finite differences require a ton of function evaluations. All of this is compounded by the solver's lack of knowledge about the model: it does not know the sparsity pattern of the problem, nor whether variables appear linearly in a constraint or in the objective. Basically, it has to assume the worst case: all variables appear nonlinearly in every equation.
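The precision loss is easy to demonstrate with a standalone example (not tied to the model): differentiate \(e^x\) at \(x=1\) with a forward difference, using the customary step \(h=\sqrt{\epsilon_{\text{mach}}}\):

```python
import math

# Forward differences lose roughly half the significant digits:
# with step h = sqrt(machine epsilon), the error of (f(x+h)-f(x))/h
# is on the order of sqrt(eps) ~ 1e-8, not eps ~ 1e-16.
def fwd_diff(f, x, h):
    return (f(x + h) - f(x)) / h

eps = 2.0 ** -52                      # double-precision machine epsilon
h = math.sqrt(eps)
approx = fwd_diff(math.exp, 1.0, h)   # exact derivative is exp(1)
rel_err = abs(approx - math.e) / math.e
print(rel_err)                        # on the order of 1e-8
```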

The solver cannot handle maximization problems, so we have to restate this as a minimization problem.

Python Scipy/SLSQP Implementation Attempt 1


Here I try to follow the mathematical model closely.

Python scipy/SLSQP code
#
# Python scipy/SLSQP implementation of Ramsey growth model
# This is a translation of the GAMS code in ramsey.gms
#

import numpy as np
import scipy.optimize as opt

#----------------------------------------------------------------
# data
#----------------------------------------------------------------

T = 26     # time periods (0..25)
ρ = 0.05   # discount factor
g = 0.03   # labor growth rate
δ = 0.02   # capital depreciation factor
K0 = 3.00  # initial capital
I0 = 0.05  # initial investment
C0 = 0.95  # initial consumption
L0 = 1.00  # initial labor
b = 0.25   # Cobb-Douglas coefficient
a = (C0 + I0)/ (K0**b * L0**(1-b))     # Cobb-Douglas coefficient
# print(f"{a=}")

# time indices: 0..T-1 
t = np.arange(0,T)

# weight factor for future utilities
β = (1+ρ)**(-t)
β[T-1] = (1/ρ) * (1+ρ)**(1-T+1)

# labor is exogenously determined using an exponential growth process
L = (1+g)**t * L0


#----------------------------------------------------------------
# model
#----------------------------------------------------------------

# mapping of variables into a single vector x:
# x = [C[0],...,C[T-1],Y[0],...,Y[T-1],K[0],...,K[T-1],I[0],...,I[T-1]]
# luckily this is very regular. In more complicated models, we would need to keep 
# track of the mapping between variables and their position in x.

# lower bounds
C_lo = 0.001*np.ones(T)
Y_lo = np.zeros(T)
K_lo = 0.001*np.ones(T)
I_lo = np.zeros(T)
# t=0 is fixed to the initial values
C_lo[0] = C0
K_lo[0] = K0
I_lo[0] = I0
# combine lower bounds into a single vector
lo = np.concatenate((C_lo,Y_lo,K_lo,I_lo))

# upper bounds
C_up = 1000*np.ones(T) 
Y_up = 1000*np.ones(T)
K_up = 1000*np.ones(T)
I_up = 1000*np.ones(T)
# t=0 is fixed to the initial values
C_up[0] = C0
K_up[0] = K0
I_up[0] = I0
# combine upper bounds into a single vector
up = np.concatenate((C_up,Y_up,K_up,I_up))

# Bounds
bnd = opt.Bounds(lo,up)

# initial values 
x0 = np.concatenate((C0*np.ones(T),np.ones(T),K0*np.ones(T),I0*np.ones(T)))

# extra arguments to be passed around to function evaluations
xargs = (a,b,β,δ,g,L,T,)

# objective function 
# W =e=sum(t,beta(t)*log(C(t)));
# we minimize -W 
def f(x,a,b,β,δ,g,L,T):
    mat = np.reshape(x,(4,T))
    Ct = mat[0,] 
    fval = -np.dot(β,np.log(Ct)) 
    return fval


# Constraints

# equality constraints:
# Y(t) =e= a * (K(t)**b) * (L(t)**(1-b))
# Y(t) =e= C(t) + I(t)
# K(t+1) =e= (1-delta)*K(t) + I(t)  
def Feq(x,a,b,β,δ,g,L,T):
    # unpack x
    mat = np.reshape(x,(4,T))    
    Ct = mat[0,]
    Yt = mat[1,] 
    Kt = mat[2,]
    It = mat[3,]

    # calc LHS - RHS = 0 for all equality constraints
    F1 = Yt - a*np.multiply(Kt**b,L**(1-b))
    F2 = Yt - (Ct+It)
    F3 = Kt[1:] - ((1-δ)*Kt[:-1]+It[:-1]) 
    return np.concatenate((F1,F2,F3))

# inequality constraints:
# I(tlast) =g= (g+delta)*K(tlast)
def Fineq(x,a,b,β,δ,g,L,T):
    # unpack x
    mat = np.reshape(x,(4,T))    
    Kt = mat[2,]
    It = mat[3,]

    # calc LHS-RHS >= 0
    F4 = It[T-1] - (g+δ)*Kt[T-1]
    return F4

cons = [{'type':'eq','fun':Feq,'args':xargs},
        {'type':'ineq','fun':Fineq,'args':xargs}]

# solve
result = opt.minimize(f, x0, args=xargs, bounds=bnd, constraints=cons, 
                      method='SLSQP', options={'maxiter':1000,'disp':True})
print()
print(result)

'''
This version of the model yields:

     message: Singular matrix C in LSQ subproblem
     success: False

Other solvers don't have this problem.
'''


Some notes about this implementation:
  • The first period, \(t=0\), is fixed by setting lower and upper bounds.
  • We use a bunch of extra arguments for the evaluation of functions (objective and constraints). They are passed on through the args option.
  • There is one inequality and three equality constraints. The solver assumes equalities have the form \(g(x)=0\). Inequalities have the form \(h(x)\ge 0\). All constraints are assumed to be non-linear (even the linear ones). 

This version results in the alarming error message:

    Singular matrix C in LSQ subproblem    (Exit mode 6)

It looks like this happens immediately, in the first iteration. It is not clear where we should look to fix this. Even when starting from the optimal GAMS solution, we get the same problem. We can, of course, give up. Instead I tried the following experiment:

  1. Substitute out the variable \(\color{darkred}Y_t\). This means we combine \[\begin{align} & \color{darkred}Y_t = \color{darkblue}a\color{darkred}K_t^{\color{darkblue}b} \color{darkblue}L_t^{1-\color{darkblue}b} \\ & \color{darkred}Y_t = \color{darkred}C_t + \color{darkred}I_t  \end{align}\] into \[\color{darkblue}a\color{darkred}K_t^{\color{darkblue}b} \color{darkblue}L_t^{1-\color{darkblue}b} = \color{darkred}C_t + \color{darkred}I_t \]
  2. Remove the first period from the model: the values are all exogenous. This means we no longer have fixed variables in the model.
Essentially, I have become a human presolver. (Not always a good idea: software can do this much better. Presolvers have become a very important part of LP, MIP and NLP solvers.)

Python Scipy/SLSQP Implementation Attempt 2

This version works on a "presolved" model (presolving done by hand).

Python Scipy/SLSQP code v2
#
# Python scipy implementation of Ramsey growth model
# This is a translation of the GAMS code in ramsey.gms
#
# version 2: presolve model ourselves.
#  - substitute out Y[t]
#  - remove fixed variables from model


import numpy as np
import scipy.optimize as opt
import pandas as pd

#----------------------------------------------------------------
# data
#----------------------------------------------------------------

T = 25     # time periods (1..25)
ρ = 0.05   # discount factor
g = 0.03   # labor growth rate
δ = 0.02   # capital depreciation factor
C0 = 0.95  # initial consumption
K0 = 3.00  # initial capital
I0 = 0.05  # initial investment
L0 = 1.00  # initial labor
b = 0.25   # Cobb-Douglas coefficient
a = (C0 + I0)/ (K0**b * L0**(1-b))     # Cobb-Douglas coefficient

# time indices: 1..T 
# we do t=0 separately (that is exogenous)
t = np.arange(1,T+1)

# weight factor for future utilities
β = (1+ρ)**(-t)
β[T-1] = (1/ρ) * (1+ρ)**(1-T)

# labor is exogenously determined using an exponential growth process
L = (1+g)**t * L0

#----------------------------------------------------------------
# model
#----------------------------------------------------------------

# mapping of variables into a single vector x:
# x = [C[0],...,C[T-1],K[0],...,K[T-1],I[0],...,I[T-1]]
# note: Y is substituted out 

# lower bounds
C_lo = 0.001*np.ones(T)
K_lo = 0.001*np.ones(T)
I_lo = np.zeros(T)
# combine lower bounds into a single vector
lo = np.concatenate((C_lo,K_lo,I_lo))

# upper bounds
C_up = 1000*np.ones(T) 
K_up = 1000*np.ones(T)
I_up = 1000*np.ones(T)
# combine upper bounds into a single vector
up = np.concatenate((C_up,K_up,I_up))

# Bounds
bnd = opt.Bounds(lo,up)

# initial values 
x0 = np.concatenate((C0*np.ones(T),K0*np.ones(T),I0*np.ones(T)))

# extra arguments to be passed around
# note: added K0,I0 (no longer part of the bounds)
xargs = (a,b,β,δ,g,L,K0,I0,T,)

# objective function 
# W =e=sum(t,beta(t)*log(C(t)));
# we minimize -W
# Note: we dropped the first period so beware when comparing objective values. 
def f(x,a,b,β,δ,g,L,K0,I0,T):
    mat = np.reshape(x,(3,T))
    Ct = mat[0,] 
    fval = -np.dot(β,np.log(Ct)) 
    return fval

# Constraints

# equality constraints:
# Y(t) =e= a * (K(t)**b) * (L(t)**(1-b))
# Y(t) =e= C(t) + I(t)
# K(t+1) =e= (1-delta)*K(t) + I(t)  
def Feq(x,a,b,β,δ,g,L,K0,I0,T):
    # unpack x
    mat = np.reshape(x,(3,T)) #    
    Ct = mat[0,]
    Kt = mat[1,]
    It = mat[2,]

    # calc LHS - RHS = 0 for all equality constraints
    F1 = a*np.multiply(Kt**b,L**(1-b)) - (Ct+It)
    F2 = Kt[1:] - ((1-δ)*Kt[:-1]+It[:-1]) 
    F3 = Kt[0] - ((1-δ)*K0+I0)
    return np.concatenate((F1,F2,[F3]))

# inequality constraints:
# I(tlast) =g= (g+delta)*K(tlast)
def Fineq(x,a,b,β,δ,g,L,K0,I0,T):
    # unpack x
    mat = np.reshape(x,(3,T))    
    Kt = mat[1,]
    It = mat[2,]

    # calc LHS-RHS >= 0
    F = It[T-1] - (g+δ)*Kt[T-1]
    return F

cons = [{'type':'eq','fun':Feq,'args':xargs},
        {'type':'ineq','fun':Fineq,'args':xargs}]

# solve
result = opt.minimize(f, x0, args=xargs, bounds=bnd, constraints=cons, 
                      method='SLSQP', options={'maxiter':1000,'disp':True})
# print(result)

""" 
Optimization terminated successfully    (Exit mode 0)
            Current function value: -5.504718231397155
            Iterations: 32
            Function evaluations: 2433
            Gradient evaluations: 32
"""

#----------------------------------------------------------------
# reporting
# store results in a dataframe for reporting
# insert initial values for t=0 (exogenous)
#----------------------------------------------------------------

df = pd.DataFrame(np.reshape(result.x,(3,T)).T, columns=['C','K','I'])
df0 = pd.DataFrame({'C':[C0],'K':[K0],'I':[I0]})
df  = pd.concat((df0,df), ignore_index=True)
L = np.concatenate((np.array([L0]),L))
df['Y'] = a*np.multiply(df['K']**b,L**(1-b))
print()
print(df)

β = np.concatenate((np.array([1]),β))
W = float(np.dot(β,np.log(df['C'])))
print()
print(f"{W=}")

This version yields:

Optimization terminated successfully    (Exit mode 0)
            Current function value: -5.504718231397155
            Iterations: 32
            Function evaluations: 2433
            Gradient evaluations: 32

We see the excessive number of function evaluations: this is the steep cost of finite-difference gradients. With \(n=75\) variables, each finite-difference gradient costs \(n+1=76\) function evaluations, so the 32 gradients account for essentially all of the 2,433 evaluations (32 × 76 = 2,432). The solution looks like:

           C         K         I         Y
0   0.950000  3.000000  0.050000  1.000000
1   0.951448  2.990000  0.070116  1.021564
2   0.966905  3.000316  0.078458  1.045363
3   0.983661  3.018767  0.086775  1.070436
4   1.002540  3.045167  0.094277  1.096817
5   1.023288  3.078541  0.101176  1.124464
6   1.044920  3.118147  0.108430  1.153351
7   1.067556  3.164214  0.115981  1.183536
8   1.092118  3.216911  0.122956  1.215074
9   1.118741  3.275528  0.129192  1.247933
10  1.146629  3.339210  0.135435  1.282064
11  1.175249  3.407860  0.142241  1.317490
12  1.204902  3.481944  0.149383  1.354285
13  1.236224  3.561688  0.156280  1.392504
14  1.269503  3.646735  0.162641  1.432144
15  1.304563  3.736441  0.168607  1.473170
16  1.341106  3.830320  0.174461  1.515567
17  1.379010  3.928175  0.180334  1.559344
18  1.418337  4.029946  0.186190  1.604527
19  1.459101  4.135537  0.192036  1.651137
20  1.501039  4.244862  0.198159  1.699198
21  1.543709  4.358124  0.205054  1.748763
22  1.587217  4.476015  0.212718  1.799935
23  1.633401  4.599213  0.219417  1.852818
24  1.683621  4.726646  0.223718  1.907339
25  1.720494  4.855831  0.242792  1.963286

W=5.453424937009604


All in all, this was a very frustrating experience. Unless you have a lot of free time on your hands, I don't recommend going this route for anything but the simplest problems.

Implementation 3: Python Scipy + Trust Region



Scipy also has a different solver for constrained NLP problems: method='trust-constr'. We go back to our first Python version, with the first period fixed to its exogenous values. Instead of passing extra arguments to the function evaluation routines, we package things in a class. 

Python code scipy+trust region
#
# Solving the Ramsey model using scipy.optimize.minimize with the trust-constr method
# We use a class to keep all data and functions together. This allows us to use
# the model parameters in the function evaluation without the extra "args".  
#
# This method works but is rather slow.
#

import numpy as np
import scipy.optimize as opt
import pandas as pd


class RamseyModel:

    #----------------------------------------------------------------
    # data
    #----------------------------------------------------------------

    T = 26     # time periods (0..25)
    ρ = 0.05   # discount factor
    g = 0.03   # labor growth rate
    δ = 0.02   # capital depreciation factor
    K0 = 3.00  # initial capital
    I0 = 0.05  # initial investment
    C0 = 0.95  # initial consumption
    L0 = 1.00  # initial labor
    b = 0.25   # Cobb-Douglas coefficient
    a = (C0 + I0)/ (K0**b * L0**(1-b))     # Cobb-Douglas coefficient


    # time indices: 0..T-1 
    t = np.arange(0,T)

    # weight factor for future utilities
    β = (1+ρ)**(-t)
    β[T-1] = (1/ρ) * (1+ρ)**(1-T+1)

    # labor is exogenously determined using an exponential growth process
    L = (1+g)**t * L0

    #----------------------------------------------------------------
    # model
    #----------------------------------------------------------------

    # mapping of variables into a single vector x:
    # x = [C[0],...,C[T-1],Y[0],...,Y[T-1],K[0],...,K[T-1],I[0],...,I[T-1]]
    # luckily this is very regular. In more complicated models, we would need to keep 
    # track of the mapping between variables and their position in x.

    # lower bounds
    C_lo = 0.001*np.ones(T)
    Y_lo = np.zeros(T)
    K_lo = 0.001*np.ones(T)
    I_lo = np.zeros(T)
    # t=0 is fixed to the initial values
    C_lo[0] = C0
    K_lo[0] = K0
    I_lo[0] = I0
    # combine lower bounds into a single vector
    lo = np.concatenate((C_lo,Y_lo,K_lo,I_lo))

    # upper bounds
    C_up = 1000*np.ones(T) 
    Y_up = 1000*np.ones(T)
    K_up = 1000*np.ones(T)
    I_up = 1000*np.ones(T)
    # t=0 is fixed to the initial values
    C_up[0] = C0
    K_up[0] = K0
    I_up[0] = I0
    # combine upper bounds into a single vector
    up = np.concatenate((C_up,Y_up,K_up,I_up))

    bounds = opt.Bounds(lo,up)

    # initial values 
    x0 = np.concatenate((C0*np.ones(T),np.ones(T),K0*np.ones(T),I0*np.ones(T)))

    # objective function 
    # W =e=sum(t,beta(t)*log(C(t)));
    # minimize -W 
    def f(self,x):
        mat = np.reshape(x,(4,self.T))
        Ct = mat[0,] 
        fval = -np.dot(self.β,np.log(Ct)) 
        return fval
    
    # production function
    # Y(t)=e= a * (K(t)**b) * (L(t)**(1-b))
    def prod(self,x):
        mat = np.reshape(x,(4,self.T))
        Yt = mat[1,] 
        Kt = mat[2,]
        F1 = Yt - self.a*np.multiply(Kt**self.b,self.L**(1-self.b))
        return F1
    
    # sub matrix for identity
    # Y(t) =e= C(t) + I(t)
    # =>          C  Y  K  I
    #      A1 = [ I -I  0  I ] 
    def A1(self):
        AC = np.identity(self.T)
        AY = -np.identity(self.T)
        AK = np.zeros((self.T,self.T))
        AI = np.identity(self.T)
        return np.concatenate((AC,AY,AK,AI),axis=1)
    
    # capital accumulation
    # K(t+1) =e= (1-delta)*K(t) + I(t)
    # =>          C  Y  K   I
    #      A2 = [ 0  0  AK  I ] and drop last row
    #      AK = (1-delta)*I - U
    #      U is the identity matrix shifted by one period, with zeros in the first column 
    def A2(self):
        AC = np.zeros((self.T,self.T))
        AY = np.zeros((self.T,self.T))
        AK = (1-self.δ)*np.identity(self.T) - np.eye(self.T,k=1)
        AI = np.identity(self.T)
        return np.concatenate((AC,AY,AK,AI),axis=1)[:-1,:]
    
    # I(tlast) =g= (g+delta)*K(tlast)
    # => I[25] - (g+delta)*K[25] >= 0
    def A3(self):
        a =  np.zeros((1,4*self.T))
        a[0,3*self.T + self.T-1] = 1
        a[0,2*self.T + self.T-1] = -(self.g + self.δ)
        return a
    
    def constraints(self):
        c1 = opt.NonlinearConstraint(self.prod,0,0)
        c2 = opt.LinearConstraint(self.A1(),0,0)
        c3 = opt.LinearConstraint(self.A2(),0,0)
        c4 = opt.LinearConstraint(self.A3(),0,np.inf)
        return [c1,c2,c3,c4]
    

m = RamseyModel()

# solve with the trust-constr method
result = opt.minimize(m.f,m.x0,bounds=m.bounds,constraints=m.constraints(),method='trust-constr',options={'verbose':2})  
print(result) 

'''
`gtol` termination condition is satisfied.
Number of iterations: 297, function evaluations: 30345, CG iterations: 832, optimality: 3.42e-09, constraint violation: 4.33e-10, execution time:  4.8 s.
'''

#----------------------------------------------------------------
# reporting
# store results in a dataframe for reporting
#----------------------------------------------------------------

df = pd.DataFrame(np.reshape(result.x,(4,m.T)).T, columns=['C','Y','K','I'])
print()
print(df)

W = float(np.dot(m.β,np.log(df['C'])))
print()
print(f"{W=}")


Notes:
  • The classes NonlinearConstraint and LinearConstraint do not make things easier. The code to specify the constraints is not at all self-explanatory. I added lots of comments to make this somewhat understandable. Again, this demonstrates rather painfully that we are working at the wrong abstraction level for specifying nonlinear programming models. Essentially, we are writing a matrix generator here (reminiscent of the old tools used to generate LP matrices).
  • Does the solver exploit that there is only one nonlinear constraint? I am not sure, but my hunch is that it ignores this. Most likely, it considers each constraint to be nonlinear. 
The log looks like:

Trust region log
| niter |f evals|CG iter|  obj func   |tr radius |   opt    |  c viol  |
|-------|-------|-------|-------------|----------|----------|----------|
|   1   |  105  |   0   | +1.0772e+00 | 1.00e+00 | 1.44e+00 | 7.41e-01 |
|   2   |  210  |   1   | -5.7010e-01 | 7.00e+00 | 7.97e-01 | 7.41e-01 |
|   3   |  315  |   4   | -4.2307e+00 | 2.96e+01 | 1.24e-01 | 4.77e-01 |
|   4   |  420  |   9   | -5.0216e+00 | 2.96e+01 | 1.29e-01 | 3.33e-01 |
|   5   |  525  |  15   | -5.1623e+00 | 2.96e+01 | 1.40e-01 | 1.77e-01 |
|   6   |  630  |  24   | -5.0480e+00 | 2.96e+01 | 1.44e-01 | 7.35e-02 |
|   7   |  735  |  28   | -5.1438e+00 | 2.96e+01 | 1.37e-01 | 4.38e-02 |
|   8   |  840  |  34   | -5.2895e+00 | 2.96e+01 | 1.02e-01 | 2.77e-02 |
|   9   |  945  |  47   | -5.1425e+00 | 5.87e+01 | 7.63e-02 | 4.49e-02 |
|  10   |  945  |  47   | -5.1425e+00 | 2.93e+02 | 6.37e-02 | 4.49e-02 |
|  11   | 1050  |  73   | -5.4550e+00 | 2.93e+02 | 3.60e-02 | 3.46e-02 |
|  12   | 1155  |  91   | -5.4550e+00 | 2.93e+01 | 3.60e-02 | 3.46e-02 |
|  13   | 1260  |  108  | -5.4550e+00 | 2.93e+00 | 3.60e-02 | 3.46e-02 |
|  14   | 1365  |  116  | -5.4550e+00 | 1.40e+00 | 3.60e-02 | 3.46e-02 |
|  15   | 1470  |  120  | -5.3997e+00 | 9.81e+00 | 3.05e-02 | 1.86e-02 |
|  16   | 1575  |  133  | -5.3158e+00 | 9.81e+00 | 3.11e-02 | 1.16e-02 |
|  17   | 1680  |  147  | -5.3158e+00 | 1.57e+00 | 3.11e-02 | 1.16e-02 |
|  18   | 1785  |  153  | -5.3358e+00 | 1.10e+01 | 2.03e-02 | 6.83e-03 |
|  19   | 1890  |  166  | -5.3358e+00 | 1.28e+00 | 2.03e-02 | 6.83e-03 |
|  20   | 1995  |  171  | -5.3353e+00 | 8.98e+00 | 1.58e-02 | 3.94e-03 |
|  21   | 2100  |  183  | -5.3025e+00 | 8.98e+00 | 1.59e-02 | 2.19e-03 |
|  22   | 2205  |  191  | -5.2953e+00 | 8.98e+00 | 1.56e-02 | 1.13e-03 |
|  23   | 2310  |  200  | -5.2862e+00 | 8.98e+00 | 1.50e-02 | 5.91e-04 |
|  24   | 2415  |  206  | -5.2862e+00 | 8.98e+00 | 1.36e-02 | 3.01e-04 |
|  25   | 2415  |  206  | -5.2862e+00 | 4.49e+01 | 2.74e-02 | 3.01e-04 |
|  26   | 2520  |  218  | -5.4459e+00 | 4.49e+01 | 9.39e-03 | 5.50e-03 |
|  27   | 2625  |  227  | -5.4569e+00 | 4.49e+01 | 1.94e-03 | 3.79e-03 |
|  28   | 2730  |  236  | -5.4466e+00 | 4.49e+01 | 2.48e-03 | 1.91e-03 |
|  29   | 2835  |  240  | -5.4422e+00 | 4.49e+01 | 2.10e-03 | 9.61e-04 |
|  30   | 2940  |  248  | -5.4431e+00 | 4.49e+01 | 1.61e-03 | 5.30e-04 |
|  31   | 2940  |  248  | -5.4431e+00 | 2.24e+02 | 3.38e-03 | 5.30e-04 |
|  32   | 3045  |  255  | -5.4494e+00 | 2.24e+02 | 1.71e-03 | 6.15e-04 |
|  33   | 3150  |  263  | -5.4494e+00 | 2.24e+01 | 1.71e-03 | 6.15e-04 |
|  34   | 3255  |  271  | -5.4494e+00 | 2.24e+00 | 1.71e-03 | 6.15e-04 |
|  35   | 3360  |  280  | -5.4506e+00 | 2.64e+00 | 9.73e-04 | 3.37e-04 |
|  36   | 3465  |  290  | -5.4503e+00 | 2.64e+00 | 9.66e-04 | 1.74e-04 |
|  37   | 3570  |  296  | -5.4502e+00 | 2.64e+00 | 9.03e-04 | 9.04e-05 |
|  38   | 3675  |  303  | -5.4503e+00 | 2.64e+00 | 8.77e-04 | 4.67e-05 |
|  39   | 3780  |  305  | -5.4502e+00 | 2.64e+00 | 8.58e-04 | 2.35e-05 |
|  40   | 3885  |  310  | -5.4502e+00 | 2.64e-01 | 8.58e-04 | 2.35e-05 |
|  41   | 3990  |  312  | -5.4509e+00 | 5.28e-01 | 6.32e-04 | 2.20e-05 |
|  42   | 4095  |  315  | -5.4509e+00 | 9.31e-02 | 6.32e-04 | 2.20e-05 |
|  43   | 4200  |  317  | -5.4511e+00 | 1.86e-01 | 5.57e-04 | 2.14e-05 |
|  44   | 4305  |  319  | -5.4511e+00 | 9.31e-02 | 5.57e-04 | 2.14e-05 |
|  45   | 4410  |  321  | -5.4513e+00 | 1.86e-01 | 5.05e-04 | 2.10e-05 |
|  46   | 4515  |  324  | -5.4513e+00 | 8.25e-02 | 5.05e-04 | 2.10e-05 |
|  47   | 4620  |  326  | -5.4514e+00 | 1.65e-01 | 4.36e-04 | 2.07e-05 |
|  48   | 4620  |  326  | -5.4514e+00 | 1.00e+00 | 7.43e-04 | 2.07e-05 |
|  49   | 4725  |  338  | -5.4527e+00 | 7.00e+00 | 3.03e-04 | 5.07e-05 |
|  50   | 4830  |  355  | -5.4531e+00 | 9.98e+00 | 3.86e-04 | 2.56e-05 |
|  51   | 4935  |  360  | -5.4531e+00 | 9.98e+00 | 3.05e-04 | 1.30e-05 |
|  52   | 5040  |  366  | -5.4531e+00 | 1.08e+00 | 3.05e-04 | 1.30e-05 |
|  53   | 5145  |  372  | -5.4531e+00 | 5.39e-01 | 3.05e-04 | 1.30e-05 |
|  54   | 5250  |  377  | -5.4531e+00 | 1.43e-01 | 3.05e-04 | 1.30e-05 |
|  55   | 5355  |  380  | -5.4532e+00 | 2.85e-01 | 2.30e-04 | 1.32e-05 |
|  56   | 5460  |  386  | -5.4532e+00 | 8.39e-02 | 2.30e-04 | 1.32e-05 |
|  57   | 5565  |  390  | -5.4532e+00 | 8.39e-02 | 1.94e-04 | 1.31e-05 |
|  58   | 5670  |  394  | -5.4532e+00 | 1.43e-01 | 1.71e-04 | 1.29e-05 |
|  59   | 5775  |  399  | -5.4532e+00 | 5.30e-02 | 1.71e-04 | 1.29e-05 |
|  60   | 5880  |  402  | -5.4533e+00 | 5.30e-02 | 1.49e-04 | 1.27e-05 |
|  61   | 5985  |  405  | -5.4533e+00 | 2.99e-01 | 1.43e-04 | 1.25e-05 |
|  62   | 6090  |  410  | -5.4533e+00 | 1.50e-01 | 1.43e-04 | 1.25e-05 |
|  63   | 6195  |  415  | -5.4533e+00 | 7.48e-02 | 1.43e-04 | 1.25e-05 |
|  64   | 6300  |  417  | -5.4533e+00 | 1.50e-01 | 8.89e-05 | 1.22e-05 |
|  65   | 6405  |  422  | -5.4533e+00 | 7.48e-02 | 8.89e-05 | 1.22e-05 |
|  66   | 6510  |  424  | -5.4533e+00 | 7.48e-02 | 4.11e-05 | 1.20e-05 |
|  67   | 6510  |  424  | -5.4533e+00 | 1.00e+00 | 1.37e-04 | 1.20e-05 |
|  68   | 6615  |  439  | -5.4534e+00 | 7.00e+00 | 4.67e-05 | 8.50e-06 |
|  69   | 6720  |  446  | -5.4534e+00 | 7.00e+00 | 1.92e-05 | 4.42e-06 |
|  70   | 6825  |  454  | -5.4534e+00 | 7.00e+00 | 1.54e-05 | 2.24e-06 |
|  71   | 6930  |  468  | -5.4534e+00 | 7.00e+00 | 1.79e-05 | 1.13e-06 |
|  72   | 7035  |  473  | -5.4534e+00 | 7.00e+00 | 1.07e-05 | 5.70e-07 |
|  73   | 7140  |  478  | -5.4534e+00 | 7.00e+00 | 8.46e-06 | 2.87e-07 |
|  74   | 7245  |  479  | -5.4534e+00 | 7.00e+00 | 8.19e-06 | 1.44e-07 |
|  75   | 7350  |  480  | -5.4534e+00 | 7.00e+00 | 2.14e-06 | 8.38e-08 |
|  76   | 7350  |  480  | -5.4534e+00 | 3.50e+01 | 2.69e-05 | 8.38e-08 |
|  77   | 7455  |  494  | -5.4534e+00 | 3.50e+00 | 2.69e-05 | 8.38e-08 |
|  78   | 7560  |  508  | -5.4534e+00 | 3.50e-01 | 2.69e-05 | 8.38e-08 |
|  79   | 7665  |  521  | -5.4534e+00 | 3.50e-02 | 2.69e-05 | 8.38e-08 |
|  80   | 7770  |  530  | -5.4534e+00 | 2.45e-01 | 4.56e-06 | 1.34e-07 |
|  81   | 7875  |  541  | -5.4534e+00 | 2.45e-02 | 4.56e-06 | 1.34e-07 |
|  82   | 7980  |  547  | -5.4534e+00 | 2.45e-03 | 4.56e-06 | 1.34e-07 |
|  83   | 8085  |  549  | -5.4534e+00 | 1.71e-02 | 3.26e-06 | 1.34e-07 |
|  84   | 8190  |  552  | -5.4534e+00 | 1.71e-03 | 3.26e-06 | 1.34e-07 |
|  85   | 8295  |  553  | -5.4534e+00 | 1.20e-02 | 2.79e-06 | 1.34e-07 |
|  86   | 8400  |  555  | -5.4534e+00 | 5.73e-03 | 2.79e-06 | 1.34e-07 |
|  87   | 8505  |  556  | -5.4534e+00 | 2.86e-03 | 2.79e-06 | 1.34e-07 |
|  88   | 8610  |  557  | -5.4534e+00 | 2.00e-02 | 2.01e-06 | 1.34e-07 |
|  89   | 8715  |  558  | -5.4534e+00 | 1.00e-02 | 2.01e-06 | 1.34e-07 |
|  90   | 8820  |  559  | -5.4534e+00 | 2.00e-02 | 6.39e-07 | 1.35e-07 |
|  91   | 8925  |  560  | -5.4534e+00 | 1.40e-01 | 3.15e-06 | 1.35e-07 |
|  92   | 9030  |  565  | -5.4534e+00 | 1.75e-02 | 3.15e-06 | 1.35e-07 |
|  93   | 9135  |  566  | -5.4534e+00 | 2.59e-03 | 3.15e-06 | 1.35e-07 |
|  94   | 9240  |  567  | -5.4534e+00 | 1.81e-02 | 9.42e-07 | 1.34e-07 |
|  95   | 9345  |  568  | -5.4534e+00 | 3.62e-02 | 4.09e-06 | 1.34e-07 |
|  96   | 9450  |  571  | -5.4534e+00 | 5.81e-03 | 4.09e-06 | 1.34e-07 |
|  97   | 9555  |  572  | -5.4534e+00 | 1.16e-02 | 2.06e-06 | 1.34e-07 |
|  98   | 9660  |  573  | -5.4534e+00 | 2.32e-02 | 4.97e-06 | 1.33e-07 |
|  99   | 9765  |  576  | -5.4534e+00 | 2.32e-03 | 4.97e-06 | 1.33e-07 |
|  100  | 9870  |  577  | -5.4534e+00 | 1.63e-02 | 1.84e-06 | 1.33e-07 |
|  101  | 9975  |  578  | -5.4534e+00 | 3.25e-02 | 7.08e-06 | 1.33e-07 |
|  102  | 10080 |  581  | -5.4534e+00 | 6.71e-03 | 7.08e-06 | 1.33e-07 |
|  103  | 10185 |  582  | -5.4534e+00 | 2.60e-03 | 7.08e-06 | 1.33e-07 |
|  104  | 10290 |  583  | -5.4534e+00 | 1.82e-02 | 2.49e-06 | 1.33e-07 |
|  105  | 10395 |  584  | -5.4534e+00 | 9.11e-03 | 2.49e-06 | 1.33e-07 |
|  106  | 10500 |  585  | -5.4534e+00 | 1.82e-02 | 4.27e-06 | 1.33e-07 |
|  107  | 10605 |  586  | -5.4534e+00 | 1.82e-03 | 4.27e-06 | 1.33e-07 |
|  108  | 10710 |  587  | -5.4534e+00 | 1.28e-02 | 2.07e-06 | 1.33e-07 |
|  109  | 10815 |  588  | -5.4534e+00 | 2.55e-02 | 5.95e-06 | 1.32e-07 |
|  110  | 10920 |  590  | -5.4534e+00 | 2.77e-03 | 5.95e-06 | 1.32e-07 |
|  111  | 11025 |  591  | -5.4534e+00 | 5.53e-03 | 1.56e-06 | 1.32e-07 |
|  112  | 11130 |  592  | -5.4534e+00 | 3.87e-02 | 1.12e-06 | 1.32e-07 |
|  113  | 11235 |  593  | -5.4534e+00 | 7.75e-02 | 1.25e-05 | 1.31e-07 |
|  114  | 11340 |  596  | -5.4534e+00 | 5.42e-01 | 6.69e-06 | 1.28e-07 |
|  115  | 11445 |  602  | -5.4534e+00 | 3.80e+00 | 1.47e-05 | 1.25e-07 |
|  116  | 11550 |  608  | -5.4534e+00 | 3.80e+00 | 3.20e-06 | 8.19e-08 |
|  117  | 11655 |  622  | -5.4534e+00 | 3.80e+00 | 6.05e-07 | 4.15e-08 |
|  118  | 11655 |  622  | -5.4534e+00 | 1.90e+01 | 6.19e-06 | 4.15e-08 |
|  119  | 11760 |  637  | -5.4534e+00 | 1.90e+01 | 6.88e-07 | 2.29e-08 |
|  120  | 11865 |  642  | -5.4534e+00 | 1.90e+01 | 2.53e-07 | 1.15e-08 |
|  121  | 11865 |  642  | -5.4534e+00 | 9.49e+01 | 1.13e-06 | 1.15e-08 |
|  122  | 11970 |  647  | -5.4534e+00 | 9.49e+01 | 1.21e-07 | 5.89e-09 |
|  123  | 12075 |  648  | -5.4534e+00 | 9.49e+01 | 6.50e-07 | 2.97e-09 |
|  124  | 12180 |  649  | -5.4534e+00 | 9.49e+01 | 1.98e-07 | 1.50e-09 |
|  125  | 12285 |  650  | -5.4534e+00 | 9.49e+01 | 4.81e-07 | 7.73e-10 |
|  126  | 12390 |  651  | -5.4534e+00 | 9.49e+01 | 8.64e-07 | 4.33e-10 |
|  127  | 12495 |  653  | -5.4534e+00 | 9.49e+00 | 8.64e-07 | 4.33e-10 |
|  128  | 12600 |  655  | -5.4534e+00 | 9.49e-01 | 8.64e-07 | 4.33e-10 |
|  129  | 12705 |  657  | -5.4534e+00 | 2.47e-01 | 8.64e-07 | 4.33e-10 |
|  130  | 12810 |  660  | -5.4534e+00 | 4.47e-02 | 8.64e-07 | 4.33e-10 |
|  131  | 12915 |  663  | -5.4534e+00 | 4.47e-03 | 8.64e-07 | 4.33e-10 |
|  132  | 13020 |  667  | -5.4534e+00 | 4.47e-04 | 8.64e-07 | 4.33e-10 |
|  133  | 13125 |  668  | -5.4534e+00 | 4.47e-05 | 8.64e-07 | 4.33e-10 |
|  134  | 13230 |  669  | -5.4534e+00 | 2.24e-05 | 8.64e-07 | 4.33e-10 |
|  135  | 13335 |  670  | -5.4534e+00 | 2.24e-05 | 8.50e-07 | 4.33e-10 |
|  136  | 13440 |  671  | -5.4534e+00 | 2.24e-05 | 8.33e-07 | 4.33e-10 |
|  137  | 13545 |  672  | -5.4534e+00 | 2.24e-05 | 8.18e-07 | 4.33e-10 |
|  138  | 13650 |  673  | -5.4534e+00 | 2.24e-05 | 8.03e-07 | 4.33e-10 |
|  139  | 13755 |  674  | -5.4534e+00 | 2.24e-05 | 7.89e-07 | 4.33e-10 |
|  140  | 13860 |  675  | -5.4534e+00 | 2.24e-05 | 7.72e-07 | 4.33e-10 |
|  141  | 13965 |  676  | -5.4534e+00 | 2.24e-05 | 7.59e-07 | 4.33e-10 |
|  142  | 14070 |  677  | -5.4534e+00 | 2.24e-05 | 7.43e-07 | 4.33e-10 |
|  143  | 14175 |  678  | -5.4534e+00 | 2.24e-05 | 7.27e-07 | 4.33e-10 |
|  144  | 14280 |  679  | -5.4534e+00 | 2.24e-05 | 7.12e-07 | 4.33e-10 |
|  145  | 14385 |  680  | -5.4534e+00 | 1.12e-05 | 7.12e-07 | 4.33e-10 |
|  146  | 14490 |  681  | -5.4534e+00 | 2.24e-05 | 7.05e-07 | 4.33e-10 |
|  147  | 14595 |  682  | -5.4534e+00 | 2.24e-05 | 6.89e-07 | 4.33e-10 |
|  148  | 14700 |  683  | -5.4534e+00 | 2.24e-05 | 6.74e-07 | 4.33e-10 |
|  149  | 14805 |  684  | -5.4534e+00 | 2.24e-05 | 6.59e-07 | 4.33e-10 |
|  150  | 14910 |  685  | -5.4534e+00 | 1.12e-05 | 6.59e-07 | 4.33e-10 |
|  151  | 15015 |  686  | -5.4534e+00 | 2.24e-05 | 6.52e-07 | 4.33e-10 |
|  152  | 15120 |  687  | -5.4534e+00 | 2.24e-05 | 6.37e-07 | 4.33e-10 |
|  153  | 15225 |  688  | -5.4534e+00 | 1.12e-05 | 6.37e-07 | 4.33e-10 |
|  154  | 15330 |  689  | -5.4534e+00 | 2.24e-05 | 6.28e-07 | 4.33e-10 |
|  155  | 15435 |  690  | -5.4534e+00 | 2.24e-05 | 6.14e-07 | 4.33e-10 |
|  156  | 15540 |  691  | -5.4534e+00 | 1.12e-05 | 6.14e-07 | 4.33e-10 |
|  157  | 15645 |  692  | -5.4534e+00 | 2.24e-05 | 6.07e-07 | 4.33e-10 |
|  158  | 15750 |  693  | -5.4534e+00 | 1.12e-05 | 6.07e-07 | 4.33e-10 |
|  159  | 15855 |  694  | -5.4534e+00 | 2.24e-05 | 5.99e-07 | 4.33e-10 |
|  160  | 15960 |  695  | -5.4534e+00 | 1.12e-05 | 5.99e-07 | 4.33e-10 |
|  161  | 16065 |  696  | -5.4534e+00 | 2.24e-05 | 5.92e-07 | 4.33e-10 |
|  162  | 16170 |  697  | -5.4534e+00 | 1.12e-05 | 5.92e-07 | 4.33e-10 |
|  163  | 16275 |  698  | -5.4534e+00 | 2.24e-05 | 5.84e-07 | 4.33e-10 |
|  164  | 16380 |  699  | -5.4534e+00 | 1.12e-05 | 5.84e-07 | 4.33e-10 |
|  165  | 16485 |  700  | -5.4534e+00 | 2.24e-05 | 5.76e-07 | 4.33e-10 |
|  166  | 16590 |  701  | -5.4534e+00 | 1.12e-05 | 5.76e-07 | 4.33e-10 |
|  167  | 16695 |  702  | -5.4534e+00 | 2.24e-05 | 5.68e-07 | 4.33e-10 |
|  168  | 16800 |  703  | -5.4534e+00 | 1.12e-05 | 5.68e-07 | 4.33e-10 |
|  169  | 16905 |  704  | -5.4534e+00 | 2.24e-05 | 5.62e-07 | 4.33e-10 |
|  170  | 17010 |  705  | -5.4534e+00 | 1.12e-05 | 5.62e-07 | 4.33e-10 |
|  171  | 17115 |  706  | -5.4534e+00 | 2.24e-05 | 5.53e-07 | 4.33e-10 |
|  172  | 17220 |  707  | -5.4534e+00 | 1.12e-05 | 5.53e-07 | 4.33e-10 |
|  173  | 17325 |  708  | -5.4534e+00 | 2.24e-05 | 5.46e-07 | 4.33e-10 |
|  174  | 17430 |  709  | -5.4534e+00 | 1.12e-05 | 5.46e-07 | 4.33e-10 |
|  175  | 17535 |  710  | -5.4534e+00 | 2.24e-05 | 5.39e-07 | 4.33e-10 |
|  176  | 17640 |  711  | -5.4534e+00 | 1.12e-05 | 5.39e-07 | 4.33e-10 |
|  177  | 17745 |  712  | -5.4534e+00 | 2.24e-05 | 5.32e-07 | 4.33e-10 |
|  178  | 17850 |  713  | -5.4534e+00 | 1.12e-05 | 5.32e-07 | 4.33e-10 |
|  179  | 17955 |  714  | -5.4534e+00 | 2.24e-05 | 5.25e-07 | 4.33e-10 |
|  180  | 18060 |  715  | -5.4534e+00 | 1.12e-05 | 5.25e-07 | 4.33e-10 |
|  181  | 18165 |  716  | -5.4534e+00 | 2.24e-05 | 5.16e-07 | 4.33e-10 |
|  182  | 18270 |  717  | -5.4534e+00 | 1.12e-05 | 5.16e-07 | 4.33e-10 |
|  183  | 18375 |  718  | -5.4534e+00 | 2.24e-05 | 5.08e-07 | 4.33e-10 |
|  184  | 18480 |  719  | -5.4534e+00 | 1.12e-05 | 5.08e-07 | 4.33e-10 |
|  185  | 18585 |  720  | -5.4534e+00 | 2.24e-05 | 5.02e-07 | 4.33e-10 |
|  186  | 18690 |  721  | -5.4534e+00 | 1.12e-05 | 5.02e-07 | 4.33e-10 |
|  187  | 18795 |  722  | -5.4534e+00 | 2.24e-05 | 4.94e-07 | 4.33e-10 |
|  188  | 18900 |  723  | -5.4534e+00 | 1.12e-05 | 4.94e-07 | 4.33e-10 |
|  189  | 19005 |  724  | -5.4534e+00 | 2.24e-05 | 4.87e-07 | 4.33e-10 |
|  190  | 19110 |  725  | -5.4534e+00 | 1.12e-05 | 4.87e-07 | 4.33e-10 |
|  191  | 19215 |  726  | -5.4534e+00 | 2.24e-05 | 4.80e-07 | 4.33e-10 |
|  192  | 19320 |  727  | -5.4534e+00 | 1.12e-05 | 4.80e-07 | 4.33e-10 |
|  193  | 19425 |  728  | -5.4534e+00 | 1.12e-05 | 4.72e-07 | 4.33e-10 |
|  194  | 19530 |  729  | -5.4534e+00 | 2.24e-05 | 4.66e-07 | 4.33e-10 |
|  195  | 19635 |  730  | -5.4534e+00 | 1.12e-05 | 4.66e-07 | 4.33e-10 |
|  196  | 19740 |  731  | -5.4534e+00 | 2.24e-05 | 4.59e-07 | 4.33e-10 |
|  197  | 19845 |  732  | -5.4534e+00 | 1.12e-05 | 4.59e-07 | 4.33e-10 |
|  198  | 19950 |  733  | -5.4534e+00 | 2.24e-05 | 4.50e-07 | 4.33e-10 |
|  199  | 20055 |  734  | -5.4534e+00 | 1.12e-05 | 4.50e-07 | 4.33e-10 |
|  200  | 20160 |  735  | -5.4534e+00 | 2.24e-05 | 4.44e-07 | 4.33e-10 |
|  201  | 20265 |  736  | -5.4534e+00 | 1.12e-05 | 4.44e-07 | 4.33e-10 |
|  202  | 20370 |  737  | -5.4534e+00 | 2.24e-05 | 4.36e-07 | 4.33e-10 |
|  203  | 20475 |  738  | -5.4534e+00 | 1.12e-05 | 4.36e-07 | 4.33e-10 |
|  204  | 20580 |  739  | -5.4534e+00 | 1.12e-05 | 4.29e-07 | 4.33e-10 |
|  205  | 20685 |  740  | -5.4534e+00 | 2.24e-05 | 4.21e-07 | 4.33e-10 |
|  206  | 20790 |  741  | -5.4534e+00 | 1.12e-05 | 4.21e-07 | 4.33e-10 |
|  207  | 20895 |  742  | -5.4534e+00 | 2.24e-05 | 4.14e-07 | 4.33e-10 |
|  208  | 21000 |  743  | -5.4534e+00 | 1.12e-05 | 4.14e-07 | 4.33e-10 |
|  209  | 21105 |  744  | -5.4534e+00 | 2.24e-05 | 4.06e-07 | 4.33e-10 |
|  210  | 21210 |  745  | -5.4534e+00 | 1.12e-05 | 4.06e-07 | 4.33e-10 |
|  211  | 21315 |  746  | -5.4534e+00 | 2.24e-05 | 4.00e-07 | 4.33e-10 |
|  212  | 21420 |  747  | -5.4534e+00 | 1.12e-05 | 4.00e-07 | 4.33e-10 |
|  213  | 21525 |  748  | -5.4534e+00 | 2.24e-05 | 3.92e-07 | 4.33e-10 |
|  214  | 21630 |  749  | -5.4534e+00 | 1.12e-05 | 3.92e-07 | 4.33e-10 |
|  215  | 21735 |  750  | -5.4534e+00 | 1.12e-05 | 3.85e-07 | 4.33e-10 |
|  216  | 21840 |  751  | -5.4534e+00 | 2.24e-05 | 3.77e-07 | 4.33e-10 |
|  217  | 21945 |  752  | -5.4534e+00 | 1.12e-05 | 3.77e-07 | 4.33e-10 |
|  218  | 22050 |  753  | -5.4534e+00 | 2.24e-05 | 3.71e-07 | 4.33e-10 |
|  219  | 22155 |  754  | -5.4534e+00 | 1.12e-05 | 3.71e-07 | 4.33e-10 |
|  220  | 22260 |  755  | -5.4534e+00 | 2.24e-05 | 3.62e-07 | 4.33e-10 |
|  221  | 22365 |  756  | -5.4534e+00 | 1.12e-05 | 3.62e-07 | 4.33e-10 |
|  222  | 22470 |  757  | -5.4534e+00 | 1.12e-05 | 3.55e-07 | 4.33e-10 |
|  223  | 22575 |  758  | -5.4534e+00 | 1.12e-05 | 3.48e-07 | 4.33e-10 |
|  224  | 22680 |  759  | -5.4534e+00 | 1.12e-05 | 3.42e-07 | 4.33e-10 |
|  225  | 22785 |  760  | -5.4534e+00 | 1.12e-05 | 3.36e-07 | 4.33e-10 |
|  226  | 22890 |  761  | -5.4534e+00 | 1.12e-05 | 3.28e-07 | 4.33e-10 |
|  227  | 22995 |  762  | -5.4534e+00 | 1.12e-05 | 3.22e-07 | 4.33e-10 |
|  228  | 23100 |  763  | -5.4534e+00 | 2.24e-05 | 3.13e-07 | 4.33e-10 |
|  229  | 23205 |  764  | -5.4534e+00 | 1.12e-05 | 3.13e-07 | 4.33e-10 |
|  230  | 23310 |  765  | -5.4534e+00 | 1.12e-05 | 3.06e-07 | 4.33e-10 |
|  231  | 23415 |  766  | -5.4534e+00 | 2.24e-05 | 3.00e-07 | 4.33e-10 |
|  232  | 23520 |  767  | -5.4534e+00 | 1.12e-05 | 3.00e-07 | 4.33e-10 |
|  233  | 23625 |  768  | -5.4534e+00 | 2.24e-05 | 2.93e-07 | 4.33e-10 |
|  234  | 23730 |  769  | -5.4534e+00 | 1.12e-05 | 2.93e-07 | 4.33e-10 |
|  235  | 23835 |  770  | -5.4534e+00 | 1.12e-05 | 2.86e-07 | 4.33e-10 |
|  236  | 23940 |  771  | -5.4534e+00 | 1.12e-05 | 2.80e-07 | 4.33e-10 |
|  237  | 24045 |  772  | -5.4534e+00 | 1.12e-05 | 2.73e-07 | 4.33e-10 |
|  238  | 24150 |  773  | -5.4534e+00 | 2.24e-05 | 2.67e-07 | 4.33e-10 |
|  239  | 24255 |  774  | -5.4534e+00 | 1.12e-05 | 2.67e-07 | 4.33e-10 |
|  240  | 24360 |  775  | -5.4534e+00 | 1.12e-05 | 2.60e-07 | 4.33e-10 |
|  241  | 24465 |  776  | -5.4534e+00 | 1.12e-05 | 2.53e-07 | 4.33e-10 |
|  242  | 24570 |  777  | -5.4534e+00 | 1.12e-05 | 2.46e-07 | 4.33e-10 |
|  243  | 24675 |  778  | -5.4534e+00 | 1.12e-05 | 2.42e-07 | 4.33e-10 |
|  244  | 24780 |  779  | -5.4534e+00 | 1.12e-05 | 2.35e-07 | 4.33e-10 |
|  245  | 24885 |  780  | -5.4534e+00 | 1.12e-05 | 2.27e-07 | 4.33e-10 |
|  246  | 24990 |  781  | -5.4534e+00 | 2.24e-05 | 2.20e-07 | 4.33e-10 |
|  247  | 25095 |  782  | -5.4534e+00 | 1.12e-05 | 2.20e-07 | 4.33e-10 |
|  248  | 25200 |  783  | -5.4534e+00 | 1.12e-05 | 2.15e-07 | 4.33e-10 |
|  249  | 25305 |  784  | -5.4534e+00 | 1.12e-05 | 2.09e-07 | 4.33e-10 |
|  250  | 25410 |  785  | -5.4534e+00 | 1.12e-05 | 2.02e-07 | 4.33e-10 |
|  251  | 25515 |  786  | -5.4534e+00 | 1.12e-05 | 1.96e-07 | 4.33e-10 |
|  252  | 25620 |  787  | -5.4534e+00 | 1.12e-05 | 1.89e-07 | 4.33e-10 |
|  253  | 25725 |  788  | -5.4534e+00 | 1.12e-05 | 1.83e-07 | 4.33e-10 |
|  254  | 25830 |  789  | -5.4534e+00 | 1.12e-05 | 1.79e-07 | 4.33e-10 |
|  255  | 25935 |  790  | -5.4534e+00 | 1.12e-05 | 1.73e-07 | 4.33e-10 |
|  256  | 26040 |  791  | -5.4534e+00 | 1.12e-05 | 1.67e-07 | 4.33e-10 |
|  257  | 26145 |  792  | -5.4534e+00 | 1.12e-05 | 1.61e-07 | 4.33e-10 |
|  258  | 26250 |  793  | -5.4534e+00 | 1.12e-05 | 1.56e-07 | 4.33e-10 |
|  259  | 26355 |  794  | -5.4534e+00 | 1.12e-05 | 1.49e-07 | 4.33e-10 |
|  260  | 26460 |  795  | -5.4534e+00 | 1.12e-05 | 1.45e-07 | 4.33e-10 |
|  261  | 26565 |  796  | -5.4534e+00 | 1.12e-05 | 1.39e-07 | 4.33e-10 |
|  262  | 26670 |  797  | -5.4534e+00 | 1.12e-05 | 1.34e-07 | 4.33e-10 |
|  263  | 26775 |  798  | -5.4534e+00 | 2.24e-05 | 1.29e-07 | 4.33e-10 |
|  264  | 26880 |  799  | -5.4534e+00 | 1.12e-05 | 1.29e-07 | 4.33e-10 |
|  265  | 26985 |  800  | -5.4534e+00 | 2.24e-05 | 1.25e-07 | 4.33e-10 |
|  266  | 27090 |  801  | -5.4534e+00 | 1.12e-05 | 1.25e-07 | 4.33e-10 |
|  267  | 27195 |  802  | -5.4534e+00 | 1.12e-05 | 1.21e-07 | 4.33e-10 |
|  268  | 27300 |  803  | -5.4534e+00 | 2.24e-05 | 1.15e-07 | 4.33e-10 |
|  269  | 27405 |  804  | -5.4534e+00 | 1.12e-05 | 1.15e-07 | 4.33e-10 |
|  270  | 27510 |  805  | -5.4534e+00 | 1.12e-05 | 1.11e-07 | 4.33e-10 |
|  271  | 27615 |  806  | -5.4534e+00 | 2.24e-05 | 1.07e-07 | 4.33e-10 |
|  272  | 27720 |  807  | -5.4534e+00 | 1.12e-05 | 1.07e-07 | 4.33e-10 |
|  273  | 27825 |  808  | -5.4534e+00 | 1.12e-05 | 1.02e-07 | 4.33e-10 |
|  274  | 27930 |  809  | -5.4534e+00 | 2.24e-05 | 9.73e-08 | 4.33e-10 |
|  275  | 28035 |  810  | -5.4534e+00 | 1.12e-05 | 9.73e-08 | 4.33e-10 |
|  276  | 28140 |  811  | -5.4534e+00 | 2.24e-05 | 9.45e-08 | 4.33e-10 |
|  277  | 28245 |  812  | -5.4534e+00 | 1.12e-05 | 9.45e-08 | 4.33e-10 |
|  278  | 28350 |  813  | -5.4534e+00 | 2.24e-05 | 9.02e-08 | 4.33e-10 |
|  279  | 28455 |  814  | -5.4534e+00 | 1.12e-05 | 9.02e-08 | 4.33e-10 |
|  280  | 28560 |  815  | -5.4534e+00 | 2.24e-05 | 8.47e-08 | 4.33e-10 |
|  281  | 28665 |  816  | -5.4534e+00 | 1.12e-05 | 8.47e-08 | 4.33e-10 |
|  282  | 28770 |  817  | -5.4534e+00 | 2.24e-05 | 8.13e-08 | 4.33e-10 |
|  283  | 28875 |  818  | -5.4534e+00 | 2.24e-05 | 7.54e-08 | 4.33e-10 |
|  284  | 28980 |  819  | -5.4534e+00 | 2.24e-05 | 6.80e-08 | 4.33e-10 |
|  285  | 29085 |  820  | -5.4534e+00 | 2.24e-05 | 6.09e-08 | 4.33e-10 |
|  286  | 29190 |  821  | -5.4534e+00 | 4.47e-05 | 5.49e-08 | 4.33e-10 |
|  287  | 29295 |  822  | -5.4534e+00 | 2.24e-05 | 5.49e-08 | 4.33e-10 |
|  288  | 29400 |  823  | -5.4534e+00 | 4.47e-05 | 5.06e-08 | 4.33e-10 |
|  289  | 29505 |  824  | -5.4534e+00 | 2.24e-05 | 5.06e-08 | 4.33e-10 |
|  290  | 29610 |  825  | -5.4534e+00 | 4.47e-05 | 4.59e-08 | 4.33e-10 |
|  291  | 29715 |  826  | -5.4534e+00 | 2.24e-05 | 4.59e-08 | 4.33e-10 |
|  292  | 29820 |  827  | -5.4534e+00 | 4.47e-05 | 4.23e-08 | 4.33e-10 |
|  293  | 29925 |  828  | -5.4534e+00 | 4.47e-05 | 3.43e-08 | 4.33e-10 |
|  294  | 30030 |  829  | -5.4534e+00 | 8.95e-05 | 2.78e-08 | 4.33e-10 |
|  295  | 30135 |  830  | -5.4534e+00 | 8.95e-05 | 1.79e-08 | 4.33e-10 |
|  296  | 30240 |  831  | -5.4534e+00 | 1.79e-04 | 1.14e-08 | 4.33e-10 |
|  297  | 30345 |  832  | -5.4534e+00 | 3.58e-04 | 3.42e-09 | 4.33e-10 |

`gtol` termination condition is satisfied.
Number of iterations: 297, function evaluations: 30345, CG iterations: 832, optimality: 3.42e-09, constraint violation: 4.33e-10, execution time:  4.5 s.

           C         Y         K         I
0   0.950000  1.000000  3.000000  0.050000
1   0.950960  1.021564  2.990000  0.070603
2   0.966442  1.045406  3.000803  0.078963
3   0.983588  1.070524  3.019750  0.086936
4   1.002343  1.096918  3.046291  0.094576
5   1.022660  1.124592  3.079941  0.101931
6   1.044501  1.153547  3.120274  0.109047
7   1.067828  1.183789  3.166915  0.115961
8   1.092613  1.215322  3.219537  0.122709
9   1.118832  1.248155  3.277855  0.129323
10  1.146466  1.282296  3.341621  0.135829
11  1.175502  1.317756  3.410618  0.142254
12  1.205931  1.354549  3.484660  0.148619
13  1.237748  1.392690  3.563585  0.154942
14  1.270955  1.432195  3.647255  0.161240
15  1.305556  1.473083  3.735550  0.167526
16  1.341562  1.515374  3.828365  0.173812
17  1.378985  1.559090  3.925610  0.180104
18  1.417845  1.604254  4.027202  0.186409
19  1.458165  1.650890  4.133067  0.192725
20  1.499973  1.699025  4.243131  0.199052
21  1.543302  1.748682  4.357320  0.205380
22  1.588190  1.799888  4.475554  0.211698
23  1.634684  1.852669  4.597741  0.217985
24  1.682836  1.907049  4.723772  0.224213
25  1.720375  1.963051  4.853509  0.242676

W=5.453426258078583

We converge to the correct solution, but at a rather high price: about 4.5 seconds. That is about a factor of 10 more than we would expect to see. The number of iterations is large (297), but more alarming is the number of function evaluations: 30,345. That is extreme for a small, easy model like this.

One way to help the NLP solver is to provide analytic gradients for both the objective and the nonlinear constraint. In this case that is not too difficult. For many practical problems, however, providing correct gradients can be rather difficult and time-consuming. The objective is \[f  = -\sum_t \color{darkblue}\beta_t \log \color{darkred}C_t\] We can form: \[\frac{\partial f}{\partial \color{darkred}C_t} = - \frac{\color{darkblue}\beta_t}{\color{darkred}C_t}\] The gradients w.r.t. the other variables are zero. The gradients of the production function \[g(\color{darkred}K_t) = \color{darkblue}a\color{darkred}K_t^{\color{darkblue}b} \color{darkblue}L_t^{1-\color{darkblue}b}\] are also easy: \[\frac{\partial g}{\partial \color{darkred}K_t} =  \color{darkblue}a  \color{darkblue}b \color{darkred}K_t^{\color{darkblue}b-1} \color{darkblue}L_t^{1-\color{darkblue}b}\] To achieve better performance we also add:
  • A feasible initial point. This can often help. Constructing one is not too difficult in this case: we can simply iterate over the equations. (In general it may be quite difficult.)
  • Convert the problem into an equality-constrained problem. This may cause this particular solver to use a different algorithm. We can always convert inequalities into equalities by adding slack variables; due to the special structure of this model, we don't need a slack here.
  • Use the manual presolve approach shown before. 
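As an illustration, the analytic gradients derived above can be written as plain NumPy functions. This is a minimal sketch, reusing the data and variable layout of the first trust-constr implementation; the names f_grad and prod_jac are mine:

```python
import numpy as np

# model data (same as the first trust-constr implementation)
T = 26
ρ, g, δ = 0.05, 0.03, 0.02
K0, I0, C0, L0, b = 3.00, 0.05, 0.95, 1.00, 0.25
a = (C0 + I0) / (K0**b * L0**(1 - b))
t = np.arange(T)
β = (1 + ρ) ** (-t)
β[T - 1] = (1 / ρ) * (1 + ρ) ** (1 - T + 1)
L = (1 + g) ** t * L0

# variable layout: x = [C, Y, K, I], each of length T

def f(x):
    # objective: minimize -sum(beta[t]*log(C[t]))
    return -np.dot(β, np.log(x[:T]))

def f_grad(x):
    # df/dC[t] = -beta[t]/C[t]; zero w.r.t. Y, K, I
    grad = np.zeros_like(x)
    grad[:T] = -β / x[:T]
    return grad

def prod(x):
    # constraint body: Y[t] - a*K[t]**b*L[t]**(1-b)
    Yt, Kt = x[T:2*T], x[2*T:3*T]
    return Yt - a * Kt**b * L**(1 - b)

def prod_jac(x):
    # d/dY[t] = 1, d/dK[t] = -a*b*K[t]**(b-1)*L[t]**(1-b); zero w.r.t. C, I
    Kt = x[2*T:3*T]
    J = np.zeros((T, 4*T))
    J[np.arange(T), T + np.arange(T)] = 1.0
    J[np.arange(T), 2*T + np.arange(T)] = -a * b * Kt**(b - 1) * L**(1 - b)
    return J
```

These would be passed to scipy via the jac arguments: opt.minimize(..., jac=f_grad) and opt.NonlinearConstraint(prod, 0, 0, jac=prod_jac).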

Optimized version
#
# Solving the Ramsey model using scipy.optimize.minimize with the trust-constr method
# We use a class to keep all data and functions together. This allows us to use
# the model parameters in the function evaluation without the extra "args".  
#
# Optimized version:
#    Substitute out fixed variables and y[t]
#    Provide analytic gradients
#    Equality constrained version
#    Initial feasible solution
#


import numpy as np
import scipy.optimize as opt
import pandas as pd


class RamseyModel:

    #----------------------------------------------------------------
    # data
    #----------------------------------------------------------------

    T = 25     # time periods (1..25). 
    ρ = 0.05   # discount factor
    g = 0.03   # labor growth rate
    δ = 0.02   # capital depreciation factor
    K0 = 3.00  # initial capital
    I0 = 0.05  # initial investment
    C0 = 0.95  # initial consumption
    L0 = 1.00  # initial labor
    b = 0.25   # Cobb-Douglas coefficient
    a = (C0 + I0)/ (K0**b * L0**(1-b))     # Cobb-Douglas coefficient

    # time indices: 1..T 
    # we do t=0 separately (that is exogenous)
    t = np.arange(1,T+1)

    # weight factor for future utilities
    β = (1+ρ)**(-t)
    β[T-1] = (1/ρ) * (1+ρ)**(1-T)

    # labor is exogenously determined using an exponential growth process
    L = (1+g)**t * L0

    # precompute L^(1-b)
    Lb = L**(1-b)

    #----------------------------------------------------------------
    # model
    #----------------------------------------------------------------

    # mapping of variables into a single vector x:
    # x = [C[0],...,C[T-1],K[0],...,K[T-1],I[0],...,I[T-1]]
    # note: Y is substituted out 

    # lower bounds
    C_lo = 0.001*np.ones(T)
    K_lo = 0.001*np.ones(T)
    K_lo[0] = (1-δ)*K0 + I0
    I_lo = np.zeros(T)
    # combine lower bounds into a single vector
    lo = np.concatenate((C_lo,K_lo,I_lo))

    # upper bounds
    C_up = 1000*np.ones(T) 
    K_up = 1000*np.ones(T)
    K_up[0] = (1-δ)*K0 + I0
    I_up = 1000*np.ones(T)
    # combine upper bounds into a single vector
    up = np.concatenate((C_up,K_up,I_up))
  
    # Bounds
    bounds = opt.Bounds(lo,up)

    # initial values 
    x0 = np.concatenate((C0*np.ones(T),K0*np.ones(T),I0*np.ones(T)))

    # create an initial feasible solution by iterating on the equations
    def init_feas_sol(self):
        C_init = np.zeros(self.T)
        K_init = np.zeros(self.T)
        I_init = np.zeros(self.T)
        C_init[0] = self.C0
        K_init[0] = self.K0  
        I_init[0] = self.I0
        for t in range(1,self.T):
            K_init[t] = (1-self.δ)*K_init[t-1] + I_init[t-1]
            I_init[t] = (self.g + self.δ)*K_init[t]
            y = self.a * (K_init[t]**self.b) * (self.Lb[t])
            C_init[t] = y - I_init[t]
        return np.concatenate((C_init,K_init,I_init))

    # objective function 
    # W =e=sum(t,beta(t)*log(C(t)));
    # we minimize -W
    # Note: we dropped the first period so beware when comparing objective values. 
    def f(self, x):        
        Ct = x[0:self.T] 
        return -np.dot(self.β,np.log(Ct)) 

    # production function
    # C(t)+I(t) =e= a * (K(t)**b) * (L(t)**(1-b))
    def prod(self,x):
        Ct = x[0:self.T]
        Kt = x[self.T:2*self.T]
        It = x[2*self.T:3*self.T]
        return  Ct+It - self.a*np.multiply(Kt**self.b,self.Lb)

    # capital accumulation
    # K(t+1) =e= (1-delta)*K(t) + I(t)
    # K(0)   =e= (1-delta)*K0 + I0 (this goes into the rhs, i.e. the bounds) 
    # =>          C  K   I
    #      A  = [ 0  AK  I ] 
    #      AK = (1-delta)*I - U
    #      U is the identity matrix shifted by one period, with zeros in the first column 
    #      after this drop the last row.
    #
    def A1(self):
        AC = np.zeros((self.T,self.T))
        AK = (1-self.δ)*np.identity(self.T) - np.eye(self.T,k=1)
        AI = np.identity(self.T)
        return np.concatenate((AC,AK,AI),axis=1)[:-1,:]    

    # I(tlast) =g= (g+delta)*K(tlast)
    # => I[24] - (g+delta)*K[24] >= 0
    def A2(self):
        a =  np.zeros((1,3*self.T))
        a[0,3*self.T-1] = 1
        a[0,2*self.T-1] = -(self.g + self.δ)
        return a

    def constraints(self):
        c1 = opt.NonlinearConstraint(self.prod,0,0,jac=self.jacprod)
        c2 = opt.LinearConstraint(self.A1(),0,0)
        # c3 = opt.LinearConstraint(self.A2(),0,np.inf)
        c3 = opt.LinearConstraint(self.A2(),0,0)
        return [c1,c2,c3]

    def gradobj(self,x):
        Ct = x[0:self.T]
        gradC = -self.β / Ct
        gradRest = np.zeros(2*self.T)
        return np.concatenate((gradC,gradRest))
    

    def jacprod(self,x):
        Kt = x[self.T:2*self.T]
        JacCt = np.identity(self.T)
        JacKt = np.diag(-self.a * self.b * (Kt**(self.b-1)) * (self.Lb))
        JacIt = np.identity(self.T)
        return np.concatenate((JacCt, JacKt, JacIt), axis=1)


m = RamseyModel()
x0 = m.init_feas_sol()
# solve with the trust-constr method
result = opt.minimize(m.f,x0,bounds=m.bounds,jac=m.gradobj,constraints=m.constraints(),method='trust-constr',options={'verbose':2})  
#print(result) 

#----------------------------------------------------------------
# reporting
# store results in a dataframe for reporting
# insert initial values for t=0 (exogenous)
#----------------------------------------------------------------

df = pd.DataFrame(np.reshape(result.x,(3,m.T)).T, columns=['C','K','I'])
df0 = pd.DataFrame({'C':[m.C0],'K':[m.K0],'I':[m.I0]})
df  = pd.concat((df0,df), ignore_index=True)
L = np.concatenate((np.array([m.L0]),m.L))
df['Y'] = m.a*np.multiply(df['K']**m.b,L**(1-m.b))
print()
print(df)

β = np.concatenate((np.array([1]),m.β))
W = float(np.dot(β,np.log(df['C'])))
print()
print(f"{W=}")
Results
PS C:\projects\OptimizationModels\Ramsey Growth Model> py .\ramseyfast.py                                                                                                                                             
| niter |f evals|CG iter|  obj func   |tr radius |   opt    |  c viol  |
|-------|-------|-------|-------------|----------|----------|----------|
|   1   |   1   |   0   | -5.4318e+00 | 1.00e+00 | 2.46e-01 | 2.24e-02 |
|   2   |   2   |   2   | -5.6836e+00 | 6.64e+00 | 1.97e-01 | 2.49e-01 |
|   3   |   3   |   4   | -5.9904e+00 | 2.35e+01 | 4.22e-02 | 5.42e-01 |
|   4   |   4   |   9   | -5.5997e+00 | 4.70e+01 | 4.82e-02 | 1.82e-01 |
|   5   |   5   |  11   | -5.4764e+00 | 4.70e+01 | 4.96e-02 | 7.36e-02 |
|   6   |   6   |  14   | -5.2683e+00 | 4.70e+01 | 4.41e-02 | 3.35e-02 |
|   7   |   7   |  18   | -5.0290e+00 | 4.70e+01 | 5.01e-02 | 1.26e-02 |
|   8   |   8   |  23   | -4.7047e+00 | 4.70e+01 | 6.26e-02 | 7.39e-03 |
|   9   |   8   |  23   | -4.7047e+00 | 2.35e+02 | 7.94e-02 | 7.39e-03 |
|  10   |   9   |  29   | -5.6795e+00 | 2.35e+02 | 7.34e-03 | 3.98e-02 |
|  11   |  10   |  40   | -5.5059e+00 | 2.35e+02 | 8.05e-03 | 2.15e-02 |
|  12   |  11   |  43   | -5.4582e+00 | 2.35e+02 | 4.95e-03 | 1.24e-02 |
|  13   |  11   |  43   | -5.4582e+00 | 1.18e+03 | 1.89e-02 | 1.24e-02 |
|  14   |  12   |  49   | -5.5727e+00 | 1.18e+03 | 1.56e-03 | 1.41e-02 |
|  15   |  13   |  54   | -5.5401e+00 | 1.18e+03 | 1.21e-03 | 8.18e-03 |
|  16   |  14   |  56   | -5.5190e+00 | 1.18e+03 | 1.12e-03 | 4.54e-03 |
|  17   |  15   |  60   | -5.5078e+00 | 1.18e+03 | 7.59e-04 | 2.36e-03 |
|  18   |  15   |  60   | -5.5078e+00 | 5.88e+03 | 3.40e-03 | 2.36e-03 |
|  19   |  16   |  64   | -5.5141e+00 | 5.88e+03 | 9.02e-04 | 1.97e-03 |
|  20   |  17   |  69   | -5.5089e+00 | 5.88e+03 | 7.46e-04 | 1.02e-03 |
|  21   |  18   |  75   | -5.5066e+00 | 5.88e+03 | 5.34e-04 | 5.24e-04 |
|  22   |  19   |  80   | -5.5054e+00 | 5.88e+03 | 3.89e-04 | 2.67e-04 |
|  23   |  19   |  80   | -5.5054e+00 | 2.94e+04 | 7.70e-04 | 2.67e-04 |
|  24   |  20   |  85   | -5.5056e+00 | 2.94e+04 | 2.72e-04 | 1.71e-04 |
|  25   |  21   |  89   | -5.5051e+00 | 2.94e+04 | 2.31e-04 | 8.64e-05 |
|  26   |  22   |  94   | -5.5049e+00 | 2.94e+04 | 1.54e-04 | 4.36e-05 |
|  27   |  23   |  97   | -5.5048e+00 | 2.94e+04 | 1.33e-04 | 2.19e-05 |
|  28   |  24   |  101  | -5.5048e+00 | 2.94e+04 | 6.98e-05 | 1.12e-05 |
|  29   |  25   |  103  | -5.5047e+00 | 2.94e+04 | 6.55e-05 | 5.61e-06 |
|  30   |  26   |  106  | -5.5047e+00 | 2.94e+04 | 3.20e-05 | 2.84e-06 |
|  31   |  26   |  106  | -5.5047e+00 | 1.47e+05 | 1.11e-04 | 2.84e-06 |
|  32   |  27   |  109  | -5.5047e+00 | 1.47e+04 | 1.11e-04 | 2.84e-06 |
|  33   |  28   |  112  | -5.5047e+00 | 1.47e+03 | 1.11e-04 | 2.84e-06 |
|  34   |  29   |  115  | -5.5047e+00 | 1.47e+02 | 1.11e-04 | 2.84e-06 |
|  35   |  30   |  118  | -5.5047e+00 | 1.47e+01 | 1.11e-04 | 2.84e-06 |
|  36   |  31   |  120  | -5.5047e+00 | 1.47e+00 | 1.11e-04 | 2.84e-06 |
|  37   |  32   |  124  | -5.5047e+00 | 4.38e-01 | 1.11e-04 | 2.84e-06 |
|  38   |  33   |  126  | -5.5047e+00 | 1.74e-01 | 1.11e-04 | 2.84e-06 |
|  39   |  34   |  128  | -5.5047e+00 | 1.02e+00 | 2.19e-05 | 3.42e-06 |
|  40   |  35   |  132  | -5.5047e+00 | 1.41e+00 | 7.66e-06 | 1.72e-06 |
|  41   |  35   |  132  | -5.5047e+00 | 7.04e+00 | 2.45e-05 | 1.72e-06 |
|  42   |  36   |  136  | -5.5047e+00 | 7.04e+00 | 9.37e-06 | 9.28e-07 |
|  43   |  37   |  139  | -5.5047e+00 | 7.04e+00 | 4.33e-06 | 4.68e-07 |
|  44   |  38   |  142  | -5.5047e+00 | 7.04e+00 | 3.28e-06 | 2.36e-07 |
|  45   |  39   |  144  | -5.5047e+00 | 7.04e+00 | 2.72e-06 | 1.19e-07 |
|  46   |  40   |  147  | -5.5047e+00 | 7.04e+00 | 1.04e-06 | 5.97e-08 |
|  47   |  40   |  147  | -5.5047e+00 | 3.52e+01 | 4.67e-06 | 5.97e-08 |
|  48   |  41   |  151  | -5.5047e+00 | 3.52e+01 | 1.78e-06 | 3.04e-08 |
|  49   |  42   |  155  | -5.5047e+00 | 3.52e+01 | 6.12e-07 | 1.55e-08 |
|  50   |  43   |  157  | -5.5047e+00 | 3.52e+01 | 5.11e-07 | 7.81e-09 |
|  51   |  44   |  160  | -5.5047e+00 | 3.52e+01 | 2.26e-07 | 3.93e-09 |
|  52   |  44   |  160  | -5.5047e+00 | 1.76e+02 | 1.21e-06 | 3.93e-09 |
|  53   |  45   |  164  | -5.5047e+00 | 1.76e+02 | 2.35e-07 | 2.04e-09 |
|  54   |  46   |  166  | -5.5047e+00 | 1.76e+02 | 2.18e-07 | 1.02e-09 |
|  55   |  47   |  169  | -5.5047e+00 | 1.76e+02 | 1.09e-07 | 5.16e-10 |
|  56   |  48   |  172  | -5.5047e+00 | 1.76e+02 | 7.35e-08 | 2.59e-10 |
|  57   |  49   |  176  | -5.5047e+00 | 1.76e+02 | 5.89e-08 | 1.30e-10 |
|  58   |  50   |  179  | -5.5047e+00 | 1.76e+02 | 4.51e-08 | 6.55e-11 |
|  59   |  50   |  179  | -5.5047e+00 | 8.80e+02 | 2.39e-07 | 6.55e-11 |
|  60   |  51   |  182  | -5.5047e+00 | 8.80e+02 | 4.32e-08 | 3.63e-11 |
|  61   |  52   |  183  | -5.5047e+00 | 8.80e+02 | 4.40e-08 | 1.83e-11 |
|  62   |  53   |  186  | -5.5047e+00 | 8.80e+02 | 2.47e-08 | 9.21e-12 |
|  63   |  54   |  187  | -5.5047e+00 | 8.80e+02 | 5.87e-08 | 4.67e-12 |
|  64   |  55   |  190  | -5.5047e+00 | 8.80e+02 | 3.38e-08 | 2.43e-12 |
|  65   |  56   |  193  | -5.5047e+00 | 8.80e+02 | 5.32e-08 | 1.42e-12 |
|  66   |  57   |  196  | -5.5047e+00 | 8.80e+01 | 5.32e-08 | 1.42e-12 |
|  67   |  58   |  199  | -5.5047e+00 | 8.80e+00 | 5.32e-08 | 1.42e-12 |
|  68   |  59   |  202  | -5.5047e+00 | 8.80e-01 | 5.32e-08 | 1.42e-12 |
|  69   |  60   |  204  | -5.5047e+00 | 8.80e-02 | 5.32e-08 | 1.42e-12 |
|  70   |  61   |  206  | -5.5047e+00 | 4.34e-02 | 5.32e-08 | 1.42e-12 |
|  71   |  62   |  208  | -5.5047e+00 | 1.09e-02 | 5.32e-08 | 1.42e-12 |
|  72   |  63   |  210  | -5.5047e+00 | 1.91e-03 | 5.32e-08 | 1.42e-12 |
|  73   |  64   |  213  | -5.5047e+00 | 4.25e-04 | 5.32e-08 | 1.42e-12 |
|  74   |  65   |  216  | -5.5047e+00 | 7.36e-05 | 5.32e-08 | 1.42e-12 |
|  75   |  66   |  219  | -5.5047e+00 | 2.30e-05 | 5.32e-08 | 1.42e-12 |
|  76   |  67   |  220  | -5.5047e+00 | 1.15e-05 | 5.32e-08 | 1.42e-12 |
|  77   |  68   |  221  | -5.5047e+00 | 5.74e-06 | 5.32e-08 | 1.42e-12 |
|  78   |  69   |  222  | -5.5047e+00 | 1.15e-05 | 4.57e-08 | 1.42e-12 |
|  79   |  70   |  223  | -5.5047e+00 | 1.15e-05 | 3.16e-08 | 1.43e-12 |
|  80   |  71   |  224  | -5.5047e+00 | 4.56e-06 | 3.16e-08 | 1.43e-12 |
|  81   |  72   |  225  | -5.5047e+00 | 2.28e-06 | 3.16e-08 | 1.43e-12 |
|  82   |  73   |  226  | -5.5047e+00 | 9.56e-07 | 3.16e-08 | 1.43e-12 |
|  83   |  74   |  227  | -5.5047e+00 | 3.07e-07 | 3.16e-08 | 1.43e-12 |
|  84   |  75   |  228  | -5.5047e+00 | 1.54e-07 | 3.16e-08 | 1.43e-12 |
|  85   |  76   |  229  | -5.5047e+00 | 7.24e-08 | 3.16e-08 | 1.43e-12 |
|  86   |  77   |  230  | -5.5047e+00 | 3.60e-08 | 3.16e-08 | 1.43e-12 |
|  87   |  78   |  231  | -5.5047e+00 | 1.53e-08 | 3.16e-08 | 1.43e-12 |
|  88   |  79   |  232  | -5.5047e+00 | 6.83e-09 | 3.16e-08 | 1.43e-12 |
|  89   |  80   |  232  | -5.5047e+00 | 1.00e+00 | 7.51e-08 | 1.43e-12 |
|  90   |  81   |  234  | -5.5047e+00 | 2.81e-01 | 7.51e-08 | 1.43e-12 |
|  91   |  82   |  236  | -5.5047e+00 | 8.73e-02 | 7.51e-08 | 1.43e-12 |
|  92   |  83   |  238  | -5.5047e+00 | 1.09e-02 | 7.51e-08 | 1.43e-12 |
|  93   |  84   |  241  | -5.5047e+00 | 1.09e-03 | 7.51e-08 | 1.43e-12 |
|  94   |  85   |  244  | -5.5047e+00 | 1.09e-04 | 7.51e-08 | 1.43e-12 |
|  95   |  86   |  247  | -5.5047e+00 | 6.49e-04 | 1.78e-08 | 1.60e-12 |
|  96   |  87   |  249  | -5.5047e+00 | 6.49e-05 | 1.78e-08 | 1.60e-12 |
|  97   |  88   |  251  | -5.5047e+00 | 6.49e-06 | 1.78e-08 | 1.60e-12 |
|  98   |  89   |  252  | -5.5047e+00 | 4.54e-05 | 4.72e-09 | 1.61e-12 |

`gtol` termination condition is satisfied.
Number of iterations: 98, function evaluations: 89, CG iterations: 252, optimality: 4.72e-09, constraint violation: 1.61e-12, execution time: 0.85 s.

           C         K         I         Y
0   0.950000  3.000000  0.050000  1.000000
1   0.950963  2.990000  0.070601  1.021564
2   0.966444  3.000801  0.078961  1.045406
3   0.983589  3.019746  0.086934  1.070523
4   1.002344  3.046285  0.094574  1.096918
5   1.022661  3.079933  0.101930  1.124591
6   1.044501  3.120264  0.109045  1.153546
7   1.067828  3.166904  0.115959  1.183788
8   1.092613  3.219526  0.122708  1.215321
9   1.118832  3.277843  0.129321  1.248153
10  1.146466  3.341607  0.135828  1.282294
11  1.175502  3.410603  0.142253  1.317755
12  1.205931  3.484644  0.148617  1.354548
13  1.237748  3.563568  0.154940  1.392688
14  1.270954  3.647237  0.161238  1.432193
15  1.305556  3.735530  0.167525  1.473081
16  1.341561  3.828345  0.173811  1.515372
17  1.378984  3.925588  0.180103  1.559088
18  1.417844  4.027180  0.186407  1.604252
19  1.458164  4.133044  0.192724  1.650888
20  1.499971  4.243107  0.199051  1.699022
21  1.543300  4.357296  0.205380  1.748680
22  1.588189  4.475530  0.211697  1.799886
23  1.634682  4.597717  0.217985  1.852667
24  1.682833  4.723747  0.224214  1.907047
25  1.720375  4.853486  0.242674  1.963049

W=5.45342753698953
 
We see that this now solves in less than a second. However, the effort needed to achieve this was considerable. Given that this is a small, simple model, we can expect the effort to become unbearable for large, complex models. And that is setting aside the fact that this algorithm is not well suited for large, sparse problems.

I want to point out that the number of function evaluations has decreased dramatically: it is now merely 89. (This is still much larger than what solvers like Ipopt typically need.)
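If we want to verify such evaluation counts independently of the solver's log, a small wrapper does the job. This sketch uses scipy's built-in Rosenbrock test function as a stand-in objective rather than the Ramsey model; the Counter class is just an illustrative helper:

```python
import numpy as np
import scipy.optimize as opt

# Illustrative helper (not part of the model): count calls to a function.
class Counter:
    def __init__(self, f):
        self.f = f
        self.n = 0
    def __call__(self, x):
        self.n += 1
        return self.f(x)

# scipy's Rosenbrock function and its gradient as a stand-in objective
f = Counter(opt.rosen)
res = opt.minimize(f, np.zeros(5), jac=opt.rosen_der, method='trust-constr')
print(f"wrapper count: {f.n}, scipy's own counter: {res.nfev}")
```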

For larger models, we can specify a sparse Jacobian. That saves both memory and CPU time.
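As a minimal sketch of what that looks like (using a toy constraint with a diagonal Jacobian; the problem and names here are illustrative, not the Ramsey model), the jac callback of a NonlinearConstraint can return a scipy.sparse matrix, which trust-constr accepts:

```python
import numpy as np
import scipy.sparse as sp
import scipy.optimize as opt

# Toy problem (illustrative only): min sum((x-2)^2)  s.t.  x_i^2 = 1
n = 5

def g(x):
    return x**2 - 1.0

def jac_g(x):
    # The Jacobian of g is diagonal (dg_i/dx_i = 2*x_i); returning it as a
    # sparse matrix avoids storing the full n-by-n dense array.
    return sp.diags(2.0 * x, format='csr')

con = opt.NonlinearConstraint(g, 0.0, 0.0, jac=jac_g)
res = opt.minimize(lambda x: np.sum((x - 2.0)**2), 0.5*np.ones(n),
                   jac=lambda x: 2.0*(x - 2.0),
                   constraints=[con], method='trust-constr')
print(res.x)  # the root of x^2=1 closest to 2, i.e. all ones
```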
 

Implementation 4: Pyomo + Ipopt


Pyomo [6] + Ipopt should make it easy to implement and solve our small NLP model. The problems start with installation: the documented instructions for installing Ipopt don't work (the expected executable ipopt.exe is nowhere to be found). It is always frustrating when a documented installation procedure fails to produce a working system. A workaround was to download the Ipopt binaries from GitHub.

The model itself was rather straightforward:

Pyomo model
# solve the Ramsey model using Pyomo/ipopt

import pyomo.environ as pyo
import pandas as pd

#----------------------------------------------------------------
# data
#----------------------------------------------------------------


T = 26     # time periods (0..25)
ρ = 0.05   # discount rate
g = 0.03   # labor growth rate
δ = 0.02   # capital depreciation rate
K0 = 3.00  # initial capital
I0 = 0.05  # initial investment
C0 = 0.95  # initial consumption
L0 = 1.00  # initial labor
b = 0.25   # Cobb-Douglas coefficient
a = (C0 + I0)/ (K0**b * L0**(1-b))     # Cobb-Douglas coefficient
# print(f"{a=}")

# weight factor for future utilities
β = [(1+ρ)**(-t) for t in range(T)]
β[T-1] = (1/ρ) * (1+ρ)**(1-(T-1))

# labor is exogenously determined using an exponential growth process
L = [(1+g)**t * L0 for t in range(T)]

#----------------------------------------------------------------
# model
#----------------------------------------------------------------

m = pyo.ConcreteModel()

# Sets
# This is a set of numbers
m.t = pyo.Set(initialize=[t for t in range(T)])

# Define variables
m.C = pyo.Var(m.t, domain=pyo.NonNegativeReals, bounds=(0.001,None),initialize=1)
m.Y = pyo.Var(m.t, domain=pyo.NonNegativeReals, initialize=1)
m.K = pyo.Var(m.t, domain=pyo.NonNegativeReals, bounds=(0.001,None),initialize=1)
m.I = pyo.Var(m.t, domain=pyo.NonNegativeReals, initialize=0.01)

# define constraints and objective

#  W =e=sum(t,beta(t)*log(C(t)));
def obj_rule(m):
    return sum(β[t]*pyo.log(m.C[t]) for t in m.t)
m.obj = pyo.Objective(rule=obj_rule, sense=pyo.maximize)

# Y(t)=e= a *(K(t)**b) * (L(t)**(1-b));
def production_rule(m,t):
    return m.Y[t] == a * (m.K[t]**b) * (L[t]**(1-b))
m.production = pyo.Constraint(m.t, rule=production_rule)

# Y(t)=e= C(t)+ I(t);
def allocation_rule(m,t):
    return m.Y[t] == m.C[t]+ m.I[t]
m.allocation = pyo.Constraint(m.t, rule=allocation_rule)

# K(t+1) =e= (1-delta)*K(t) + I(t);
def accumulation_rule(m,t):
    if (t==T-1):
        return pyo.Constraint.Skip
    return m.K[t+1] == (1-δ)*m.K[t]+ m.I[t]
m.accumulation = pyo.Constraint(m.t, rule=accumulation_rule)

# I(tlast) =g=(g+delta)*K(tlast);
def final_rule(m):
    return m.I[T-1] >= (g+δ)*m.K[T-1]
m.final = pyo.Constraint(rule=final_rule)

# fix variables
m.K[0].fix(K0)
m.I[0].fix(I0)
m.C[0].fix(C0)

# print model
m.pprint()

#----------------------------------------------------------------
# solve
#----------------------------------------------------------------

# ipopt.exe is missing from conda install ipopt. This is downloaded
# from the ipopt repository on github.
ipoptexe = r'C:\Users\erwin\Downloads\Ipopt-3.14.12-win64-msvs2019-md\Ipopt-3.14.12-win64-msvs2019-md\bin\ipopt.exe'
opt = pyo.SolverFactory('ipopt',executable=ipoptexe)
res = opt.solve(m, tee=True) 

#----------------------------------------------------------------
# reporting
#----------------------------------------------------------------

results = {
    'C': [pyo.value(m.C[t]) for t in m.t],
    'Y': [pyo.value(m.Y[t]) for t in m.t],
    'K': [pyo.value(m.K[t]) for t in m.t],
    'I': [pyo.value(m.I[t]) for t in m.t]
}

df = pd.DataFrame(results, index=list(m.t))
print(df)
print(f"W={pyo.value(m.obj)}")

It is possible to reduce the amount of boilerplate code a bit by using the Python decorators @model.Objective and @model.Constraint. The code for the objective and constraints becomes a little less dense, with more emphasis on the expressions themselves, although the number of lines stays the same.

# traditional

def obj_rule(m):
    return sum(β[t]*pyo.log(m.C[t]) for t in m.t)
m.obj = pyo.Objective(rule=obj_rule, sense=pyo.maximize)

def production_rule(m,t):
    return m.Y[t] == a * (m.K[t]**b) * (L[t]**(1-b))
m.production = pyo.Constraint(m.t, rule=production_rule)


# with decorators

@m.Objective(sense=pyo.maximize)
def obj(m):
    return sum(β[t]*pyo.log(m.C[t]) for t in m.t)

@m.Constraint(m.t)
def production(m,t):
    return m.Y[t] == a * (m.K[t]**b) * (L[t]**(1-b))

Model with decorators
# solve the Ramsey model using Pyomo/ipopt
# this time using Python decorators @model.Objective and @model.Constraint 


import pyomo.environ as pyo
import pandas as pd

#----------------------------------------------------------------
# data
#----------------------------------------------------------------


T = 26     # time periods (0..25)
ρ = 0.05   # discount rate
g = 0.03   # labor growth rate
δ = 0.02   # capital depreciation rate
K0 = 3.00  # initial capital
I0 = 0.05  # initial investment
C0 = 0.95  # initial consumption
L0 = 1.00  # initial labor
b = 0.25   # Cobb-Douglas coefficient
a = (C0 + I0)/ (K0**b * L0**(1-b))     # Cobb-Douglas coefficient
# print(f"{a=}")

# weight factor for future utilities
β = [(1+ρ)**(-t) for t in range(T)]
β[T-1] = (1/ρ) * (1+ρ)**(1-(T-1))

# labor is exogenously determined using an exponential growth process
L = [(1+g)**t * L0 for t in range(T)]

#----------------------------------------------------------------
# model
#----------------------------------------------------------------

m = pyo.ConcreteModel()

# Sets
# This is a set of numbers
m.t = pyo.Set(initialize=[t for t in range(T)])

# Define variables
m.C = pyo.Var(m.t, domain=pyo.NonNegativeReals, bounds=(0.001,None),initialize=1)
m.Y = pyo.Var(m.t, domain=pyo.NonNegativeReals, initialize=1)
m.K = pyo.Var(m.t, domain=pyo.NonNegativeReals, bounds=(0.001,None),initialize=1)
m.I = pyo.Var(m.t, domain=pyo.NonNegativeReals, initialize=0.01)

# define constraints and objective

#  W =e=sum(t,beta(t)*log(C(t)));
@m.Objective(sense=pyo.maximize)
def obj(m):
    return sum(β[t]*pyo.log(m.C[t]) for t in m.t)

# Y(t) =e= a *(K(t)**b) * (L(t)**(1-b));
@m.Constraint(m.t)
def production(m,t):
    return m.Y[t] == a * (m.K[t]**b) * (L[t]**(1-b))

# Y(t) =e= C(t)+ I(t);
@m.Constraint(m.t)
def allocation(m,t):
    return m.Y[t] == m.C[t]+ m.I[t]

# K(t+1) =e= (1-delta)*K(t) + I(t);
@m.Constraint(m.t)
def accumulation(m,t):
    if (t==T-1):
        return pyo.Constraint.Skip
    return m.K[t+1] == (1-δ)*m.K[t]+ m.I[t]

# I(tlast) =g= (g+delta)*K(tlast);
@m.Constraint()
def final(m):
    return m.I[T-1] >= (g+δ)*m.K[T-1]

# fix variables
m.K[0].fix(K0)
m.I[0].fix(I0)
m.C[0].fix(C0)

# fixed variables are not printed correctly
# we can't really see what is sent to the solver.
m.pprint()

#----------------------------------------------------------------
# solve
#----------------------------------------------------------------

# ipopt.exe is missing from conda install ipopt. This is downloaded
# from the ipopt repository on github.
ipoptexe = r'C:\Users\erwin\Downloads\Ipopt-3.14.12-win64-msvs2019-md\Ipopt-3.14.12-win64-msvs2019-md\bin\ipopt.exe'
opt = pyo.SolverFactory('ipopt',executable=ipoptexe)
res = opt.solve(m, tee=True) 

#----------------------------------------------------------------
# reporting
#----------------------------------------------------------------

results = {
    'C': [pyo.value(m.C[t]) for t in m.t],
    'Y': [pyo.value(m.Y[t]) for t in m.t],
    'K': [pyo.value(m.K[t]) for t in m.t],
    'I': [pyo.value(m.I[t]) for t in m.t]
}

df = pd.DataFrame(results, index=list(m.t))
print(df)
print(f"W={pyo.value(m.obj)}")

The results look like:

           C         Y         K         I
0   0.950000  1.000000  3.000000  0.050000
1   0.950963  1.021564  2.990000  0.070601
2   0.966444  1.045406  3.000801  0.078961
3   0.983589  1.070523  3.019746  0.086934
4   1.002344  1.096918  3.046285  0.094574
5   1.022661  1.124591  3.079933  0.101930
6   1.044501  1.153546  3.120264  0.109045
7   1.067828  1.183788  3.166904  0.115959
8   1.092613  1.215321  3.219525  0.122708
9   1.118832  1.248153  3.277842  0.129321
10  1.146466  1.282294  3.341606  0.135828
11  1.175502  1.317755  3.410602  0.142253
12  1.205931  1.354548  3.484643  0.148617
13  1.237748  1.392688  3.563567  0.154940
14  1.270955  1.432193  3.647236  0.161238
15  1.305556  1.473081  3.735529  0.167525
16  1.341561  1.515371  3.828344  0.173811
17  1.378984  1.559087  3.925587  0.180103
18  1.417844  1.604251  4.027179  0.186407
19  1.458164  1.650888  4.133042  0.192724
20  1.499971  1.699022  4.243106  0.199051
21  1.543300  1.748679  4.357295  0.205380
22  1.588188  1.799886  4.475529  0.211698
23  1.634682  1.852667  4.597716  0.217985
24  1.682833  1.907047  4.723746  0.224214
25  1.720374  1.963049  4.853485  0.242674
W=5.453427574449533

The Ipopt log is very similar to the GAMS run with the same solver:

Model,Log,Results
1 Set Declarations
    t : Size=1, Index=None, Ordered=Insertion
        Key  : Dimen : Domain : Size : Members
        None :     1 :    Any :   26 : {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25}

4 Var Declarations
    C : Size=26, Index=t
        Key : Lower : Value : Upper : Fixed : Stale : Domain
          0 : 0.001 :  0.95 :  None :  True : False : NonNegativeReals
          1 : 0.001 :     1 :  None : False : False : NonNegativeReals
          2 : 0.001 :     1 :  None : False : False : NonNegativeReals
          3 : 0.001 :     1 :  None : False : False : NonNegativeReals
          4 : 0.001 :     1 :  None : False : False : NonNegativeReals
          5 : 0.001 :     1 :  None : False : False : NonNegativeReals
          6 : 0.001 :     1 :  None : False : False : NonNegativeReals
          7 : 0.001 :     1 :  None : False : False : NonNegativeReals
          8 : 0.001 :     1 :  None : False : False : NonNegativeReals
          9 : 0.001 :     1 :  None : False : False : NonNegativeReals
         10 : 0.001 :     1 :  None : False : False : NonNegativeReals
         11 : 0.001 :     1 :  None : False : False : NonNegativeReals
         12 : 0.001 :     1 :  None : False : False : NonNegativeReals
         13 : 0.001 :     1 :  None : False : False : NonNegativeReals
         14 : 0.001 :     1 :  None : False : False : NonNegativeReals
         15 : 0.001 :     1 :  None : False : False : NonNegativeReals
         16 : 0.001 :     1 :  None : False : False : NonNegativeReals
         17 : 0.001 :     1 :  None : False : False : NonNegativeReals
         18 : 0.001 :     1 :  None : False : False : NonNegativeReals
         19 : 0.001 :     1 :  None : False : False : NonNegativeReals
         20 : 0.001 :     1 :  None : False : False : NonNegativeReals
         21 : 0.001 :     1 :  None : False : False : NonNegativeReals
         22 : 0.001 :     1 :  None : False : False : NonNegativeReals
         23 : 0.001 :     1 :  None : False : False : NonNegativeReals
         24 : 0.001 :     1 :  None : False : False : NonNegativeReals
         25 : 0.001 :     1 :  None : False : False : NonNegativeReals
    I : Size=26, Index=t
        Key : Lower : Value : Upper : Fixed : Stale : Domain
          0 :     0 :  0.05 :  None :  True : False : NonNegativeReals
          1 :     0 :  0.01 :  None : False : False : NonNegativeReals
          2 :     0 :  0.01 :  None : False : False : NonNegativeReals
          3 :     0 :  0.01 :  None : False : False : NonNegativeReals
          4 :     0 :  0.01 :  None : False : False : NonNegativeReals
          5 :     0 :  0.01 :  None : False : False : NonNegativeReals
          6 :     0 :  0.01 :  None : False : False : NonNegativeReals
          7 :     0 :  0.01 :  None : False : False : NonNegativeReals
          8 :     0 :  0.01 :  None : False : False : NonNegativeReals
          9 :     0 :  0.01 :  None : False : False : NonNegativeReals
         10 :     0 :  0.01 :  None : False : False : NonNegativeReals
         11 :     0 :  0.01 :  None : False : False : NonNegativeReals
         12 :     0 :  0.01 :  None : False : False : NonNegativeReals
         13 :     0 :  0.01 :  None : False : False : NonNegativeReals
         14 :     0 :  0.01 :  None : False : False : NonNegativeReals
         15 :     0 :  0.01 :  None : False : False : NonNegativeReals
         16 :     0 :  0.01 :  None : False : False : NonNegativeReals
         17 :     0 :  0.01 :  None : False : False : NonNegativeReals
         18 :     0 :  0.01 :  None : False : False : NonNegativeReals
         19 :     0 :  0.01 :  None : False : False : NonNegativeReals
         20 :     0 :  0.01 :  None : False : False : NonNegativeReals
         21 :     0 :  0.01 :  None : False : False : NonNegativeReals
         22 :     0 :  0.01 :  None : False : False : NonNegativeReals
         23 :     0 :  0.01 :  None : False : False : NonNegativeReals
         24 :     0 :  0.01 :  None : False : False : NonNegativeReals
         25 :     0 :  0.01 :  None : False : False : NonNegativeReals
    K : Size=26, Index=t
        Key : Lower : Value : Upper : Fixed : Stale : Domain
          0 : 0.001 :   3.0 :  None :  True : False : NonNegativeReals
          1 : 0.001 :     1 :  None : False : False : NonNegativeReals
          2 : 0.001 :     1 :  None : False : False : NonNegativeReals
          3 : 0.001 :     1 :  None : False : False : NonNegativeReals
          4 : 0.001 :     1 :  None : False : False : NonNegativeReals
          5 : 0.001 :     1 :  None : False : False : NonNegativeReals
          6 : 0.001 :     1 :  None : False : False : NonNegativeReals
          7 : 0.001 :     1 :  None : False : False : NonNegativeReals
          8 : 0.001 :     1 :  None : False : False : NonNegativeReals
          9 : 0.001 :     1 :  None : False : False : NonNegativeReals
         10 : 0.001 :     1 :  None : False : False : NonNegativeReals
         11 : 0.001 :     1 :  None : False : False : NonNegativeReals
         12 : 0.001 :     1 :  None : False : False : NonNegativeReals
         13 : 0.001 :     1 :  None : False : False : NonNegativeReals
         14 : 0.001 :     1 :  None : False : False : NonNegativeReals
         15 : 0.001 :     1 :  None : False : False : NonNegativeReals
         16 : 0.001 :     1 :  None : False : False : NonNegativeReals
         17 : 0.001 :     1 :  None : False : False : NonNegativeReals
         18 : 0.001 :     1 :  None : False : False : NonNegativeReals
         19 : 0.001 :     1 :  None : False : False : NonNegativeReals
         20 : 0.001 :     1 :  None : False : False : NonNegativeReals
         21 : 0.001 :     1 :  None : False : False : NonNegativeReals
         22 : 0.001 :     1 :  None : False : False : NonNegativeReals
         23 : 0.001 :     1 :  None : False : False : NonNegativeReals
         24 : 0.001 :     1 :  None : False : False : NonNegativeReals
         25 : 0.001 :     1 :  None : False : False : NonNegativeReals
    Y : Size=26, Index=t
        Key : Lower : Value : Upper : Fixed : Stale : Domain
          0 :     0 :     1 :  None : False : False : NonNegativeReals
          1 :     0 :     1 :  None : False : False : NonNegativeReals
          2 :     0 :     1 :  None : False : False : NonNegativeReals
          3 :     0 :     1 :  None : False : False : NonNegativeReals
          4 :     0 :     1 :  None : False : False : NonNegativeReals
          5 :     0 :     1 :  None : False : False : NonNegativeReals
          6 :     0 :     1 :  None : False : False : NonNegativeReals
          7 :     0 :     1 :  None : False : False : NonNegativeReals
          8 :     0 :     1 :  None : False : False : NonNegativeReals
          9 :     0 :     1 :  None : False : False : NonNegativeReals
         10 :     0 :     1 :  None : False : False : NonNegativeReals
         11 :     0 :     1 :  None : False : False : NonNegativeReals
         12 :     0 :     1 :  None : False : False : NonNegativeReals
         13 :     0 :     1 :  None : False : False : NonNegativeReals
         14 :     0 :     1 :  None : False : False : NonNegativeReals
         15 :     0 :     1 :  None : False : False : NonNegativeReals
         16 :     0 :     1 :  None : False : False : NonNegativeReals
         17 :     0 :     1 :  None : False : False : NonNegativeReals
         18 :     0 :     1 :  None : False : False : NonNegativeReals
         19 :     0 :     1 :  None : False : False : NonNegativeReals
         20 :     0 :     1 :  None : False : False : NonNegativeReals
         21 :     0 :     1 :  None : False : False : NonNegativeReals
         22 :     0 :     1 :  None : False : False : NonNegativeReals
         23 :     0 :     1 :  None : False : False : NonNegativeReals
         24 :     0 :     1 :  None : False : False : NonNegativeReals
         25 :     0 :     1 :  None : False : False : NonNegativeReals

1 Objective Declarations
    obj : Size=1, Index=None, Active=True
        Key  : Active : Sense    : Expression
        None :   True : maximize : log(C[0]) + 0.9523809523809523*log(C[1]) + 0.9070294784580498*log(C[2]) + 0.863837598531476*log(C[3]) + 0.8227024747918819*log(C[4]) + 0.7835261664684589*log(C[5]) + 0.7462153966366275*log(C[6]) + 0.7106813301301214*log(C[7]) + 0.676839362028687*log(C[8]) + 0.6446089162177971*log(C[9]) + 0.6139132535407591*log(C[10]) + 0.5846792890864372*log(C[11]) + 0.5568374181775593*log(C[12]) + 0.5303213506452945*log(C[13]) + 0.5050679529955185*log(C[14]) + 0.48101709809097004*log(C[15]) + 0.4581115219914*log(C[16]) + 0.4362966876108571*log(C[17]) + 0.41552065486748296*log(C[18]) + 0.39573395701665043*log(C[19]) + 0.3768894828730004*log(C[20]) + 0.35894236464095275*log(C[21]) + 0.3418498710866216*log(C[22]) + 0.3255713057967825*log(C[23]) + 0.31006791028264996*log(C[24]) + 6.2013582056529994*log(C[25])

4 Constraint Declarations
    accumulation : Size=25, Index=t, Active=True
        Key : Lower : Body                         : Upper : Active
          0 :   0.0 :    K[1] - (0.98*K[0] + I[0]) :   0.0 :   True
          1 :   0.0 :    K[2] - (0.98*K[1] + I[1]) :   0.0 :   True
          2 :   0.0 :    K[3] - (0.98*K[2] + I[2]) :   0.0 :   True
          3 :   0.0 :    K[4] - (0.98*K[3] + I[3]) :   0.0 :   True
          4 :   0.0 :    K[5] - (0.98*K[4] + I[4]) :   0.0 :   True
          5 :   0.0 :    K[6] - (0.98*K[5] + I[5]) :   0.0 :   True
          6 :   0.0 :    K[7] - (0.98*K[6] + I[6]) :   0.0 :   True
          7 :   0.0 :    K[8] - (0.98*K[7] + I[7]) :   0.0 :   True
          8 :   0.0 :    K[9] - (0.98*K[8] + I[8]) :   0.0 :   True
          9 :   0.0 :   K[10] - (0.98*K[9] + I[9]) :   0.0 :   True
         10 :   0.0 : K[11] - (0.98*K[10] + I[10]) :   0.0 :   True
         11 :   0.0 : K[12] - (0.98*K[11] + I[11]) :   0.0 :   True
         12 :   0.0 : K[13] - (0.98*K[12] + I[12]) :   0.0 :   True
         13 :   0.0 : K[14] - (0.98*K[13] + I[13]) :   0.0 :   True
         14 :   0.0 : K[15] - (0.98*K[14] + I[14]) :   0.0 :   True
         15 :   0.0 : K[16] - (0.98*K[15] + I[15]) :   0.0 :   True
         16 :   0.0 : K[17] - (0.98*K[16] + I[16]) :   0.0 :   True
         17 :   0.0 : K[18] - (0.98*K[17] + I[17]) :   0.0 :   True
         18 :   0.0 : K[19] - (0.98*K[18] + I[18]) :   0.0 :   True
         19 :   0.0 : K[20] - (0.98*K[19] + I[19]) :   0.0 :   True
         20 :   0.0 : K[21] - (0.98*K[20] + I[20]) :   0.0 :   True
         21 :   0.0 : K[22] - (0.98*K[21] + I[21]) :   0.0 :   True
         22 :   0.0 : K[23] - (0.98*K[22] + I[22]) :   0.0 :   True
         23 :   0.0 : K[24] - (0.98*K[23] + I[23]) :   0.0 :   True
         24 :   0.0 : K[25] - (0.98*K[24] + I[24]) :   0.0 :   True
    allocation : Size=26, Index=t, Active=True
        Key : Lower : Body                    : Upper : Active
          0 :   0.0 :    Y[0] - (C[0] + I[0]) :   0.0 :   True
          1 :   0.0 :    Y[1] - (C[1] + I[1]) :   0.0 :   True
          2 :   0.0 :    Y[2] - (C[2] + I[2]) :   0.0 :   True
          3 :   0.0 :    Y[3] - (C[3] + I[3]) :   0.0 :   True
          4 :   0.0 :    Y[4] - (C[4] + I[4]) :   0.0 :   True
          5 :   0.0 :    Y[5] - (C[5] + I[5]) :   0.0 :   True
          6 :   0.0 :    Y[6] - (C[6] + I[6]) :   0.0 :   True
          7 :   0.0 :    Y[7] - (C[7] + I[7]) :   0.0 :   True
          8 :   0.0 :    Y[8] - (C[8] + I[8]) :   0.0 :   True
          9 :   0.0 :    Y[9] - (C[9] + I[9]) :   0.0 :   True
         10 :   0.0 : Y[10] - (C[10] + I[10]) :   0.0 :   True
         11 :   0.0 : Y[11] - (C[11] + I[11]) :   0.0 :   True
         12 :   0.0 : Y[12] - (C[12] + I[12]) :   0.0 :   True
         13 :   0.0 : Y[13] - (C[13] + I[13]) :   0.0 :   True
         14 :   0.0 : Y[14] - (C[14] + I[14]) :   0.0 :   True
         15 :   0.0 : Y[15] - (C[15] + I[15]) :   0.0 :   True
         16 :   0.0 : Y[16] - (C[16] + I[16]) :   0.0 :   True
         17 :   0.0 : Y[17] - (C[17] + I[17]) :   0.0 :   True
         18 :   0.0 : Y[18] - (C[18] + I[18]) :   0.0 :   True
         19 :   0.0 : Y[19] - (C[19] + I[19]) :   0.0 :   True
         20 :   0.0 : Y[20] - (C[20] + I[20]) :   0.0 :   True
         21 :   0.0 : Y[21] - (C[21] + I[21]) :   0.0 :   True
         22 :   0.0 : Y[22] - (C[22] + I[22]) :   0.0 :   True
         23 :   0.0 : Y[23] - (C[23] + I[23]) :   0.0 :   True
         24 :   0.0 : Y[24] - (C[24] + I[24]) :   0.0 :   True
         25 :   0.0 : Y[25] - (C[25] + I[25]) :   0.0 :   True
    final : Size=1, Index=None, Active=True
        Key  : Lower : Body               : Upper : Active
        None :  -Inf : 0.05*K[25] - I[25] :   0.0 :   True
    production : Size=26, Index=t, Active=True
        Key : Lower : Body                                                      : Upper : Active
          0 :   0.0 :                      Y[0] - 0.7598356856515925*K[0]**0.25 :   0.0 :   True
          1 :   0.0 :   Y[1] - 0.7598356856515925*K[1]**0.25*1.0224166622294937 :   0.0 :   True
          2 :   0.0 :   Y[2] - 0.7598356856515925*K[2]**0.25*1.0453358312044985 :   0.0 :   True
          3 :   0.0 :   Y[3] - 0.7598356856515925*K[3]**0.25*1.0687687714489968 :   0.0 :   True
          4 :   0.0 :             Y[4] - 0.7598356856515925*K[4]**0.25*1.092727 :   0.0 :   True
          5 :   0.0 :    Y[5] - 0.7598356856515925*K[5]**0.25*1.117222292068048 :   0.0 :   True
          6 :   0.0 :   Y[6] - 0.7598356856515925*K[6]**0.25*1.1422666868245983 :   0.0 :   True
          7 :   0.0 :   Y[7] - 0.7598356856515925*K[7]**0.25*1.1678724933191482 :   0.0 :   True
          8 :   0.0 :   Y[8] - 0.7598356856515925*K[8]**0.25*1.1940522965290001 :   0.0 :   True
          9 :   0.0 :    Y[9] - 0.7598356856515925*K[9]**0.25*1.220818963544642 :   0.0 :   True
         10 :   0.0 :  Y[10] - 0.7598356856515925*K[10]**0.25*1.248185649893783 :   0.0 :   True
         11 :   0.0 : Y[11] - 0.7598356856515925*K[11]**0.25*1.2761658060071528 :   0.0 :   True
         12 :   0.0 :  Y[12] - 0.7598356856515925*K[12]**0.25*1.304773183829245 :   0.0 :   True
         13 :   0.0 : Y[13] - 0.7598356856515925*K[13]**0.25*1.3340218435772462 :   0.0 :   True
         14 :   0.0 : Y[14] - 0.7598356856515925*K[14]**0.25*1.3639261606514839 :   0.0 :   True
         15 :   0.0 : Y[15] - 0.7598356856515925*K[15]**0.25*1.3945008327007782 :   0.0 :   True
         16 :   0.0 : Y[16] - 0.7598356856515925*K[16]**0.25*1.4257608868461793 :   0.0 :   True
         17 :   0.0 : Y[17] - 0.7598356856515925*K[17]**0.25*1.4577216870666336 :   0.0 :   True
         18 :   0.0 : Y[18] - 0.7598356856515925*K[18]**0.25*1.4903989417502141 :   0.0 :   True
         19 :   0.0 : Y[19] - 0.7598356856515925*K[19]**0.25*1.5238087114146235 :   0.0 :   True
         20 :   0.0 : Y[20] - 0.7598356856515925*K[20]**0.25*1.5579674166007653 :   0.0 :   True
         21 :   0.0 : Y[21] - 0.7598356856515925*K[21]**0.25*1.5928918459432615 :   0.0 :   True
         22 :   0.0 : Y[22] - 0.7598356856515925*K[22]**0.25*1.6285991644218862 :   0.0 :   True
         23 :   0.0 : Y[23] - 0.7598356856515925*K[23]**0.25*1.6651069217979675 :   0.0 :   True
         24 :   0.0 : Y[24] - 0.7598356856515925*K[24]**0.25*1.7024330612399046 :   0.0 :   True
         25 :   0.0 : Y[25] - 0.7598356856515925*K[25]**0.25*1.7405959281420424 :   0.0 :   True

10 Declarations: t C Y K I obj production allocation accumulation final
Ipopt 3.14.12: 

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.14.12, running with linear solver MUMPS 5.5.1.

Number of nonzeros in equality constraint Jacobian...:      200
Number of nonzeros in inequality constraint Jacobian.:        2
Number of nonzeros in Lagrangian Hessian.............:       50

Total number of variables............................:      101
                     variables with only lower bounds:      101
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:       77
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        1

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  5.1293294e-02 1.99e+00 7.20e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1 -6.0133069e-01 1.80e+00 6.39e+00  -1.0 1.99e+00    -  1.95e-01 9.50e-02f  1
   2 -6.3798787e+00 1.26e-01 5.22e+00  -1.0 1.80e+00    -  4.24e-01 1.00e+00f  1
   3 -5.9831295e+00 6.48e-02 5.42e+00  -1.0 9.05e-01    -  8.18e-01 5.24e-01h  1
   4 -6.2811709e+00 1.42e-01 2.98e+00  -1.0 6.48e+00    -  3.60e-01 4.51e-01f  1
   5 -4.9491998e+00 6.81e-02 5.99e-02  -1.0 3.94e+00    -  1.00e+00 1.00e+00f  1
   6 -5.5775815e+00 3.44e-02 5.43e-02  -2.5 3.38e+00    -  8.55e-01 9.42e-01h  1
   7 -5.4786209e+00 6.16e-03 7.71e-03  -2.5 1.07e+00    -  1.00e+00 1.00e+00h  1
   8 -5.4594846e+00 1.08e-03 1.58e-03  -3.8 3.93e-01    -  1.00e+00 1.00e+00h  1
   9 -5.4532912e+00 7.14e-06 2.06e-05  -3.8 3.04e-02    -  1.00e+00 1.00e+00h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10 -5.4534401e+00 1.78e-06 2.64e-06  -5.7 1.51e-02    -  1.00e+00 1.00e+00h  1
  11 -5.4534276e+00 4.24e-10 2.46e-09  -8.6 2.13e-04    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 11

                                   (scaled)                 (unscaled)
Objective...............:  -5.4534275744495329e+00   -5.4534275744495329e+00
Dual infeasibility......:   2.4567716279101526e-09    2.4567716279101526e-09
Constraint violation....:   4.2353254237070814e-10    4.2353254237070814e-10
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   5.8578602273391468e-09    5.8578602273391468e-09
Overall NLP error.......:   5.8578602273391468e-09    5.8578602273391468e-09


Number of objective function evaluations             = 12
Number of objective gradient evaluations             = 12
Number of equality constraint evaluations            = 12
Number of inequality constraint evaluations          = 12
Number of equality constraint Jacobian evaluations   = 12
Number of inequality constraint Jacobian evaluations = 12
Number of Lagrangian Hessian evaluations             = 11
Total seconds in IPOPT                               = 0.020

EXIT: Optimal Solution Found.
           C         Y         K         I
0   0.950000  1.000000  3.000000  0.050000
1   0.950963  1.021564  2.990000  0.070601
2   0.966444  1.045406  3.000801  0.078961
3   0.983589  1.070523  3.019746  0.086934
4   1.002344  1.096918  3.046285  0.094574
5   1.022661  1.124591  3.079933  0.101930
6   1.044501  1.153546  3.120264  0.109045
7   1.067828  1.183788  3.166904  0.115959
8   1.092613  1.215321  3.219525  0.122708
9   1.118832  1.248153  3.277842  0.129321
10  1.146466  1.282294  3.341606  0.135828
11  1.175502  1.317755  3.410602  0.142253
12  1.205931  1.354548  3.484643  0.148617
13  1.237748  1.392688  3.563567  0.154940
14  1.270955  1.432193  3.647236  0.161238
15  1.305556  1.473081  3.735529  0.167525
16  1.341561  1.515371  3.828344  0.173811
17  1.378984  1.559087  3.925587  0.180103
18  1.417844  1.604251  4.027179  0.186407
19  1.458164  1.650888  4.133042  0.192724
20  1.499971  1.699022  4.243106  0.199051
21  1.543300  1.748679  4.357295  0.205380
22  1.588188  1.799886  4.475529  0.211698
23  1.634682  1.852667  4.597716  0.217985
24  1.682833  1.907047  4.723746  0.224214
25  1.720374  1.963049  4.853485  0.242674
W=5.453427574449533

Notes: 
  • Again, we see the advantage of automatic differentiation. The code is not cluttered with gradient calculations.
  • Pyomo can operate in two ways: concrete and abstract models. Concrete models are the simplest: they use standard Python data representations (scalars, lists and numpy arrays). Abstract models have their own data representation in the form of Parameters, which forms a data abstraction layer. I chose the simpler, concrete approach here.
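A side note on the printed objective above: the out-of-line coefficient 6.2013582056529994 on log(C[25]) is the terminal weight \(\beta_{T-1}=(1/\rho)(1+\rho)^{2-T}\), which stands in for the tail of the infinite discounted sum. A quick numerical check (plain Python arithmetic, nothing model-specific assumed):

```python
# The terminal weight approximates the tail of the infinite discounted sum:
# sum over t = T-1, T, T+1, ... of (1+rho)^(-t).
ρ, T = 0.05, 26

beta_last = (1/ρ) * (1+ρ)**(1-(T-1))          # weight used in the models

# tail of the infinite sum, truncated far enough out to be exact in doubles
tail = sum((1+ρ)**(-t) for t in range(T-1, 5000))

print(beta_last, tail)                        # both approximately 6.201358...
```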

Implementation 5: R



Surprisingly, R has very limited support for larger, more complex nonlinear programming models. There are just a few solvers available (not the standard solvers I am familiar with), and proper modeling tools are nowhere to be found (there are tools for linear and convex models). Note that the package ipoptr is no longer available on CRAN (the repository R packages are typically installed from).

Here, I use the auglag solver. Interestingly, it does not support bounds on the variables (some people call these "variable bounds," which I always find a bit of a strange expression - as if the bounds themselves can vary). Bounds play an important role in NLPs (more so than in LP/MIP models, where they are largely a matter of performance and convenience). Bounds are used to convey a domain: nonlinear functions (and their gradients) should not be evaluated when \(x\) is outside its bounds. This way we can protect, e.g., \(\log(x)\) by imposing the bound \(x\ge 0.001\). In this respect, a bound is very different from a general constraint. As a workaround for the lack of bounds, we have to return very large values from the functions to steer the solver away from regions where they cannot be evaluated. A disadvantage of this approach is that it introduces non-differentiable behavior. That can be an issue.

As there are no bounds, we cannot fix variables either. In the GAMS model, fixed variables are used to exogenize the first period. Here we have to implement these fixes as general equality constraints.
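The guarding trick can be sketched in a few lines (a Python illustration of the same idea; the function and names are mine, purely hypothetical):

```python
import math

# Guarded objective: outside the domain, return a huge constant instead of
# evaluating log(). This keeps the solver away from C <= 0, but the jump to
# the constant makes the function non-differentiable at the cutoff.
def guarded_neg_utility(C, beta, eps=0.001, big=1e20):
    if any(c <= eps for c in C):
        return big
    return -sum(b * math.log(c) for b, c in zip(beta, C))

print(guarded_neg_utility([0.95, 1.02], [1.0, 0.9524]))   # finite value
print(guarded_neg_utility([-0.1, 1.02], [1.0, 0.9524]))   # the big constant
```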


R code
library(alabama)

#-----------------------------------------
# data
#-----------------------------------------

T <- 26

ρ  <- 0.05
g  <- 0.03
δ  <- 0.02
K0 <- 3
I0 <- 0.05
C0 <- 0.95
L0 <- 1
b  <- 0.25


τ <- 0:(T-1)

β <- (1+ρ)^(-τ)
β[T] <- (1/ρ)*(1+ρ)^(1-T+1)


L <- (1+g)^τ * L0
Lb <- L^(1-b)

a <- (C0+I0)/(K0^b * L0^(1-b))

#-----------------------------------------
# indexing
#-----------------------------------------

Ci <- 1:T
Yi <- (T+1):(2*T)
Ki <- (2*T+1):(3*T)
Ii <- (3*T+1):(4*T)

n <- 4*T

#-----------------------------------------
# objective
#-----------------------------------------

# we don't have bounds, so protect log(Ct) here
obj <- function(x) {
  Ct <- x[Ci]
  if(any(Ct <= 0.001)) return(1e20)
  -sum(β * log(Ct))
}

#-----------------------------------------
# equality constraints
#-----------------------------------------

heq <- function(x){
  
  Ct <- x[Ci]
  Yt <- x[Yi]
  Kt <- x[Ki]
  It <- x[Ii]
  
  # protect against Kt[t] < 0
  Kt[Kt<0] <- 0
  
  prod  <- Yt - a*Kt^b*Lb
  alloc <- Yt - Ct - It
  accum <- Kt[2:T] - ((1-δ)*Kt[1:(T-1)] + It[1:(T-1)])
  
  # in GAMS period 1 is fixed
  # no bounds in this solver so we use equality constraints
  init <- c(
    Kt[1]-K0,
    It[1]-I0,
    Ct[1]-C0
  )
  
  c(prod,alloc,accum,init)
}

#-----------------------------------------
# single inequality constraint
#-----------------------------------------

hin <- function(x) {
  
  Klast <- x[3*T]
  Ilast <- x[4*T]
  
  Ilast - (g+δ)*Klast
}

#-----------------------------------------
# starting point
#-----------------------------------------

x0 <- rep(1,n)

x0[Ci] <- C0
x0[Yi] <- C0+I0
x0[Ki] <- K0
x0[Ii] <- I0

#-----------------------------------------
# solve
#-----------------------------------------

res <- auglag(
  par=x0,
  fn=obj,
  heq=heq,
  hin=hin
)

#-----------------------------------------
# results
#-----------------------------------------

x <- res$par

Ct <- x[Ci]
Yt <- x[Yi]
Kt <- x[Ki]
It <- x[Ii]

results <- data.frame(
  t=0:(T-1),
  C=Ct,
  Y=Yt,
  K=Kt,
  I=It
)

print(results)
cat("Utility =", -res$value,"\n")
Log and results
Min(hin):  -0.1 Max(abs(heq)):  0.7405959 
Outer iteration:  1 
Min(hin):  -0.1 Max(abs(heq)):  0.7405959 
par:  0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 
fval =   1.077 
 
Outer iteration:  2 
Min(hin):  0.04751261 Max(abs(heq)):  0.1130587 
par:  0.992594 1.07081 1.09086 1.10529 1.11615 1.12509 1.14061 1.16392 1.19518 1.22955 1.26429 1.29732 1.32843 1.35945 1.3877 1.41778 1.4418 1.46477 1.48413 1.5008 1.51372 1.53322 1.57136 1.66905 1.8843 1.73253 1.12 1.11864 1.14264 1.16488 1.19035 1.21637 1.24513 1.27516 1.30687 1.33891 1.3715 1.40448 1.43843 1.47293 1.50878 1.54618 1.58491 1.62601 1.66993 1.71692 1.76791 1.8237 1.88351 1.94598 2.00586 2.07686 3.08414 3.19577 3.1925 3.19316 3.20018 3.22149 3.25851 3.30756 3.36159 3.41446 3.4633 3.5084 3.55206 3.59734 3.64463 3.6982 3.75774 3.83039 3.91942 4.03105 4.17043 4.3448 4.55185 4.77611 4.96034 4.98527 0.0922529 -0.0199431 -0.0157887 -0.00892593 0.00575922 0.0222862 0.0353476 0.0416303 0.0418534 0.0391898 0.0367329 0.0364224 0.039122 0.0422811 0.0496997 0.0569041 0.0713877 0.0893694 0.113866 0.143994 0.181916 0.218187 0.239682 0.204279 0.0487772 0.296776 
fval =   -6.711 
 
Outer iteration:  3 
Min(hin):  -0.0001289302 Max(abs(heq)):  0.001523877 
par:  0.950066 0.953437 0.96823 0.984393 1.00294 1.02413 1.04711 1.07079 1.09494 1.12004 1.14665 1.17495 1.20499 1.23688 1.27048 1.30587 1.34234 1.37998 1.41854 1.45833 1.49971 1.54338 1.58981 1.63849 1.6882 1.72164 1.00027 1.02328 1.04578 1.07204 1.0977 1.12585 1.15456 1.18493 1.21613 1.24897 1.28309 1.31871 1.35544 1.39399 1.43362 1.47457 1.51701 1.56074 1.60573 1.65248 1.70069 1.75031 1.80178 1.85483 1.90914 1.96485 3.00091 2.99228 3.00426 3.02288 3.05177 3.08654 3.12768 3.17344 3.225 3.28243 3.34647 3.41676 3.49287 3.574 3.66022 3.75066 3.84471 3.94283 4.04503 4.15142 4.26267 4.37852 4.49787 4.61992 4.74399 4.87019 0.0504945 0.0709581 0.0780052 0.0886077 0.0952203 0.102318 0.107846 0.114603 0.121544 0.129299 0.136825 0.144116 0.150701 0.157429 0.163407 0.168867 0.174849 0.180918 0.18722 0.194221 0.201051 0.206885 0.21197 0.21641 0.221023 0.243381 
fval =   -5.472 
 
Outer iteration:  4 
Min(hin):  -6.441312e-06 Max(abs(heq)):  2.857182e-05 
par:  0.949991 0.949726 0.96545 0.982394 1.00151 1.02234 1.04501 1.06867 1.09332 1.11889 1.14583 1.17431 1.20454 1.23633 1.26996 1.30535 1.34185 1.37962 1.41852 1.45845 1.50001 1.54386 1.59009 1.63848 1.68819 1.7212 1.00001 1.0216 1.04555 1.07077 1.09729 1.12506 1.15409 1.18433 1.21582 1.24863 1.28282 1.31838 1.35534 1.39367 1.43341 1.47448 1.51692 1.56073 1.60598 1.65268 1.70094 1.75075 1.80207 1.85484 1.90901 1.96464 3.00003 2.99008 3.0022 3.02229 3.05025 3.08506 3.1261 3.17266 3.22488 3.28288 3.34697 3.41703 3.49277 3.57372 3.6596 3.74986 3.844 3.9422 4.04447 4.15104 4.26225 4.37793 4.49726 4.6193 4.74327 4.86924 0.0500229 0.0718969 0.0801138 0.0883906 0.0957994 0.10273 0.109083 0.115664 0.122503 0.129744 0.136998 0.144076 0.150801 0.157346 0.163456 0.169137 0.175076 0.181112 0.187463 0.194227 0.200929 0.206888 0.211981 0.21636 0.22083 0.243456 
fval =   -5.454 
 
Outer iteration:  5 
Min(hin):  -1.205817e-06 Max(abs(heq)):  2.656057e-05 
par:  0.949998 0.94967 0.965404 0.982355 1.00147 1.02233 1.045 1.06867 1.09333 1.11889 1.14582 1.1743 1.20454 1.23633 1.26995 1.30535 1.34184 1.37962 1.41851 1.45845 1.50001 1.54386 1.59008 1.63847 1.68818 1.72118 1.00001 1.02158 1.04553 1.07075 1.09727 1.12506 1.15409 1.18433 1.21583 1.24864 1.28281 1.31838 1.35534 1.39368 1.4334 1.47449 1.51692 1.56073 1.60597 1.65268 1.70094 1.75075 1.80207 1.85484 1.90901 1.96464 3.00003 2.99006 3.00219 3.02228 3.05024 3.08503 3.12608 3.17265 3.22487 3.28289 3.34698 3.41704 3.49277 3.57372 3.65959 3.74985 3.844 3.94219 4.04447 4.15104 4.26225 4.37793 4.49726 4.61929 4.74326 4.86923 0.0500173 0.0719141 0.080132 0.0883967 0.0958014 0.102735 0.109091 0.115667 0.122508 0.129749 0.136992 0.144075 0.150799 0.157349 0.163452 0.169142 0.175074 0.181119 0.187458 0.194231 0.200932 0.206887 0.211978 0.216363 0.220829 0.24346 
fval =   -5.454 
 
Outer iteration:  6 
Min(hin):  4.876042e-07 Max(abs(heq)):  3.669441e-06 
par:  0.949999 0.949624 0.965376 0.98234 1.00146 1.02231 1.04498 1.06865 1.09331 1.11888 1.14581 1.1743 1.20454 1.23633 1.26995 1.30535 1.34184 1.37962 1.41851 1.45845 1.50001 1.54386 1.59009 1.63848 1.68818 1.72118 1 1.02157 1.04552 1.07075 1.09727 1.12506 1.15408 1.18432 1.21582 1.24863 1.28281 1.31838 1.35534 1.39368 1.43341 1.47449 1.51692 1.56073 1.60597 1.65268 1.70093 1.75075 1.80207 1.85484 1.90901 1.96464 3 2.99001 3.00216 3.02226 3.05023 3.08504 3.12608 3.17265 3.22487 3.28288 3.34698 3.41703 3.49277 3.57372 3.65959 3.74985 3.844 3.94219 4.04447 4.15104 4.26224 4.37793 4.49726 4.61929 4.74327 4.86923 0.0500028 0.0719441 0.08015 0.0884088 0.0958113 0.102743 0.109097 0.115673 0.122514 0.129753 0.136997 0.144077 0.1508 0.157349 0.163452 0.169143 0.175077 0.181117 0.18746 0.194229 0.200927 0.206886 0.211978 0.21636 0.220829 0.243462 
fval =   -5.453 
 
Outer iteration:  7 
Min(hin):  4.307232e-07 Max(abs(heq)):  3.256118e-06 
par:  0.950001 0.949617 0.965372 0.982336 1.00146 1.02231 1.04499 1.06865 1.09331 1.11888 1.14581 1.1743 1.20454 1.23633 1.26995 1.30535 1.34184 1.37962 1.41851 1.45845 1.50001 1.54386 1.59009 1.63848 1.68819 1.72118 1 1.02156 1.04552 1.07075 1.09727 1.12506 1.15408 1.18433 1.21583 1.24863 1.28281 1.31838 1.35534 1.39368 1.4334 1.47449 1.51692 1.56073 1.60597 1.65268 1.70094 1.75075 1.80207 1.85484 1.90901 1.96464 3 2.99001 3.00215 3.02226 3.05022 3.08503 3.12607 3.17265 3.22487 3.28289 3.34698 3.41704 3.49277 3.57372 3.65959 3.74985 3.844 3.94219 4.04447 4.15104 4.26225 4.37793 4.49726 4.61929 4.74326 4.86923 0.050001 0.071947 0.0801522 0.0884087 0.0958095 0.102743 0.109097 0.115673 0.122513 0.129752 0.136995 0.144076 0.1508 0.157349 0.163451 0.169143 0.175076 0.181117 0.187459 0.19423 0.200929 0.206887 0.211978 0.21636 0.220829 0.243462 
fval =   -5.453 
 
Outer iteration:  8 
Min(hin):  -1.85032e-07 Max(abs(heq)):  1.00107e-06 
par:  0.949999 0.949616 0.965371 0.982337 1.00146 1.02231 1.04499 1.06865 1.09331 1.11888 1.14581 1.1743 1.20454 1.23633 1.26995 1.30535 1.34184 1.37962 1.41851 1.45845 1.50001 1.54386 1.59009 1.63848 1.68818 1.72118 1 1.02156 1.04552 1.07075 1.09727 1.12506 1.15408 1.18432 1.21583 1.24863 1.28281 1.31838 1.35534 1.39368 1.4334 1.47449 1.51692 1.56073 1.60597 1.65268 1.70093 1.75075 1.80207 1.85484 1.90901 1.96464 3 2.99 3.00215 3.02226 3.05022 3.08503 3.12607 3.17265 3.22487 3.28289 3.34698 3.41704 3.49277 3.57372 3.65959 3.74985 3.844 3.94219 4.04447 4.15104 4.26225 4.37793 4.49726 4.61929 4.74326 4.86923 0.0500009 0.0719479 0.0801521 0.088409 0.0958103 0.102743 0.109097 0.115673 0.122514 0.129752 0.136996 0.144076 0.1508 0.157349 0.163452 0.169143 0.175076 0.181118 0.18746 0.19423 0.200928 0.206886 0.211978 0.21636 0.220829 0.243461 
fval =   -5.453 
 
    t         C        Y        K          I
1   0 0.9500000 1.000000 3.000000 0.05000001
2   1 0.9496141 1.021564 2.990000 0.07194949
3   2 0.9653700 1.045523 3.002149 0.08015302
4   3 0.9823360 1.070746 3.022260 0.08840976
5   4 1.0014612 1.097272 3.050224 0.09581083
6   5 1.0223130 1.125056 3.085030 0.10274301
7   6 1.0449860 1.154083 3.126073 0.10909685
8   7 1.0686511 1.184324 3.172648 0.11567306
9   8 1.0933109 1.215825 3.224868 0.12251399
10  9 1.1188810 1.248633 3.282885 0.12975224
11 10 1.1458137 1.282809 3.346980 0.13699569
12 11 1.1742996 1.318376 3.417036 0.14407604
13 12 1.2045366 1.355337 3.492771 0.15080020
14 13 1.2363299 1.393679 3.573716 0.15734871
15 14 1.2699520 1.433404 3.659590 0.16345215
16 15 1.3053476 1.474490 3.749850 0.16914285
17 16 1.3418418 1.516918 3.843996 0.17507628
18 17 1.3796162 1.560734 3.942193 0.18111741
19 18 1.4185106 1.605970 4.044466 0.18745972
20 19 1.4584524 1.652682 4.151037 0.19422955
21 20 1.5000065 1.700935 4.262245 0.20092836
22 21 1.5438599 1.750746 4.377929 0.20688614
23 22 1.5900885 1.802066 4.497256 0.21197801
24 23 1.6384758 1.854836 4.619289 0.21636057
25 24 1.6881848 1.909014 4.743264 0.22082892
26 25 1.7211772 1.964639 4.869228 0.24346139
Utility = 5.453402 
 
The log prints the full vector \(x\) at every outer iteration. I have never seen a solver do that before. Obviously, this solver was designed for small problems.
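For context, each of these outer iterations minimizes the augmented Lagrangian \(f(x)+\lambda^\top h(x)+\frac{\mu}{2}\|h(x)\|^2\) without constraints and then updates the multipliers. A toy Python sketch on a two-variable problem of my own (not the alabama implementation):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: min x0^2 + x1^2  s.t.  x0 + x1 = 1   (optimum at [0.5, 0.5])
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: x[0] + x[1] - 1.0        # equality constraint, h(x) = 0

lam, mu = 0.0, 1.0                     # multiplier estimate, penalty weight
x = np.zeros(2)
for outer in range(10):
    # inner problem: unconstrained minimization of the augmented Lagrangian
    aug = lambda x: f(x) + lam * h(x) + 0.5 * mu * h(x)**2
    x = minimize(aug, x).x
    lam += mu * h(x)                   # first-order multiplier update
    mu *= 2.0                          # tighten the penalty

print(x)                               # close to [0.5, 0.5]
```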

Implementation 6: Julia, JuMP+Ipopt



Julia has the modeling tool JuMP [7], which can use the Ipopt NLP solver. This framework allows us to express our model in a rather straightforward manner. There were no real issues here.

Julia code
# 
# Julia version of the Ramsey Growth Model
#
using JuMP, Ipopt, DataFrames


#----------------------------------------------------------------
# data
#----------------------------------------------------------------

T = 26     # number of time periods (0..25)
ρ = 0.05   # discount factor
g = 0.03   # labor growth rate
δ = 0.02   # capital depreciation factor
K0 = 3.00  # initial capital
I0 = 0.05  # initial investment
C0 = 0.95  # initial consumption
L0 = 1.00  # initial labor
b = 0.25   # Cobb-Douglas coefficient
a = (C0 + I0)/ (K0^b * L0^(1-b))     # Cobb-Douglas coefficient

𝜏 = 0:T-1 # time periods 0..25

# weight factor for future utilities
β = (1+ρ).^(-𝜏)
β[T] = (1/ρ) * (1+ρ)^(1-T+1)   

# labor is exogenously determined using an exponential growth process
L = (1+g).^𝜏 * L0

#----------------------------------------------------------------
# model
#----------------------------------------------------------------

model = Model(Ipopt.Optimizer)

# decision variables
@variable(model, K[1:T] >= 0)       # capital
@variable(model, C[1:T] >= 0.001)   # consumption
@variable(model, I[1:T] >= 0)       # investment
@variable(model, Y[1:T] >= 0.001)   # output

# fix to initial values
fix(K[1],K0;force=true)
fix(C[1],C0;force=true)
fix(I[1],I0;force=true)

# initial point
set_start_value.(K, K0)
set_start_value.(C, C0)

@objective(model,    Max, sum(β[t]*log(C[t]) for t in 1:T))
@constraint(model,   production[t in 1:T],      Y[t] == a * (K[t]^b) * (L[t]^(1-b)))
@constraint(model,   allocation[t in 1:T],      Y[t] == C[t] + I[t])
@constraint(model,   accumulation[t in 1:T-1],  K[t+1] == (1-δ)*K[t] + I[t])
@constraint(model,   final,                     I[T] >= (g + δ) * K[T])

optimize!(model)

#----------------------------------------------------------------
# reporting
#----------------------------------------------------------------

results = DataFrame(
    t = 𝜏,
    C = value.(C),
    Y = value.(Y),
    K = value.(K),
    I = value.(I)
)

println(results)

println("W: ", objective_value(model))
Log and results

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.14.19, running with linear solver MUMPS 5.8.2.

Number of nonzeros in equality constraint Jacobian...:      200
Number of nonzeros in inequality constraint Jacobian.:        2
Number of nonzeros in Lagrangian Hessian.............:       50

Total number of variables............................:      101
                     variables with only lower bounds:      101
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:       77
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -1.0771592e+00 1.73e+00 7.53e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1 -1.0543359e+00 1.71e+00 7.43e+00  -1.0 7.43e+00    -  1.08e-02 1.27e-02h  1
   2 -5.2424573e-01 1.57e+00 6.93e+00  -1.0 1.50e+00    -  1.48e-02 8.05e-02f  1
   3  5.1546107e+00 1.29e-03 4.05e+00  -1.0 1.61e+00    -  2.10e-01 1.00e+00f  1
   4  5.3304284e+00 1.10e-02 2.36e-01  -1.0 8.53e-01    -  8.00e-01 1.00e+00f  1
   5  5.4552712e+00 3.33e-03 6.12e-02  -1.7 5.62e-01    -  9.50e-01 1.00e+00h  1
   6  5.4664981e+00 4.12e-03 4.10e-03  -2.5 6.58e-01    -  9.85e-01 1.00e+00h  1
   7  5.4544831e+00 1.53e-04 2.29e-04  -3.8 1.43e-01    -  1.00e+00 1.00e+00h  1
   8  5.4534642e+00 4.58e-06 1.74e-05  -5.7 2.21e-02    -  1.00e+00 1.00e+00h  1
   9  5.4534257e+00 2.63e-09 7.16e-08  -5.7 4.86e-04    -  1.00e+00 1.00e+00h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  5.4534276e+00 2.92e-10 3.59e-10  -8.6 1.86e-04    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 10

                                   (scaled)                 (unscaled)
Objective...............:  -5.4534275726820365e+00    5.4534275726820365e+00
Dual infeasibility......:   3.5908250119155526e-10    3.5908250119155526e-10
Constraint violation....:   2.9231839171472984e-10    2.9231839171472984e-10
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.9982780235857593e-09    2.9982780235857593e-09
Overall NLP error.......:   2.9982780235857593e-09    2.9982780235857593e-09


Number of objective function evaluations             = 11
Number of objective gradient evaluations             = 11
Number of equality constraint evaluations            = 11
Number of inequality constraint evaluations          = 11
Number of equality constraint Jacobian evaluations   = 11
Number of inequality constraint Jacobian evaluations = 11
Number of Lagrangian Hessian evaluations             = 10
Total seconds in IPOPT                               = 3.335

EXIT: Optimal Solution Found.
26×5 DataFrame
 Row │ t      C         Y        K        I         
     │ Int64  Float64   Float64  Float64  Float64
─────┼──────────────────────────────────────────────
   1 │     0  0.95      1.0      3.0      0.05
   2 │     1  0.950963  1.02156  2.99     0.0706008
   3 │     2  0.966444  1.04541  3.0008   0.0789612
   4 │     3  0.983589  1.07052  3.01975  0.086934
   5 │     4  1.00234   1.09692  3.04629  0.0945739
   6 │     5  1.02266   1.12459  3.07993  0.10193
   7 │     6  1.0445    1.15355  3.12026  0.109045
   8 │     7  1.06783   1.18379  3.1669   0.115959
   9 │     8  1.09261   1.21532  3.21952  0.122708
  10 │     9  1.11883   1.24815  3.27784  0.129321
  11 │    10  1.14647   1.28229  3.34161  0.135828
  12 │    11  1.1755    1.31775  3.4106   0.142253
  13 │    12  1.20593   1.35455  3.48464  0.148617
  14 │    13  1.23775   1.39269  3.56357  0.15494
  15 │    14  1.27095   1.43219  3.64724  0.161238
  16 │    15  1.30556   1.47308  3.73553  0.167525
  17 │    16  1.34156   1.51537  3.82834  0.173811
  18 │    17  1.37898   1.55909  3.92559  0.180103
  19 │    18  1.41784   1.60425  4.02718  0.186407
  20 │    19  1.45816   1.65089  4.13304  0.192724
  21 │    20  1.49997   1.69902  4.24311  0.199051
  22 │    21  1.5433    1.74868  4.35729  0.20538
  23 │    22  1.58819   1.79989  4.47553  0.211698
  24 │    23  1.63468   1.85267  4.59772  0.217985
  25 │    24  1.68283   1.90705  4.72375  0.224214
  26 │    25  1.72037   1.96305  4.85349  0.242674
W: 5.4534275726820365

JuMP does automatic differentiation for us. 

Notes:
  • Viewing the complete model is more difficult than I would expect. print(model) somewhat does the job. However, the result is incomplete:
       [[...82 constraints skipped...]] 
    I found no obvious way to tell print to do a better job. The output is not always correctly ordered:
       1 * log(C[1])) + (6.2013582056529994 * log(C[26])) + (0.31006791028264996 * log(C[25]))
    Obviously the superfluous parentheses are better left out. The developer was a bit lazy here.
    The initial levels are missing (these are very important for NLPs and deserve a prominent place in this report). I just want a complete textual representation of what the solver will receive.
  • As an experiment, I dropped the initial values for the nonlinear variables. GAMS+Ipopt has no problems in that case: GAMS projects the initial point onto the closest bound (starting from the default level of zero). I am not sure what JuMP does, but the resulting model did not converge, and Ipopt stopped on an iteration limit. Whatever JuMP does, it seems less robust than what GAMS does.
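The projection GAMS applies here can be mimicked with a single clip (a sketch of the behavior just described, not of GAMS internals):

```python
import numpy as np

# Default variable levels are zero; project them onto the bounds used in the
# model (C, Y >= 0.001; K, I >= 0) so the solver never starts where log()
# cannot be evaluated.
default_levels = np.zeros(4)                   # order: C, Y, K, I
lower = np.array([0.001, 0.001, 0.0, 0.0])
upper = np.full(4, np.inf)

start = np.clip(default_levels, lower, upper)
print(start)                                   # C and Y moved up to 0.001
```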

Implementation 7: Python, GAMSPy



This is a bit of a strange beast. GAMSPy [8] is a front-end to GAMS: statements are passed on to GAMS for execution. In that respect it is very different from Pyomo or JuMP, which are modeling systems in their own right. With GAMSPy, all data (except scalars) needs to be converted to GAMS parameters before it can be used in the model. That makes sense, as the indexing scheme in GAMS is very different from Python lists or NumPy arrays. In a way, using GAMSPy is like using an abstract model in Pyomo.

Python/GAMSPy code
#
# GAMSpy version 
#

import gamspy as gp

#----------------------------------------------------------------
# data
#----------------------------------------------------------------


T = 26     # time periods (0..25)
ρ = 0.05   # discount factor
g = 0.03   # labor growth rate
δ = 0.02   # capital depreciation factor
K0 = 3.00  # initial capital
I0 = 0.05  # initial investment
C0 = 0.95  # initial consumption
L0 = 1.00  # initial labor
b = 0.25   # Cobb-Douglas coefficient
a = (C0 + I0)/ (K0**b * L0**(1-b))     # Cobb-Douglas coefficient
# print(f"{a=}")

# weight factor for future utilities
β = [(1+ρ)**(-t) for t in range(T)]
β[T-1] = (1/ρ) * (1+ρ)**(1-(T-1))

# labor is exogenously determined using an exponential growth process
L = [(1+g)**t * L0 for t in range(T)]

#----------------------------------------------------------------
# model
#----------------------------------------------------------------

m=gp.Container()

t = gp.Set(m,name="t",records = ["period"+str(t) for t in range(T)])
data = gp.Parameter(m,name="data",domain=[t,"*"],records=   
                         [("period"+str(t),"L",L[t]) for t in range(T)] 
                       + [("period"+str(t),"beta",β[t]) for t in range(T)])

C = gp.Variable(m,name="C",domain=t,description="Consumption")
Y = gp.Variable(m,name="Y",domain=t,description="Output")
K = gp.Variable(m,name="K",domain=t,description="Capital")
I = gp.Variable(m,name="I",domain=t,description="Investment")
W = gp.Variable(m,name="W",description="Utility")

C.lo[t] = 0.001
K.lo[t] = 0.001
Y.lo[t] = 0
I.lo[t] = 0

C.fx["period0"] = C0
K.fx["period0"] = K0    
I.fx["period0"] = I0

production = gp.Equation(m,name="production",domain=t,description='Cobb-Douglas production function')
allocation = gp.Equation(m,name="allocation",domain=t,description='household chooses between consumption and saving')
accumulation = gp.Equation(m,name="accumulation",domain=t,description='capital accumulation')
utility = gp.Equation(m,name="utility",description='discounted utility')
final = gp.Equation(m,name="final",domain=t,description='minimal investment in final period')

utility[...]  =                         W == gp.Sum(t,data[t,"beta"]*gp.math.log(C[t]))
production[t] =                         Y[t] == a * (K[t]**b) * (data[t,"L"]**(1-b))
allocation[t] =                         Y[t] == C[t] + I[t]
accumulation[t].where[gp.Ord(t) < T] =  K[t+1] == (1-δ)*K[t] + I[t]
final[t].where[gp.Ord(t) == T] =        I[t] >= (g+δ)*K[t]

model = gp.Model(m,equations=[utility,production,allocation,accumulation,final],
                 objective=W,sense=gp.Sense.MAX,problem=gp.Problem.NLP)

summary = model.solve()
print(summary)


#----------------------------------------------------------------
# reporting
#----------------------------------------------------------------

results = gp.Parameter(m,name="results",domain=[t,"*"])
results[t,"C"] = C[t]
results[t,"Y"] = Y[t]    
results[t,"K"] = K[t]
results[t,"I"] = I[t]
print("")
print(results.pivot().round(3))
w = W.records.at[0,"level"]
print(f"W={w}")
Output
      Solver Status  Model Status  Objective  ...  Model Type   Solver Solver Time
0  NormalCompletion  OptimalLocal   5.453428  ...         NLP  CONOPT4       0.047

[1 rows x 8 columns]

              C      Y      K      I
period0   0.950  1.000  3.000  0.050
period1   0.951  1.022  2.990  0.071
period2   0.966  1.045  3.001  0.079
period3   0.984  1.071  3.020  0.087
period4   1.002  1.097  3.046  0.095
period5   1.023  1.125  3.080  0.102
period6   1.045  1.154  3.120  0.109
period7   1.068  1.184  3.167  0.116
period8   1.093  1.215  3.220  0.123
period9   1.119  1.248  3.278  0.129
period10  1.146  1.282  3.342  0.136
period11  1.176  1.318  3.411  0.142
period12  1.206  1.355  3.485  0.149
period13  1.238  1.393  3.564  0.155
period14  1.271  1.432  3.647  0.161
period15  1.306  1.473  3.736  0.168
period16  1.342  1.515  3.828  0.174
period17  1.379  1.559  3.926  0.180
period18  1.418  1.604  4.027  0.186
period19  1.458  1.651  4.133  0.193
period20  1.500  1.699  4.243  0.199
period21  1.543  1.749  4.357  0.205
period22  1.588  1.800  4.476  0.212
period23  1.635  1.853  4.598  0.218
period24  1.683  1.907  4.724  0.224
period25  1.720  1.963  4.853  0.243
W=5.45342753712209
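A note on the weights β in the listing above: the final weight is not \((1+\rho)^{-(T-1)}\) but absorbs the entire tail of the infinite-horizon discounted sum, so the finite model approximates the infinite one [4]. A quick pure-Python check (no GAMSPy needed) confirms this, and reproduces the coefficient 6.2013582... that was visible in the JuMP print output earlier:

```python
rho, T = 0.05, 26

# weights exactly as computed in the listing above
beta = [(1 + rho) ** (-t) for t in range(T)]
beta[T - 1] = (1 / rho) * (1 + rho) ** (1 - (T - 1))

# the last weight should equal the tail sum over t = T-1, T, T+1, ...
# of (1+rho)^(-t); truncate the sum far enough out for convergence
tail = sum((1 + rho) ** (-t) for t in range(T - 1, 2000))
assert abs(beta[T - 1] - tail) < 1e-9
```

Without this terminal correction, the model would behave as if the world ends at period 25 and would consume the capital stock; the final constraint on investment serves the same purpose on the capital side.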

Notes:
  • Obviously, this solves very fast. As it is using GAMS underneath, the total time should be about the same as GAMS itself takes, plus a little overhead.
  • On the modeling side it looks very much like GAMS, just a slightly different syntax.
  • By default, the solver log is not shown. IMHO, it is better to turn this on. Besides being entertaining, the solver log may contain useful feedback about the model: things like scaling, infeasibility of early iterations, etc.
 

Conclusions

This small and simple model turned out to be more interesting than expected when reimplementing the GAMS version in other languages and with different tools and solvers. It is a model with a nonlinear objective and with both linear and nonlinear constraints. In addition, we have bounds, fixed variables, initial values, and some reporting. Unfortunately, we don't have any sparsity in the variables; that would be another big challenge for some systems. I am very much used to what a modeling system like GAMS offers, and to how the more serious large-scale NLP solvers behave. This comparison shows that using some tools and solvers is not at all an effortless and smooth experience, even though this is a small and easy model. Things can be much, much worse if the model is larger and more complex.

I am rather disappointed with scipy and R. The underlying issues are: 
  • the abstraction level: a single vector \(x\) as the variable is not suitable for practical models,
  • life is not pleasant without automatic differentiation, 
  • the quality of the solvers can be lacking,
  • it can take lots of time and effort to make things work smoothly. This takes time away from the more important work: developing models.  
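To make the abstraction-level point concrete: scipy.optimize.minimize only sees one flat vector, so the model's C, I, and K variables must be packed into it and unpacked from it by hand in every objective and constraint function. A hypothetical sketch of that bookkeeping (the pack/unpack helpers are mine, not from any scipy API):

```python
import numpy as np

T = 26  # time periods, as in the model above

def pack(C, I, K):
    # flatten the three decision variables into one vector x
    return np.concatenate([C, I, K])

def unpack(x):
    # recover C, I, K from the flat vector by fixed offsets
    return x[:T], x[T:2*T], x[2*T:3*T]

x = pack(np.full(T, 0.95), np.full(T, 0.05), np.full(T, 3.0))
C, I, K = unpack(x)
assert C.shape == (T,) and I[0] == 0.05 and K[0] == 3.0
```

Every change to the model (say, adding a variable) means revisiting all of these offsets. A modeling system does this bookkeeping for you, which is precisely the point.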

References


  1. Frank P. Ramsey - Wikipedia
  2. F.P. Ramsey, A Mathematical Theory of Saving, The Economic Journal, December 1928.
  3. A. Manne: GAMS/MINOS: Three examples, manuscript, Department of Operations Research, Stanford University, 1986
  4. Lau, Morten I. & Pahlke, Andreas & Rutherford, Thomas F., 2002. "Approximating infinite-horizon models in a complementarity format: A primer in dynamic general equilibrium analysis," Journal of Economic Dynamics and Control, Elsevier, vol. 26(4), pages 577-609, April. See also: https://www.mpsge.org/primer/primer.pdf.
  5. https://docs.scipy.org/doc/scipy/tutorial/optimize.html
  6. https://pyomo.readthedocs.io/en/stable/
  7. https://jump.dev/JuMP.jl/stable/
  8. https://gamspy.readthedocs.io/en/latest/index.html
