Thursday, April 21, 2016

Dump solution data into a MySQL database I

For a project where I needed to write a solution data set from a GAMS GDX file into a MySQL database, I developed the tool gdx2mysql. The call is:

> gdx2mysql -i results.gdx -s tmp -u username -p password

Here tmp is the target schema (or database). The default server is localhost, which is what we use in this application. The tool tries to pick the latest installed ODBC driver. When using the trnsport model from the GAMS model library (yes, transport without the ‘a’: a relic from the 8.3 file names we were once limited to), we see:

--- Job trnsport.gms Start 04/21/16 11:46:43 24.6.1 r55820 WEX-WEI x86 64bit/MS Windows
GAMS 24.6.1   Copyright (C) 1987-2016 GAMS Development. All rights reserved
Licensee: Erwin Kalvelagen                               G150803/0001CV-GEN
          Amsterdam Optimization Modeling Group                     DC10455
--- Starting compilation
--- trnsport.gms(71) 3 Mb
--- Starting execution: elapsed 0:00:00.047
--- trnsport.gms(45) 4 Mb
--- Generating LP model transport
--- trnsport.gms(66) 4 Mb
---   6 rows  7 columns  19 non-zeroes
--- Executing CPLEX: elapsed 0:00:00.248

IBM ILOG CPLEX   24.6.1 r55820 Released Jan 18, 2016 WEI x86 64bit/MS Windows
Cplex 12.6.3.0

Reading data...
Starting Cplex...
Space for names approximately 0.00 Mb
Use option 'names no' to turn use of names off
Tried aggregator 1 time.
LP Presolve eliminated 1 rows and 1 columns.
Reduced LP has 5 rows, 6 columns, and 12 nonzeros.
Presolve time = 0.00 sec. (0.00 ticks)

Iteration      Dual Objective            In Variable           Out Variable
     1              73.125000    x(seattle.new-york) demand(new-york) slack
     2             119.025000     x(seattle.chicago)  demand(chicago) slack
     3             153.675000    x(san-diego.topeka)   demand(topeka) slack
     4             153.675000  x(san-diego.new-york)  supply(seattle) slack
LP status(1): optimal
Cplex Time: 0.00sec (det. 0.01 ticks)

Optimal solution found.
Objective :         153.675000

--- Restarting execution
--- trnsport.gms(66) 2 Mb
--- Reading solution for model transport
--- Executing after solve: elapsed 0:00:00.625
--- trnsport.gms(70) 3 Mb
--- GDX File C:\Users\Erwin\Documents\Embarcadero\Studio\Projects\gdx2mysql\Win32\Debug\results.gdx
--- trnsport.gms(71) 3 Mb
GDX2MySQL v 0.1
Copyright (c) 2015-2016 Amsterdam Optimization Modeling Group LLC

   GDX Library      24.6.1 r55820 Released Jan 18, 2016 VS8 x86 32bit/MS Windows
   GDX:Input file: results.gdx
   GDX:Symbols: 12
   GDX:Uels: 5
   GDX:Loading Uels
   SQL:Selected driver: MySQL ODBC 5.3 ANSI Driver
   SQL:Connection string: Driver={MySQL ODBC 5.3 ANSI Driver};Server=localhost;User=xxx;Password=xxx
   SQL:RDBMS: MySQL Community Server (GPL)
   SQL:RDBMS version: 5.6.26-log
   -----------------------
   i (2 records)
   j (3 records)
   a (2 records)
   b (3 records)
   d (6 records)
   c (6 records)
   x (6 records)
   supply (2 records)
   demand (3 records)
   scalarparameters (1 records)
   scalarvariables (1 records)
   scalarequations (1 records)

*** Status: Normal completion
--- Job trnsport.gms Stop 04/21/16 11:46:50 elapsed 0:00:06.470

To see what is happening behind the scenes, use the -v (verbose) flag:

GDX2MySQL v 0.1
Copyright (c) 2015-2016 Amsterdam Optimization Modeling Group LLC

   GDX Library      24.6.1 r55820 Released Jan 18, 2016 VS8 x86 32bit/MS Windows
   GDX:Input file: results.gdx
   GDX:Symbols: 12
   GDX:Uels: 5
   GDX:Loading Uels
   SQL:Selected driver: MySQL ODBC 5.3 ANSI Driver
   SQL:Connection string: Driver={MySQL ODBC 5.3 ANSI Driver};Server=localhost;User=xxx;Password=xxx
      set autocommit=0
      select @@version_comment
   SQL:RDBMS: MySQL Community Server (GPL)
      select @@version
   SQL:RDBMS version: 5.6.26-log
      select count(*) from information_schema.schemata where schema_name = 'tmp'
   -----------------------
   i (2 records)
      drop table if exists `tmp`.`i`
      create table `tmp`.`i`(`i` varchar(9))
      insert into `tmp`.`i` values (?)
      sqlexecute(2 times)
      commit
   j (3 records)
      drop table if exists `tmp`.`j`
      create table `tmp`.`j`(`j` varchar(8))
      insert into `tmp`.`j` values (?)
      sqlexecute(3 times)
      commit
   a (2 records)
      drop table if exists `tmp`.`a`
      create table `tmp`.`a`(`i` varchar(9),`value` double)
      insert into `tmp`.`a` values (?,?)
      sqlexecute(2 times)
      commit
   b (3 records)
      drop table if exists `tmp`.`b`
      create table `tmp`.`b`(`j` varchar(8),`value` double)
      insert into `tmp`.`b` values (?,?)
      sqlexecute(3 times)
      commit
   d (6 records)
      drop table if exists `tmp`.`d`
      create table `tmp`.`d`(`i` varchar(9),`j` varchar(8),`value` double)
      insert into `tmp`.`d` values (?,?,?)
      sqlexecute(6 times)
      commit
   c (6 records)
      drop table if exists `tmp`.`c`
      create table `tmp`.`c`(`i` varchar(9),`j` varchar(8),`value` double)
      insert into `tmp`.`c` values (?,?,?)
      sqlexecute(6 times)
      commit
   x (6 records)
      drop table if exists `tmp`.`x`
      create table `tmp`.`x`(`i` varchar(9),`j` varchar(8),`level` double,`lo` double,`up` double,`marginal` double)
      insert into `tmp`.`x` values (?,?,?,?,?,?)
      sqlexecute(6 times)
      commit
   supply (2 records)
      drop table if exists `tmp`.`supply`
      create table `tmp`.`supply`(`i` varchar(9),`level` double,`lo` double,`up` double,`marginal` double)
      insert into `tmp`.`supply` values (?,?,?,?,?)
      sqlexecute(2 times)
      commit
   demand (3 records)
      drop table if exists `tmp`.`demand`
      create table `tmp`.`demand`(`j` varchar(8),`level` double,`lo` double,`up` double,`marginal` double)
      insert into `tmp`.`demand` values (?,?,?,?,?)
      sqlexecute(3 times)
      commit
   scalarparameters (1 records)
      drop table if exists `tmp`.`scalarparameters`
      create table `tmp`.`scalarparameters`(`name` varchar(1),`value` double)
      insert into `tmp`.`scalarparameters` values (?,?)
      sqlexecute(1 times)
      commit
   scalarvariables (1 records)
      drop table if exists `tmp`.`scalarvariables`
      create table `tmp`.`scalarvariables`(`name` varchar(1),`level` double,`lo` double,`up` double,`marginal` double)
      insert into `tmp`.`scalarvariables` values (?,?,?,?,?)
      sqlexecute(1 times)
      commit
   scalarequations (1 records)
      drop table if exists `tmp`.`scalarequations`
      create table `tmp`.`scalarequations`(`name` varchar(4),`level` double,`lo` double,`up` double,`marginal` double)
      insert into `tmp`.`scalarequations` values (?,?,?,?,?)
      sqlexecute(1 times)
      commit

There are a few interesting issues we deal with:

  • scalar parameters, variables and equations are not stored as individual tables but rather collected in tables scalarparameters, scalarvariables and scalarequations.
  • I delete and recreate the tables to make sure they have the correct structure. This is simpler than checking the existing structure (columns and their types) and doing only a truncate (truncate is faster than delete). Obviously the user needs the permissions to do so.
  • I try to insert many rows before doing a commit.
  • I try to map GAMS special values to reasonable things we can actually store in the database:
    [image: table mapping GAMS special values to database values]
  • All identifiers are protected by backticks (`) in case a keyword is used as a name.
  • Each string column is given a width just large enough to store its longest string. E.g. for set ‘i’ this is 9 (san-diego), while for set ‘j’ it is 8 (new-york).
  • All values are stored using type double.
  • This version uses a prepared statement and then does N calls to sqlexecute. We could speed this up a little by doing one sqlexecute per (small) batch; this would require some arrays to hold the data. Another alternative is to skip the prepare step altogether and build one long string containing the insert statement (MySQL allows several records to be inserted with a single insert statement).
  • Large data sets need special consideration. Typically we would like to employ some bulk-insert method; in MySQL this is load data infile. A next post will have some timings.
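The special-value mapping can be sketched as follows. The numeric encodings below are the standard GDX special-value constants; the chosen replacements (NULL for NA/UNDF, 0 for EPS, the largest finite double for ±INF) are my assumption for illustration, not necessarily the mapping gdx2mysql actually uses (that is in the table above).

```python
import sys

# Standard GDX encodings of the GAMS special values.
GMS_SV_UNDEF = 1.0e300   # undefined
GMS_SV_NA    = 2.0e300   # not available
GMS_SV_PINF  = 3.0e300   # +infinity
GMS_SV_MINF  = 4.0e300   # -infinity
GMS_SV_EPS   = 5.0e300   # epsilon

def map_special(v):
    """Return a value safe to store in a DOUBLE column (None -> SQL NULL).
    Replacement choices here are assumptions, not gdx2mysql's actual mapping."""
    if v in (GMS_SV_UNDEF, GMS_SV_NA):
        return None                    # store as NULL
    if v == GMS_SV_PINF:
        return sys.float_info.max      # largest representable finite double
    if v == GMS_SV_MINF:
        return -sys.float_info.max
    if v == GMS_SV_EPS:
        return 0.0                     # EPS is numerically zero
    return v
```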
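The varchar sizing rule is simply the length of the longest label in the symbol, which for trnsport reproduces the widths seen in the create statements above:

```python
def varchar_width(uels):
    """Width of the varchar column: just long enough for the longest label."""
    return max(len(u) for u in uels)

# trnsport: set i = {seattle, san-diego}, set j = {new-york, chicago, topeka}
i_width = varchar_width(["seattle", "san-diego"])            # 9 -> varchar(9)
j_width = varchar_width(["new-york", "chicago", "topeka"])   # 8 -> varchar(8)
```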
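The multi-record insert alternative mentioned above can be sketched like this. The helper name and the (deliberately simplistic) literal formatting are mine; real code would use the driver's own escaping.

```python
def multirow_insert(schema, table, columns, rows):
    """Build one INSERT statement covering all rows, instead of executing
    a prepared single-row statement N times."""
    def q(ident):
        return "`%s`" % ident          # backtick-protect identifiers
    def lit(v):
        if v is None:
            return "NULL"
        if isinstance(v, str):
            return "'%s'" % v.replace("'", "''")   # naive string escaping
        return repr(v)
    cols = ",".join(q(c) for c in columns)
    vals = ",".join("(%s)" % ",".join(lit(v) for v in row) for row in rows)
    return "insert into %s.%s(%s) values %s" % (q(schema), q(table), cols, vals)
```

For the capacity parameter a of trnsport this produces a single statement, e.g. `insert into `tmp`.`a`(`i`,`value`) values ('seattle',350.0),('san-diego',600.0)`, saving N-1 round trips at the cost of building a (possibly long) string.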
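Since load data infile reads from a file, a bulk loader would first dump the symbol to a delimited file and then issue one statement. A hypothetical sketch (the function name is mine, the statement is built but not executed, and a real implementation would need proper quoting plus the local-infile option enabled on client and server):

```python
import csv
import os
import tempfile

def bulk_load_sql(schema, table, rows):
    """Dump rows to a temporary CSV and return the LOAD DATA statement
    that would bulk-insert them. Sketch only: statement is not executed."""
    fd, path = tempfile.mkstemp(suffix=".csv")
    with os.fdopen(fd, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    sql = ("load data local infile '%s' into table `%s`.`%s` "
           "fields terminated by ','" % (path.replace("\\", "/"), schema, table))
    return path, sql
```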