In this model, the main variable is x, a binary variable indicating whether a person is assigned to a facility at time t. The GAMS listing file is not a very convenient way to look at it:
---- VAR x  assign provider to facility

                                  LOWER   LEVEL   UPPER  MARGINAL

5 .7 .Hospitalist.04/01/2019        .       .     1.0000   3.0000
5 .7 .Hospitalist.04/02/2019        .       .     1.0000   3.0000
5 .7 .Hospitalist.04/03/2019        .       .     1.0000   3.0000
5 .7 .Hospitalist.04/04/2019        .       .     1.0000   9.0000
5 .7 .Hospitalist.04/05/2019        .       .     1.0000   9.0000
5 .7 .Hospitalist.04/06/2019        .       .     1.0000   9.0000
5 .7 .Hospitalist.04/07/2019        .       .     1.0000   9.0000
5 .7 .Hospitalist.04/08/2019        .       .     1.0000   9.0000
5 .7 .Hospitalist.04/09/2019        .       .     1.0000   9.0000
5 .7 .Hospitalist.04/10/2019        .       .     1.0000   9.0000
5 .7 .Hospitalist.04/11/2019        .       .     1.0000   3.0000
5 .7 .Hospitalist.04/12/2019        .       .     1.0000   3.0000
These are just a few records: there are more than 40,000 in total. Obviously, this format is fairly useless for our application.
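A first post-processing step is simply discarding the records that carry no information: for a binary assignment variable, only the entries with a level of 1 matter. The sketch below illustrates this in Python; the record layout (provider id, shift id, role, date, level) is an assumption for illustration, not the actual schema of the model.

```python
# Hypothetical record layout: (provider_id, shift_id, role, date, level).
# The solver reports every x(i,j,t) record, but only level-1 entries
# represent actual assignments.
records = [
    ("5", "7", "Hospitalist", "04/01/2019", 0.0),
    ("5", "7", "Hospitalist", "04/02/2019", 1.0),
    ("6", "7", "Hospitalist", "04/02/2019", 1.0),
]

# Keep only the active assignments; the 0.5 threshold guards against the
# slightly fractional levels MIP solvers can return for binary variables.
assigned = [r[:4] for r in records if r[4] > 0.5]
print(assigned)  # only the two records with level 1 remain
```

This alone typically shrinks the 40,000+ records to a far smaller set, which is then the input for the readable views discussed next.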
The default display of the GDX data viewer is not much better:
To get a more compact and meaningful visualization of the results, we need to post-process them: for example, we prefer names instead of IDs. Here I show a visualization in Excel, where we can create different views. Note: I have anonymized the names in this demo (in the real application we use real names, which adds immediate context).
|Excel based visualization|
Not only is this effort useful for viewing and understanding the results (important for debugging and improving the optimization model), but it also allows me to communicate the results to the client. Although I ship results in the form of CSV files (to be consumed by a database tool), a CSV file is difficult to interpret directly. This Excel spreadsheet allows us to look at our large, complex solution in a much more efficient and effective way. Of course, I have automated things, so after an optimization run I can generate this spreadsheet using a script. Well, rather, two scripts: one is written in GAMS and generates a large sparse data cube (with about 50k entries); the second is written in VBA and does the coloring, among other things.
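The core of the post-processing step, going from a sparse cube of solution records to a provider-by-date grid with readable names, can be sketched with pandas. The column names and the id-to-name mapping below are illustrative assumptions, not the actual schema used in the application.

```python
import pandas as pd

# Hypothetical sparse "data cube" of active assignments, one row per
# (provider, date) combination; the real cube has ~50k entries.
cube = pd.DataFrame(
    {
        "provider_id": ["5", "5", "6"],
        "date": ["04/01/2019", "04/02/2019", "04/01/2019"],
        "facility": ["Hospitalist", "Hospitalist", "ER"],
    }
)

# Replace opaque ids with readable names (anonymized here, as in the demo).
names = {"5": "Dr. A", "6": "Dr. B"}
cube["provider"] = cube["provider_id"].map(names)

# One row per provider, one column per date: the calendar-style layout
# of the Excel view. Cells with no assignment come out empty (NaN).
calendar = cube.pivot(index="provider", columns="date", values="facility")
print(calendar)
```

From here, writing the grid to a worksheet (and applying the coloring that the VBA script handles in the real application) is mechanical.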
The advantage of using Excel over a dedicated tool is that it makes results easier to share. In the business world, practically everyone is familiar with Excel, and emailing spreadsheets around is much more accepted than sending, say, executables.
Although creating this tool is not exactly cheap, being able to inspect results in a meaningful way, and to communicate about them, is a big payoff. It is not unusual for such tools to develop into something more important than a debugging aid for an optimization model: a decision support tool in their own right.
As an aside: developing this optimization model turned out to be an excellent way to improve the underlying database. Optimization models look at all the data in context, instead of record by record. This puts much more emphasis on correct data than traditional database applications do. Data that initially looks quite decent may actually contain more errors and inconsistencies than you would think, and an optimization model is a good tool to unearth them.
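The point about context can be made concrete with a small hypothetical example: each record below is perfectly valid on its own, yet taken together they assign one provider to two facilities on the same day. A per-record validity check would never flag this; a cross-record check (which is essentially what an optimization model's constraints perform) does.

```python
from collections import Counter

# Hypothetical data: every record passes a row-level check (valid ids,
# valid dates), but provider "5" is double-booked on 04/01/2019.
assignments = [
    ("5", "Hospitalist", "04/01/2019"),
    ("5", "ER", "04/01/2019"),          # conflicts with the record above
    ("6", "Hospitalist", "04/01/2019"),
]

# Count (provider, date) pairs; any count > 1 is an inconsistency that
# only becomes visible when the records are viewed together.
load = Counter((p, d) for p, _, d in assignments)
conflicts = [key for key, n in load.items() if n > 1]
print(conflicts)  # [('5', '04/01/2019')]
```

An infeasible or nonsensical optimization result traced back to data like this is exactly how such inconsistencies surface in practice.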
Developing a mathematical programming application is often much more than just doing mathematical programming.