2 August 2022 (2,002 words)
In this article we continue the Python Production mix series, using the Pyomo library. Specifically, we build Model 3, which improves Model 2 by:
- Extracting the data from the model, and putting it in an external file.
- Implementing better handling of the solve process.
- Expanding the output to include additional information.
Each of these improvements is an important step in developing a practical design that can be used for operational models.
Articles in this series
Articles in the Python Production mix series:
- Python optimization Rosetta Stone
- Production mix - Model 1, Pyomo concrete
- Production mix - Model 2, Pyomo separate data
- Production mix - Model 3, Pyomo external data
- Production mix - Model 4, Pyomo json file
- Production mix - Model 5, Pyomo using def
- Production mix - Model 6, Pyomo abstract
- Production mix - Model 7, PuLP
- Production mix - Model 8, OR-Tools
- Production mix - Model 9, Gekko
- Production mix - Model 10, CVXPY
- Production mix - Model 11, SciPy
- Production mix - Conclusions
Download the model
The Python code and data for this model are available in the following files:
- Jupyter notebook: production-model-3.ipynb
- Python code: production-model-3.py
- Data: productiondata3.py
The "Jupyter notebook" file contains a formatted combination of Python code and markdown code – this file should be opened and run in Jupyter Lab. We describe setting up our Python environment, including Jupyter Lab and various Python libraries, in the article Set up a Python modelling environment.
The "Python code" file is a plain text file containing only the Python code of this model. Download this file if you have a non-Jupyter environment for running Python programs.
The "Data" file is a plain text file containing the model's data, in Python code format.
The model files are also available on GitHub.
Formulation for Model 3
For this model, we're using the same general formulation that we used for Model 2, as shown in Figure 1.
Model 3 Python code
The first task is to import the libraries that are needed for our program. As shown in Figure 2, in addition to the Pyomo library, we import the pandas library. We'll use pandas when writing the model's output.
In this model we've put the data in an external Python file, as shown in Figure 3. The file contains the data we used for Model 2, plus the model's name and options for the solver.
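A sketch of what such a data file might contain is shown below. The names follow those mentioned in this article (Name, Engine, TimeLimit, VarBounds); the values, and any data carried over from Model 2, are illustrative assumptions rather than the actual contents of productiondata3.py.

```python
# productiondata3.py - hypothetical sketch of the external data file.
# Values are illustrative; the real file also holds the data used for
# Model 2 (resource availability, margins, and so on).

Name = 'Production mix - Model 3'   # the model's name
Engine = 'cbc'                      # solver to use, e.g. 'cbc' or 'glpk'
TimeLimit = 60                      # generic time limit, in seconds
VarBounds = (0, 100)                # lower and upper bounds on the variables
```

Because this is ordinary Python, the model can load it with a plain import statement.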
To read the data, we simply import it as we would any other Python code, as shown in Figure 4.
As shown in Figure 5, we continue to use a Pyomo concrete model. The model's name comes from the external file. Since we put all the data in an external file, we need nothing further in this section.
Define the model
The model definition, as shown in Figure 6, is identical to Model 2.
In the external data file, we specify the solver to use and a time limit option for the solver. Since different solvers may have different options, we need to translate the generic time limit data value into the chosen solver's specific option. We do this in Figure 7, where we first specify the solver to use (via the Engine object from the data file), then specify the options sec and tmlim, representing a limit on the solve time, for solvers CBC and GLPK respectively. Note that some solvers specify a time limit in milliseconds, rather than seconds, so we may also need to translate the units to match what the solver is expecting (e.g. TimeLimit * 1000).
Note that this simple model solves in a fraction of a second, so a 60-second time limit has no effect. But for models that take longer to solve, setting a time limit can be useful.
When we call Solver.solve, we defer loading the solution by specifying load_solutions = False. We do this to avoid the program crashing if there is no solution to load.
We've also added the option tee, which displays the solver's output as it runs. This can be useful for seeing how the solver is progressing towards a solution, or for diagnosing problems with the solve process. Try changing the option's value to True to see what it does.
Before writing any output, we need to process the result of the solve to see what happened, as shown in Figure 8.
In this case, we want to know whether there is a solution to write, and the condition under which the solver stopped – so we create three Boolean variables to help us decide what to output.
We get the solver's condition from its termination_condition property, from which we can decide whether the result is optimal or whether the solver stopped on either a time limit or the maximum number of iterations. There are other possible stopping conditions, such as the solver encountering an error – in those conditions, we usually won't have a solution to write.
Note that various solvers are not consistent in how they report hitting a time limit or maximum number of iterations. Here we make no distinction, though sometimes it might matter which condition occurs.
If the solver stopped on a time or iterations limit, it may not have found a solution. Therefore, we try to load a solution. If there is no solution to load, then this attempt will fail, so we run the except clause, where we indicate that there is no solution to write. If the load succeeds, then we have a solution to write. We also get the lower and upper bounds on the objective function, as these can be helpful for indicating how close a feasible solution is to being optimal.
If the solver stopped with an optimal solution, then it should return a solution for us to output – though we allow for the possibility that this case also fails.
Given information about the outcome of the solve process, we can now decide what output to write, as shown in Figure 9:
- Irrespective of the solver's condition, we output the model's name, the solve status, and the solver we used.
- If the solver stopped on a time or iterations limit, then we can write some potentially useful information about the bounds on the objective function value.
- If there is a solution to write, then we write the same output that we wrote for Model 2.
- If there is no solution available to write, then we say so. We also output the model, as it might be helpful in diagnosing whatever issues prevented the solver from finding a solution (setting the tee option to True, mentioned above, may also be useful).
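As a rough sketch of the pandas-based output, something like the following could assemble the solution into a table (the product names and values here are placeholders, not Model 3's actual results):

```python
import pandas as pd

# Hypothetical solution values; in the real model these come from the
# solved Pyomo variables
production = {'Discs': 6.0, 'Orbs': 57.0}

# Assemble the output as a DataFrame, one row per product
results = pd.DataFrame.from_dict(production, orient='index',
                                 columns=['Production'])
results.index.name = 'Product'
print(results)
```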
When we find an optimal solution, the output is shown in Figure 10. This output is the same as for Model 2, apart from the model's name.
We can make this model infeasible by, for example, setting the lower bound on the variables to 30. We do that by changing the data file to say VarBounds = (30, 100) (and saving the file). If we run the model again, then the output is as shown in Figure 11, where we state that the model is infeasible, that no solution has been loaded, and we output the model (including the bounds of 30 and 100 on the variables).
Remember to change the bounds back to 0 and 100 in the data file.
Evaluation of this model
Model 3 is a significant improvement compared with Model 2.
Having an external data file allows us to easily change the data independently of the model. That is, we can either overwrite the data file with new data, or we can have several data files from which we choose one to solve.
Having attempted to solve the model, this version gives us more control over the output. There may or may not be a solution available to write out, so we need to handle either outcome. We can also choose what information to write, depending on the solver's condition.
There is still more that we can do to improve the model, which we'll continue exploring in the next article of this series.
In the next article, we'll put the data in an external json file, which we may receive if the data is created by some automated process. We'll also read the data into the Model object, rather than separate objects, and we'll make the model definition more consistent.
This article continues our exploration of optimization modelling in Python. We've improved Model 2, but there's still more to do. In the next article, we'll continue improving the design.
If you would like to know more about this model, or you want help with your own models, then please contact us.