1 December 2023

We continue our article series looking at a common modelling issue: The solution is optimal, but not practical.

In the previous article of this series, we explicitly formulated either/or constraints by manually creating $$\text{BigM}$$ constraints. That technique works, though it can be a bit tricky to get right.

An alternative approach, since we're using Pyomo, is to use the Generalized Disjunctive Programming (GDP) extension that is part of Pyomo. GDP can represent a variety of logical implications, like either/or, and automatically translate them into a form that a solver can work with.

In this article, we describe how the GDP extension can be used to represent either/or constraints in a simple linear program. Along the way, we compare $$\text{BigM}$$ constraints that we create manually with $$\text{BigM}$$ constraints created automatically by the GDP extension. Then we use the GDP extension to build either/or constraints in our paper coverage situation, as variant Model 5c.

Articles in this series

The articles in this "Optimal but not practical" series are:

• First attempt. Model 1: Non-linear model that finds optimal solutions for small data sets, but not for the full data set.
• Linear, partial data set. Model 2: Linearized model using a small sub-set of the full data. Finds local optima.
• Full data set. Model 3: Extends Model 2 to use the full data set. Finds globally optimal solutions, but requires a large amount of memory.
• Column generation. Model 4: Variation of Model 2, with randomly-generated columns. May find globally optimal solutions, though not guaranteed to do so.
• Either/or BigM. Model 5a and 5b: Explicitly models rotation of the items using BigM constraints. Three times larger than Model 3, so difficult to solve.
• Either/or Disjunction. Model 5c: Explicitly models rotation of the items using Pyomo's Generalized Disjunctive Programming (GDP) extension.
• Virtual machine. We set up a Google Compute Engine with 128 GB of RAM to run Model 3 on data sets larger than we can run on our local modelling computer.

Download the model

The models described in this series of articles are built in Python using the Pyomo library.

The files are available on GitHub.

Situation

The situation is the same as for the previous models in this series.

Disjunctions in a simple linear program

Before we apply GDP to our paper coverage situation, it is useful to illustrate the concepts by building a simple linear program that has either/or constraints.

Model A: Simple linear program

We start with the linear program, Model A, shown in Figure 1. Model A has two variables, $$x_1$$ and $$x_2$$, both within bounds of 0 and 10. There are two $$\le$$ constraints, so the feasible region is as shown by the shaded area. Our objective is to maximize, with objective function contours from 40 to 65 as shown.

We can represent Model A in Python/Pyomo, using a compact notation, as shown in Figure 2.

When we run the Python code, we find that the optimal solution occurs at $$x_1 = 5.56$$ and $$x_2 = 5.67$$, with an objective function value of 53.28, as shown in Figure 1.

Model B: Add either/or constraints

Now we add either/or constraints to our model: $$x_1 \le 2 \text{ OR } x_1 \ge 7$$ as Constraint 3 and Constraint 4 respectively. This condition splits the feasible region into two separate parts, as shown in Figure 3. Note that the optimal solution for Model A is no longer feasible.

It is not possible to represent Model B as a linear program, because the feasible region is disjoint. However, we can use a binary variable to choose between the two parts of the feasible region, then include $$\text{BigM}$$ constraints to represent the either/or condition (like we did in the previous article).

To add $$\text{BigM}$$ constraints manually, we add code for a $$\text{BigM}$$ parameter, a binary variable, and Constraints 3 and 4 using a $$\text{BigM}$$ structure, as shown in Figure 4.

We need to define values for our $$\text{BigM}$$ parameters. This can be tricky, as we want them to be as tight as possible while creating the behaviour we want. In this situation, our logic for choosing the $$\text{BigM}$$ values is:

• Constraint 3: $$x_1 \le 2 + \text{BigM}_0 \times Choose$$ where $$0 \le x_1 \le 10$$. When $$Choose = 0$$, $$x_1$$ is constrained to be $$\le 2$$. When $$Choose = 1$$, if we set $$\text{BigM}_0$$ equal to 8, then the right-hand-side of this constraint equals the upper bound of $$x_1$$ (i.e., 10) so, effectively, the BigM part does nothing.
• Constraint 4: $$x_1 \ge 7 + \text{BigM}_1 \times (1 - Choose)$$ where $$0 \le x_1 \le 10$$. When $$Choose = 1$$, $$x_1$$ is constrained to be $$\ge 7$$. When $$Choose = 0$$, if we set $$\text{BigM}_1$$ equal to -7, then the right-hand-side of this constraint equals the lower bound of $$x_1$$ (i.e., 0) so, effectively, the BigM part does nothing.

In summary, $$\text{BigM}_0 = 8, \text{BigM}_1 = -7$$. Note that, like in the previous article, a $$\ge$$ constraint requires a negative $$\text{BigM}$$.

We should check that the constraints behave as we intend:

• If $$Choose = 0$$, then Constraint 3 becomes $$x_1 \le 2$$ and Constraint 4 becomes $$x_1 \ge 0$$.
• If $$Choose = 1$$, then Constraint 3 becomes $$x_1 \le 10$$ and Constraint 4 becomes $$x_1 \ge 7$$.

Both conditions behave as intended.
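The check above is simple enough to verify with a few lines of arithmetic:

```python
# Verify that the manual BigM constraints reduce to the intended
# bounds for each value of the binary Choose variable.
BigM0, BigM1 = 8, -7

for choose in (0, 1):
    upper = 2 + BigM0 * choose          # Constraint 3: x1 <= upper
    lower = 7 + BigM1 * (1 - choose)    # Constraint 4: x1 >= lower
    print(f"Choose = {choose}: x1 <= {upper}, x1 >= {lower}")

# Choose = 0: x1 <= 2,  x1 >= 0  (left part of the feasible region)
# Choose = 1: x1 <= 10, x1 >= 7  (right part of the feasible region)
```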

Since $$x_1 = 10$$ and $$x_2 = 10$$ are both infeasible, we could make the $$\text{BigM}$$ values even tighter – though they're tight enough for our purpose.

When we run this Python code, we find that the optimal solution occurs at $$x_1 = 7.00$$ and $$x_2 = 3.50$$, with an objective function value of 50.75, as shown in Figure 3. The binary variable has the value 1, indicating that we've chosen the part of the feasible region for $$x_1 \ge 7$$.

Model C: Using the Generalized Disjunctive Programming (GDP) extension

Now we revise Model B to use Pyomo's Generalized Disjunctive Programming (GDP) extension. Instead of manually defining $$\text{BigM}$$ parameters and a binary $$Choose$$ variable, we specify the disjunction relationship and let the GDP extension do the work for us.

To do that, we replace Constraint 3 and Constraint 4 with simple constraints, each wrapped in a gdp.Disjunct() component, that specify our requirements that $$x_1 \le 2$$ or $$x_1 \ge 7$$. Then we specify the $$\text{OR}$$ relationship as gdp.Disjunction(expr = [Model.d1, Model.d2]).

There are solvers that can work directly with disjunctions, such as GDPopt. However, for solving our paper coverage model, we want to use the HiGHS solver because of the size of the model, so we need to transform the disjunctions into a form that HiGHS can work with. That is done using pyo.TransformationFactory('gdp.bigm').apply_to(Model), which transforms the disjunctions into $$\text{BigM}$$ constraints.

When we run the Python code in Figure 5, Model C finds the same solution as Model B, as expected.

In the Jupyter notebook for Model C, available on GitHub, we print the model after it has been transformed. Then we interpret the model and demonstrate, via some algebra, that the $$\text{BigM}$$ constraints created by the GDP extension, and the $$\text{BigM}$$ constraints that we manually created in Model B, are identical. The $$\text{BigM}$$ values automatically created for Model C are also identical to the values we defined, presumably chosen by the GDP extension via logic similar to how we chose the $$\text{BigM}$$ values for Model B.

Paper coverage: Model 5c

Formulation for Model 5c

We now return to our modelling of the paper coverage situation.

Model 5b, in the previous article, has manually-created $$\text{BigM}_i$$ constraints, like Model B above. We can use the GDP extension to automatically create the $$\text{BigM}_i$$ constraints and binary variables, like Model C above.

That is, as shown in Figure 6, we create rules for portrait and landscape orientation constraints, then a disjunction that says to "OR" those constraints. Finally, we transform the disjunctions into a $$\text{BigM}_i$$ structure that the HiGHS solver can solve.

Note that getting the GDP extension's syntax correct can be difficult. Before building Model 5c, we hadn't used the GDP extension. We appreciate the assistance of a user on the Operations Research Stack Exchange in helping us get the details right.

Solution for Model 5c

When we solve Model 5c, the result is the same as for Model 5b – as expected.

Interestingly, the performance of Model 5c is in between the performance of Model 5a (single $$\text{BigM}$$) and Model 5b (indexed $$\text{BigM}_i$$) – though Model 5c is still much slower than Model 3. The performance difference is primarily because Model 3 has only one-third as many variables, as it takes advantage of sorting the data rather than explicitly modelling rotation of the items. That is, using the HiGHS solver (total run time, including presolve time):

• Model 3. Starts with 1,010,000 variables, reduced to 339,057 by presolve. 90% of run time in presolve phase. Solves to optimality in 8 minutes.
• Model 5a. Starts with 3,030,100 variables, reduced to 1,041,708 by presolve. 97% of run time in presolve phase. Solves to optimality in 1.5 hours.
• Model 5b. Starts with 3,030,100 variables, reduced to 2,019,960 by presolve. 16% of run time in presolve phase. Solves to optimality in 11.0 hours.
• Model 5c. Starts with 3,030,100 variables, reduced to 1,041,708 by presolve. 99% of run time in presolve phase. Solves to optimality in 3.4 hours.

A key difference between Model 5b and the other models, including Model 5c with the GDP extension, is the proportion of time spent in the presolve phase. For some unknown reason, the HiGHS solver decided not to spend much time on presolve for Model 5b; consequently, Model 5b has a much longer total run time.

After the presolve phase, Model 5c has exactly the same number of variables as Model 5a – though it took twice as long to get there, compared with Model 5a. The actual solution time for Model 5c, after presolve, was only 140 seconds. Both Model 5b and Model 5c use indexed $$\text{BigM}_i$$ values, which is usually a good idea, though in this case they are slower to solve than using a single $$\text{BigM}$$ value for the disjunction constraints.

Next steps

In an earlier article in this series, we were able to use Model 3 to find globally optimal solutions for our full data set of 100 items. That's fortunate, because data with 100 items is close to the maximum size that we can solve on our modelling computer due to the high memory requirements of Model 3. On our modelling computer, the largest data set we've successfully solved has 115 items.

But what if we need to solve a data set with a larger number of items? What if we need 200 items, or 1,000 items? Model 4 can solve data sets larger than 100 items, but it uses a heuristic random column generation technique that finds very good solutions, though they are not guaranteed to be globally optimal. How can we find guaranteed globally optimal solutions for larger data sets?

In the next article, we return to Model 3 and attempt to solve it with larger data sets. We do that by running Model 3 on a virtual machine in the cloud. The virtual machine can be configured to have much more RAM than our humble modelling computer has. Hopefully, having more memory will be sufficient to enable us to solve larger problems. We describe the process of setting up a virtual machine and attempting to use it to solve Model 3 with larger data sets.

Conclusion

In this article, we describe using Pyomo's Generalized Disjunctive Programming (GDP) extension for a simple linear program that has either/or constraints. The GDP extension automatically transforms the either/or disjunctions to regular mixed integer linear programming $$\text{BigM}$$ constraints.

We also apply the GDP extension to our paper coverage situation. The run time performance of the GDP model is in between the performance of the two manually-created model versions that we described in the previous article.

In the next article, we'll look at solving larger data sets by running our models on a virtual machine in the cloud, to take advantage of the higher specification computers that are available via cloud computing.

If you would like to know more about this model, or you want help with your own models, then please contact us.

References

This series of articles addresses a question asked on Reddit:

Require improved & optimized solution, 14 September 2023.