Benefits of Domain Reduction Techniques (Part 1)

One of the relatively cheap domain reduction methods is constraint propagation. In general, domain reduction techniques are key to the performance of global optimization solvers: they tighten variable bounds without changing the set of feasible solutions, can detect infeasibility, reduce the number of nodes generated and, most importantly, provide tighter lower bounds prior to any branching.

To quantify its effectiveness, we applied constraint propagation to the CoconutLib problem set. Without constraint propagation and with a time limit of 30 s per problem, Octeract Engine Beta 0.8.02 located the global solution of 284 problems, the largest comprising 699 variables (gouldqp2). With constraint propagation enabled, 340 problems (19% more) were solved to global optimality, and the largest problem solved (1199 variables, optcdeg2) was nearly twice the size of the largest solved without constraint propagation.

Table 1: Performance statistics with and without constraint propagation
|                                  | Without CP | With CP   |
|----------------------------------|------------|-----------|
| Found Feasible Solution          | 931 (72%)  | 916 (71%) |
| Found Global Solution            | 284 (22%)  | 340 (26%) |
| Feasible Problems that Timed Out | 258 (45%)  | 188 (40%) |
Figure 1: Performance comparison with and without constraint propagation

What is Constraint Propagation?

In a nutshell: consider the following set of linear constraints and variable bounds:
\begin{equation} \begin{aligned} & \sum_{i=1}^{k} \alpha_{ji}x_i \leq b_j, \quad j=1,\ldots,n \\ & \underline{x}_i \leq x_i \leq \overline{x}_i \\ \end{aligned} \end{equation}
A set of new constraints per variable is implied:
\begin{equation} \begin{aligned} \text{Upper bound candidate: } & \overline{c}_m = \frac{1}{\alpha_{jm}}\left(b_j-\sum_{i\ne m}\min\left(\alpha_{ji}\underline{x}_i,\alpha_{ji}\overline{x}_i\right)\right), \quad \alpha_{jm}>0 \\ \text{Lower bound candidate: } & \underline{c}_m = \frac{1}{\alpha_{jm}}\left(b_j-\sum_{i\ne m}\min\left(\alpha_{ji}\underline{x}_i,\alpha_{ji}\overline{x}_i\right)\right), \quad \alpha_{jm}<0 \\ & \underline{c}_m \leq x_m \leq \overline{c}_m \\ \end{aligned} \end{equation}
If the implied bounds are tighter than a variable's existing bounds, i.e. the conditions below hold, the new, tighter bounds replace them. Repeating the procedure may reduce the bounds further, so the technique is applied iteratively until no bound changes.
\begin{equation} \begin{aligned} & \overline{c}_m \leq \overline{x}_m \\ & \underline{x}_m \leq \underline{c}_m \end{aligned} \end{equation}
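The procedure above can be sketched as a small routine that sweeps over the constraints, computes the candidate bounds, and keeps any that tighten the current box, repeating until a full sweep changes nothing. All names here (`propagate`, `max_sweeps`, and so on) are illustrative and not part of any solver's actual API.

```python
def propagate(a, b, lo, hi, max_sweeps=10, tol=1e-12):
    """Tighten variable bounds [lo, hi] for constraints sum_i a[j][i]*x_i <= b[j].

    a: list of coefficient rows, b: list of right-hand sides.
    Returns the tightened (lo, hi), or None if infeasibility is detected.
    """
    n = len(lo)
    for _ in range(max_sweeps):
        changed = False
        for row, rhs in zip(a, b):
            for m in range(n):
                if row[m] == 0:
                    continue
                # Interval minimum of the remaining terms of the constraint.
                s_min = sum(min(row[i] * lo[i], row[i] * hi[i])
                            for i in range(n) if i != m)
                cand = (rhs - s_min) / row[m]
                if row[m] > 0 and cand < hi[m] - tol:
                    hi[m] = cand          # tighter upper bound candidate accepted
                    changed = True
                elif row[m] < 0 and cand > lo[m] + tol:
                    lo[m] = cand          # tighter lower bound candidate accepted
                    changed = True
                if lo[m] > hi[m] + tol:
                    return None           # empty domain: constraint set infeasible
        if not changed:
            break                         # fixed point reached
    return lo, hi
```

For example, propagating x1 + x2 <= 4 with x1, x2 in [0, 10] tightens both upper bounds to 4, while propagating -x1 <= -5 (i.e. x1 >= 5) with x1 in [0, 4] empties the domain and reports infeasibility.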
NOTE: The test was performed on an Ubuntu 18.04 cluster of 3 networked machines, each with 64 GB of RAM and Intel Xeon Gold 6130 CPUs @ 2.10 GHz (32 physical cores per machine); all other Octeract DGO Engine options were set to default.
