The idea is to select the variable whose feasible domain has been reduced the least, relative to its original bounds. The motivation is to pick a variable that has not been branched on before, or that has been branched on on fewer occasions.
This method does not require solving any optimisation problems; it only requires that the problem be well bounded. For problems that are unbounded from below, from above, or both, the method is unlikely to perform well.
Assume that we are testing a variable \({x\in[x_{lb},x_{ub}]}\). Suppose this variable appears, with tightened bounds \({x\in[x^N_{lb},x^N_{ub}]}\) (tightened either by bounds-tightening techniques or by previous branching), in a Node \(N\) on which we are about to branch. The axis reduction score of \(x\) is given by \({\frac{x^N_{ub}-x^N_{lb}}{x_{ub}-x_{lb}}}\).
By repeating this process for all variables, we obtain an axis reduction score for each. It is then straightforward to select the variable with the largest score, i.e. the least-reduced axis.
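The scoring and selection steps above can be sketched as follows. This is a minimal illustration, not the source's implementation; the function names and the dict-of-bounds representation are my own assumptions.

```python
def axis_reduction_scores(original_bounds, node_bounds):
    """Compute the axis reduction score for each variable.

    Both arguments map a variable name to a (lb, ub) tuple:
    original_bounds holds the original bounds, node_bounds the
    tightened bounds at the current node. The score is the fraction
    of the original domain that remains, so the largest score marks
    the least-reduced (least-branched) variable.
    Assumes a well-bounded problem, i.e. ub > lb for every variable.
    """
    scores = {}
    for var, (lb, ub) in original_bounds.items():
        n_lb, n_ub = node_bounds[var]
        scores[var] = (n_ub - n_lb) / (ub - lb)
    return scores


def select_branching_variable(original_bounds, node_bounds):
    """Return the variable with the largest axis reduction score."""
    scores = axis_reduction_scores(original_bounds, node_bounds)
    return max(scores, key=scores.get)
```

Note that the division is only safe because the method presupposes well-bounded variables, which matches the caveat above about unbounded problems.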
It is important to note that this procedure can handle integer and binary variables as well. The only difference from continuous variables is ensuring that:
- in the case of binaries, a branched variable becomes a fixed variable;
- in the case of integers, a branched variable remains a variable with integer bounds.
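A sketch of how child-node bounds might be produced for each variable type, reflecting the two rules above. The helper name, the `var_type` strings, and the use of a relaxed solution value as the split point are illustrative assumptions, not taken from the source.

```python
import math


def child_bounds(var_type, lb, ub, value):
    """Illustrative child-node bounds after branching on one variable.

    var_type: "binary", "integer", or "continuous" (assumed labels).
    value: the variable's value in a relaxation, used as the split point.
    Returns a list of (lb, ub) bounds, one tuple per child node.
    """
    if var_type == "binary":
        # A branched binary is a fixed variable: one child per fixed value.
        return [(0, 0), (1, 1)]
    if var_type == "integer":
        # A branched integer remains a variable with integer bounds
        # on each side of the split point.
        split = math.floor(value)
        return [(lb, split), (split + 1, ub)]
    # Continuous: split the interval at the chosen value.
    return [(lb, value), (value, ub)]
```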
A simple example: consider the continuous variables originally defined as \({x\in[0,10]}\) and \(y\in[0,8]\). In Node \(N\) of the branch-and-bound tree, the bounds have tightened to \({x\in[0,7]}\) and \({y\in[1,7]}\), giving scores \(7/10=0.7\) for \(x\) and \(6/8=0.75\) for \(y\). The largest score corresponds to the least reduced axis, so between the two variables we would choose \(y\) to branch on at Node \(N\).
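The arithmetic of this example, written out directly (variable names are mine):

```python
# Axis reduction scores for the example: remaining width / original width.
x_score = (7 - 0) / (10 - 0)   # x reduced from [0, 10] to [0, 7]
y_score = (7 - 1) / (8 - 0)    # y reduced from [0, 8] to [1, 7]

# The largest score is the least-reduced axis: that is the branching choice.
branch_on = "y" if y_score > x_score else "x"
print(x_score, y_score, branch_on)  # 0.7 0.75 y
```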