A Randomizing Algorithm: the Job-Shop Problem

When we introduced temporal constraints in Part I, Getting Started with Scheduler, we emphasized that under certain conditions, the algorithm that Scheduler uses to search for a solution is complete: it either reports that the constraints are incompatible, so that no solution can be found, or it finds a solution that satisfies the temporal constraints. The algorithms that we looked at for the job-shop problems in Chapter 15 and Chapter 16 were complete in this sense: they either found an optimal solution or they reliably reported that no solution could be found.

One disadvantage of such complete optimization algorithms is that their completeness requires a systematic exploration of the entire search space. A consequence of that thoroughness is that if a "mistake" is made at one of the early decisions, the algorithm explores all of the possible consequences of this "mistake" before it can reconsider the decision. Of course, as we've emphasized, constraint propagation reduces the amount of exploration that is performed. However, even the best constraint propagation techniques are not sufficient to guarantee that some hard job-shop scheduling problems can be solved to optimality in a reasonable amount of CPU time.

This observation suggests that perhaps we should abandon the idea of guaranteeing the optimality of a solution and instead settle for a solution that is "good enough" in some sense relevant to our problem. The simplest way to exercise this option is to stop the algorithm after a given amount of computation time and keep the best solution found so far.
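For example, the outer loop of such a time-limited strategy can be as simple as the following sketch. It is written in plain C++ rather than with the Scheduler classes; tryFindSchedule is a hypothetical stand-in for the constraint-based search, and the time budget and initial makespan bound are arbitrary values chosen for illustration.

// A minimal sketch, not ILOG Scheduler API: keep improving the schedule until
// a fixed wall-clock budget runs out, then keep the best makespan found so far.
#include <chrono>
#include <iostream>
#include <optional>

// Hypothetical stand-in for one constraint-based search: try to find a schedule
// whose makespan is strictly better than the given bound.
std::optional<int> tryFindSchedule(int makespanBound) {
    // ... real search omitted; this stub just lets the sketch compile and run ...
    if (makespanBound <= 900) return std::nullopt;  // pretend no better schedule exists
    return makespanBound - 10;                      // pretend an improvement was found
}

int main() {
    using Clock = std::chrono::steady_clock;
    const auto budget = std::chrono::seconds(10);   // assumed time budget
    const auto start  = Clock::now();

    int bestMakespan = 1000;                        // assumed initial upper bound
    while (Clock::now() - start < budget) {
        auto improved = tryFindSchedule(bestMakespan);
        if (!improved) break;                       // no schedule below the current bound
        bestMakespan = *improved;                   // keep the improved schedule
    }
    std::cout << "Best makespan within the time budget: " << bestMakespan << "\n";
}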

Another, complementary, alternative consists of performing a large number of iterations of an algorithm that is stopped after a very small number of backtracks. Because the amount of backtracking is limited, an iteration may fail either because there is no solution with the given makespan, or simply because the algorithm could not find such a solution within the allowed number of backtracks. To allow several iterations with the same makespan bound to take place, we "randomize" the process of constructing a solution.

As a result, several iterations with the same makespan bound follow different paths in the search space; the nth of these iterations may succeed even when the (n - 1) previous iterations have failed.
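The following sketch illustrates this randomized-restart scheme in plain C++, independently of the Scheduler classes. The function limitedRandomizedSearch, the fail limit, the number of iterations, and the initial makespan bound are all hypothetical placeholders; in a real application, each call would run a randomized constraint-based construction that gives up after the specified number of backtracks.

// A minimal sketch, not ILOG Scheduler API: many short, randomized iterations,
// each limited to a very small number of backtracks, all aiming at the same
// (or a tightening) makespan bound.
#include <iostream>
#include <optional>
#include <random>

// Hypothetical stand-in for one limited search: try to build a schedule whose
// makespan is at most `bound`, giving up after `failLimit` backtracks. The
// random engine drives the randomized ordering choices during construction.
std::optional<int> limitedRandomizedSearch(int bound, int failLimit,
                                           std::mt19937& rng) {
    // ... real constraint-based construction omitted; this stub "succeeds"
    // with some probability so the sketch compiles and runs ...
    (void)failLimit;
    std::bernoulli_distribution success(0.3);
    if (success(rng)) return bound;   // pretend a schedule meeting the bound was found
    return std::nullopt;              // pretend the backtrack limit was exhausted
}

int main() {
    std::mt19937 rng(42);             // fixed seed, for reproducibility of the sketch
    const int failLimit  = 5;         // very small backtrack limit per iteration
    const int iterations = 100;       // assumed number of randomized iterations

    int bound = 1200;                 // assumed initial makespan bound
    for (int i = 0; i < iterations; ++i) {
        if (auto result = limitedRandomizedSearch(bound - 1, failLimit, rng)) {
            bound = *result;          // an improved schedule was found; tighten the bound
            std::cout << "Iteration " << i << ": makespan improved to " << bound << "\n";
        }
        // A failed iteration proves nothing: a later iteration with the same
        // bound may succeed because it follows a different randomized path.
    }
    std::cout << "Best makespan after all iterations: " << bound << "\n";
}

Seeding the random engine explicitly makes the sequence of iterations reproducible, which is convenient when experimenting with the backtrack limit and the number of iterations.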