IBM ILOG Solver User's Manual > Local Search > Minimizing Talent Wait Cost Using LNS > Tips on using Large Neighborhood Search > The choice of fragment

The basic rule is that fragments should be chosen with some random element. Doing so ensures a large variety of fragments. This does not mean, however, that fragments should be chosen completely at random. Often, from looking at the problem, you can determine which types of fragments are likely to lead to better solutions. You can then bias your randomized process towards those types of fragment.

One important question is: "How large should a fragment be?" The advantages of a small fragment are that it can be searched more quickly and that, if an improving solution exists within it, the completion goal is more likely to find it. The advantage of a larger fragment is that it is more likely that an improving solution does indeed exist within it.

The ideal size depends on the problem and on the stage of the search. For example, when there are many improving solutions, a small fragment can suffice, as there is a good probability that even a small fragment contains an improving solution. As search progresses and improving solutions become scarcer, a larger fragment size may be needed. However, this must be traded off against the fact that the larger the fragment, the harder it is to find, within the search effort available, any improving solution it contains. So, ideally the fragment should be just large enough to have a reasonable chance of containing an improving solution, and no larger.

Some more complex schemes try to learn the best size of fragment by increasing it after a certain number of iterations with no improvement, and decreasing it when improving solutions are being found easily. However, one very simple technique which removes the burden of choice is to choose the size of the fragment randomly (perhaps in some prescribed range) each time a fragment is generated. Certainly, this approach is not optimal: a good number of the fragments generated will be too large or too small. Equally, though, a good number of fragments will fall in the acceptable range of sizes, and so provide good candidates for finding improved solutions. This technique has the advantage of being simple and robust to changes in problem data which may call for larger or smaller fragments than is typical.
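For illustration, the following C++ sketch (not part of the Solver API) shows both approaches: a fragment size drawn uniformly at random from a prescribed range each time, and an adaptive variant that grows the size after a run of unproductive iterations. The bounds minSize and maxSize and the failure threshold are assumptions to be tuned for the problem at hand.

    #include <random>

    // Illustrative sketch only: choose a fragment size uniformly at random
    // in a prescribed range each time a fragment is generated.
    int chooseFragmentSize(std::mt19937& rng, int minSize, int maxSize) {
        std::uniform_int_distribution<int> dist(minSize, maxSize);
        return dist(rng);
    }

    // Adaptive variant: grow the fragment after a run of iterations with no
    // improvement, and shrink it again once improvements are found easily.
    int adaptFragmentSize(int size, int failuresInARow, int minSize, int maxSize) {
        if (failuresInARow > 20 && size < maxSize) return size + 1;  // 20 is arbitrary
        if (failuresInARow == 0 && size > minSize) return size - 1;
        return size;
    }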

One simple way to decide what to place in the fragment is to decide completely at random. That is, each variable has the same probability of being added to the fragment as any other variable. You can try this method for the lesson that you have just completed (add each scene variable with probability 0.5, for example). You should see that although it does not perform as well as the method given in the lesson, it does not perform very badly either. Thus, in the absence of any better ideas, it is acceptable to generate the fragment completely at random. However, you will see below how you can do better when the problem has some structure.
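As a minimal sketch, assuming the decision variables are simply identified by the indices 0 to nVars-1, completely random fragment generation can be written in C++ as follows (illustrative code only, not the Solver API):

    #include <random>
    #include <vector>

    // Illustrative sketch only: add each variable to the fragment
    // independently with the same probability p (for example, p = 0.5).
    std::vector<int> randomFragment(int nVars, double p, std::mt19937& rng) {
        std::bernoulli_distribution coin(p);
        std::vector<int> fragment;
        for (int i = 0; i < nVars; ++i)
            if (coin(rng))
                fragment.push_back(i);   // variable i will be relaxed
        return fragment;
    }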

Quite often, for a given solution, it is possible to identify "good" and "bad" parts of the solution. For example, in a technician dispatching application, a measure of the quality of a technician's schedule might be the total time spent working at customer sites divided by the total travelling time between customers. In this case, you would want to favor the inclusion of the poorest parts of the solution in the fragment. This does not mean that the "best" parts of the solution should never be added to the fragment; indeed their inclusion, together with poorer parts, may be necessary in order to produce a better solution that mixes elements of both. The idea is merely that the poorer parts of the solution should be easier to optimize, so a bias towards including them in the fragment with higher probability should lead to an improved solution more quickly.
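As an illustration, the following C++ sketch (not the Solver API) assumes a per-element quality score quality[i], such as the work-to-travel ratio of each technician's route, and uses a higher inclusion probability for elements whose quality falls below a chosen threshold. The parameters pGood, pBad, and threshold are assumptions to be tuned.

    #include <random>
    #include <vector>

    // Illustrative sketch only: bias fragment membership towards the "bad"
    // parts of the current solution. Elements scoring below the threshold
    // are included with the higher probability pBad (pBad > pGood).
    std::vector<int> biasedFragment(const std::vector<double>& quality,
                                    double threshold, double pGood, double pBad,
                                    std::mt19937& rng) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<int> fragment;
        for (int i = 0; i < (int)quality.size(); ++i) {
            double p = (quality[i] < threshold) ? pBad : pGood;
            if (u(rng) < p)
                fragment.push_back(i);
        }
        return fragment;
    }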

The previous point gave a criterion for preferring to include certain variables in the fragment (those involved in "bad" parts of the current solution). Often, however, a judgment about which variables are desirable to add to the fragment depends on the variables that have already been added. The basic idea is that the set of variables added to the fragment should have some interdependence: when a variable in the fragment changes its value from that in the current solution, respecting the problem constraints and/or improving the objective should require some other part of the fragment to change value too. If this property holds, the fragment is said to be compact. If this interdependence were not present, a smaller fragment could have been chosen, leaving out the parts that have little or no relationship to the others: even if they were in the fragment, they would be unlikely to need to change value.

As an example, suppose you have a selection of objects of different sizes, and you need to decide which subset of those objects should go into a container so as to minimize the amount of free space left in the container. This problem can be modeled with a 0-1 variable for each object indicating whether or not that object is in the container. For most of the search, the solutions found will have very little free space (except perhaps at the beginning, when it is easy to produce a better solution). Now, clearly there is little point in relaxing only variables of value 1 (objects in the container) or only variables of value 0 (objects not in the container). In the former case, the only option would be to reduce the number of objects in the container by setting variables formerly at 1 to 0; this must make the solution worse. In the latter case, the only option is to add objects to the container (without removing others) by setting variables formerly at 0 to 1; however, this is unlikely to be possible since, as assumed, for most of the search the free space in the container is limited. On reflection, you can see that the fragment must contain both variables of value 0 and variables of value 1, so that some objects can be removed from the container and others added in their place. This leads to a simple rule: when the fragment already contains a surplus of variables at one value, the probability of adding a new variable with the same value should be diminished.
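One possible C++ rendering of this rule is sketched below (illustrative only). It assumes the current solution is given as a vector of 0/1 values, and halves the inclusion probability for each unit of surplus of a value already present in the fragment; this particular scaling is just one reasonable choice among many.

    #include <algorithm>
    #include <random>
    #include <vector>

    // Illustrative sketch only: build a fragment of roughly targetSize 0-1
    // variables, penalizing the value (0 or 1) already over-represented.
    std::vector<int> balancedFragment(const std::vector<int>& values, // current 0/1 solution
                                      int targetSize, std::mt19937& rng) {
        std::vector<int> order(values.size());
        for (int i = 0; i < (int)order.size(); ++i) order[i] = i;
        std::shuffle(order.begin(), order.end(), rng);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<int> fragment;
        int count[2] = {0, 0};   // how many 0s and 1s are in the fragment so far
        for (int i : order) {
            if ((int)fragment.size() >= targetSize) break;
            int v = values[i];
            int surplus = std::max(0, count[v] - count[1 - v]);
            double p = 0.5 / (1 << std::min(surplus, 30));  // halve p per unit of surplus
            if (u(rng) < p) {
                fragment.push_back(i);
                ++count[v];
            }
        }
        return fragment;
    }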

Below, some different possibilities for the structure of the fragment are described, depending on the type of problem.

Sequence-based problems, like the one presented in this chapter or the car sequencing problem described in Using the Distribute Constraint: Car Sequencing, often have strong interactions (in terms of constraints or contributions to the objective function) between neighbors in the sequence. In this case, therefore, a good fragment is often one that covers a contiguous segment of the sequence (as was used in this lesson), since this allows the greatest flexibility for rearranging neighboring cells in the sequence.
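A C++ sketch of such a fragment generator, for a sequence of positions 0 to seqLength-1, might look as follows (illustrative only; the segment length could itself be drawn at random, as discussed earlier):

    #include <random>
    #include <vector>

    // Illustrative sketch only: relax a contiguous segment of the sequence,
    // chosen at a random start position. Assumes segLength <= seqLength.
    std::vector<int> segmentFragment(int seqLength, int segLength, std::mt19937& rng) {
        std::uniform_int_distribution<int> startDist(0, seqLength - segLength);
        int start = startDist(rng);
        std::vector<int> fragment;
        for (int pos = start; pos < start + segLength; ++pos)
            fragment.push_back(pos);   // the variable at this position is relaxed
        return fragment;
    }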

In geographical problems (where there is a notion of distance between objects), one often has to minimize some total distance traveled between objects in the solution as one makes single or multiple "tours". In this case, poorer parts of the solution can be seen as objects which are visited consecutively but are far apart. As for related objects, a typical method of producing a compact fragment is to add objects to the fragment which are close together. This rationale comes from the observation that swapping the positions of two distant objects in a route (or across two distinct routes) considerably increases the total distance in any reasonable solution.
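For illustration, the following C++ sketch builds such a fragment from a randomly chosen seed object together with its k nearest neighbors, assuming each object has planar coordinates; the distance measure and the value of k are assumptions, not part of the lesson's code.

    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <vector>

    struct Point { double x, y; };

    // Illustrative sketch only: a compact fragment for a routing problem,
    // made of a random seed object and its k nearest neighbors, so that the
    // relaxed visits can be re-sequenced among themselves.
    std::vector<int> nearbyFragment(const std::vector<Point>& pts, int k,
                                    std::mt19937& rng) {
        std::uniform_int_distribution<int> pick(0, (int)pts.size() - 1);
        int seed = pick(rng);
        auto dist = [&](int i) {
            return std::hypot(pts[i].x - pts[seed].x, pts[i].y - pts[seed].y);
        };
        std::vector<int> idx(pts.size());
        for (int i = 0; i < (int)idx.size(); ++i) idx[i] = i;
        std::sort(idx.begin(), idx.end(),
                  [&](int a, int b) { return dist(a) < dist(b); });
        if ((int)idx.size() > k + 1)
            idx.resize(k + 1);   // the seed itself plus its k closest neighbors
        return idx;
    }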

Problems such as timetabling can have a row- and column-based structure. These problems invariably have constraints on the rows as well as on the columns. Suppose that for such a problem each cell in the matrix corresponds to a variable. Such problems benefit from fragments which include variables from the same column or the same row, so effective fragment generation methods often include such a bias. This can be done by weighting probabilities so that rows and columns already represented in the fragment are favored. Another method is to choose a small (random) number of rows and columns and then select variables only from these. This ensures that the fragment is spread over only a few rows and columns.
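The second method might be sketched in C++ as follows (illustrative only); rowsToPick, colsToPick, and the per-cell probability p are assumed parameters to be tuned:

    #include <algorithm>
    #include <random>
    #include <utility>
    #include <vector>

    // Illustrative sketch only: restrict the fragment to a few randomly
    // chosen rows and columns; each cell lying on one of them is relaxed
    // with probability p.
    std::vector<std::pair<int, int>> rowColFragment(int nRows, int nCols,
                                                    int rowsToPick, int colsToPick,
                                                    double p, std::mt19937& rng) {
        std::vector<int> rows(nRows), cols(nCols);
        for (int r = 0; r < nRows; ++r) rows[r] = r;
        for (int c = 0; c < nCols; ++c) cols[c] = c;
        std::shuffle(rows.begin(), rows.end(), rng);
        std::shuffle(cols.begin(), cols.end(), rng);
        rows.resize(rowsToPick);
        cols.resize(colsToPick);
        std::bernoulli_distribution coin(p);
        std::vector<std::pair<int, int>> fragment;
        for (int r : rows)
            for (int c = 0; c < nCols; ++c)
                if (coin(rng)) fragment.push_back({r, c});
        for (int c : cols)
            for (int r = 0; r < nRows; ++r)
                if (coin(rng)) fragment.push_back({r, c});
        return fragment;   // cells at row/column intersections may appear twice
    }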

Scheduling problems are best treated via the LNS support in IBM® ILOG® Scheduler. However, to give a flavor of the type of fragment which is useful, one can observe that scheduling is both time and resource constrained. This would indicate that activities forming a compact fragment would start at approximately the same time, or require the same resource or resources. Thus, an effective technique can be to add all variables corresponding to activities lying in a certain time window or on a certain resource. The logic is very similar to that of row/column problems where a resource can be likened to a row and a time window to a column.
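Purely as an illustration (this is not the IBM ILOG Scheduler LNS API), the following C++ sketch relaxes every activity that overlaps a chosen time window or runs on a chosen resource:

    #include <vector>

    struct Activity { int start, end, resource; };

    // Illustrative sketch only: a scheduling fragment made of all activities
    // overlapping the window [windowStart, windowEnd) or using the resource.
    std::vector<int> schedulingFragment(const std::vector<Activity>& acts,
                                        int windowStart, int windowEnd,
                                        int resource) {
        std::vector<int> fragment;
        for (int i = 0; i < (int)acts.size(); ++i) {
            bool inWindow   = acts[i].start < windowEnd && acts[i].end > windowStart;
            bool onResource = acts[i].resource == resource;
            if (inWindow || onResource)
                fragment.push_back(i);
        }
        return fragment;
    }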

In packing and resource allocation problems, resources are limited and tasks or objects must be allocated to them so as to minimize the number or cost of the resources used. This is simply a more general form of the "container" example presented earlier. Poorer parts of the solution can usually be identified as resources which are bad value for money, for example those which are costly and/or under-utilized. This provides one criterion for including variables in the fragment. A second criterion, based on how parts of the solution are related, may also be employed in order to create a more compact fragment. For example, if the goal is to reduce the number of resources used, then from time to time all tasks allocated to a particular resource must be added to the fragment in order to give the search a chance of liberating the resource in question. The same technique can also be useful for combining the tasks allocated to three resources onto only two, for example.
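An illustrative C++ sketch of the first criterion is shown below; the value-for-money score cost * (1 - utilization) is an assumed measure of how poorly a resource is used, not one prescribed by the manual:

    #include <vector>

    struct Resource { double cost; double utilization; };   // utilization in [0, 1]

    // Illustrative sketch only: relax every task assigned to the resource
    // that gives the worst value for money (costly and under-utilized), so
    // that the search has a chance of emptying it and reusing its tasks.
    std::vector<int> liberateWorstResource(const std::vector<Resource>& resources,
                                           const std::vector<int>& taskResource) {
        int worst = 0;
        for (int r = 1; r < (int)resources.size(); ++r) {
            double score      = resources[r].cost * (1.0 - resources[r].utilization);
            double worstScore = resources[worst].cost * (1.0 - resources[worst].utilization);
            if (score > worstScore) worst = r;
        }
        std::vector<int> fragment;
        for (int t = 0; t < (int)taskResource.size(); ++t)
            if (taskResource[t] == worst)
                fragment.push_back(t);   // relax all tasks on the worst resource
        return fragment;
    }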