As mentioned in the previous section, impacts are learned during search by observing domain reductions. As the search progresses, more accurate impacts become available, so it can pay to restart the search and let these improved impacts guide the choices made near the top of the tree, where they matter most.
Additionally, when randomness is used to break ties between equally good choices, depth-first search tries another good choice only after the whole subtree of the current one has been explored. It can therefore be worth restarting the search from time to time so that equally promising parts of the search tree get an equal chance to be visited.
For this purpose, the following function creates a goal that calls g until the number of fails of this goal reaches failLimit, then sets failLimit to failLimit*factor and calls g again:
IloGoal IloRestartGoal(IloEnv env,
                       IloGoal g,
                       IlcInt failLimit,
                       IlcFloat factor = 1.0);
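
For illustration, here is a minimal sketch of how such a restart goal might be solved with IloSolver. The tiny all-different model, the fail limit of 100, and the growth factor of 2.0 are assumptions chosen for this sketch, not taken from the magic square example:

#include <ilsolver/ilosolver.h>

ILOSTLBEGIN

int main() {
  IloEnv env;
  try {
    // An arbitrary small model; any model would do here.
    IloModel model(env);
    IloIntVarArray x(env, 3, 0, 2);
    model.add(IloAllDiff(env, x));

    IloSolver solver(model);

    // Restart the generation goal each time it accumulates 100 fails;
    // a factor of 2.0 doubles the fail limit after every restart.
    IloGoal generate = IloGenerate(env, x);
    if (solver.solve(IloRestartGoal(env, generate, 100, 2.0)))
      solver.out() << "Solution found" << endl;
    else
      solver.out() << "No solution" << endl;
  }
  catch (IloException& e) {
    env.out() << "Error: " << e << endl;
  }
  env.end();
  return 0;
}
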
For example, in the YourSolverHome\examples\src\magicsq_impact.cpp file, the default goal can be restarted with a fail limit of 1000 for each run by solving the goal:
case 1: g = IloInitializeImpactGoal(env, -1)
&& IloRestartGoal(env, IloCustomizableGoal(env, square), 1000);
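
Since factor is left at its default of 1.0, the fail limit stays at 1000 for every run; a factor greater than 1.0 would instead let each successive run search longer before restarting.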