Infeasibility, Unboundedness and Instability



Most users will at some point encounter an instance of the model they are developing that is solved and found to be infeasible or unbounded. An infeasible problem is one that has no solution satisfying all the constraints, while an unbounded problem is one where the constraints do not restrict the objective function and the optimal objective goes to infinity. Both situations arise from errors or shortcomings in the formulation or in the data defining the problem. When such a result is found, it is typically not clear what it is about the formulation or the data that has caused the problem.

Problem instability arises when the coefficient values of the problem are such that the optimization algorithms find it difficult to converge to a solution. This is typically caused by large ratios between the largest and smallest coefficients in the constraints or columns: handling such a wide range of numerical values leads to floating point accuracy issues in the algorithm. Problem instability generally manifests itself in either long run times or spurious infeasibilities.

These issues can be hard to deal with because their cause is often difficult to diagnose. In this chapter we discuss the various approaches and tools provided by the Optimizer for handling them.

Infeasibility

A problem is said to be infeasible if no solution exists which satisfies all the constraints. The FICO Xpress Optimizer provides functionality for diagnosing the cause of infeasibility in the user's problem.

Before we discuss the infeasibility diagnostics of the Optimizer we will first define some types of infeasibility, in terms of the type of problem they relate to and how the infeasibility is detected by the Optimizer.

We will consider two basic types of infeasibility: the first we will call continuous infeasibility and the second discrete or integer infeasibility. Continuous infeasibility is where a non-MIP problem is infeasible; in this case the feasible region defined by the intersecting constraints is empty. Discrete or integer infeasibility is where a MIP problem has a feasible relaxation (a relaxation of a MIP is the problem we get when we drop the discreteness requirement on the variables) but the feasible region of the relaxation contains no solution that satisfies the discreteness requirement.

Either type of infeasibility can be detected in the presolve phase of an optimization run. Presolve is the analysis and processing of the problem before it is passed to the optimization algorithm. If continuous infeasibility is not detected in presolve, the optimization algorithm will detect it. If integer infeasibility is not detected in presolve then, in the rare cases where this happens, a branch and bound search is necessary to detect it. These scenarios are discussed in the following sections.

Diagnosis in Presolve

The presolve processing, if activated (see Working with Presolve), provides a variety of checks for infeasibility. When presolve detects infeasibility, it is possible to "trace" back the implications that determined an inconsistency and identify a particular cause. This diagnosis is carried out whenever the control parameter TRACE is set to 1 before the optimization routine XPRSlpoptimize (LPOPTIMIZE) is called. In this case, the cause of the infeasibility is reported as part of the output from the optimization routine.
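
As a minimal sketch of this workflow using the C library (the model file name "infeas" is hypothetical), the trace is requested simply by setting the control before the solve; the diagnosis then appears in the optimizer log:

    /* Sketch: enable the presolve infeasibility trace before optimizing. */
    #include "xprs.h"

    int main(void)
    {
        XPRSprob prob;

        XPRSinit(NULL);                            /* initialize the library      */
        XPRScreateprob(&prob);
        XPRSreadprob(prob, "infeas", "");          /* hypothetical model file     */

        XPRSsetintcontrol(prob, XPRS_TRACE, 1);    /* trace presolve infeasibility */
        XPRSlpoptimize(prob, "");                  /* trace appears in the log    */

        XPRSdestroyprob(prob);
        XPRSfree();
        return 0;
    }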

Diagnosis using Primal Simplex

The trace presolve functionality is typically useful when the infeasibility is simple: for example, when only a short sequence of bound implications is needed to expose an inconsistency on a small set of variables. If, however, this sequence is long, or there are several sequences on different sets of variables causing inconsistencies, it may be useful to force presolve to continue processing and then solve the problem using the primal simplex to obtain the so-called "phase 1" solution. To force presolve to continue even when an infeasibility is discovered, the user can set the control PRESOLVE to -1. The phase 1 solution is useful because it minimizes the sum of infeasibilities, and the resulting set of violated constraints and bound-violated variables provides a clear picture of which aspect of the model is causing the infeasibility.
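
A sketch of this approach is shown below; it assumes prob already holds the infeasible LP and that XPRSgetlpsol accepts NULL for arrays that are not required. For brevity only bound-violated variables are reported; violated rows can be inspected similarly through the slack values:

    /* Sketch: force presolve to continue on infeasibility, obtain the
       primal simplex "phase 1" solution, and report bound violations. */
    #include <stdio.h>
    #include <stdlib.h>
    #include "xprs.h"

    void report_phase1(XPRSprob prob)
    {
        int ncol, j;
        double *x, *lb, *ub;

        XPRSsetintcontrol(prob, XPRS_PRESOLVE, -1); /* keep presolving even if infeasible */
        XPRSlpoptimize(prob, "p");                  /* "p" selects the primal simplex     */

        XPRSgetintattrib(prob, XPRS_COLS, &ncol);
        x  = malloc(ncol * sizeof(double));
        lb = malloc(ncol * sizeof(double));
        ub = malloc(ncol * sizeof(double));

        XPRSgetlpsol(prob, x, NULL, NULL, NULL);    /* phase 1 minimizes the sum of infeasibilities */
        XPRSgetlb(prob, lb, 0, ncol - 1);
        XPRSgetub(prob, ub, 0, ncol - 1);

        for (j = 0; j < ncol; j++)                  /* variables still outside their bounds */
            if (x[j] < lb[j] - 1e-6 || x[j] > ub[j] + 1e-6)
                printf("column %d violates its bounds: x = %g\n", j, x[j]);

        free(x); free(lb); free(ub);
    }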

Irreducible Infeasible Sets

A general technique to analyze infeasibility is to find a small portion of the matrix that is itself infeasible. The Optimizer does this by finding irreducible infeasible sets (IISs). An IIS is a minimal set of constraints and variable bounds which is infeasible, but becomes feasible if any constraint or bound in it is removed.

A model may have several infeasibilities. Repairing a single IIS may not make the model feasible, for which reason the Optimizer can attempt to find an IIS for each of the infeasibilities in a model. The IISs found by the Optimizer are independent in the sense that each constraint and variable bound may be present in at most one IIS, although in some problems there are overlapping IISs. The number of IISs present in a problem may be exponential, and no attempt is made to enumerate them all. If the infeasibility can be represented by several different IISs, the Optimizer will attempt to find the IIS with the smallest number of constraints, in order to make the infeasibility easier to diagnose (the Optimizer tries to minimize the number of constraints involved, even if this means that the IIS will contain more bounds).

Using the library functions, IISs can be generated iteratively with the XPRSiisfirst and XPRSiisnext functions. All IISs (a maximal set of independent ones) can also be obtained with the XPRSiisall function. Note that if the problem is modified during the iterative search for IISs, the process has to be started from scratch. After a set of IISs has been identified, the information contained in any one of them (size, constraint and bound lists, duals, etc.) may be retrieved with the function XPRSgetiisdata, and a summary of the generated IISs is provided by the function XPRSiisstatus. It is also possible to save the IIS data, or the IIS subproblem itself, directly to a file in MPS or LP format using XPRSiiswrite. The information about an IIS may be obtained at any time after it has been generated, and remains available while the problem is unchanged. The function XPRSiisclear clears the information already stored about IISs.
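
A sketch of the iterative search follows. The status conventions (0 taken to mean "a further IIS was found") and the acceptance of NULL for unwanted output arrays are assumptions to be checked against the reference manual:

    /* Sketch: enumerate independent IISs one at a time for an
       infeasible LP already loaded in prob. */
    #include <stdio.h>
    #include "xprs.h"

    void find_all_iis(XPRSprob prob)
    {
        int status, count;

        XPRSiisfirst(prob, 1, &status);     /* mode 1: search for an IIS          */
        while (status == 0)                 /* assumed: 0 means an IIS was found  */
            XPRSiisnext(prob, &status);     /* look for a further independent IIS */

        XPRSiisstatus(prob, &count, NULL, NULL, NULL, NULL);
        printf("%d independent IIS(s) identified\n", count);
    }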

On the console, all the IIS functions are available by passing different flags to the IIS console command. A single IIS may be found with the command IIS. If further IISs are required (e.g., when trying to find the smallest one), the IIS -n command may be used to generate subsequent IISs, or IIS -a to generate all independent IISs, until no further independent IIS exists. These commands display the constraints and bounds identified to be in an IIS as they are found. If further information is required, the IIS -p num command may be used to retrieve all the data for a given IIS, or the IIS -w and IIS -e commands to write an LP/MPS or CSV file containing the IIS subproblem or the additional information about the IIS.

Once an IIS has been found, it is useful to know whether dropping a single constraint or bound in the IIS will completely remove the infeasibility it represents. For this purpose an attempt is made to identify a subset of the IIS called a sub-IIS isolation: a special constraint or bound in the IIS whose removal will remove all infeasibilities in the IIS without increasing the infeasibilities outside the IIS, that is, in any other independent IIS.

The IIS isolations thus indicate the likely cause of each independent infeasibility and give an indication of which constraint or bound to drop or modify. This procedure is computationally expensive, and is carried out separately by the function XPRSiisisolations (IIS -i) for an already identified IIS. It is not always possible to find IIS isolations.
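
A short sketch, assuming an IIS numbered 1 has already been generated as above and that XPRSgetiisdata accepts NULL for the output arrays that are not needed:

    /* Sketch: run the (expensive) isolations search on IIS number 1
       and query its size. */
    #include <stdio.h>
    #include "xprs.h"

    void isolate_first_iis(XPRSprob prob)
    {
        int nrows, ncols;

        XPRSiisisolations(prob, 1);              /* isolations for IIS 1          */
        XPRSgetiisdata(prob, 1, &nrows, &ncols,
                       NULL, NULL, NULL, NULL,   /* row/column indices and types  */
                       NULL, NULL, NULL, NULL);  /* duals, djs, isolation flags   */
        printf("IIS 1: %d constraints, %d bounds\n", nrows, ncols);
    }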

After an optimal but infeasible first-phase primal simplex, it is possible to identify a subproblem containing all the infeasibilities (corresponding to the given basis), which reduces the IIS working problem dramatically. Rows with zero duals (thus whose artificial variables have zero reduced cost) and columns that have zero reduced costs may be excluded from the search for IISs. Moreover, for rows and columns with nonzero costs, the sign of the cost is used to relax equality rows to either less-than-or-equal or greater-than-or-equal rows, and to drop either the upper or the lower bound on variables. This process is referred to as the sensitivity filter for IISs.

The identification of an IIS, especially if the isolations search is also performed, may take a very long time. For this reason, using the sensitivity filter for IISs, it is possible to find just an approximation of the IISs, which typically contains all the IISs (and may contain several rows and bounds that are not part of any IIS). This approximation is a subproblem identified at the beginning of the search for IISs and is referred to as the initial infeasible subproblem; its size is typically crucial to the running time of the IIS procedure. This subproblem is requested by setting the input parameters of XPRSiisfirst, or by calling IIS -f on the console. Note that the IIS approximation and the IISs generated so far are always available.

The XPRSgetiisdata function also returns dual multipliers. These multipliers are associated with Farkas' lemma of linear optimization. Farkas' lemma in its simplest form states that if Ax = b, x >= 0 has no solution, then there exists a vector y for which y^T A >= 0 and y^T b < 0. In other words, if the constraints and bounds are contradictory, then an inequality of the form d^T x < 0 may be derived, where d is a constant vector of nonnegative values (such an inequality cannot hold for any x >= 0). The vector y, i.e., the multipliers with which the constraints and bounds have to be combined to obtain the contradiction, is called the vector of dual multipliers. These multipliers are provided for each IIS identified; for an IIS, all the dual multipliers should be nonzero.

The Infeasibility Repair Utility

In some cases identifying the cause of infeasibility, even if the search is based on IISs, may prove very demanding and time consuming. In such cases, a solution that violates the constraints and bounds minimally can greatly assist modeling. This functionality is provided by the XPRSrepairweightedinfeas function.

Based on preferences provided by the user, the Optimizer relaxes the constraints and bounds in the problem by introducing penalized deviation variables associated with selected rows and columns. A weighted sum of these variables (sometimes referred to as infeasibility breakers) is then minimized, resulting in a solution that violates the constraints and bounds minimally with respect to the provided preferences. The preference associated with a constraint or bound reflects the modeler's willingness to relax the corresponding right-hand side or bound: the higher the preference, the more willing the modeler is to relax it (the associated penalty value is the reciprocal of the preference), while a zero preference means the constraint or bound cannot be relaxed. It is the responsibility of the modeler to provide preferences that yield a feasible relaxed problem. Note that if all preferences are nonzero, the relaxed problem is always feasible (with the exception of problems containing binary or semi-continuous variables: because of their special modeling properties, such variables are not relaxed).
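
The sketch below illustrates the intent. The argument order (status code, the four preference arrays, a phase-2 flag, delta, an options string) and the 'o' value for the phase-2 flag are assumptions to be verified against the reference manual; here every row may be relaxed with equal preference while all bounds are kept fixed:

    /* Sketch: minimally relax an infeasible problem with per-row and
       per-column preferences. */
    #include <stdlib.h>
    #include "xprs.h"

    void repair(XPRSprob prob)
    {
        int i, status, nrow, ncol;
        double *lepref, *gepref, *lbpref, *ubpref;

        XPRSgetintattrib(prob, XPRS_ROWS, &nrow);
        XPRSgetintattrib(prob, XPRS_COLS, &ncol);

        lepref = malloc(nrow * sizeof(double));  /* relax "<=" right-hand sides */
        gepref = malloc(nrow * sizeof(double));  /* relax ">=" right-hand sides */
        lbpref = malloc(ncol * sizeof(double));  /* relax lower bounds          */
        ubpref = malloc(ncol * sizeof(double));  /* relax upper bounds          */

        for (i = 0; i < nrow; i++) lepref[i] = gepref[i] = 1.0;
        for (i = 0; i < ncol; i++) lbpref[i] = ubpref[i] = 0.0;  /* bounds fixed */

        /* 'o' assumed to request the second (original-objective) phase;
           delta = 0 keeps the minimal weighted violation. */
        XPRSrepairweightedinfeas(prob, &status, lepref, gepref, lbpref, ubpref,
                                 'o', 0.0, "");

        free(lepref); free(gepref); free(lbpref); free(ubpref);
    }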

Note that this utility does not repair the infeasibility of the original model; rather, based on the preferences provided by the user, it introduces extra freedom into the model to make it feasible, and minimizes the utilization of the added freedom.

The magnitude of the preferences does not affect the quality of the resulting solution; only the ratios of the individual preferences determine it. If a single penalty value suffices for each constraint and bound group (less-than-or-equal and greater-than-or-equal constraints, as well as lower and upper bounds, are treated separately), the XPRSrepairinfeas (REPAIRINFEAS) function may be used, which provides a simplified interface to XPRSrepairweightedinfeas, as sketched below.
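
A sketch of the simplified interface follows. The character arguments ('c', 'g', ' ') are illustrative assumptions only; the exact codes should be taken from the reference manual:

    /* Sketch: one preference per constraint/bound group. */
    #include "xprs.h"

    void repair_simple(XPRSprob prob)
    {
        int status;

        XPRSrepairinfeas(prob, &status, 'c', 'g', ' ',
                         1.0,   /* preference for "<=" rows    */
                         1.0,   /* preference for ">=" rows    */
                         1.0,   /* preference for lower bounds */
                         1.0,   /* preference for upper bounds */
                         0.0);  /* delta for the second phase  */
    }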

Using the newly introduced variables, it is possible to warm start the primal simplex algorithm with a basic solution; such a warm start is always performed when the primal simplex algorithm is used. Depending on the value of the control KEEPBASIS, the function may modify the current basis to produce a warm start basis for the solution process. An infeasible but first-phase optimal primal solution typically speeds up the solution of the relaxed problem.

Once the optimal solution to the relaxed problem has been identified (it is automatically projected back to the original problem space), the modeler may use it to modify the problem so that it becomes feasible. It may also be of interest to know which value the original objective function would take if the modifications suggested by the solution of the infeasibility repair function were carried out.

In order to provide this information, the infeasibility repair tool may carry out a second phase, in which the weighted violation of the constraints and bounds is restricted to be no greater than the optimum of the first phase, and the original objective function is minimized or maximized.

It is possible to slightly relax the restriction on the weighted violation of the constraints and bounds in the second phase by setting the value of the parameter delta in XPRSrepairweightedinfeas, or by using the -delta option on the console. If the minimal weighted violation in the first phase is p, a nonzero delta relaxes the restriction on the weighted violations to be less than or equal to (1+delta)p. While such a relaxation allows the effect of the original objective function to be considered in more detail, on some problems the extra violation traded for an improvement in the objective can be very large, and the modeler is advised to carefully analyze the effect of the extra violations of the constraints and bounds on the underlying model.

Note that it is possible for an infeasible problem to become unbounded in the second phase of the infeasibility repair function. In such cases, the cause of the unboundedness is likely to be independent of the cause of the infeasibility.

Integer Infeasibility

In rare cases a MIP problem is found to be infeasible although its LP relaxation is feasible. In such circumstances the feasible region of the LP relaxation, while nontrivial, contains no solutions that satisfy the various integrality restrictions. These are perhaps the worst kind of infeasibilities, as it can be hard to determine their cause. In such cases it is recommended that the user try introducing some flexibility into the problem by adding slack variables to all of the constraints, each with some moderate penalty cost, as sketched below. From the nonzero slack variables in the solution of this problem, the user should be able to identify where the problem is being overly restricted, and from this decide how to modify the formulation and/or the data to avoid the problem.
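
One way to implement this with the library is sketched below. PENALTY is an illustrative value, a minimization objective is assumed, and range ('R') rows are ignored for simplicity; nonzero slacks in the resulting solution point at the over-restricted rows:

    /* Sketch: give every constraint a penalized slack so the
       integer-infeasible MIP becomes solvable. */
    #include <stdlib.h>
    #include "xprs.h"

    #define PENALTY 1000.0

    void add_penalty_slacks(XPRSprob prob)
    {
        int i, nrow;
        char *rowtype;

        XPRSgetintattrib(prob, XPRS_ROWS, &nrow);
        rowtype = malloc(nrow * sizeof(char));
        XPRSgetrowtype(prob, rowtype, 0, nrow - 1);

        for (i = 0; i < nrow; i++) {
            int    start[] = {0, 1};
            double obj = PENALTY, lb = 0.0, ub = XPRS_PLUSINFINITY;
            double up = 1.0, down = -1.0;

            if (rowtype[i] == 'G' || rowtype[i] == 'E')   /* let the row fall short */
                XPRSaddcols(prob, 1, 1, &obj, start, &i, &up, &lb, &ub);
            if (rowtype[i] == 'L' || rowtype[i] == 'E')   /* let the row overshoot  */
                XPRSaddcols(prob, 1, 1, &obj, start, &i, &down, &lb, &ub);
        }
        free(rowtype);
    }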

Unboundedness

A problem is said to be unbounded if the objective function may be improved indefinitely without violating the constraints and bounds. This can happen if a problem is being solved with the wrong optimization sense, e.g., a maximization problem is being minimized. If, however, a problem is unbounded while being solved with the correct optimization sense, this indicates a problem in the formulation of the model or the data: typically, missing constraints or wrong signs on coefficients. Note that unboundedness is often diagnosed by presolve.

Instability

Scaling

When developing a model and defining its input data, users often produce problems that contain constraints and/or columns with large ratios between the absolute values of the largest and smallest coefficients. For example:

maximize:    10^6 x +     7y = z
subject to:  10^6 x +   0.1y ≤ 100
             10^7 x +     8y ≤ 500
            10^12 x + 10^6 y ≤ 50*10^6

Here the objective coefficients, constraint coefficients, and RHS values range between 0.1 and 10^12. We say that the model is badly scaled.

During the optimization process, the Optimizer must perform many calculations involving subtraction and division of quantities derived from the constraints and objective function. When these calculations are carried out with values differing greatly in magnitude, the finite precision of computer arithmetic and the fixed tolerances employed by FICO Xpress result in a build up of rounding errors to a point where the Optimizer can no longer reliably find the optimal solution.

To minimize undesirable effects, when formulating your problem try to choose units (or equivalently scale your problem) so that objective coefficients and matrix elements do not range by more than 10^6, and RHS and non-infinite bound values do not exceed 10^8. One common problem is the use of large finite bound values to represent infinite bounds (i.e., no bounds); if you have to enter explicit infinite bounds, make sure you use values greater than 10^20, which will be interpreted as infinity by the Optimizer. Avoid having large objective values with a small relative difference, as this makes it hard for the dual simplex algorithm to solve the problem. Similarly, avoid having large RHS/bound values that are close together.

In the above example, both the x coefficients and the last constraint might be better scaled. Issues arising from the first may be overcome by column scaling, effectively a change of coordinates, with the replacement of 10^6 x by some new variable. Those from the second may be overcome by row scaling.
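
As a worked illustration of these two operations, substituting x' = 10^6 x (column scaling) and dividing the last constraint through by 10^6 (row scaling) gives:

maximize:     x' +   7y = z
subject to:   x' + 0.1y ≤ 100
            10x' +   8y ≤ 500
              x' +    y ≤ 50

after which all matrix coefficients lie between 0.1 and 10, and the RHS values between 50 and 500.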

FICO Xpress also incorporates a number of automatic scaling options to improve the scaling of the matrix. However, the general techniques described below cannot replace attention to the choice of units specific to your problem. The best option is to scale your problem following the advice above, and use the automatic scaling provided by the Optimizer.

The form of scaling provided by the Optimizer depends on the setting of the bits of the control parameter SCALING. To get a particular form of scaling, set SCALING to the sum of the values corresponding to the scaling required. For instance, to get row scaling, column scaling and then row scaling again, set SCALING to 1+2+4=7. The scaling processing is applied after presolve and before the optimization algorithm.

Bit  Value  Type of scaling
0    1      Row scaling.
1    2      Column scaling.
2    4      Row scaling again.
3    8      Maximin.
4    16     Curtis-Reid.
5    32     0 = scale by geometric mean; 1 = scale by maximum element
            (not applicable if maximin or Curtis-Reid is specified).
7    128    Objective function scaling.
8    256    Exclude the quadratic part of constraints when calculating
            scaling factors.

The default value of SCALING is 35, so row and column scaling are done by the maximum element method. If scaling is not required, SCALING should be set to 0.

If the user wants quick results when attempting to solve a badly scaled problem, it may be useful to run customized scaling on the problem before calling the optimization algorithm. To run the scaling process on a problem, the user can call the routine XPRSscale (SCALE). The SCALING control determines how the scaling will be applied.
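
For example, requesting row, column, then row-again scaling looks like the sketch below (prob is assumed to hold an already loaded problem; XPRSscale itself is omitted here because its exact C signature should be taken from the reference manual, but it could be called at the same point to apply the scaling immediately):

    /* Sketch: bits 0, 1 and 2 of SCALING request row, column and
       row-again scaling; it is applied after presolve when the
       optimization algorithm runs. */
    XPRSsetintcontrol(prob, XPRS_SCALING, 1 + 2 + 4);  /* = 7 */
    XPRSlpoptimize(prob, "");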

Note that if the user applies customized scaling to their problem and subsequently modifies the problem, the addition of new elements to the matrix can cause it to become badly scaled again. The user can avoid this by reapplying their scaling strategy after completing the modifications to the matrix.

Finally, note that the scaling operations are determined by the matrix elements only; the objective coefficients, right-hand side values and bound values do not influence the scaling factors. Only continuous variables (i.e., their bounds and coefficients) and constraints (i.e., their right-hand sides and coefficients) are scaled. Discrete entities such as integer variables are not scaled, so the user should choose the scaling of these variables carefully.

Accuracy

The accuracy of the computed variable values and objective function value is affected in general by the various tolerances used in the Optimizer. Of particular relevance to MIP problems are the accuracy and cutoff controls. The MIPRELCUTOFF control has a nonzero default value, which will prevent solutions that are very close to, but better than, a known solution from being found. This control can of course be set to zero if required.

When the LP solver stops at an optimal solution, the scaled constraints will be violated by no more than FEASTOL and the variables will be within FEASTOL of their scaled bounds. However, once the constraints and variables have been unscaled, the constraint and bound violations can increase to more than FEASTOL; if this happens, it indicates that the problem is badly scaled. Reducing FEASTOL can help, but this can make the LP solve unstable and reduce solution performance.

However, for all problems it is probably ambitious to expect a level of accuracy in the objective of more than 1 in 1,000,000; bear in mind that the default feasibility and optimality tolerances are 10^-6, and you will be fortunate to compute the solution values and reduced costs to an accuracy better than 10^-8 anyway (particularly for large models). It depends on the condition number of the basis matrix and the size of the RHS and cost coefficients. Under reasonable assumptions, an upper bound for the accuracy of the computed variable values is 4 * K * ||RHS|| / 10^16, where ||RHS|| denotes the L-infinity norm of the RHS and K is the basis condition number. The basis condition number can be found using the XPRSbasiscondition (BASISCONDITION) function.
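
A sketch of evaluating this bound follows. The assumption that the two output arguments of XPRSbasiscondition are the condition estimates of the actual and of the scaled basis should be verified against the reference manual. For instance, with K = 10^8 and ||RHS|| = 10^4 the bound works out to 4 * 10^-4:

    /* Sketch: estimate an accuracy bound from the basis condition number. */
    #include <stdio.h>
    #include "xprs.h"

    void accuracy_bound(XPRSprob prob, double rhs_norm)  /* ||RHS||, L-infinity */
    {
        double cond, scaledcond;

        XPRSbasiscondition(prob, &cond, &scaledcond);
        printf("upper bound on variable value error: %g\n",
               4.0 * cond * rhs_norm / 1e16);   /* 4 * K * ||RHS|| / 10^16 */
    }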

You should also bear in mind that the matrix is scaled, which would normally have the effect of increasing the apparent feasibility tolerance.




If you have any comments or suggestions about these pages, please send mail to support@fico.com.

© Copyright 1983-2013 Fair Isaac Corporation. All rights reserved.
