On integer linear programming.

A survey of the methods of solving the integer program

maximize $\sum_{j=1}^{n} c_j x_j$ subject to $\sum_{j=1}^{n} a_{ij} x_j = b_i$ $(i = 1, \ldots, m)$, $x_j \ge 0$ and integer $(j = 1, \ldots, n)$,

is presented. Emphasis is placed on methods developed since 1960, with many as yet unpublished methods presented. Examples are given for the unpublished methods.


1. Introduction and Brief History
This is a survey of the progress that has been made in solving linear programming problems when variables must take on integer values. The field is divided into pure integer programming, when all the variables must be integers, and mixed integer programming, when only specified variables must be integers. The subject is also known as discrete programming or integer linear programming, the latter abbreviated to integer programming, which appears to be the preferred term.
The integer programming problem is to

maximize $\sum_{j=1}^{n} c_j x_j$ subject to $\sum_{j=1}^{n} a_{ij} x_j = b_i$ $(i = 1, \ldots, m)$, $x_j \ge 0$ and integer $(j = 1, \ldots, n)$. (1)

If a solution of (1) as a linear program does not have the required integer properties, then methods for achieving them must be invoked. A basic approach common to most methods is to successively deduce supplementary linear constraints from the linear constraints of (1) and the integer requirements, until a new linear program is obtained whose solution satisfies the integer requirements.
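As a concrete illustration of problem (1), the sketch below solves a tiny pure integer program by brute-force enumeration. This is purely illustrative (no method in this survey works this way), and the search bound `ub` is an assumption made only to keep the enumeration finite.

```python
from itertools import product

def solve_pure_ip(c, A, b, ub=10):
    """Brute-force the pure integer program (1):
    maximize sum(c[j]*x[j])  s.t.  sum(A[i][j]*x[j]) == b[i] for every i,
    x[j] >= 0 and integer.  `ub` is an assumed bound on each variable,
    present only to make the enumeration finite."""
    n = len(c)
    best_z, best_x = None, None
    for x in product(range(ub + 1), repeat=n):
        feasible = all(
            sum(A[i][j] * x[j] for j in range(n)) == b[i]
            for i in range(len(b))
        )
        if feasible:
            z = sum(c[j] * x[j] for j in range(n))
            if best_z is None or z > best_z:
                best_z, best_x = z, x
    return best_z, best_x

# tiny instance: maximize 3*x1 + 2*x2 subject to x1 + x2 = 4
print(solve_pure_ip([3, 2], [[1, 1]], [4]))  # (12, (4, 0))
```

The methods surveyed below avoid exactly this exponential enumeration by working with the linear programming relaxation instead.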
The idea of new constraint generation appears to have been first advanced by Dantzig, Fulkerson, and Johnson [7] in 1954 in their work on the traveling salesman problem.
In 1958 Gomory [11] developed a systematic method for new constraint generation in his "Method of Integer Forms" for solving pure integer programming problems, in which all variables are required to be integer valued. This method guarantees that an integer solution is found (if any exists) in a finite number of steps. In 1960 Gomory devised another method, the "All Integer Method", which requires only addition and subtraction in computation provided the $a_{ij}$ and $c_j$ in (1) are integer valued. The above algorithms are "dual methods" and, as such, no feasible solution to the problem of interest is obtained until an optimal solution is found.
There is, however, a method which provides primal integer solutions. This method is attributed to Gomory, but literature on it is scarce. Young [21] has also presented a similar algorithm for primal integer solutions.
The field of mixed integer programming is less far advanced. Gomory [12] has extended his method of integer forms to deal with continuous as well as integer variables.
A dual decomposition approach due to Benders [4] has also been applied to the mixed problem. In this approach the problem is partitioned, and every stage of the computation involves the solution of two subproblems: a pure integer problem and a linear programming problem.
Other approaches to solving pure and mixed problems have been proposed. In 1960 Land and Doig [20] developed a branch and bound technique.
In 1968 Greenberg and Hegerich [19] developed a branch and exclude algorithm for the special "knapsack problem", which was extended by Greenberg [15] to the more general problem.
Partial enumeration techniques have also been devised by Balas [1] and [2] in his "Additive Algorithm" and "Discrete Programming by Filter Method", Geoffrion [8] in his "Reformulation of Balas' Algorithm for Integer Programming", and by Glover [10] in "A Multiphase Dual Algorithm for the Zero-One Integer Programming Problem". An enumerative scheme for computation of knapsack functions by Greenberg [18] is presented in this paper.
Dynamic programming procedures have also been devised to solve integer programming problems. Two methods by Greenberg [16] and [17] are presented; one for the knapsack problem and the second for a more general integer programming problem.
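As a hedged sketch of the dynamic programming approach, the knapsack function $F(y) = \max \sum_j c_j x_j$ subject to $\sum_j a_j x_j = y$, $x_j \ge 0$ and integer, satisfies the standard recursion $F(y) = \max_j \{c_j + F(y - a_j)\}$. This is the textbook recursion, not a reproduction of Greenberg's methods, and the coefficients in the example are made up:

```python
def knapsack_function(c, a, y_max):
    """Knapsack function F(y) = max sum(c_j * x_j)
    subject to sum(a_j * x_j) = y, x_j >= 0 and integer,
    via the recursion F(y) = max_j { c_j + F(y - a_j) }."""
    NEG = float("-inf")           # marks right-hand sides with no solution
    F = [NEG] * (y_max + 1)
    F[0] = 0
    for y in range(1, y_max + 1):
        for cj, aj in zip(c, a):
            if aj <= y and F[y - aj] > NEG:
                F[y] = max(F[y], cj + F[y - aj])
    return F

# illustrative (hypothetical) data: c = (2, 3, 5), a = (2, 3, 4)
F = knapsack_function([2, 3, 5], [2, 3, 4], 6)
print(F[6])  # 7
```

Note that the table is filled once for all right-hand sides up to `y_max`, which is what makes knapsack functions attractive when the same constraint is solved for many values of $y$.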

2. Cutting Plane Methods
Before discussing the particular algorithms of Gomory it is appropriate to define some terms which will be used throughout the discussion of the algorithms. A "feasible solution" is a solution in which all variables that are required to be integer are integer and all variables that must be non-negative are non-negative. Since Gomory's methods are expressed more neatly in terms of an objective function to be maximized, the objective function will be written in that form. This expression is negative at the current trial solution, but it must be non-negative for any integral value of $x_s$; this is because if $x_s - n_0 < 0$ then, since $x_s$ is an integer, $x_s \le n_0 - 1$.
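As a hedged illustration of how the Method of Integer Forms generates a new constraint (the tableau row below is hypothetical, not taken from the survey), a cut is obtained from a row with fractional entries by taking fractional parts:

```python
import math

def gomory_fractional_cut(row, rhs):
    """From a tableau row  x_B + sum(a_j * x_j) = b  with b fractional,
    the fractional cut is  sum(frac(a_j) * x_j) >= frac(b),
    where frac(t) = t - floor(t) is the non-negative fractional part."""
    frac = lambda t: t - math.floor(t)
    return [frac(a) for a in row], frac(rhs)

# hypothetical row: x_B + (7/4)x_3 - (1/2)x_4 = 9/4
coeffs, rhs = gomory_fractional_cut([7/4, -1/2], 9/4)
print(coeffs, rhs)  # [0.75, 0.5] 0.25
```

Every integer-feasible point satisfies the derived inequality, while the current fractional vertex (all nonbasic $x_j = 0$) violates it, which is what makes it a valid cutting plane.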

3. Primal Methods
The primal approach involves progressing from one feasible solution to another with a larger value of the objective function, or else proving that no better solution can exist. A method of solution to the general problem, attributed to Gomory and conveyed to the author in private conversation with Greenberg, is as follows. Rewriting (6) gives us (7). Since the left side of (7) is $\ge 0$, the cut is $\frac{2}{3}x_2 + \frac{1}{3}S \ge \frac{1}{3}$. Using (6), we take the greatest integer $\le$ the elements of the pivot row, giving us (8). We must always keep the slack variable, $S$ in our case, and the original variables, $x_1$ and $x_2$, in the basis.
We now eliminate our pivot variable $x_1$ from the equations in our tableau by using (8).
Next we determine if this is an optimal solution. If not, we must repeat this procedure. If our pivot element is 1 at any step, we must modify the procedure by substituting dummy variables for the variable we want to keep in the basis.
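The "greatest integer $\le$ the elements of the pivot row" step described above can be sketched directly; the row values here are hypothetical, not the author's tableau:

```python
import math

def cut_row(pivot_row):
    """Take the greatest integer <= each element of the pivot row (the floor),
    as in the primal cut-generation step described above.  Note that
    math.floor rounds toward negative infinity, so floor(-1/3) = -1."""
    return [math.floor(a) for a in pivot_row]

# hypothetical pivot row with fractional entries
print(cut_row([5/3, 2/3, -1/3]))  # [1, 0, -1]
```

The resulting all-integer row is what allows the subsequent pivot to preserve an integer tableau.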
Continuing with our example, the procedure is repeated using (8).

Greenberg and Hegerich [19] present a branch and bound algorithm and a branch and exclude procedure which is used to solve the problem

(9) maximize $\sum_{j=1}^{n} v_j x_j$ subject to $\sum_{j=1}^{n} w_j x_j \le W$, $x_j = 0$ or $1$ $(j = 1, \ldots, n)$.

It is assumed that the $v_j$ and $w_j$ are positive integers. The problem is first solved as shown by Dantzig [6], with the integer requirement relaxed to $0 \le x_j \le 1$ and the items indexed in order of decreasing $v_j/w_j$. This solution is given by $x_1 = x_2 = \cdots = x_{r-1} = 1$, with $x_r$ fractional and $x_j = 0$ for $j > r$, where $r$ is the least integer $(0 < r \le n)$ for which $\sum_{j=1}^{r} w_j > W$. If no $r$ exists then all $x_j = 1$. If this solution is all integer, then we have the optimal solution to (9).

2(a). Find the terminal node with the largest value of $Z(n)$. This is the node at which the next branching will take place. Any node (except node one) contains the effect of assigning values to variables and solving (9) with the assigned values of the variables added as constraints. (b).
If the solution at node $n$ has all variables assigned an integer value, this is the optimal solution to (9). If not, proceed to step 3.
3(a). Set $n = n + 1$ and some unassigned variable, say $x_t$, to 0. Solve (9) with all assigned variables added as constraints. Label node $n$ with the value $Z(n)$ and proceed to (b).
(b). Set $n = n + 1$ and $x_t = 1$. Solve (9) with all assigned variables added as constraints. Label node $n$ with the value $Z(n)$ and go to step 2.
The criterion used to select $x_t$ in step 3(a) is to take the variable that is fractional at node $n$.
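A minimal sketch of a branch and bound scheme of this general shape is given below. This is not Greenberg and Hegerich's code: the bound is Dantzig's LP relaxation as described above, branching fixes one 0-1 variable at a time, and the instance data are made up, with items assumed pre-sorted by $v_j/w_j$:

```python
def lp_bound(v, w, W, fixed):
    """Upper bound from Dantzig's LP relaxation: honor the assignments in
    `fixed` (index -> 0 or 1), then fill remaining items greedily, letting
    the last item taken be fractional.  Assumes items are sorted so that
    v[0]/w[0] >= v[1]/w[1] >= ...  Returns None if `fixed` is infeasible."""
    cap = W - sum(w[j] for j, val in fixed.items() if val == 1)
    if cap < 0:
        return None
    z = sum(v[j] for j, val in fixed.items() if val == 1)
    for j in range(len(v)):
        if j in fixed:
            continue
        if w[j] <= cap:
            cap -= w[j]
            z += v[j]
        else:
            z += v[j] * cap / w[j]      # the fractional x_r
            break
    return z

def branch_and_bound(v, w, W):
    """Maximize sum(v[j]*x[j]) s.t. sum(w[j]*x[j]) <= W, x[j] in {0, 1}."""
    best = 0
    def branch(fixed, j):
        nonlocal best
        ub = lp_bound(v, w, W, fixed)
        if ub is None or ub <= best:
            return                      # prune: bound cannot beat incumbent
        if j == len(v):                 # all variables assigned
            best = max(best, sum(v[k] for k, val in fixed.items() if val == 1))
            return
        branch({**fixed, j: 1}, j + 1)  # branch on x_j = 1
        branch({**fixed, j: 0}, j + 1)  # branch on x_j = 0
    branch({}, 0)
    return best

# hypothetical instance, pre-sorted by value/weight ratio
print(branch_and_bound([10, 6, 4], [5, 4, 3], 8))  # 14
```

Exploring the $x_j = 1$ branch first tends to find a good incumbent early, which strengthens the pruning test `ub <= best`.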
As is common with most branch and bound techniques, bookkeeping is required for the assignments: a component value of two indicates the variable is unassigned, while an assigned variable will have the value zero or one.
Define $R(L)$ as the index of the $L$th assigned variable.
Define $Z(L)$ as the value of the objective function at the $L$th level of a branch.
The solution to (9), with $L$ assigned components of $X$ added as constraints, takes the same form, with the set $M$ and the least integer $r$ $(0 < r \le N)$ defined accordingly; a lower bound to the solution of (9) then follows. The algorithm is as follows: 1. Set $L = 1$ and all components of $X$ to two.
2. Solve (9) with $0 \le x_j \le 1$. If the solution is all integer then this is the optimal solution and the problem is solved. If not, the solution is $x_1 = 1, x_2 = 1, \ldots, x_{r-1} = 1$ with $x_r$ fractional; go to 2, maintaining the canonical form of the solution to (11).
Balas' [2] filter method is an accelerated version of his earlier "additive algorithm" [1].
In the filter method a two-phase approach is used: in phase I an auxiliary problem is constructed that is then used in phase II. The procedure may be summarized in the following four steps:

1. List the values of the problem as follows:

2. Given the list, find $a_r = \min a_j$ over the columns in all sections with $a_j > m$. If $a_r = L$, go to 4. Otherwise set $m = a_r$ and go to 3.

3. Add a new section of columns to the list, if possible, as follows. Calculate $a_t' = a_r + a_t$ and $c_t' = c_r + c_t$ for $t = 1, \ldots, n$. Add a column headed by $t$ if (a) $a_t'$ is not on the list, or (b) $a_t'$ is on the list and has a corresponding $c$ value that is smaller than $c_t'$. Underneath the section added, write the $x_j$ values from the section where $m = a_r$ was found.

To illustrate this method the following problem is considered, with constraint $2x_1 + 3x_2 + 4x_3 = 6$.
1. We list the values of the problem. 2, 3. Setting $m = 0$ and applying steps 2 and 3, we obtain $F(6) = 7$ with $x_1 = 1$, $x_3 = 1$ as optimal.

A problem which is given in [3], to illustrate Gomory's algorithm, is solved below by the method just described. In the first row we consider the greatest common divisor of $(-2, 4)$, which is 2. The first row produces the first constraint equation in (21).
In the second row we consider the greatest common divisor of $(3, -1)$, which is unity. Thus the required congruence may be obtained from the second constraint in (41). Similarly, the required congruence may be obtained from the third constraint and the objective function. Using the second constraint we obtain the congruence $7x_4 + x_5 \equiv 8 \pmod{10}$. The equivalent knapsack problem becomes: minimize $7x_4 + 11x_5$ subject to $7x_4 + x_5 \equiv 8 \pmod{10}$, $x_4, x_5 \ge 0$ and integer.
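The equivalent knapsack problem above can be checked numerically. The sketch below treats it as a shortest-path problem over the residue classes mod 10, one standard way of handling congruence constraints (offered as an illustration, not as the survey's own method):

```python
def group_knapsack_min(costs, coeffs, target, modulus):
    """Solve: minimize sum(costs[j] * x[j]) subject to
    sum(coeffs[j] * x[j]) = target (mod modulus), x[j] >= 0 integer.
    Bellman-Ford-style relaxation over the residue classes: with positive
    costs, an optimal solution visits each residue at most once, so
    `modulus` passes suffice for convergence."""
    INF = float("inf")
    dist = [INF] * modulus
    dist[0] = 0
    for _ in range(modulus):
        for r in range(modulus):
            if dist[r] < INF:
                for c, a in zip(costs, coeffs):
                    nr = (r + a) % modulus
                    if dist[r] + c < dist[nr]:
                        dist[nr] = dist[r] + c
    return dist[target % modulus]

# minimize 7*x4 + 11*x5  subject to  7*x4 + x5 = 8 (mod 10)
print(group_knapsack_min([7, 11], [7, 1], 8, 10))  # 18
```

The minimum of 18 is attained at $x_4 = 1$, $x_5 = 1$, since $7 \cdot 1 + 1 \cdot 1 = 8 \equiv 8 \pmod{10}$.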