# Chapter 22. Parametric Integer Programming: the Righthand-Side Case

Vertex generation methods

463

can be characterized as dual algorithms. In contrast, our procedure considers only logically feasible vertices of F and can be characterized as a primal algorithm.

4. The zero-one integer program

We consider the problem

    max cᵀx
    subject to Dx ≤ d
               Ix ≤ e
               x ≥ 0, x integer,

where D is a real (m − n) × n matrix, d is a real (m − n) × 1 vector, I is the n × n identity matrix and e is a vector of n ones. Introducing slack variables s and t to the constraints Dx ≤ d and Ix ≤ e, respectively, our integer program can be viewed as a linear program with logical constraints:

    L_l = {m − n + l, m + l},  q_l = 1,  for l = 1, 2, ..., n.
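Each logical constraint couples x_l with its slack t_l from Ix + t = e: at most q_l = 1 of the pair may be nonzero, which is exactly the 0-1 condition at a vertex. A minimal sketch of this feasibility check, assuming (our assumption, not stated explicitly here) the component ordering y = (s_1..s_{m−n}, t_1..t_n, x_1..x_n):

```python
def logically_feasible(y, m, n):
    """Check the logical constraints L_l = {m-n+l, m+l}, q_l = 1:
    at most one of t_l (1-based position m-n+l) and x_l (position m+l)
    may be nonzero, assuming y = (s_1..s_{m-n}, t_1..t_n, x_1..x_n)."""
    for l in range(1, n + 1):
        t_l = y[m - n + l - 1]   # slack of the constraint x_l <= 1
        x_l = y[m + l - 1]
        if t_l != 0 and x_l != 0:
            return False
    return True

# m - n = 1 row in D and n = 2 variables, so m = 3 and y = (s1, t1, t2, x1, x2).
# x1 = 1 (t1 = 0) with x2 = 0 (t2 = 1) is logically feasible, while a
# fractional x1 = 0.5 forces t1 = 0.5 and violates the constraint.
assert logically_feasible((0, 0, 1, 1, 0), 3, 2)
assert not logically_feasible((0, 0.5, 0, 0.5, 1), 3, 2)
```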

The initial tableau for the algorithm is, as in Chernikova's algorithm, the constraint rows U stacked above an identity block L.

Lemma 5. At all stages of the process u_{m−n+k,j} + l_{k,j} = l_{n+1,j} in each column j, for all k = 1, 2, ..., n.

Proof. Clearly the condition holds in the initial tableau. It follows by linearity and induction that it holds for all columns subsequently produced. □

The import of the lemma is that there is no need to carry along those rows of L corresponding to the initial identity matrix. They can always be reconstructed from the last n rows of U and the final row of L.

D.S. Rubin

Lemma 6. We may assume without loss of generality that
(a) d^1, the first row of D, is strictly positive,
(b) d_1, the first component of d, is strictly positive,
(c) for each component d_{1j} of the first row of D we have d_{1j} ≤ d_1.

Proof. By interchanging the names of the pair (x_j, t_j), if necessary, we can guarantee that the first nonzero component of each column of D is strictly positive. (If any column of D contains all zero entries, we may eliminate that variable from the problem.) By taking appropriate nonnegative weights for the rows of D, we can create a surrogate constraint with strictly positive coefficients. Listing this constraint first gives us part (a). If d_1 ≤ 0, then F is empty or else F = {0}. In either case the problem is uninteresting, which proves (b). If d_{1j} > d_1, then x_j = 0 in any feasible solution, and so it may be eliminated from the problem, proving (c). □

Let us initiate the algorithm by processing row 1. Thus column n + 1 is retained, and each column y_j, for j = 1, ..., n, is replaced by a nonnegative combination of itself and column n + 1.

In particular we now have l_{n+1,j} = 1 for all j and hence, by Lemma 5, u_{m−n+k,j} + l_{k,j} = 1 for each column j and all k = 1, 2, ..., n. Furthermore, it follows from part (c) of Lemma 6 that each entry in the last n rows of U either is negative or else is equal to +1. (In fact the only negative entries are u_{m−n+j,j} for j = 1, 2, ..., n, but we shall not use this fact.) The remark in the first paragraph of Section 2 now tells us that all subsequent columns produced will be convex combinations of two other columns, and so it follows by induction that
(1) all entries in row n + 1 of L will always be +1, and hence we may discard the entire L matrix;
(2) all entries in the last n rows of U will always be at most +1.

In the statement of Chernikova's algorithm and its modifications, it was convenient to assume that the rows of A were processed sequentially from the top down. However, it is clear that they can be processed in any order. The amount of work needed on any given problem can vary greatly, depending on the order in which the rows are processed, but there seems to be no a priori way to determine an efficient order. A myopic heuristic is given in [10]. Since the logical constraints in the 0-1 integer program involve the x and t variables, we cannot use the logical constraints to eliminate columns until we process some of the last n rows of U. Then after we have processed any of those rows, Theorem 1 can be rephrased as


Theorem 2. After row m − n + k of U has been processed, all columns with 0 < u_{m−n+k,j} < 1 can be discarded.

The remaining columns can be divided into two sets, those with u_{m−n+k,j} = 0 and those with u_{m−n+k,j} = 1. Theorem 2 now tells us that no column in one of these sets will ever be combined with any column in the other set. This is perhaps best understood in terms of the logically feasible faces discussed in Section 3. Each logical constraint in this problem defines a set of two logically feasible faces which are parallel to each other, and hence no convex combination of two points, one on each face, can itself be a feasible point for the problem. This result is not specific to the 0-1 integer program, but will hold in any problem whose logical constraints give rise to a set of disjoint logically feasible faces such that each feasible vertex must lie on at least one of the faces in the set.
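Operationally, Theorem 2 amounts to a simple filter on the columns of the tableau. A sketch, assuming a hypothetical representation of U as a list of rows with 0-indexed columns:

```python
def split_columns(U, row):
    """After processing the given row of U (0-indexed), discard every
    column whose entry in that row is strictly between 0 and 1, and
    split the survivors into the x_k = 0 and x_k = 1 sets (Theorem 2).
    Returns the two lists of surviving column indices."""
    zero_set, one_set = [], []
    for j in range(len(U[0])):
        v = U[row][j]
        if v == 0:
            zero_set.append(j)
        elif v == 1:
            one_set.append(j)
        # 0 < v < 1: logically infeasible vertex, column discarded
    return zero_set, one_set

# columns 1 and 3 have fractional entries and are discarded;
# columns in the two surviving sets are never combined with each other
U = [[0, 0.5, 1, 0.25, 1]]
assert split_columns(U, 0) == ([0], [2, 4])
```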

Once row m − n + k has been processed, there are now two polyhedra of interest:

    F_1 = F ∩ {y | x_k = 1},
    F_0 = F ∩ {y | x_k = 0}.

Furthermore, we may, if we wish, work exclusively on F_1 or F_0, thereby reducing the active storage required to implement the procedure. Then the only information about F_1 that will be used in working on F_0 will be information about the objective function, as discussed in Lemma 4 and the subsequent comments. It should also be remarked that the splitting of F into F_0 and F_1 (and an irrelevant part between F_0 and F_1) and the subsequent separate processing of F_0 and F_1 will result in an algorithm that is similar in spirit to standard implicit enumeration algorithms.

5. Other logical constraints

We will conclude with a few brief remarks about extending the results of Section 2 to logical constraints of the forms |y^l|_+ = q_l and |y^l|_+ ≥ q_l. First of all we note that such constraints may give rise to problems which fail to have optimal solutions even though they are feasible and bounded. Consider the example

    max y_1 + y_2
    subject to y_1 + y_3 = 1
               y_2 + y_4 = 1
               y ≥ 0
    L_1 = {3, 4},  q_1 = 1.

If the logical constraint is |y^1|_+ = 1, then feasible points with objective value arbitrarily close to 2 lie on the segments y_1 = 1 and y_2 = 1, but the point (1, 1, 0, 0) is infeasible. A similar result holds if the logical constraint is |y^1|_+ ≥ 1. Clearly vertex generation methods will be useless for such problems.


Let us then consider the more restricted problem of finding the best vertex of F subject to these new logical constraints. Clearly Lemmas 2 and 3 and Theorem 1 apply as stated for constraints |y^l|_+ = q_l. However, since columns with |y^l|_+ ≥ q_l can be constructed from columns with |y^l|_+ < q_l, it does not appear that Theorem 1 can be strengthened for constraints |y^l|_+ = q_l. Similarly we can see that there are no results analogous to Theorem 1 for constraints |y^l|_+ ≥ q_l. For such constraints, the best we can do is to use Chernikova's algorithm to generate all the vertices of F, and this is admittedly not particularly efficient.

References

[1] E. Balas, Intersection cuts from disjunctive constraints, Management Sciences Research Report No. 330, Carnegie-Mellon University, February 1974.
[2] E. Balas, Disjunctive programming: Properties of the convex hull of feasible points, Management Sciences Research Report No. 348, Carnegie-Mellon University, July 1974.
[3] A.V. Cabot, On the generalized lattice point problem and nonlinear programming, Operations Res., 23 (1975) 565-571.
[4] N.V. Chernikova, Algorithm for finding a general formula for the nonnegative solutions of a system of linear equations, U.S.S.R. Computational Math. and Math. Phys., 4 (1964) 151-158.
[5] N.V. Chernikova, Algorithm for finding a general formula for the nonnegative solutions of a system of linear inequalities, U.S.S.R. Computational Math. and Math. Phys., 5 (1965) 228-233.
[6] C.B. Garcia, On the relationship of the lattice point problem, the complementarity problem, and the set representation problem, Technical Report No. 145, Department of Mathematical Sciences, Clemson University, August 1973.
[7] F. Glover and D. Klingman, The generalized lattice-point problem, Operations Res., 21 (1973) 141-155.
[8] F. Glover, D. Klingman and J. Stutz, The disjunctive facet problem: Formulation and solution techniques, Operations Res., 22 (1974) 582-601.
[9] P.G. McKeown and D.S. Rubin, Neighboring vertices on transportation polytopes, Naval Res. Logistics Quarterly, 22 (1975) 365-374.
[10] D.S. Rubin, Vertex generation and cardinality constrained linear programs, Operations Res., 23 (1975) 555-565.
[11] D.S. Rubin, Vertex generation and linear complementarity problems, Technical Report No. 74-2, Curriculum in Operations Research, University of North Carolina at Chapel Hill, December 1974.
[12] K. Tanahashi and D. Luenberger, Cardinality-constrained linear programming, Stanford University, 1971.

Annals of Discrete Mathematics 1 (1977) 467-477
© North-Holland Publishing Company

SENSITIVITY ANALYSIS IN INTEGER PROGRAMMING*

Jeremy F. SHAPIRO

Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.

This paper uses an IP duality theory recently developed by the author and others to derive sensitivity analysis tests for IP problems. Results are obtained for cost, right-hand-side and matrix coefficient variation.

1. Introduction

A major reason for the widespread use of L P models is the existence of simple

procedures for performing sensitivity analyses. These procedures rely heavily on

LP duality theory and the interpretation it provides of the simplex method. Recent

research has provided a finitely convergent IP duality theory which can be used to

derive similar procedures for IP sensitivity analyses (Bell and Shapiro [ 3 ] ;see also

Bell [l], Bell and Fisher [2], Fisher and Shapiro [6], Fisher, Northup and Shapiro

[7], Shapiro [18]). Th e I P duality theory is a constructive method for generating a

sequence of increasingly strong dual problems to a given IP problem terminating

with a dual producing an optimal solution t o the given IP problem. Preliminary

computational experience with the I P dual methods has been promising and is

reported in [7]. From a practical point of view, however, it may not be possible

when trying to solve a given IP problem to pursue the constructive procedure as far

as the I P dual problem which solves the given problem. The practical solution t o

this difficulty is to imbed the use of IP duality theory in a branch and bound

approach (see [7]).

The IP problem we will study is

    v = min cx
    s.t. Ax + Is = b    (1)
    x_j = 0 or 1,  s_i = 0, 1, 2, ..., U_i,

where A is an m × n integer matrix with coefficients a_{ij} and columns a_j, b is an m × 1 integer vector with components b_i, and c is a 1 × n real vector with components c_j. For future reference, let F = {x^p, s^p}_{p=1}^P denote the set of all feasible solutions to (1).

* Supported in part by the U.S. Army Research Office (Durham) under Contract No. DAHC04-73-C-0032.


We have chosen to add the slack variables explicitly to (1) because they behave in a somewhat unusual manner, unlike the behavior of slack variables in LP. Suppose for the moment that we relax the integrality constraints in problem (1); that is, we allow 0 ≤ x_j ≤ 1 and 0 ≤ s_i ≤ U_i. Let u*_i denote an optimal dual variable for the ith constraint in this LP, and let s*_i denote an optimal value of the slack. By LP complementary slackness, we have u*_i < 0 implies s*_i = 0 and u*_i > 0 implies s*_i = U_i. In the LP relaxation of (1), it is possible that 0 < s*_i < U_i only if u*_i = 0. On the other hand, in IP we may have a non-zero price u*_i and 0 < s*_i < U_i because the

discrete nature of the IP problem makes it impossible for scarce resources to be fully used. An explanation of this behavior is given in Section 2.

2. Review of IP duality theory

A dual problem to (1) is constructed by reformulating it as follows. Let G be any finite abelian group with the representation

    G = Z_{q_1} ⊕ Z_{q_2} ⊕ ... ⊕ Z_{q_r},

where the positive integers q_i satisfy q_1 ≥ 2, q_i | q_{i+1}, i = 1, ..., r − 1, and Z_{q_i} is the cyclic group of order q_i. Let g denote the order of G; clearly g = ∏_{i=1}^r q_i, and we enumerate the elements as σ_0, σ_1, ..., σ_{g−1} with σ_0 = 0. Let ε_1, ..., ε_m be any elements of this group and for any m-vector f, define the element φ(f) ∈ G by

    φ(f) = Σ_{i=1}^m f_i ε_i.

The mapping φ naturally partitions the space of integer m-vectors into g equivalence classes S_0, S_1, ..., S_{g−1}, where f¹, f² ∈ S_K if and only if σ_K = φ(f¹) = φ(f²). The element σ_K of G is associated with the set S_K; that is, φ(f) = σ_K for all integer m-vectors f ∈ S_K.
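As a concrete illustration (the group G = Z_2 ⊕ Z_4 and the elements ε_i below are our arbitrary choices, not from the paper), the mapping φ and its partition into g equivalence classes can be sketched as:

```python
from itertools import product

q = (2, 4)                       # G = Z_2 ⊕ Z_4, order g = 2 * 4 = 8
eps = [(1, 0), (0, 1), (1, 2)]   # arbitrary elements ε_1, ε_2, ε_3 of G

def phi(f):
    """Map an integer m-vector f to the group element sum_i f_i * ε_i,
    computed componentwise modulo (q_1, ..., q_r)."""
    return tuple(sum(fi * e[k] for fi, e in zip(f, eps)) % q[k]
                 for k in range(len(q)))

# phi partitions integer 3-vectors into g = 8 equivalence classes S_K;
# here we bucket a finite sample of vectors by their image under phi
classes = {}
for f in product(range(4), repeat=3):
    classes.setdefault(phi(f), []).append(f)
assert len(classes) == 8         # every class is hit: ε_1, ε_2 generate G
```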

It can easily be shown that (1) is equivalent to (has the same feasible region as)

    v = min cx,    (2a)
    s.t. Ax + Is = b,    (2b)
    Σ_{j=1}^n α_j x_j + Σ_{i=1}^m ε_i s_i = β,    (2c)
    x_j = 0 or 1,  s_i = 0, 1, 2, ..., U_i,    (2d)

where α_j = φ(a_j) and β = φ(b). The group equations (2c) are a system of r congruences and they can be viewed as an aggregation of the linear system Ax + Is = b; hence the equivalence of (1) and (2). For future reference, let Y be the set of (x, s) solutions satisfying (2c) and (2d). Note that F ⊆ Y.


The IP dual problem induced by G is constructed by dualizing with respect to the constraints Ax + Is = b. Specifically, for each u define

    L(u) = ub + min_{(x,s) ∈ Y} {(c − uA)x − us}.    (3)

The Lagrangean minimization (3) can be performed in a matter of a few seconds or less for g up to 5000; see Glover [10], Gorry, Northup and Shapiro [11]. The ability to do this calculation quickly is essential to the efficacy of the IP dual methods. If g is larger than 5000, methods are available to try to circumvent the resulting numerical difficulty (Gorry, Shapiro and Wolsey [12]). However, there is no guarantee that these methods will work, and computational experience has shown that the best overall strategy is to combine these methods with branch and bound.

Sensitivity analysis on IP problem (1) depends to a large extent on sensitivity analysis with respect to the group G and the Lagrangean L. Let

    g(σ; u) = min Σ_{j=1}^n (c_j − ua_j)x_j − Σ_{i=1}^m u_i s_i    (4)
    s.t. Σ_{j=1}^n α_j x_j + Σ_{i=1}^m ε_i s_i = σ,
    x_j = 0 or 1,  s_i = 0, 1, 2, ..., U_i.

Then L(u) = ub + g(β; u). Moreover, the algorithms in [10] and [11] can be used to compute g(σ; u) for all σ ∈ G without a significant increase in computation time.
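The specialized algorithms of [10] and [11] are essentially shortest-path computations over a group network; a naive dynamic-programming sketch of the same computation (all names hypothetical, group elements represented as tuples) that returns g(σ; u) for every σ at once:

```python
from itertools import product

def g_all(q, alphas, costs, eps_s, u, U):
    """Brute-force analogue of the group minimization (4): for every
    element sigma of G = Z_q1 ⊕ ... ⊕ Z_qr, the minimal value of
    sum_j (c_j - u a_j) x_j - sum_i u_i s_i subject to the group
    equation.  alphas[j], eps_s[i] are group elements (tuples),
    costs[j] = c_j - u a_j.  DP over group states, one variable at a time."""
    r = len(q)

    def add(a, b, mult=1):
        return tuple((a[k] + mult * b[k]) % q[k] for k in range(r))

    INF = float("inf")
    best = {sigma: INF for sigma in product(*(range(qi) for qi in q))}
    best[tuple([0] * r)] = 0.0
    # zero-one variables x_j: take or leave each one once
    for alpha_j, c_j in zip(alphas, costs):
        new = dict(best)
        for sigma, val in best.items():
            tgt = add(sigma, alpha_j)
            new[tgt] = min(new[tgt], val + c_j)
        best = new
    # slack variables s_i with 0 <= s_i <= U_i and unit cost -u_i
    for e_i, u_i, U_i in zip(eps_s, u, U):
        new = dict(best)
        for s in range(1, U_i + 1):
            for sigma, val in best.items():
                tgt = add(sigma, e_i, s)
                new[tgt] = min(new[tgt], val - u_i * s)
        best = new
    return best
```

With this table in hand, L(u) = ub + best[β], and the other values g(σ; u) come for free, as the text notes.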

It is well known and easily shown that the function L is concave, continuous and a lower bound on v. The IP dual problem is to find the greatest lower bound

    w = max L(u)    (5)
    s.t. u ∈ R^m.

If w = +∞, then the IP problem (1) is infeasible.

The desired relation of the IP dual problem (5) to the primal IP problem (1) is summarized by the following:

Optimality Conditions: The pair of solutions (x*, s*) ∈ Y and u* ∈ R^m is said to satisfy the optimality conditions if
(i) L(u*) = u*b + (c − u*A)x* − u*s*,
(ii) Ax* + Is* = b.

It can easily be shown that a pair satisfying these conditions is optimal in the respective primal and dual problems. For a given IP dual problem, there is no guarantee that the optimality conditions can be established, but attention can be restricted to optimal dual solutions for which we try to find a complementary optimal primal solution. If the dual IP problem cannot be used to solve the primal problem, then v > w and we say there is a duality gap; in this case, a stronger IP dual problem is constructed.

Specifically, solution of the IP problem (1) by dual methods is constructively achieved by generating a finite sequence of groups {G^k}_{k=0}^K, sets {Y^k}_{k=0}^K, and IP dual problems analogous to (5) with maximal objective function value w^k. The group G^0 = Z_1, Y^0 = {(x, s) | x_j = 0 or 1, s_i = 0, 1, 2, ..., U_i}, and the corresponding IP dual problem can be shown to be the linear programming relaxation of (1). The groups here have the property that G^k is a subgroup of G^{k+1}, implying directly that Y^{k+1} ⊆ Y^k and therefore that v ≥ w^{k+1} ≥ w^k. Sometimes we will refer to G^{k+1} as a supergroup of G^k.

The critical step in this approach to solving the IP problem (1) is that if an optimal solution to the kth dual does not yield an optimal integer solution, then we are able to construct the supergroup G^{k+1} so that Y^{k+1} ⊊ Y^k. Moreover, the construction eliminates the infeasible IP solutions (x, s) ∈ Y^k which are used in combination by the IP dual problem to produce a fractional solution to the optimality conditions. Since the set Y^0 is finite, the process must converge in a finite number of IP dual problem constructions to an IP dual problem yielding an optimal solution to (1) by the optimality conditions, or prove that (1) has no feasible solution. Details are given in [3].

The following theorem exposes how this IP duality theory extends the notion of complementary slackness to IP.

Theorem 1. Suppose that (x*, s*) ∈ Y and u* ∈ R^m satisfy the optimality conditions. Then
(i) u*_i < 0 and s*_i > 0 implies ε_i ≠ 0,
(ii) u*_i > 0 and s*_i < U_i implies ε_i ≠ 0.

Proof. Recall that (x*, s*) ∈ Y implies that Σ_{j=1}^n α_j x*_j + Σ_{i=1}^m ε_i s*_i = β and L(u*) = u*b + (c − u*A)x* − u*s*. Suppose u*_i < 0 and s*_i > 0 but ε_i = 0. Then we can reduce the value of s_i to 0 and still have a solution in Y. But this new solution in the Lagrangean has a cost of L(u*) + u*_i s*_i < L(u*), contradicting the optimality of (x*, s*). The proof of case (ii) is similar. □

The IP dual problem (5) is actually a large scale linear programming problem. Let Y = {x^t, s^t}_{t=1}^T be an enumeration of Y. The LP formulation of (5) is

    w = max v    (6)
    s.t. v ≤ ub + (c − uA)x^t − us^t,  t = 1, ..., T.

The linear programming dual to (6) is

    w = min Σ_{t=1}^T (cx^t) w_t    (7)
    s.t. Σ_{t=1}^T (Ax^t + Is^t) w_t = b,
    Σ_{t=1}^T w_t = 1,  w_t ≥ 0.

The number of rows T in (6), or columns in (7), is enormous. The solution methods given in Fisher and Shapiro [6] generate columns as needed by ascent algorithms for solving (6) and (7) as a primal-dual pair. The columns are generated by solving the Lagrangean problem (3).
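When Y is small enough to enumerate, the ascent idea can be sketched directly: evaluate L(u) by scanning Y, and move u along the subgradient b − (Ax + Is) taken at a minimizing (x, s). This is only an illustration of the ascent idea under a fully enumerated Y, not the column-generation method of [6]; all names are ours:

```python
def L(u, b, c, A, Y):
    """Dual function (3) with Y enumerated explicitly: L(u) = ub +
    min over (x, s) in Y of (c - uA)x - us.  Returns the value and a
    subgradient b - (Ax + Is) at a minimizing (x, s)."""
    m, n = len(A), len(A[0])
    best_val, best_sub = None, None
    for x, s in Y:
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # (c - uA)x - us = cx - u(Ax + s)
        val = (sum(c[j] * x[j] for j in range(n))
               - sum(u[i] * (Ax[i] + s[i]) for i in range(m)))
        if best_val is None or val < best_val:
            best_val = val
            best_sub = [b[i] - Ax[i] - s[i] for i in range(m)]
    ub = sum(u[i] * b[i] for i in range(m))
    return ub + best_val, best_sub

def dual_ascent(b, c, A, Y, steps=200, step0=1.0):
    """Subgradient ascent on the piecewise linear concave dual (5)-(6)."""
    u = [0.0] * len(b)
    w = None
    for k in range(1, steps + 1):
        val, sub = L(u, b, c, A, Y)
        w = val if w is None else max(w, val)
        u = [u[i] + (step0 / k) * sub[i] for i in range(len(u))]
    return w
```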

The formulation (7) of the IP dual has a convex analysis interpretation. Specifically, the feasible region in (7) corresponds to

    {(x, s) | Ax + Is = b, 0 ≤ x_j ≤ 1, 0 ≤ s_i ≤ U_i} ∩ [Y],

where the left hand set is the feasible region of the LP relaxation of the IP problem (1) and "[ ]" denotes convex hull. Thus, in effect, the dual approach approximates the convex hull of the set of feasible integer points by the intersection of the LP feasible region with the polyhedron [Y]. When the IP dual problem (5) solves the IP problem (1), then [Y] has cut away enough of the LP feasible region to approximate the convex hull of feasible integer solutions in a neighborhood of an optimal IP solution.

3. Sensitivity analysis of cost coefficients

Sensitivity analysis of cost coefficients is easier than sensitivity analysis of right-hand-side coefficients because the set F of feasible solutions remains unchanged. As described in the previous section, suppose we have constructed an IP dual problem for which the optimality conditions are satisfied by some pair (x*, s*) ∈ Y and u*. The first question we wish to answer is:

In what range of values can c_l vary without changing the value of the zero-one variable x_l in the optimal solution (x*, s*)?

We answer this question by studying the effect of changing c_l on the Lagrangean.

Theorem 2. Let (x*, s*) and u* denote optimal solutions to the primal and dual IP problems, respectively, satisfying the optimality conditions. Suppose the zero-one variable x*_l = 0 and we consider varying its cost coefficient c_l to c_l + Δc_l. Then (x*, s*) remains optimal if

    Δc_l ≥ min{0, g(β; u*) − (c_l − u*a_l) − g(β − α_l; u*)},    (8)

where g(·; u*) is defined in (4).

Proof. Clearly, if x*_l = 0 and (8) indicates that Δc_l ≥ 0, or c_l increases, then x* remains optimal with x*_l = 0. Thus we need consider only the case when g(β; u*) − (c_l − u*a_l) − g(β − α_l; u*) < 0. Let g(σ; u* | x_l = k; Δc_l) denote the minimal cost in (4) if we are constrained to set x_l = k when the change in the lth cost coefficient is Δc_l. If Δc_l satisfies (8), then

    g(β; u* | x_l = 1; Δc_l) = c_l + Δc_l − u*a_l + g(β − α_l; u* | x_l = 0; Δc_l)    (9)
        = c_l + Δc_l − u*a_l + g(β − α_l; u* | x_l = 0; 0)
        ≥ c_l + Δc_l − u*a_l + g(β − α_l; u*)
        ≥ g(β; u*)
        = g(β; u* | x_l = 0; 0)
        = g(β; u* | x_l = 0; Δc_l),

where the first equality follows from the definition of g(β; u* | x_l = 1; Δc_l), the second equality because the value of Δc_l is of no consequence if x_l = 0, the first inequality because g(β − α_l; u*) may or may not be achieved with x_l = 0, the second inequality by our assumption that (8) holds, the third equality because g(β; u*) = (c − u*A)x* − u*s* and x*_l = 0, and the final equality by the same reasoning as the second equality. Thus, as long as Δc_l satisfies (8), it is less costly to set x_l = 0 rather than x_l = 1. □
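Once the two group-minimization values are in hand, (8) is just a number; a trivial sketch with hypothetical values (not from the paper):

```python
def cost_decrease_bound(g_beta, c_l, ua_l, g_beta_minus_alpha):
    """Right-hand side of (8): with x*_l = 0, the pair (x*, s*) stays
    optimal in the Lagrangean for any Delta c_l >= this value."""
    return min(0.0, g_beta - (c_l - ua_l) - g_beta_minus_alpha)

# hypothetical values: g(beta; u*) = -3, c_l - u* a_l = 2,
# g(beta - alpha_l; u*) = -4, so c_l may drop by at most 1
assert cost_decrease_bound(-3.0, 2.0, 0.0, -4.0) == -1.0
# when the bracketed term is positive, the min is 0: only
# nonnegative changes Delta c_l are certified by (8)
assert cost_decrease_bound(-1.0, -5.0, 0.0, -2.0) == 0.0
```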

On the other hand, marginal analysis when x*_l = 1 is not as easy because the variable is used in achieving the minimal value g(β; u*). Clearly x* remains optimal if c_l is decreased. As c_l is increased, x_l should eventually be set to zero unless it is uniquely required for feasibility.

Theorem 3. Let (x*, s*) and u* denote optimal solutions to the primal and dual IP problems, respectively, satisfying the optimality conditions. Suppose the zero-one variable x*_l = 1 and we consider varying its cost coefficient c_l to c_l + Δc_l. Then (x*, s*) is not optimal in the Lagrangean if

    Δc_l > min{c_j − u*a_j | j ∈ J(α_l) and x*_j = 0} − (c_l − u*a_l),    (10)

where J(α_l) = {j | α_j = α_l}. We assume there is at least one x*_j = 0 for j ∈ J(α_l) because otherwise the result is meaningless.


Proof. Note that we can order the elements j_1, j_2, ..., j_V in J(α_l) by increasing cost c_j − u*a_j with respect to u*, such that x*_j = 1 for j = j_1, ..., j_u and x*_j = 0 for j = j_{u+1}, ..., j_V. This is because all these variables x_j have the same effect in the constraints in (4). By assumption, there is an x*_j = 0, and if c_l + Δc_l − u*a_l > c_j − u*a_j, then x_j = 1 will be preferred to x_l = 1 in (4). In this case, (x*, s*) is no longer optimal in the Lagrangean and the optimality conditions are destroyed. □

The inequality (10) can be a gross overstatement of when (x*, s*) ceases to be optimal in the Lagrangean. Systematic solution of g(β; u*) for increasing values of c_l is possible by the parametric methods we discuss next.

A more general question about cost coefficient variation in the IP problem (1) is the following:

How does the optimal solution change as the objective function c varies in the interval [c^0, c^1]?

Parametric IP analysis of this type has been studied by Nauss [15], but without the IP duality theory, and by the author in [21] in the context of multicriterion IP. We give some of the relevant results here. The work required to do parametric IP analysis is greater than that for the sensitivity analysis described above, which is effectively marginal analysis.

For θ ∈ [0, 1] define the function

    v(θ) = min ((1 − θ)c^0 + θc^1)x    (11)
    s.t. Ax + Is = b,
    x_j = 0 or 1,  s_i = 0, 1, 2, ..., U_i.

It is easy to show that v(θ) is a piecewise linear concave function of θ. The IP dual objective function can be used to approximate v(θ) from below. Specifically, suppose (11) is solved by the IP dual at θ = 0 and we consider increasing θ. From (7), we have

    w(θ) = min Σ_{t=1}^T (((1 − θ)c^0 + θc^1)x^t) w_t    (12)
    s.t. Σ_{t=1}^T (Ax^t + Is^t) w_t = b,
    Σ_{t=1}^T w_t = 1,  w_t ≥ 0,

where w(θ) is also a piecewise linear concave function of θ, and w(0) = v(0) because we assume an IP dual has been constructed which solves the primal.
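The piecewise linear concavity of v(θ) is immediate once the feasible set is enumerated, since v(θ) is then a minimum of finitely many functions linear in θ. A small sketch with a hypothetical two-point feasible set whose costs cross as θ moves from 0 to 1:

```python
def v_theta(theta, c0, c1, feasible_xs):
    """v(theta) from (11) with the feasible set enumerated explicitly:
    the minimum over feasible x of ((1 - theta) c0 + theta c1) x."""
    return min(sum(((1 - theta) * c0[j] + theta * c1[j]) * x[j]
                   for j in range(len(x)))
               for x in feasible_xs)

# two feasible points with costs 1 + 2*theta and 3 - 2*theta
c0, c1 = [1.0, 3.0], [3.0, 1.0]
X = [(1, 0), (0, 1)]
vals = [v_theta(t / 10, c0, c1, X) for t in range(11)]
# minimum of linear functions of theta: piecewise linear, concave,
# peaking at the crossing point theta = 0.5
assert vals[0] == 1.0 and vals[5] == 2.0 and vals[10] == 1.0
```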
