Partial orders related to Boolean optimization
(3.5)    $x = \sum_{i=1}^{n} x_i \cdot e_i$,

with $0 \cdot e_i := 0$ and $1 \cdot e_i := e_i$ for $i \in N$.
(3.6) Definition. Let $\le$ be a preorder on $B^n$, $k := \min\{\, j \in N \mid e_j \le 0 \,\}$ (if the set is empty put $k := n + 1$). Then
$$x^+ := \sum_{i < k} x_i \cdot e_i.$$
(3.7) Definition. Let $T(x) = \{ j_1, j_2, \ldots, j_m \}$ with $j_1 < j_2 < \cdots < j_m$. Then $x^{(t)}$ is given by
$$T(x^{(t)}) = \{ j_1, j_2, \ldots, j_{\min(t,m)} \} \quad \text{for } t \in N.$$
The special subvectors defined by (3.6) and (3.7), respectively, have some useful properties in preordered semigroups. Let $\|x\| := |T(x)|$.
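Operationally, these subvectors are simple masking operations on 0-1 tuples. The following sketch (the tuple encoding and the function names are mine, not the paper's) computes $T(x)$, $x^{(t)}$ and $x^+$:

```python
# Illustrative sketch of (3.6)/(3.7); encoding and names are assumptions.

def support(x):
    """T(x): the 1-based indices of the nonzero coordinates of x."""
    return [i + 1 for i, xi in enumerate(x) if xi == 1]

def x_plus(x, k):
    """x^+ of (3.6): keep only coordinates with index < k."""
    return tuple(xi if i + 1 < k else 0 for i, xi in enumerate(x))

def x_sub(x, t):
    """x^(t) of (3.7): keep the min(t, |T(x)|) smallest support indices."""
    keep = set(support(x)[:t])
    return tuple(1 if i + 1 in keep else 0 for i in range(len(x)))

x = (1, 0, 1, 1, 0, 1)
print(support(x))                      # [1, 3, 4, 6]
print(x_sub(x, 2))                     # (1, 0, 1, 0, 0, 0)
print(x_plus(x, k=5))                  # (1, 0, 1, 1, 0, 0)
print(x_sub(x, 3) == x_plus(x, k=5))   # True; a t with x^(t) = x^+ exists
```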
(3.8) Proposition. Let $(B^n, +, \le)$ be a preordered semigroup and $r = \|x\|$. Then
(3.8.1) $x = x^{(r)} \le x^{(r-1)} \le \cdots \le x^{(t+1)} \le x^+$,
(3.8.2) $y \le' x \;\Rightarrow\; y \le x^+$,
(3.8.3) $\|y\| = i \,\wedge\, y \le' x \;\Rightarrow\; y \le x^{(i)}$,
(3.8.4) there exists $t \in N$ such that $x^{(t)} = x^+$.
(3.8.1)-(3.8.4) follow by the monotonicity property (M) and (3.2'). Let us consider for example (3.8.2). Let $\varphi$ correspond to the definition of $y \le' x$. Then, by repeated application of (M),
$$y = \sum_i y_i \cdot e_i \;\le\; \sum_i y_i \cdot e_{\varphi(i)} =: y',$$
for $\varphi(i) \le i$ implies $e_i \le e_{\varphi(i)}$ by (3.2'). Analogously $y' \le x^+$, with $k$ as defined in (3.6).
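The greedy algorithm (A) invoked next is defined in Section 2, which is not part of this excerpt. A minimal sketch of the scan it performs (the membership-oracle interface and the example set are my assumptions):

```python
import itertools

def greedy(n, member):
    """Greedy algorithm (A): x := 0; for j = 1, ..., n add e_j whenever
    x + e_j still satisfies the membership test."""
    x = [0] * n
    for j in range(n):              # position j corresponds to e_{j+1}
        x[j] = 1
        if not member(tuple(x)):
            x[j] = 0                # e_{j+1} cannot be added; undo
    return tuple(x)

# Example: S = all 0-1 vectors of length 4 with at most two ones.
S = {v for v in itertools.product((0, 1), repeat=4) if sum(v) <= 2}
print(greedy(4, lambda v: v in S))  # (1, 1, 0, 0), the lexicographic maximum of S
```

Successive iterates of this loop are exactly the vectors $x^{(1)}, x^{(2)}, \ldots$ discussed below.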
The application of the greedy algorithm to $S$ yields step by step the sequence

(3.9)    $x^{(1)}, x^{(2)}, x^{(3)}, \ldots, x^{(s)} = x(S)$;

thus by (3.8.4) it is possible to determine $x(S)^+$. An immediate consequence of (3.8.2) and (2.6) is
(3.10) Theorem. Let $\le$ be a preorder on $B^n$ and $S \subseteq B^n$. If
(3.10.1) $(B^n, +, \le)$ is a preordered semigroup,
(3.10.2) there exists the maximum $x$ of $S$ with regard to $\le'$,
then $y \le x^+$ for all $y \in \mathcal{S}$.
The two assumptions describe a class of problems which can be solved by the application of the greedy algorithm. After the determination of $x^+$ one has to check whether $x^+ \in S$ or not. If $x^+ \in \mathcal{S} \setminus S$, then it is only an upper bound. If $S = \mathcal{S}$, then $x^+$ is a solution of (P2). The following theorem implies Theorem (3.10) by (3.8.4). Let us denote $\mathcal{S}_i := \{\, x \in \mathcal{S} \mid \|x\| = i \,\}$.
(3.11) Theorem. The assumptions of (3.10) yield, for all $1 \le i \le \|S\| := \max\{\, \|x\| \mid x \in S \,\}$,
$$y \le x^{(i)}, \quad \forall y \in \mathcal{S}_i.$$
The theorem follows from (3.8.3) and (2.6). The two theorems refer to different
combinatorial structures.
(3.12) Corollary. Let $B \subseteq B^n$. If $M = M(N, T(B))$ is a matroid by (2.7) and $\le$ is defined by (3.4), then
$$x(B) \in \max{}_{\le}(B).$$
(3.13) Corollary. Let $S \subseteq B^n$. If $M = M(N, T(S))$ is a matroid by (2.8) and $\le$ is defined by (3.4), then
$$[x(S)]^+ \in \max{}_{\le}(S).$$
The two corollaries follow from (3.11) and (3.10), respectively, together with (2.10) and (2.11), respectively. We consider the following class of functions in view of (P1):
(3.14) Definition. Let $F$ denote the set of all functions $f : B^n \to H$ with
(3.14.1) $(H, \le)$ is an ordered set,
(3.14.2) $f(e_n) \le f(e_{n-1}) \le \cdots \le f(e_1)$,
(3.14.3) $(B^n, +, \le)$ is a preordered semigroup with regard to the preorder induced by $f$.
(3.15) Corollary. Let $S \subseteq B^n$ with $S = \mathcal{S}$. If (3.10.2) holds, then regardless of the choice of the objective $f \in F$, $[x(S)]^+$ is a solution of the problem $\max_{x \in S} f(x)$.
This follows from (3.10). Clearly there is an analogous corollary corresponding to
(3.11).
(3.16) Corollary. Let $B \subseteq B^n$ with $\|y\| = \|B\|$ for all $y \in B$. If (3.10.2) holds, then regardless of the choice of the objective $f \in F$, $x(B)$ is a solution of the problem $\max_{x \in B} f(x)$.
These corollaries reflect the fact that the greedy algorithm only considers the values of the objective function for the unit vectors.
At the end of Section 2 we introduced regular sets with regard to $\le'$. As shown by (2.16), in this case the assumption (3.10.2) implies that $M = M(N, T(S))$ is a matroid. Results for more general regular sets with regard to $\subseteq$ and $\le'$ are given by Hammer, Johnson and Peled in [7]. If the objective "agrees" with the partial order $R$, that is

(3.17)    $x\,R\,y \;\Rightarrow\; f(x) \le f(y), \quad \forall x, y \in B^n$,

then

(3.18)    $\max{}_f(S) \subseteq \max{}_f(S_R)$

clearly holds for $S \subseteq B^n$, $S_R := \{\, x \in B^n \mid \exists y \in S : x\,R\,y \,\}$. If distinct vectors in (3.17) imply distinct function values, then equality holds in (3.18). In this case the BOP (P1) is equivalent to

(P3)    $\max_{x \in S_R} f(x)$.
As shown in [7], $S_{\subseteq}$ can be described by the restrictions of a covering problem, that means all restrictions are of the form
$$\sum_{j \in J} (1 - x_j) \ge 1, \quad \text{with } J \subseteq N.$$
In the case $R = \le'$ a further simplification is possible and developed in [7].
In connection with covering problems the partial order $\le'$ has been considered by Bowman and Starr [1]. They present an enumerative algorithm for the problem of maximizing a partial order on $B^n$ which fulfills (3.2') and (M) in (3.3). If in this section $\le$ denotes only a partial preorder, then under the following additional assumption to (3.2'),

(3.2'')    $0 \le e_n$, or $e_1 \le 0$, or there exists $k \in N \setminus \{1\}$ such that $e_k \le 0 \le e_{k-1}$,

all results hold which refer to $\le$.
4. Dual partial orders
(4.1) Definition. Let $R$ be a partial order on $B^n$. Then the dual partial order $R'$ of $R$ is defined by
$$x\,R'\,y \;:\Leftrightarrow\; \sigma(y)\,R\,\sigma(x),$$
with $\sigma \in P_n$, $\sigma(i) := n - i + 1$ for $i \in N$.
Partial orders and their duals may coincide more or less.

(4.2) Proposition. (1) $x \subseteq' y \Leftrightarrow y \subseteq x$,
(2) $x \le_b' y \Leftrightarrow x \le_b y$.
In view of Proposition (2.4), the dual partial orders of those partial orders defined by (2.1) and (2.3) have analogous properties.

(4.3) Proposition. (1) $y \subseteq x \Rightarrow x \le_l y$,
(2) $x \le_b y \Rightarrow x \le_l' y$,
(3) $x \le_l' y \Rightarrow x \le' y$.
In connection with dual partial orders we consider a modified greedy algorithm:
(A')
(1) $x := 0$; $j := n$;
(2) if $x + e_j \in S$, set $x := x + e_j$;
(3) if $j = 1$, stop; otherwise set $j := j - 1$ and return to (2).
The output vector of this algorithm applied to $S \subseteq B^n$ shall be denoted by $x'(S)$. The application of (A') to $S^* := \{\, 1 - x \mid x \in S \,\}$ is called the dual greedy algorithm.
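A minimal sketch of (A'), assuming $S$ is presented through a membership test on 0-1 tuples (the encoding and names are mine):

```python
import itertools

def dual_greedy(n, member):
    """Modified greedy (A'): x := 0; for j = n, n-1, ..., 1 add e_j
    whenever x + e_j still satisfies the membership test."""
    x = [0] * n
    for j in range(n - 1, -1, -1):
        x[j] = 1
        if not member(tuple(x)):
            x[j] = 0                # e_{j+1} cannot be added; undo
    return tuple(x)

# x'(S) for S = all 0-1 vectors of length 4 with at most two ones:
S = {v for v in itertools.product((0, 1), repeat=4) if sum(v) <= 2}
print(dual_greedy(4, lambda v: v in S))   # (0, 0, 1, 1)

# The dual greedy algorithm is (A') applied to S* = {1 - x : x in S};
# here S* consists of all vectors with at least two ones.
S_star = {tuple(1 - xi for xi in v) for v in S}
```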
(4.4) Proposition. $x'(S)$ is the minimum of $S$ with regard to $\le_l'$.

Proof. The application of (A') to $S$ is equivalent to the application of (A) to $\sigma(S)$. Hence $x'(S) = \sigma(x(\sigma(S)))$, and (4.4) follows by (4.1).
For an arbitrary set $S$ the four vectors representing the maxima and the minima of $S$ with regard to $\le_l$ and $\le_l'$ may be pairwise distinct. For example, take $S = \{x, y, u, v\}$ with
$x = (1\,0\,0\,1\,0)$, the maximum with regard to $\le_l$,
$y = (0\,1\,1\,0\,0)$, the maximum with regard to $\le_l'$,
$u = (0\,1\,0\,0\,1)$, the minimum with regard to $\le_l'$,
$v = (0\,0\,1\,1\,0)$, the minimum with regard to $\le_l$.
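These four extrema can be checked mechanically. The sketch below assumes, following the reversal map $\sigma$ of (4.1), that the dual lexicographic comparison of two vectors is the lexicographic comparison of their reversals with the roles of the two arguments exchanged:

```python
S = {(1, 0, 0, 1, 0),   # x
     (0, 1, 1, 0, 0),   # y
     (0, 1, 0, 0, 1),   # u
     (0, 0, 1, 1, 0)}   # v

lex_max = max(S)                          # Python tuples compare lexicographically
lex_min = min(S)
dual_max = min(S, key=lambda v: v[::-1])  # dual-lex maximum: reversal is lex-least
dual_min = max(S, key=lambda v: v[::-1])  # dual-lex minimum: reversal is lex-greatest

print(lex_max)    # (1, 0, 0, 1, 0) = x
print(dual_max)   # (0, 1, 1, 0, 0) = y
print(dual_min)   # (0, 1, 0, 0, 1) = u
print(lex_min)    # (0, 0, 1, 1, 0) = v
```

The four printed vectors are indeed pairwise distinct.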
(4.5) Proposition. Let $R \in \{ \subseteq, \supseteq, \le_b, \le_l, \le', \le_b', \le_l' \}$. Then
$$x\,R\,y \;\Leftrightarrow\; (1-y)\,R\,(1-x).$$
Let us show this for example in the case of $R = \le'$. Equivalent to the left side there is
$$|T_k(x)| \le |T_k(y)|, \quad \forall\, 1 \le k \le n,$$
and this is equivalent to
$$|T_k(1-x)| \ge |T_k(1-y)|, \quad \forall\, 1 \le k \le n.$$
The rest follows analogously to the first equivalence. An immediate consequence of
(4.5) is the next proposition.
(4.6) Proposition. $1 - x'(S^*)$ is the maximum of $S$ with regard to $\le_l'$; $1 - x(S^*)$ is the minimum of $S$ with regard to $\le_l$.
The lexicographical maximum or minimum of $S$, as well as the dual lexicographical maximum or minimum of $S$, can be computed by the application of (A) or (A') to $S$ or $S^*$.
(4.7) Theorem. Let $S \subseteq B^n$. The following statements are equivalent:
(4.7.1) there exists the maximum $x \in S$ with regard to $\le_l$;
(4.7.2) there exists the common maximum $x \in S$ with regard to $\le_l$ and $\le_l'$.
An implication of (4.7.1) or (4.7.2) is
(4.7.3) $x(S) = 1 - x'(S^*)$.

Proof. (4.7.1) implies (4.7.2) by (2.4) and (4.3). Conversely, if $y \le_l' x$ and $y \le' x$, then $\|x\| \le \|y\| \le \|x\|$ follows by definition, and therefore $y \le_l x$. (4.7.2) implies (4.7.3) by (2.6), (4.3) and (4.6).
If (4.7.1) or (4.7.2) holds, the dual greedy algorithm yields the complement of the lexicographical maximum of $S$. This may be important in view of problem (P1). The crucial point in the application of the greedy algorithm is the test whether $x \in S$ or not. If $x \in S^*$ is easier to check, then one will prefer the dual greedy algorithm.
5. Remarks
The combinatorial structure of problems for which the greedy algorithm is valid is closely related to matroids. The corresponding algorithm for the intersection of two matroids, namely the weighted intersection algorithm of Lawler [9], has not yet been considered in this way, but similar studies have been published by Burkard, Hahn and Zimmermann [3] as well as Burkard [2] on the assignment problem, which is a special example of the intersection of two matroids. Already in this special case it turned out that results similar to (3.15) cannot be attained; yet an algorithm is stated in [3] which solves the assignment problem with generalized objectives.
References
[1] V.J. Bowman and J.H. Starr, Set covering by ordinal cuts I/II, Management Sciences Research Reports No. 321/322, Carnegie-Mellon University, Pittsburgh, Pennsylvania, 1973.
[2] R. Burkard, Kombinatorische Optimierung in Halbgruppen, in: R. Bulirsch, W. Oettli and J. Stoer, eds., Optimization and Optimal Control, Lecture Notes in Mathematics 477 (Springer, Berlin, 1975) pp. 1-17.
[3] R. Burkard, W. Hahn and U. Zimmermann, An algebraic approach to assignment problems, Report 1974-1, Mathematisches Institut der Universität Köln, 1974.
[4] F.D.J. Dunstan and D.J.A. Welsh, A greedy algorithm for solving a certain class of linear programmes, Math. Programming 5 (1973) 338-353.
[5] J. Edmonds, Matroids and the greedy algorithm, Math. Programming 1 (1971) 127-136.
[6] D. Gale, Optimal assignments in an ordered set: an application of matroid theory, J. Comb. Theory 4 (1968) 176-180.
[7] P.L. Hammer, E.L. Johnson and U.N. Peled, Regular 0-1 programs, Research Report CORR 73-18, University of Waterloo.
[8] J.B. Kruskal, On the shortest spanning subtree of a graph and the travelling salesman problem, Proc. Am. Math. Soc. 7 (1956) 48-50.
[9] E.L. Lawler, Matroid intersection algorithms, Math. Programming 9 (1975) 31-56.
[10] M.J. Magazine, G.L. Nemhauser and L.E. Trotter, When the greedy solution solves a class of knapsack problems, MRC Technical Summary Report No. 1421 (1974), Mathematics Research Center, University of Wisconsin, Madison, Wisconsin, U.S.A.
[11] L.A. Wolsey, Faces for a linear inequality in 0-1 variables, Math. Programming 8 (1975) 165-178.
Annals of Discrete Mathematics 1 (1977) 551-562
© North-Holland Publishing Company
INTEGER LINEAR PROGRAMMING WITH MULTIPLE OBJECTIVES*
Stanley ZIONTS
School of Management, State University of New York at Buffalo, Buffalo, NY, U.S.A.
Although it may seem counterintuitive, a method for solving multiple criteria integer linear
programming problems is not an obvious extension of methods that solve multiple criteria linear
programming problems. The main difficulty is illustrated by means of an example. Then a way of
extending the ZiontsWallenius algorithm [6] for solving integer problems is given, and two types
of algorithms for extending it are briefly presented. An example is presented for one of the two
types. Computational considerations are also discussed.
1. Introduction
In [6] a method was presented for solving multiple criteria linear programming
problems. Because integer programming is a generalization of linear programming
in that a subset of variables may be required to take on integer values, it is
reasonable to ask if multicriteria integer problems can be solved by an obvious
extension to the method: solving the multicriteria linear programming problem
using that method and then using the associated multipliers to solve the integer
problem. In general, unfortunately, such a procedure is not valid. Assuming that
the implicit utility function of the decision maker is a linear additive function of
objectives, the general idea can be modified into a workable algorithm for solving
mixed or all integer programming problems involving multiple objectives.
Numerous approaches to various problems involving multiple objective functions
have been proposed. B. Roy [3] discusses a number of them. He also develops a
typology of methods [3, p. 240]:
“1. aggregation of multiple objective functions in a single function defining a
complete preference order;
2. progressive definition of preferences together with exploration of the
feasible set;
3. definition of a partial order stronger than the product of the n complete
orders associated with the n objective functions;
4. maximum reduction of uncertainty and incomparability.”
To put things into perspective, the approach of [6] is a combination of 1 and 2 in
that an aggregation of the functions is accomplished by an interactive process in
* An earlier version of this paper has also been issued as Working Paper 7532 of the European
Institute for Advanced Studies in Management in Brussels.
which preferences are expressed. The use of multiple criteria in an integer
framework has been mentioned in [6] and more recently in [1] and [4].
The plan of this paper is to first indicate why noninteger methods cannot be
extended in an obvious way to solve multiple criteria integer problems. Then two
extensions of the method of [6] for solving integer problems are developed, an
example is solved, and some considerations for implementation are given. In an
appendix the method of [6] is briefly overviewed.
2. Some considerations for solving multiple criteria integer problems
The problem to be considered is a mixed integer linear programming problem.
Let the decision variables be a vector x of appropriate order where some or all of
the variables are required to take on integer values. Denote the set of integer
variables as J. The constraint set is then
(1)    $Ax = b$, $x \ge 0$, $x_j$ integer for $j \in J$,
where $A$ and $b$ are, respectively, a matrix and vector of appropriate order. In addition we have a matrix of objective functions $C$, where row $i$ of $C$ gives the $i$th objective $C_i$. Each objective $u_i$ is to be maximized and we may thus write

(2)    $Iu - Cx \le 0$.
The formulation (1)-(2) is the most general formulation of the multiple criteria integer programming problem if one grants that any nonlinearities are already represented in the constraints (1) using piecewise linearizations and integer variables as necessary. If we accept that the implicit utility function is a linear function of the objectives $u$ (as was done originally in [6]), we may therefore say that our objective is to maximize $\lambda u$, where $\lambda$ is an unknown vector of appropriate order. Were $\lambda$ known, the problem of maximizing $\lambda u$ subject to (1) and (2) would be an ordinary integer programming problem. Such a problem could be solved using any method for solving integer linear programming problems. The problem is that $\lambda$ is not known.
In an earlier paper [6] Wallenius and I developed a method for solving linear
programming problems having multiple objectives. That method is briefly summarized in the appendix. The method has been extensively tested and seems to work in
practice. A natural extension of that method would appear to be an extension for
solving problems involving integer variables:
1. Solve the continuous multiple criteria problem according to the method of [6];
2. Using the multipliers obtained in step 1, solve the associated integer linear
programming problem.
Unfortunately, as the following simple example shows, that extension does not necessarily work. Given the constraints
$$x_1 + \tfrac{1}{3}x_2 \le 3.125, \qquad \tfrac{1}{3}x_1 + x_2 \le 3.125, \qquad x_1, x_2 \ge 0 \text{ and integer},$$
with objectives $u_1 = x_1$ and $u_2 = x_2$: provided that the true multipliers $\lambda_1$ and $\lambda_2$ ($> 0$) satisfy the relationships
$$\lambda_1 > \tfrac{1}{3}\lambda_2, \qquad \lambda_1 < 3\lambda_2,$$
the continuous solution $x_1 = 2.34$, $x_2 = 2.34$ is optimal. However, for this problem there are three optimal integer solutions corresponding to the same continuous optimum, depending on the true weights:
If $3\lambda_2 > \lambda_1 > 2\lambda_2$, then $x_1 = 3$, $x_2 = 0$ is optimal;
If $2\lambda_2 > \lambda_1 > 0.5\lambda_2$, then $x_1 = x_2 = 2$ is optimal;
If $0.5\lambda_2 > \lambda_1 > \tfrac{1}{3}\lambda_2$, then $x_1 = 0$, $x_2 = 3$ is optimal.
The example could readily be made more complicated, but it serves to show that more precision may be required in the specification of the multipliers than is needed merely to identify the multipliers valid at a noninteger optimal solution. (Further precision is not always required; change the constraint value of the problem from 3.125 to 2.99.)
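The three cases can be verified by enumerating the feasible integer points. The check below assumes the constraint pair $x_1 + \frac{1}{3}x_2 \le 3.125$, $\frac{1}{3}x_1 + x_2 \le 3.125$, which is a reconstruction consistent with the multiplier ranges and the constraint value 3.125 mentioned above:

```python
from fractions import Fraction as F

# Reconstructed feasible region; exact arithmetic via Fraction.
def feasible(x1, x2):
    b = F(25, 8)                                  # 3.125
    return x1 + F(x2, 3) <= b and F(x1, 3) + x2 <= b

points = [(x1, x2) for x1 in range(4) for x2 in range(4) if feasible(x1, x2)]

def best(lam1, lam2):
    """Optimal feasible integer point for the utility lam1*x1 + lam2*x2."""
    return max(points, key=lambda p: lam1 * p[0] + lam2 * p[1])

print(best(F(5, 2), F(1)))   # lam1 in (2*lam2, 3*lam2)    -> (3, 0)
print(best(F(1), F(1)))      # lam1 in (lam2/2, 2*lam2)    -> (2, 2)
print(best(F(2, 5), F(1)))   # lam1 in (lam2/3, lam2/2)    -> (0, 3)
```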
3. Adapting the Zionts-Wallenius method for solving integer programming problems
To further specify the multipliers A to find the optimal integer solution, it is
necessary to ask additional questions of the decision maker. There are numerous
ways in which this may be done, and we shall explore two of them. Both of these
proposals represent untested procedures.
3.1. A branch and bound approach
We first consider branch and bound algorithms. The multiple criteria method can
be altered to work in a branch and bound integer framework. To do this we first
present a flow chart of a simple branch-and-bound algorithm [5, p. 416] in Fig. 1.
As usual, [y] is the largest integer not exceeding y. The idea is to solve a sequence
of linear programming problems thereby implicitly enumerating all of the possible
integer solutions. The best one found is optimal. The procedure of Fig. 1 cannot be
FIG. 1. Flow chart of a simple branch and bound algorithm, taken from [5, page 416]. [Figure: the chart halts with the optimum when an integer solution is confirmed; otherwise it chooses an integer variable $x_k$ whose solution value $y_k$ is not an integer, branches, and selects the solution with the maximum objective function value from the list; if the list is empty it halts: there is no feasible integer solution to the problem.]
used directly, but must be modified. The modifications which are to be made are based on the following theorem.

Theorem. A solution can be excluded from further consideration (not added to the list) provided the following two conditions hold:
(1) the decision maker prefers an integer solution to it,
FIG. 2. Flow chart of a branch and bound multicriteria integer linear programming method. [Figure: the chart first solves the multicriteria linear programming problem obtained by relaxing the integer constraints, stopping if the solution satisfies the integer constraints; if the conditions of the theorem hold, it discards the solution; otherwise it chooses an integer variable $x_k$ whose solution value $y_k$ is not integer, solves two problems, each having adjoined one of the constraints $x_k \le [y_k]$ or $x_k \ge [y_k] + 1$, and excludes any infeasible solutions from further consideration.]
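The flow-chart logic (keep a list of subproblems, bound each by its linear programming relaxation, branch, and retain the best integer solution found) can be sketched compactly. The sketch below is my own specialization to a 0-1 knapsack, chosen because each node's relaxation has a closed-form greedy solution, so no LP solver is needed; branching fixes a variable to 1 or 0 instead of using the general $x_k \le [y_k]$ / $x_k \ge [y_k] + 1$ split:

```python
def knapsack_bb(values, weights, cap):
    """Branch and bound for max sum(v_i x_i) s.t. sum(w_i x_i) <= cap,
    x_i in {0, 1}. Each node is bounded by its LP relaxation, solved
    greedily in value/weight order with at most one fractional item."""
    order = sorted(range(len(values)), key=lambda i: -values[i] / weights[i])

    def bound(k, val, room):
        # LP relaxation value for the free items order[k:], capacity `room`.
        for i in order[k:]:
            if weights[i] <= room:
                room -= weights[i]
                val += values[i]
            else:
                return val + values[i] * room / weights[i]  # fractional item
        return val

    best = [0]

    def branch(k, val, room):
        if k == len(order):
            best[0] = max(best[0], val)     # integer leaf: update incumbent
            return
        if bound(k, val, room) <= best[0]:
            return                          # node cannot beat incumbent: prune
        i = order[k]
        if weights[i] <= room:
            branch(k + 1, val + values[i], room - weights[i])  # fix x_i = 1
        branch(k + 1, val, room)                               # fix x_i = 0

    branch(0, 0, cap)
    return best[0]

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))   # 220
```

The pruning test plays the role of the "discard" box: a subproblem whose relaxation bound cannot beat the incumbent is never added to the list.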