3.5 SRC/P Partial Differential System for the Short-Run Approach
and of the derivative property of $C_{SR}$ as the optimal value of (3.2.3), i.e., into the system

$$r \in \hat\partial_k \Pi_{SR}(p, k, w) \tag{3.5.1}$$
$$p \in \partial_y C_{SR}(y, k, w) \tag{3.5.2}$$
$$v \in \hat\partial_w C_{SR}(y, k, w). \tag{3.5.3}$$
It shall be called the SRC/P Partial Differential System because it uses the partial sub/super-differentials $\partial_y$ and $\hat\partial_w$ of $C_{SR}$ (the SRC) as a saddle (convex-concave) function of $y$ and $w$, in addition to using the partial superdifferential $\hat\partial_k$ of $\Pi_{SR}$ (the SRP). A similar use of $C_{SR}$, but as a saddle function of $(k, w)$, arises later in the L/SRC Partial Differential System (3.9.8)–(3.9.10); the affixes "P" and "L" in the two names stand for "profit" and "long-run".
Comments (Absorption of a No-Gap Condition in a Differential Condition)
• The system (3.5.1)–(3.5.3) can be derived also from the Split SRP Optimization System (3.2.2)–(3.2.5). The FOC for (3.2.2) and the derivative property of $C_{SR}$ as the value function for (3.2.3) are used just as before. But, instead of the FOC for (3.2.1), this time the third condition is the derivative property of $\bar\Pi_{SR}$ as the value function for (3.2.4) or (3.3.6), i.e., that $r \in \hat\partial_k \bar\Pi_{SR}(p, k, w)$. This and (3.2.5) together mean exactly that $r \in \hat\partial_k \Pi_{SR}$, since (3.2.5) means that $\Pi_{SR} = \bar\Pi_{SR}$ at $(p, k, w)$.
• The last argument is a case of absorbing a no-gap condition in a subdifferential condition by changing the derivative from Type Two (here, $\hat\partial_k \bar\Pi_{SR}$) to Type One ($\hat\partial_k \Pi_{SR}$). In this, the value function is changed either from the dual to the primal (if the parameter in question is primal, like the $k$ here), or vice versa. The optimal solution is always equal to the marginal value of the programme being solved; this is a derivative of Type Two (see the Shephard-Hotelling Lemmas in Sect. 3.11). The derivative is actually of Type One—i.e., it is the marginal value of the programme dual to that being solved—if there is no duality gap. But if there is a gap, then the Type One derivative does not exist (i.e., the sub/super-differential is empty). In the context of fixed-input valuation: the set of solutions, for $r$, of (3.2.4) or (3.3.6) is always identical to $\hat\partial_k \bar\Pi_{SR}(p, k, w)$, which is a derivative of Type Two—this is the Dual of Hotelling's Lemma (Lemma 3.11.2). The solution set equals $\hat\partial_k \Pi_{SR}$ (a derivative of Type One) if $\Pi_{SR} = \bar\Pi_{SR}$ at the given $(p, k, w)$: see Remark 3.11.8. But if $\Pi_{SR} \neq \bar\Pi_{SR}$ at $(p, k, w)$, then $\hat\partial_k \Pi_{SR}(p, k, w) = \emptyset$ (the empty set); so if $r \in \hat\partial_k \Pi_{SR}$ then $\Pi_{SR} = \bar\Pi_{SR}$ (at the given $p$, $k$ and $w$), i.e., there is no duality gap (between SRP maximization and FIV minimization).
3 Characterizations of Long-Run Producer Optimum
3.6 Other Differential Systems
Applied to the Split SRC Optimization System (3.4.5)–(3.4.8), the same methods yield the partial differential system that consists of the FOC for (3.4.7) and the derivative properties of $C_{SR}$ and $\bar\Pi_{SR}$ as the value functions for (3.4.5) and (3.4.8), with $\hat\partial_w C_{SR}$ changed to $\hat\partial_w \bar C_{SR}$ to absorb the no-gap condition (3.4.6)—i.e., the system:

$$r \in \hat\partial_k \bar\Pi_{SR}(p, k, w) \tag{3.6.1}$$
$$y \in \partial_p \bar\Pi_{SR}(p, k, w) \tag{3.6.2}$$
$$v \in \hat\partial_w \bar C_{SR}(y, k, w). \tag{3.6.3}$$

It shall be called the O-FIV Partial Differential System because it uses the partial sub/super-differentials $\partial_p$ and $\hat\partial_k$ of $\bar\Pi_{SR}$ (the FIV) as a saddle function of $p$ and $k$, in addition to using the partial superdifferential $\hat\partial_w$ of $\bar C_{SR}$ (the OFIV). Thus it uses only the dual value functions ($\bar\Pi_{SR}$ and $\bar C_{SR}$), whilst the system (3.5.1)–(3.5.3) uses only the primal value functions ($\Pi_{SR}$ and $C_{SR}$).
The derivative property of the optimal value can be used also to transform the "unsplit" optimization systems of Sect. 3.4 into differential systems. For example, by the derivative property applied twice, the SRP Optimization System (3.4.1)–(3.4.3) is equivalent to:

$$(y, -v) \in \partial_{p,w} \Pi_{SR}(p, k, w), \quad r \in \hat\partial_k \bar\Pi_{SR}(p, k, w) \quad\text{and}\quad \Pi_{SR}(p, k, w) = \bar\Pi_{SR}(p, k, w).$$

The no-gap condition can be absorbed in either subdifferential condition by changing $\bar\Pi_{SR}$ to $\Pi_{SR}$ or vice versa. This produces the SRP Saddle Differential System

$$(y, -v) \in \partial_{p,w} \Pi_{SR}(p, k, w) \tag{3.6.4}$$
$$r \in \hat\partial_k \Pi_{SR}(p, k, w) \tag{3.6.5}$$

which is so named because it uses the (joint) subdifferential $\partial_{p,w}$ and the superdifferential $\hat\partial_k$ of $\Pi_{SR}$ as a saddle (convex-concave) function of $(p, w)$ and $k$. It produces also the FIV Saddle Differential System

$$(y, -v) \in \partial_{p,w} \bar\Pi_{SR}(p, k, w) \tag{3.6.6}$$
$$r \in \hat\partial_k \bar\Pi_{SR}(p, k, w). \tag{3.6.7}$$

Similarly, the SRC Optimization System (3.4.4)–(3.4.6) is equivalent to:

$$v \in \hat\partial_w C_{SR}(y, k, w), \quad (p, -r) \in \partial_{y,k} \bar C_{SR}(y, k, w) \quad\text{and}\quad C_{SR}(y, k, w) = \bar C_{SR}(y, k, w)$$
and, hence, also to the SRC Saddle Differential System

$$v \in \hat\partial_w C_{SR}(y, k, w) \tag{3.6.8}$$
$$(p, -r) \in \partial_{y,k} C_{SR}(y, k, w) \tag{3.6.9}$$

as well as to the OFIV Saddle Differential System

$$v \in \hat\partial_w \bar C_{SR}(y, k, w) \tag{3.6.10}$$
$$(p, -r) \in \partial_{y,k} \bar C_{SR}(y, k, w). \tag{3.6.11}$$
Comments (on the Terminology)
• As in the names of valuation programmes, the qualifiers "FIV" and "OFIV" in the systems' names are used only for brevity, i.e., without actually assuming c.r.t.s.
• The derivative properties of profit and cost as functions of prices—i.e., characterizations of optimality such as (3.6.4) and (3.6.8)—are known as the Shephard-Hotelling Lemmas; their proofs are detailed in Sect. 3.11. Similarly, long-run profit maximization is equivalent to: $(y, -k, -v) \in \partial_{p,r,w} \Pi_{LR}(p, r, w)$.
3.7 Transformations of Differential Systems by Using SSL or PIR
So far, all the differential systems have been derived from optimization systems—and this has to be so in convex analysis because it uses the FOC for minimization as the very definition of the subdifferential: see (B.3.2). But this definition can be used also to transform one subdifferential condition into another. Once formulated, such results can be applied to transform one differential system into another directly, i.e., without passing through the FOC explicitly. In particular, the partial differential systems can be derived from the saddle differential systems, which use joint subdifferentials: a condition involving a subdifferential taken jointly in two groups of variables—such as $\partial_{y,k} C_{SR}$ in (3.6.9) or $\partial_{p,w} \bar\Pi_{SR}$ in (3.6.6)—can be recast in terms of partial subdifferentials ($\partial_y$, $\partial_k$, $\partial_p$, $\partial_w$). This cannot, however, be achieved simply by splitting the joint derivative into the partials (as in the differentiable case) because a joint subdifferential does not usually factorize into the Cartesian product of the partials: it is a general convex set, not a product set. In other words, the obvious inclusions¹⁷

$$\partial_{y,k} C_{SR}(y, k) \subseteq \partial_y C_{SR}(y, k) \times \partial_k C_{SR}(y, k) \tag{3.7.1}$$
$$\partial_{p,w} \bar\Pi_{SR}(p, w) \subseteq \partial_p \bar\Pi_{SR}(p, w) \times \partial_w \bar\Pi_{SR}(p, w) \tag{3.7.2}$$

are usually strict: see Sect. B.8 of Appendix B for further explanation and examples.

¹⁷ Being fixed, the third variable ($w$ or $k$) is suppressed from (3.7.1) and (3.7.2).
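To see concretely why such an inclusion is generally strict, here is a minimal numerical check in Python, using the stand-in nondifferentiable convex function $f(x, y) = |x + y|$ (an illustration of the general phenomenon, not a function from the text): at the origin the product of the partial subdifferentials is the square $[-1, 1]^2$, while the joint subdifferential is only its diagonal.

```python
import itertools

def f(x, y):
    # Stand-in nondifferentiable convex function (illustrative only).
    return abs(x + y)

GRID = [i / 4 for i in range(-8, 9)]  # test points in [-2, 2]

def in_joint_subdiff(g):
    """Numerically check the joint subgradient inequality at (0,0):
    f(x,y) >= f(0,0) + g1*x + g2*y for all grid points."""
    return all(f(x, y) >= g[0] * x + g[1] * y - 1e-12
               for x, y in itertools.product(GRID, GRID))

def in_partial_subdiff_x(gx):
    """Check gx in d_x f(.,0)(0), i.e. |x| >= gx*x on the grid."""
    return all(f(x, 0.0) >= gx * x - 1e-12 for x in GRID)

# Each partial subdifferential at the origin is [-1, 1] ...
assert in_partial_subdiff_x(1.0) and in_partial_subdiff_x(-1.0)
# ... so (1, -1) lies in the Cartesian product of the partials,
# yet it is NOT a joint subgradient (the inequality fails at (x,y) = (1,-1)):
assert not in_joint_subdiff((1.0, -1.0))
# The joint subdifferential is only the diagonal {(s, s): |s| <= 1}:
assert in_joint_subdiff((0.5, 0.5)) and in_joint_subdiff((-1.0, -1.0))
```

The grid check is only a necessary condition for a subgradient, but it suffices to exhibit the failure of the product element $(1, -1)$.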
But the two variables of differentiation can be split from each other in a different way—one that parallels, and derives from, the staged approach to optimization (introduced in Sect. 3.2). First, the joint subdifferential is used to formulate a FOC for simultaneous optimization over the two variables. This programme is then split into two successive optimization programmes with one variable each—and each of the two has a separate FOC that uses a partial subdifferential. In the case of $\partial_{y,k} C_{SR}$, this argument consists in stating (i) the FOC for maximizing the LRP over $y$ and $k$ simultaneously and (ii) the FOCs for maximizing it over $y$ and $k$ successively. The FOC for a maximum of $\langle p \mid y \rangle - \langle r \mid k \rangle - C_{SR}$ over $(y, k)$ is that $(p, -r) \in \partial_{y,k} C_{SR}$. The FOC for a maximum of $\langle p \mid y \rangle - C_{SR}(y, k, w)$ over $y$ is that $p \in \partial_y C_{SR}$; the maximum value is $\Pi_{SR}$, and the FOC for a maximum of $\Pi_{SR}(p, k, w) - \langle r \mid k \rangle$ over $k$ is that $r \in \hat\partial_k \Pi_{SR}$. Since the "joint" FOC is equivalent to the two "partial" FOCs together,¹⁸

$$(p, -r) \in \partial_{y,k} C_{SR}(y, k, w) \iff \Bigl( p \in \partial_y C_{SR}(y, k, w) \text{ and } r \in \hat\partial_k \Pi_{SR}(p, k, w) \Bigr). \tag{3.7.3}$$

This is the Subdifferential Sections Lemma (SSL) for this context; it requires bringing in another function ($\Pi_{SR}$), which is linked to the original function ($C_{SR}$) by partial conjugacy. This result is formalized fully in Appendix B (Lemma B.7.2).
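For a quick numerical check of (3.7.3) in the smooth case, take the stand-in cost $C_{SR}(y, k) = y^2/k$ (illustrative, not from the text); for it $\Pi_{SR}(p, k) = p^2 k/4$, and the joint FOC holds exactly when the two partial FOCs do:

```python
# Stand-in smooth cost (illustrative, not from the text): C_SR(y,k) = y^2/k.
def C_SR(y, k):
    return y * y / k

def Pi_SR(p, k):
    # Pi_SR(p,k) = sup_y { p*y - C_SR(y,k) }; for this C_SR it equals p*p*k/4.
    ys = [i / 1000 for i in range(0, 20000)]
    return max(p * y - C_SR(y, k) for y in ys)

y, k = 3.0, 2.0
p = 2 * y / k            # the partial FOC in y:  p in d_y C_SR(y,k)
r = p * p / 4            # the partial FOC in k:  r in hat-d_k Pi_SR(p,k)

h = 1e-6
# Joint FOC, left-hand side of (3.7.3): (p, -r) is the gradient of C_SR at (y,k).
dC_dy = (C_SR(y + h, k) - C_SR(y - h, k)) / (2 * h)
dC_dk = (C_SR(y, k + h) - C_SR(y, k - h)) / (2 * h)
assert abs(dC_dy - p) < 1e-4
assert abs(dC_dk - (-r)) < 1e-4

# r is indeed the marginal value of capacity, dPi_SR/dk:
dPi_dk = (Pi_SR(p, k + 0.01) - Pi_SR(p, k - 0.01)) / 0.02
assert abs(dPi_dk - r) < 1e-3
```

In the differentiable case the subdifferentials are singletons, so the equivalence reduces to matching gradients; the point of the SSL is that the same split remains valid without differentiability.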
The SSL is the basic tool for "splitting" joint subdifferentials, but there is also a couple of derived techniques, viz., the Partial Inversion Rule and its dual variant (PIR and DPIR, i.e., Corollaries B.7.3 and B.7.5). These can be applied to the joint subdifferentials of Sect. 3.6:

1. With $k$ fixed, the DPIR applies to $C_{SR}(\cdot, k, \cdot)$ as a saddle function on $Y \times W$ which is a partial conjugate of the 0-1 indicator of the short-run production set $Y_{SR}(k)$, defined formally by (6.2.1). The indicator is a convex function on $Y \times V$, and its total conjugate is $\Pi_{SR}(\cdot, k, \cdot)$, a convex function on $P \times W$. It follows that the condition $(y, -v) \in \partial_{p,w} \Pi_{SR}$ can be replaced by: $p \in \partial_y C_{SR}$ and $v \in \hat\partial_w C_{SR}$. Thus the SRP Saddle Differential System (3.6.4)–(3.6.5) can be transformed into the SRC/P Partial Differential System (3.5.1)–(3.5.3).¹⁹

   The SRC/P Partial Differential System, (3.5.1)–(3.5.3), can be derived also from the SRC Saddle Differential System (3.6.8)–(3.6.9). This is what (3.7.3) shows: with $w$ fixed, the SSL applies to $\Pi_{SR}(\cdot, \cdot, w)$ as a saddle function on $P \times K$ which is (by definition) a partial conjugate of $C_{SR}(\cdot, \cdot, w)$, a convex function on $Y \times K$. So the condition $(p, -r) \in \partial_{y,k} C_{SR}$ can be replaced by: $p \in \partial_y C_{SR}$ and $r \in \hat\partial_k \Pi_{SR}$.

¹⁸ Dually, (3.6.6) is equivalent to (3.6.2)–(3.6.3), i.e., $(y, -v) \in \partial_{p,w} \bar\Pi_{SR}$ if and only if both $y \in \partial_p \bar\Pi_{SR}$ and $v \in \hat\partial_w \bar C_{SR}$.
¹⁹ The PIR would give the same result, but it would require establishing first that $C_{SR}(\cdot, k, w)$ is l.s.c. to invert the conjugacy relationship (3.1.13), i.e., to show that the saddle function $C_{SR}(\cdot, k, \cdot)$ is indeed a partial conjugate of the bivariate convex function $\Pi_{SR}(\cdot, k, \cdot)$. This can be problematic (as is noted in the Comment after Corollary B.7.5).
2. Similarly, with $w$ fixed, the DPIR applies to $\bar\Pi_{SR}(\cdot, \cdot, w)$ as a saddle function on $P \times K$ which is a partial conjugate of $\Pi_{LR}(\cdot, \cdot, w)$. When $Y$ is a cone, the latter function is the indicator of $Y^{\circ}_w$, the section of $Y^{\circ}$ through $w$. In any case, it is a convex function on $P \times R$, and its total conjugate is $\bar C_{SR}(\cdot, \cdot, w)$, a convex function on $Y \times K$. This shows that the condition $(p, -r) \in \partial_{y,k} \bar C_{SR}(y, k, w)$ can be replaced by: $y \in \partial_p \bar\Pi_{SR}$ and $r \in \hat\partial_k \bar\Pi_{SR}$. Thus the OFIV Saddle Differential System (3.6.10)–(3.6.11) can be transformed into the O-FIV Partial Differential System (3.6.1)–(3.6.3).²⁰

   The O-FIV Partial Differential System (3.6.1)–(3.6.3) can be derived also from the FIV Saddle Differential System (3.6.6)–(3.6.7). This is because, with $k$ fixed, the SSL applies to $\bar C_{SR}(\cdot, k, \cdot)$ as a saddle function on $Y \times W$ which is (by definition) a partial conjugate of $\bar\Pi_{SR}(\cdot, k, \cdot)$, a convex function on $P \times W$. So the condition $(y, -v) \in \partial_{p,w} \bar\Pi_{SR}$ can be replaced by: $y \in \partial_p \bar\Pi_{SR}$ and $v \in \hat\partial_w \bar C_{SR}$.

Comment (Partial Subdifferentials as Projections of the Joint One) Although the inclusion (3.7.1)—that $\partial_{y,k} C_{SR} \subseteq \partial_y C_{SR} \times \partial_k C_{SR}$—is generally strict, it is usually tight in the sense that $\partial_y C_{SR}(y, k)$ is equal to the projection of $\partial_{y,k} C_{SR}(y, k)$ onto $P$ if (and only if) every $p \in \partial_y C_{SR}(y, k)$ can be complemented to some $(p, r) \in \partial_{y,k} C_{SR}(y, k)$. A similar result holds for $\partial_k C_{SR}$ (if each of its elements can be so complemented). For the existence of a complementary subgradient, see Sect. B.8 of Appendix B.
3.8 Summary of Systems Characterizing Long-Run Producer Optimum
The ten duality-based systems presented so far (Sects. 3.2, 3.4, 3.5 and 3.6) and the proofs of their equivalence (detailed in Sect. 3.11) are summarized in Tables 3.1 and 3.2. Since the top right entry of the one table is identical to the bottom right of the other, the twelve entries include two repetitions. The ten distinct entries are all the duality-based systems given so far. Seven more systems, to appear in Sect. 3.9, use the LRC programme and its dual or their value functions. They are mirror images of the systems shown in the two tables, from which they can be obtained by replacing $\Pi_{SR}(p, k)$ with $C_{LR}(y, r)$ and changing the signs where needed. Thus three of the seven correspond to the systems on the left in Table 3.1, and the other four come from the distinct systems on the right in Tables 3.1 and 3.2.²¹ In other words, Tables 3.1 and 3.2 deal explicitly with the values and programmes in the left halves of the conjugacy diagrams (3.1.12) and (3.3.7), but the analysis applies equally to the right halves.

In the differential systems, the Type One derivatives whose existence rules out duality gaps are identified. In the optimization systems, the various dual programmes are referred to as "optimization of the fixed quantities' value", although this name fully fits only the case of c.r.t.s. (which need not be assumed). The constraint sets ($Y$, and $Y^{\circ}$ under c.r.t.s.) are not shown in the summarizing tables.

²⁰ The PIR would give the same result, but it would require establishing first that $\bar\Pi_{SR}(\cdot, k, w)$ is l.s.c. to invert the conjugacy relationship (3.3.8), i.e., to show that the saddle function $\bar\Pi_{SR}(\cdot, \cdot, w)$ is indeed a partial conjugate of the bivariate convex function $\bar C_{SR}(\cdot, \cdot, w)$. This can be problematic (as is noted in the Comment after Corollary B.7.5).
Comments (Partition into a Short-Run Subsystem and a Valuation Condition)
• All but three of the ten systems shown in Tables 3.1 and 3.2—all except for the three on the left in Table 3.2, viz., (3.4.4)–(3.4.6), (3.6.8)–(3.6.9) and (3.6.10)–(3.6.11)—contain a valuation condition on $r$ and $(p, k, w)$ that, together with the no-gap condition $\Pi_{SR} = \bar\Pi_{SR}$, is equivalent to the investment being at a profit maximum (3.2.1), i.e., to $k$ being a profit-maximizing investment at the prices $(p, r, w)$ for the outputs and the fixed and variable inputs. The condition in question is either "$r$ minimizes the FIV", or $r \in \hat\partial_k \bar\Pi_{SR}$, or $r \in \hat\partial_k \Pi_{SR}$ (the last of which by itself rules out the duality gap). Put together, the system's other conditions (on $p$, $y$, $w$, $v$ and $k$) are then equivalent either to (3.2.2)–(3.2.3), or to (3.2.2)–(3.2.3) with (3.2.5)—i.e., to $(y, v)$ being a short-run profit-maximizing input-output bundle at prices $(p, w)$ and given capital inputs $k$ (either with or without $\Pi_{SR}$ being equal to $\bar\Pi_{SR}$). This short-run subsystem is to be solved for $v$ and either $y$ or $p$—given $w$ and either $p$ or $y$, as well as $k$. The subsystem may be so simple that, as in Chap. 2, it can be solved on its own, separately from the remaining valuation condition and without recourse to duality. In such a case, calculation of the imputed values can be postponed until, in the last stage of the short-run approach to long-run equilibrium, they are to be equated to the fixed-input prices. As well as being handy in such simple cases, the system's partition (into a short-run subsystem and a valuation condition involving $r$) is worth examining in detail to clarify the various ways in which the complete systems rule out duality gaps. Most do so within the short-run subsystem, but some rely on the valuation condition—when it takes the form $r \in \hat\partial_k \Pi_{SR}$ (a Type One derivative). Therefore, the various short-run subsystems describe two "grades" of short-run profit maximum: the "plain" one, and the one without a duality gap. Only the latter kind can be a long-run profit maximum (for some choice of capital-input prices).
• More formally: given $(p, w)$ and $k$, a potential long-run profit-maximizing bundle is a $(y, v)$ such that $(y, k, v)$ maximizes long-run profit at $(p, w)$ and some $r$.

²¹ The three systems on the left in Table 3.2 do not yield new ones (when $\Pi_{SR}$ is replaced by $C_{LR}$) simply because they do not involve $\Pi_{SR}$ at all. So there are not ten but seven of the "mirror images".
Every system can of course be formally turned into a characterization of potential long-run optimality by binding $r$ with an existential quantifier. But in the three excepted systems—viz., (3.4.4)–(3.4.6), (3.6.8)–(3.6.9) and (3.6.10)–(3.6.11)—the condition on $r$ involves also $y$ (in addition to $p$, $k$, $w$), and it expresses optimality not only of $k$ but also of $y$: e.g., (3.6.9) is exactly equivalent to (3.2.1) and (3.2.2) together (by the SSL and the FOCs). That is why these three systems cannot be partitioned by detaching the investment-optimality condition (or the valuation condition). By contrast, in each of the other seven systems in Tables 3.1 and 3.2 the condition on $r$ involves only $p$, $k$ and $w$ (apart from $r$ itself). The subsystem consisting of all the other conditions describes either (i) a plain SRP maximum, in the case of subsystem (3.5.2)–(3.5.3) or subsystem (3.6.4), or (ii) an SRP maximum without a duality gap, in all the other five cases. A plain SRP maximum can have a duality gap (see Appendix A)—in which case it is not a potential LRP maximum. Where the short-run subsystem does rule out a gap between SRP maximization and its dual, it may do so either explicitly by the condition that $\Pi_{SR} = \bar\Pi_{SR}$ at $(p, k, w)$, or implicitly by condition(s) involving one or two subdifferentials of Type One ($\partial_{p,w} \bar\Pi_{SR}$, or $\partial_p \bar\Pi_{SR}$ and $\hat\partial_w \bar C_{SR}$ together). In one case, only the entire subsystem, (3.4.5)–(3.4.7), rules out the gap.²²
3.9 Extended Wong-Viner Theorem and Other Transcriptions from SRP to LRC
The preceding analysis can be re-applied to SRC minimization as a subprogramme of LRC minimization instead of SRP maximization. As part of this, the Subdifferential Sections Lemma (SSL, i.e., Lemma B.7.2) can be applied to $C_{SR}$ as the bivariate convex parent function of the saddle function $C_{LR}$, instead of the saddle function $\Pi_{SR}$ as in (3.7.3). That is, when both $\Pi_{SR}$ and $C_{LR}$ are viewed as partial conjugates of $C_{SR}$, the SSL shows that, with $w$ fixed and suppressed from the notation,

$$\left. \begin{aligned} p &\in \partial_y C_{SR}(y, k) \\ r &\in \hat\partial_k \Pi_{SR}(p, k) \end{aligned} \right\} \overset{\text{SSL}}{\iff} (p, -r) \in \partial_{y,k} C_{SR}(y, k) \overset{\text{SSL}}{\iff} \left\{ \begin{aligned} p &\in \partial_y C_{LR}(y, r) \\ r &\in -\partial_k C_{SR}(y, k). \end{aligned} \right. \tag{3.9.1}$$

This is the Extended Wong-Viner Theorem. Note that the condition that $r \in -\partial_k C_{SR}$ is the FOC for $k$ to yield the infimum in the definitional formula

$$C_{LR}(y, r, w) = \inf_k \{ \langle r \mid k \rangle + C_{SR}(y, k, w) \} \tag{3.9.2}$$

(which means that $C_{LR}$ is, as a function of $r$, the concave conjugate of $-C_{SR}$ as a function of $k$, with $y$ and $w$ fixed).

²² The subsystem's condition that $C_{SR} = \bar C_{SR}$ at $(y, k, w)$ rules out a different duality gap, and on its own it does not imply that $\Pi_{SR} = \bar\Pi_{SR}$ at $(p, k, w)$ when $y$ maximizes short-run profit at $(p, k, w)$: see Appendix A for an example (in which $C_{SR} = \bar C_{SR}$ trivially because the technology has no variable input).
For comparison, the usual Wong-Viner Envelope Theorem for differentiable costs gives

$$p = \nabla_y C_{SR}(y, k) \ \text{and}\ r = -\nabla_k C_{SR}(y, k) \ \text{(i.e., $k$ yields the inf in (3.9.2))} \implies p = \nabla_y C_{LR}(y, r). \tag{3.9.3}$$

Comparison with the two "outer" systems in (3.9.1) shows that their equivalence is indeed an extension of (3.9.3). This is because

$$\hat\partial_k \Pi_{SR}(p, k) \subseteq -\partial_k C_{SR}(y, k) \quad\text{when } p \in \partial_y C_{SR}(y, k) \tag{3.9.4}$$

i.e., when $y$ yields the supremum in (3.1.13).²³ In the differentiable case, the inclusion (3.9.4) reduces to the equality $\nabla_k \Pi_{SR} = -\nabla_k C_{SR}$ (when $p = \nabla_y C_{SR}$), and thus (3.9.1) becomes:

$$\text{if } r = -\nabla_k C_{SR}(y, k) \text{ then: } p = \nabla_y C_{SR}(y, k) \iff p = \nabla_y C_{LR}(y, r) \tag{3.9.5}$$

which is the usual Wong-Viner Theorem.
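As a numerical sanity check of (3.9.2) and (3.9.3), take the stand-in smooth cost $C_{SR}(y, k) = y^2/k$ (illustrative, not from the text); then the infimum in (3.9.2) is attained at $k = y/\sqrt{r}$, giving $C_{LR}(y, r) = 2y\sqrt{r}$, and the envelope equalities can be verified directly:

```python
import math

def C_SR(y, k):
    # Stand-in short-run cost: variable cost y^2/k given capacity k (illustrative only).
    return y * y / k

def C_LR(y, r):
    # Long-run cost via (3.9.2): inf over k of r*k + C_SR(y,k), on a fine grid;
    # for this C_SR the infimum is 2*y*sqrt(r), attained at k = y/sqrt(r).
    ks = [i / 1000 for i in range(1, 20000)]
    return min(r * k + C_SR(y, k) for k in ks)

y, r = 2.0, 4.0
k_star = y / math.sqrt(r)                      # minimizer of r*k + C_SR(y,k)
assert abs(C_LR(y, r) - 2 * y * math.sqrt(r)) < 1e-3

# Fixed-input optimality in (3.9.3): r = -dC_SR/dk at k_star.
h = 1e-6
dC_dk = (C_SR(y, k_star + h) - C_SR(y, k_star - h)) / (2 * h)
assert abs(-dC_dk - r) < 1e-4

# Envelope conclusion of (3.9.3): SRMC at k_star equals LRMC.
p_SR = (C_SR(y + h, k_star) - C_SR(y - h, k_star)) / (2 * h)
p_LR = (2 * (y + h) * math.sqrt(r) - 2 * (y - h) * math.sqrt(r)) / (2 * h)
assert abs(p_SR - p_LR) < 1e-4
```

With this smooth cost the subdifferentials are singletons, so (3.9.3) and the general statement (3.9.1) coincide; the extension matters precisely where $C_{SR}$ is nondifferentiable.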
Comments (Failure of the Naive Extension)
• The Wong-Viner Theorem cannot be extended to the general, subdifferentiable case simply by transcribing the $\nabla$'s to $\partial$'s in (3.9.5) or (3.9.3) because, even when $r \in -\partial_k C_{SR}(y, k)$,

$$p \in \partial_y C_{SR}(y, k) \not\Rightarrow p \in \partial_y C_{LR}(y, r). \tag{3.9.6}$$

It is the reverse inclusion that always holds, i.e.,

$$\text{if } r \in -\partial_k C_{SR}(y, k) \text{ then } \partial_y C_{LR}(y, r) \subseteq \partial_y C_{SR}(y, k) \tag{3.9.7}$$

but the inclusion is generally strict (i.e., $\partial_y C_{LR} \neq \partial_y C_{SR}$).²⁴

²³ The inclusion (3.9.4) follows directly from (3.1.13) by Remark B.7.4 (applied to the saddle function $\Pi_{SR}$ as a partial conjugate of $C_{SR}$).
²⁴ The inclusion (3.9.7) follows directly from (3.9.2) by Remark B.7.4 (applied to the saddle function $C_{LR}$ as a partial conjugate of $C_{SR}$).
• By contrast, the extension (3.9.1) succeeds because it strengthens the insufficient fixed-input optimality condition $r \in -\partial_k C_{SR}$ in (3.9.6) to the valuation condition $r \in \hat\partial_k \Pi_{SR}$ (which is stronger because the inclusion (3.9.4) is usually strict when $C_{SR}$ is nondifferentiable).
• The peak-load pricing example of Chap. 2 provides a simple and extreme illustration: when $r > 0$, the condition $r \in -\partial_k C_{SR}(y, k, w)$ means merely that $k = \sup_t y(t)$; it says nothing at all about $r$. For comparison, the condition $r = \partial \Pi_{SR}/\partial k = \int (p(t) - w)^{+} \, \mathrm{d}t$ links $r$ to $p$ and $w$, in addition to implying that $\operatorname{Sup}(y) = k$ (if $r > 0$ and $p \in \partial_y C_{SR}(y, k, w)$, i.e., if: $y(t) = k$ when $p(t) > w$, and $y(t) = 0$ when $p(t) < w$). Thus the valuation condition narrows down the range of possible $p$'s (given $r$, $w$ and $y$); indeed, it narrows the range down enough to ensure that if $p \in \partial_y C_{SR}(y, k, w)$ then actually $p \in \partial_y C_{LR}(y, r, w)$. This is an instance of (3.9.1) but, for this example, it can be verified also by calculating both subdifferentials explicitly.
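A discretized sketch of this example (with illustrative numbers, not taken from the text): over a few pricing periods, the valuation condition $r = \int (p(t) - w)^{+}\,\mathrm{d}t$ makes the maximized short-run profit exactly cover the capacity charge $rk$, i.e., $\Pi_{SR} = rk$, which is the break-even condition (2.1.2):

```python
# Discretized peak-load sketch (illustrative numbers): periods t, capacity k,
# variable cost w per unit of output, output price p(t) per period.
prices = [3.0, 7.0, 10.0, 2.0]
w = 5.0
k = 2.0

# Short-run profit-maximizing output: produce at capacity when p(t) > w, else shut down.
y = [k if p > w else 0.0 for p in prices]

# Valuation condition: r is the sum of (p(t) - w)^+ over the periods.
r = sum(max(p - w, 0.0) for p in prices)

# Maximized short-run profit at (p, k, w):
profit_SR = sum((prices[t] - w) * y[t] for t in range(len(prices)))

assert r == 7.0               # (7-5)^+ + (10-5)^+ = 2 + 5
assert profit_SR == r * k     # break-even: Pi_SR = r*k, so long-run profit is zero
```

Any price profile with the same "excess over $w$" mass gives the same $r$, which is exactly how the valuation condition restricts the admissible SRMC prices $p$ for a given $r$.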
It follows from the right-hand equivalence in (3.9.1) that LRP maximization—being equivalent to the SRC Saddle Differential System (3.6.8)–(3.6.9)—is equivalent also to the system

$$p \in \partial_y C_{LR}(y, r, w) \tag{3.9.8}$$
$$r \in -\partial_k C_{SR}(y, k, w) \tag{3.9.9}$$
$$v \in \hat\partial_w C_{SR}(y, k, w) \tag{3.9.10}$$

which shall be called the L/SRC Partial Differential System because it uses the partial sub/super-differentials $\partial_k$ and $\hat\partial_w$ of $C_{SR}$ (the SRC) as a saddle (convex-concave) function of $k$ and $w$, in addition to using the partial subdifferential $\partial_y$ of $C_{LR}$ (the LRC). It is the "mirror image" or transcript of the SRC/P Partial Differential System (3.5.1)–(3.5.3), with SRP replaced by LRC and with the variables suitably swapped.²⁵
When the producer is a public utility, LRMC pricing and LRC minimization—i.e., Conditions (3.9.8) to (3.9.10)—are often taken as the definition of a long-run producer optimum. If the SRC function is simpler than the LRC function (as is usually the case), and the SRP function is simple too, then the Extended Wong-Viner Theorem (3.9.1) can facilitate the short-run approach by characterizing long-run optimality in terms of the SRC and SRP functions—and this has been used in the introductory peak-load pricing example (Chap. 2). In that problem, the cost-minimizing inputs were obvious, but the question was how to ensure, by a simple condition put in terms of a short-run value function, that an SRMC output price $p$ was actually an LRMC price, i.e., that $p$ met (3.9.8). This was achieved by employing the break-even condition (2.1.2), which is a case of the valuation condition $r \in \hat\partial_k \Pi_{SR}$, i.e., of (3.5.1). Thus the argument was a special case of the Extended Wong-Viner Theorem—i.e., of the equivalence of (3.9.8)–(3.9.10) to (3.5.1)–(3.5.3).

²⁵ In detail, the transcript is obtained by swapping $p$ with $r$ and $y$ with $k$, and by replacing the function $(p, k) \mapsto \Pi_{SR}(p, k)$ with the function $(y, r) \mapsto C_{LR}(y, r)$: compare (3.1.13) with (3.9.2).
Comment (Stronger Version of the Inclusion Between LRMCs and SRMCs) The obvious inclusion (3.9.7)—that $\partial_y C_{LR}(y, r) \subseteq \partial_y C_{SR}(y, k)$ for every $r \in -\partial_k C_{SR}$—is usually tight in the sense that it turns into an equality when the union of its l.h.s.'s is taken, over $r$, if (and only if) the partial subgradient on the r.h.s. has a complement to a joint one: if every $p \in \partial_y C_{SR}(y, k)$ can be complemented to a $(p, -r) \in \partial_{y,k} C_{SR}(y, k)$ then

$$\partial_y C_{SR}(y, k) = \bigcup_{r \in -\partial_k C_{SR}(y, k)} \partial_y C_{LR}(y, r). \tag{3.9.11}$$

The corresponding result for $\Pi_{SR}$ instead of $C_{LR}$ shows that the inclusion (3.9.4) is tight in the same sense, i.e.,

$$-\partial_k C_{SR}(y, k) = \bigcup_{p \in \partial_y C_{SR}(y, k)} \hat\partial_k \Pi_{SR}(p, k)$$

if (and only if) every $r \in -\partial_k C_{SR}(y, k)$ can be complemented to a $(p, -r) \in \partial_{y,k} C_{SR}(y, k)$. For the existence of a complementary subgradient, see Sect. B.8 of Appendix B.
Like (3.5.1)–(3.5.3), also the other differential and optimization systems of Sects. 3.2 and 3.4–3.6 can be transcribed into equivalent characterizations of long-run producer optimum by replacing the SRP with the LRC; the transcripts can be derived (from LRP maximization and from each other) by re-applying the same arguments (with LRC instead of SRP).

The three systems shown on the left in Table 3.1 transcribe into the following three.
The LRC Optimization System (transcript of the SRP Optimization System (3.4.1)–(3.4.3)), which is: $(k, v)$ minimizes the long-run cost at prices $(r, w)$, and $p$ maximizes the value of output $y$ (less maximum LRP under d.r.t.s.), and the two optimal values are equal (i.e., under c.r.t.s., minimum LRC equals maximum OV). Formally, it is: given $(y, r, w)$,

$(k, v)$ solves the primal LRC programme (3.1.8)–(3.1.9).  (3.9.12)
$p$ solves the dual OV programme (3.3.5), or (3.3.11)–(3.3.12) under c.r.t.s.  (3.9.13)
$C_{LR}(y, r, w) = \bar C_{LR}(y, r, w)$.  (3.9.14)
This system, (3.9.12)–(3.9.14), is equivalent to:

$$(k, v) \in \hat\partial_{r,w} C_{LR}(y, r, w), \quad p \in \partial_y \bar C_{LR}(y, r, w) \quad\text{and}\quad C_{LR}(y, r, w) = \bar C_{LR}(y, r, w)$$

and, hence, also to the LRC Saddle Differential System (transcript of the SRP Saddle Differential System (3.6.4)–(3.6.5)), which is:

$$(k, v) \in \hat\partial_{r,w} C_{LR}(y, r, w) \tag{3.9.15}$$
$$p \in \partial_y C_{LR}(y, r, w) \tag{3.9.16}$$

as well as to the OV Saddle Differential System (transcript of the FIV Saddle Differential System (3.6.6)–(3.6.7)), which is:

$$(k, v) \in \hat\partial_{r,w} \bar C_{LR}(y, r, w) \tag{3.9.17}$$
$$p \in \partial_y \bar C_{LR}(y, r, w). \tag{3.9.18}$$
Finally, just as (3.5.1)–(3.5.3) transcribes into (3.9.8)–(3.9.10), so the other three systems shown on the right in Tables 3.1 and 3.2 transcribe into:

The Split LRC Optimization System (transcript of the Split SRP Optimization System (3.2.2)–(3.2.5)), which is:²⁶

$k$ minimizes $\langle r \mid \cdot \rangle + C_{SR}(y, \cdot, w)$ on $K$ (given $y$, $r$ and $w$).  (3.9.19)
$v$ solves (3.1.10)–(3.1.11), given $(y, k, w)$.  (3.9.20)
$p$ solves (3.3.5), given $(y, r, w)$.  (3.9.21)
$\bar C_{LR}(y, r, w) = C_{LR}(y, r, w)$.  (3.9.22)
The FI-OV Partial Differential System (transcript of the O-FIV Partial Differential System (3.6.1)–(3.6.3)), which is:

$$p \in \partial_y \bar C_{LR}(y, r, w) \tag{3.9.23}$$
$$k \in \hat\partial_r \bar C_{LR}(y, r, w) \tag{3.9.24}$$
$$v \in \hat\partial_w \bar C_{SR}(y, k, w). \tag{3.9.25}$$
²⁶ Here, two-stage solving means first minimizing $\langle w \mid v \rangle$ over $v$ (subject to $(y, k, v) \in Y$) to find the solution $\check v$ and the minimum value $C_{SR} = \langle w \mid \check v \rangle$ as functions of $(y, k, w)$, and then minimizing $\langle r \mid k \rangle + C_{SR}(y, k, w)$ over $k$ to find the solution $\check k(y, r, w)$. This gives the complete solution (in terms of $y$, $r$ and $w$) as the pair $\check k(y, r, w)$ and $\check v\bigl(y, \check k(y, r, w), w\bigr)$.
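The two-stage procedure of the preceding footnote can be sketched numerically. The technology below is a stand-in (not from the text): producing $y$ with capacity $k$ requires a variable input $v \ge y^2/k$ bought at price $w$, so stage one gives $\check v = y^2/k$ and $C_{SR} = w y^2/k$, and stage two then minimizes $rk + C_{SR}$ over $k$:

```python
import math

# Illustrative technology (not from the text): feasibility requires v >= y**2 / k.
def stage1(y, k, w):
    """Stage one: minimize <w|v> over feasible v; returns (v_check, C_SR(y,k,w))."""
    v_check = y * y / k
    return v_check, w * v_check

def stage2(y, r, w):
    """Stage two: minimize r*k + C_SR(y,k,w) over k on a grid;
    returns (k_check, C_LR(y,r,w))."""
    ks = [i / 1000 for i in range(1, 20000)]
    k_check = min(ks, key=lambda k: r * k + stage1(y, k, w)[1])
    return k_check, r * k_check + stage1(y, k_check, w)[1]

y, r, w = 2.0, 1.0, 4.0
k_check, C_LR = stage2(y, r, w)
v_check, _ = stage1(y, k_check, w)

# Closed forms for this technology: k = y*sqrt(w/r) and C_LR = 2*y*sqrt(r*w).
assert abs(k_check - y * math.sqrt(w / r)) < 1e-2
assert abs(C_LR - 2 * y * math.sqrt(r * w)) < 1e-3
```

The pair $(\check k, \check v)$ returned this way is exactly the complete solution described in the footnote, assembled from the two one-variable programmes.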