
4.3 Moments, cumulants and diagram formulae


Properties (i) and (ii) follow immediately from (4.23). To see how to deduce (iii) from (4.23), just observe that, if X_b has the structure described in (iii) (say, with independent blocks indexed by b′ and b″), then

log g_{X_b}(z_1, ..., z_k) = log g_{X_{b′}}(z_j : j ∈ b′) + log g_{X_{b″}}(z_j : j ∈ b″)

(by independence), so that

∂^k/(∂z_1 · · · ∂z_k) log g_{X_b}(z_1, ..., z_k) = ∂^k/(∂z_1 · · · ∂z_k) log g_{X_{b′}}(z_j : j ∈ b′) + ∂^k/(∂z_1 · · · ∂z_k) log g_{X_{b″}}(z_j : j ∈ b″) = 0,

since each of the two summands depends only on a strict subset of the variables z_1, ..., z_k.

Finally, property (iv) is proved by using the fact that, if X_[n] is obtained by juxtaposing n ≥ 3 elements of a Gaussian family (even with repetitions), then log g_{X_[n]}(z_1, ..., z_n) necessarily has the form Σ_l a(l) z_l + Σ_{i,j} b(i, j) z_i z_j, where a(l) and b(i, j) are coefficients not depending on the z_l's; every mixed derivative of order n ≥ 3 of such a quadratic polynomial vanishes.

When |b| = n, one says that the cumulant Cum(X_b), given by (4.23), has order n. When X_[n] is such that X_j = X for every j = 1, ..., n, where X is a random variable in L^n(P), we write

Cum(X_[n]) = Cum_n(X),

and we say that Cum_n(X) is the nth cumulant (or the cumulant of order n) of X. Note that, if X, Y ∈ L^n(P) (n ≥ 1) are independent random variables, then (4.23) implies that

Cum_n(X + Y) = Cum_n(X) + Cum_n(Y),

since, by independence, log E exp(i(X + Y) Σ_{j=1}^n z_j) splits into the sum of the corresponding logarithms for X and for Y.
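The additivity of cumulants under independent summation is easy to verify numerically. Below is a minimal sketch (all helper names are ours, not from the text): cumulants are recovered from raw moments via the standard recursion m_n = Σ_{k=1}^n C(n−1, k−1) c_k m_{n−k}, the moments of an independent sum are obtained by binomial convolution, and the check is run on Poisson variables, all of whose cumulants equal the intensity λ.

```python
from math import comb
from fractions import Fraction

def cumulants(moments, nmax):
    """Cumulants c_1..c_nmax from raw moments m_1..m_nmax via the
    recursion m_n = sum_{k=1}^{n} C(n-1, k-1) c_k m_{n-k}, with m_0 = 1."""
    m = [Fraction(1)] + [Fraction(x) for x in moments]
    c = [Fraction(0)] * (nmax + 1)
    for n in range(1, nmax + 1):
        c[n] = m[n] - sum(comb(n - 1, k - 1) * c[k] * m[n - k]
                          for k in range(1, n))
    return c[1:]

def poisson_moments(lam, nmax):
    """Raw moments of Poisson(lam): E X^n = sum_k S(n, k) lam^k,
    with S(n, k) the Stirling numbers of the second kind."""
    S = [[Fraction(0)] * (nmax + 1) for _ in range(nmax + 1)]
    S[0][0] = Fraction(1)
    for n in range(1, nmax + 1):
        for k in range(1, n + 1):
            S[n][k] = k * S[n - 1][k] + S[n - 1][k - 1]
    return [sum(S[n][k] * Fraction(lam) ** k for k in range(1, n + 1))
            for n in range(1, nmax + 1)]

def sum_moments(mx, my, nmax):
    """Moments of X + Y for independent X, Y, by binomial convolution."""
    mx, my = [Fraction(1)] + list(mx), [Fraction(1)] + list(my)
    return [sum(comb(n, j) * mx[j] * my[n - j] for j in range(n + 1))
            for n in range(1, nmax + 1)]
```

Since Poisson(2) + Poisson(3) is Poisson(5), the cumulants of the sum must be the sums of the cumulants, at every order.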

We recall that a partition of a finite set b is a collection of r subsets of b, say {b_1, ..., b_r}, such that the b_j are nonempty and pairwise disjoint, and ∪_{j=1,...,r} b_j = b. The next result contains two crucial relations (known as the Leonov–Shiryaev identities), linking the cumulants and the moments associated with a random vector X_[n]. See Peccati and Taqqu [159] for a proof.

Proposition 4.13 For every nonempty b ⊆ [n], denote by P(b) the class of all partitions of b. Then,

(1)

E(X_b) = Σ_{π={b_1,...,b_k}∈P(b)} Cum(X_{b_1}) · · · Cum(X_{b_k});   (4.24)


Background Results in Probability and Graphical Methods

(2)

Cum(X_b) = Σ_{σ={a_1,...,a_r}∈P(b)} (−1)^{r−1} (r − 1)! E(X_{a_1}) · · · E(X_{a_r}).   (4.25)

4.3.1 Diagram formulae

To complete our background, we need a quick review of some diagram formulae for the computation of moments and cumulants (see for instance [159] or [186]). The diagrams we are interested in are essentially mnemonic devices used for the computation of the moments and cumulants associated with polynomial forms in Gaussian random variables. See [159, Chapters 2–4] for a self-contained presentation using integer partitions and Möbius inversion formulae. We start with formal definitions; Fig. 4.1 will provide an illustration.

Let p and l_i ≥ 1, i = 1, ..., p, be given integers. A diagram γ of order (l_1, ..., l_p) is a set of points {(j, l) : 1 ≤ j ≤ p, 1 ≤ l ≤ l_j} (called vertices and represented as a table W = l_1 ⊗ · · · ⊗ l_p), together with a partition of these points into pairs

{((j, l), (k, s)) : 1 ≤ j ≤ k ≤ p; 1 ≤ l ≤ l_j, 1 ≤ s ≤ l_k},

called edges, such that (j, l) ≠ (k, s) (that is, no vertex can be linked with itself), and each point appears in one and only one edge. Plainly, if the integer l_1 + · · · + l_p is odd, then W does not admit any diagram. We denote by Γ(l_1, ..., l_p) the set of all diagrams of order (l_1, ..., l_p). If the order is such that l_1 = · · · = l_p = q, for simplicity we also write Γ(p, q) instead of Γ(l_1, ..., l_p). With this notation, one has that Γ(p, q) = ∅ whenever pq is odd.
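Since diagrams are exactly the perfect matchings of the vertices of W, they can be enumerated mechanically. Here is a minimal sketch (function names are ours), confirming that Γ(l_1, ..., l_p) is empty when l_1 + · · · + l_p is odd, and that a one-column table with 2p rows admits (2p − 1)!! diagrams.

```python
def matchings(vertices):
    """Yield all perfect matchings (lists of unordered pairs) of `vertices`."""
    if not vertices:
        yield []
        return
    a, rest = vertices[0], vertices[1:]
    for i, b in enumerate(rest):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(a, b)] + m

def diagrams(*ls):
    """All diagrams of order (l_1, ..., l_p): the vertices are the pairs
    (j, l), 1 <= j <= p, 1 <= l <= l_j, matched into edges."""
    vertices = [(j, l) for j, lj in enumerate(ls, 1) for l in range(1, lj + 1)]
    return list(matchings(vertices))
```

With 2p = 6 single-dot rows there are 5 × 3 × 1 = 15 diagrams, and a table of total odd size admits none.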

Remark 4.14 The table W described above is composed of p rows, the jth row (j = 1, ..., p) being composed of the pairs (j, 1), (j, 2), ..., (j, l_j). One can graphically represent W by arranging l_1 + · · · + l_p dots into p rows, the jth row containing l_j dots. The lth dot (from left to right) of the jth row corresponds to the pair (j, l). Once the table W has been drawn, the edges of the diagram are represented as (possibly curved) lines connecting the two corresponding dots. In the language of graph theory, the resulting graph is called a perfect matching. See Fig. 4.1 for some examples.

We say that:

- A diagram has a flat edge if there is at least one pair ((i, j), (i′, j′)) such that i = i′. We write Γ_F for the set of diagrams having at least one flat edge, and Γ_F̄ for the collection of all diagrams with no flat edges. For example, the diagrams in Fig. 4.1(b, d) have no flat edges. The diagram appearing in Fig. 4.1(c) has two flat edges, namely ((1, 1), (1, 2)) and ((4, 1), (4, 2)).

- A diagram γ is connected if it is not possible to partition the rows l_1, ..., l_p of the table W into two non-connected subdiagrams. This means that one cannot find a partition K_1 ∪ K_2 = {1, ..., p} such that, for each member V_k of the set of edges (V_1, ..., V_r) in the diagram γ, either V_k links vertices in ∪_{j∈K_1} l_j, or V_k links vertices in ∪_{j∈K_2} l_j. We write Γ_C for the collection of all connected diagrams, and Γ_C̄ for the class of all diagrams that are not connected. For instance, the diagrams in Fig. 4.1(b, c) are not connected (for both, one can choose the partition K_1 = {1, 4}, K_2 = {2, 3}). The diagram in Fig. 4.1(d) is connected.

- A diagram is paired if, considering any two edges ((i_1, j_1), (i_2, j_2)) and ((i_3, j_3), (i_4, j_4)), i_1 = i_3 implies i_2 = i_4; in words, the rows are completely coupled two by two. We write Γ_P for the set of paired diagrams, and Γ_P̄ for the set of diagrams that are not paired. For example, the diagrams in Fig. 4.1(b, c) are paired (in both diagrams, the first row is coupled with the fourth, and the second row is coupled with the third).


Figure 4.1 A table (a) and three different diagrams (b, c, d). (a) A representation of a table W of order (4, 2, 2, 4), that is, p = 4, l_1 = l_4 = 4 and l_2 = l_3 = 2. (b) A representation of {((1, j), (4, j)), j = 1, ..., 4; ((2, k), (3, k)), k = 1, 2}, where the diagram is paired, non-flat and not connected. (c) A representation of {((i, 1), (i, 2)), i = 1, 4; ((1, 4), (4, 3)); ((1, 3), (4, 4)); ((2, k), (3, k)), k = 1, 2}, where the diagram is paired, with two flat edges and not connected. (d) A representation of {((1, j), (2, j)), j = 1, 2; ((1, k), (4, k)), k = 3, 4; ((3, l), (4, l)), l = 1, 2}, where the diagram is not paired, non-flat and connected.
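The three properties above (flat edges, connectedness, pairing) translate directly into predicates on the edge set. The sketch below (helper names are ours; edges are stored as pairs of (row, position) vertices) reproduces the classification of the diagrams in Fig. 4.1(b) and (d).

```python
def has_flat_edge(edges):
    """A flat edge joins two vertices lying in the same row."""
    return any(v1[0] == v2[0] for v1, v2 in edges)

def is_connected(edges, p):
    """Connectedness: view rows 1..p as nodes, joined whenever an edge of
    the diagram links vertices in the two rows; check a single component."""
    adj = {i: set() for i in range(1, p + 1)}
    for (i1, _), (i2, _) in edges:
        adj[i1].add(i2)
        adj[i2].add(i1)
    seen, stack = {1}, [1]
    while stack:
        for j in adj[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == p

def is_paired(edges):
    """Paired: each row is coupled with exactly one (fixed) other row."""
    partner = {}
    for (i1, _), (i2, _) in edges:
        if partner.setdefault(i1, i2) != i2 or partner.setdefault(i2, i1) != i1:
            return False
    return True
```

On the table of order (4, 2, 2, 4), the diagram of Fig. 4.1(b) comes out paired, non-flat and not connected, while that of Fig. 4.1(d) is not paired, non-flat and connected, as stated in the caption.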


The next statement provides a well-known combinatorial description of the moments and cumulants associated with Hermite transformations of (possibly correlated) Gaussian random variables. In view of Proposition 4.9, one can regard such a result as an instance of the diagram formulae associated with general multiple stochastic integrals. See [159, Section 7.3] for statements and proofs.

Proposition 4.15 (Diagram formulae for Hermite polynomials) Let (Z_1, ..., Z_p) be a centered Gaussian vector, and let γ_{ij} = E[Z_i Z_j], i, j = 1, ..., p, be its covariance. Let H_{l_1}, ..., H_{l_p} be Hermite polynomials of degrees l_1, ..., l_p (≥ 1) respectively. As above, let Γ_F̄(l_1, ..., l_p) (resp. Γ_C(l_1, ..., l_p)) be the collection of all diagrams with no flat edges (resp. connected diagrams) of order (l_1, ..., l_p). Then,

E[Π_{j=1}^p H_{l_j}(Z_j)] = Σ_{G∈Γ_F̄(l_1,...,l_p)} Π_{1≤i≤j≤p} γ_{ij}^{η_{ij}(G)},   (4.26)

Cum(H_{l_1}(Z_1), ..., H_{l_p}(Z_p)) = Σ_{G∈Γ_F̄(l_1,...,l_p)∩Γ_C(l_1,...,l_p)} Π_{1≤i≤j≤p} γ_{ij}^{η_{ij}(G)},   (4.27)

where, for each diagram G, η_{ij}(G) is the exact number of edges between the ith row and the jth row of the diagram G.

Example 4.16 (1) Consider the quantity EZ^{2p}, where Z ∼ N(0, σ²). Proposition 4.15 implies that this expected value can be written as a sum over all diagrams with 2p rows and one column. Standard combinatorial arguments show that there exist (2p − 1)!! = (2p − 1) × (2p − 3) × · · · × 1 different diagrams of this type, so that, setting Z_j = Z for j = 1, ..., 2p in (4.26), we deduce

EZ^{2p} = E[Π_{j=1}^{2p} H_1(Z_j)] = Σ_{G∈Γ(1,...,1)} Π_{1≤i≤j≤2p} E[Z_i Z_j]^{η_{ij}(G)} = (2p − 1)!! σ^{2p}.

(2) Analogously, we see that EZ^{2p+1} = 0, where again Z ∼ N(0, σ²) and p is any nonnegative integer. Indeed, as already observed, there cannot be any diagrams with an odd number of vertices.

(3) Now consider the quantity EH_m(Z_1)H_n(Z_2), where (Z_1, Z_2) is a centered Gaussian vector with EZ_1² = EZ_2² = 1. This corresponds to the case p = 2, l_1 = m, l_2 = n. If m ≠ n, then Γ_F̄(m, n) = ∅. If m = n, then Γ_F̄(m, n) contains m! different diagrams (one for every bijection between the dots of the first row and those of the second). Formula (4.26) therefore yields

EH_m(Z_1)H_n(Z_2) = δ_m^n m! {EZ_1 Z_2}^m.

This is consistent with the content of Remark 4.10.


(4) Consider EZ_1²Z_2²Z_3², where (Z_1, Z_2, Z_3) is a centered Gaussian vector with EZ_i² = 1, i = 1, 2, 3 (unit variances, so that EH_2(Z_i) = 0). Using the fact that H_2(x) = x² − 1 and (4.26), we obtain

EZ_1²Z_2²Z_3² = E{H_2(Z_1)H_2(Z_2)H_2(Z_3)} + E{H_2(Z_1)H_2(Z_2)} + E{H_2(Z_1)H_2(Z_3)} + E{H_2(Z_2)H_2(Z_3)} + E{H_2(Z_1)} + E{H_2(Z_2)} + E{H_2(Z_3)} + 1

= 8γ_{12}γ_{13}γ_{23} + 2(γ_{12}² + γ_{13}² + γ_{23}²) + 1,

where γ_{12} = EZ_1Z_2, γ_{13} = EZ_1Z_3, γ_{23} = EZ_2Z_3.
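The closed form in point (4) can be double-checked combinatorially: summing over all pairings of the six dots (two per row, with γ_ii = 1) gives EZ_1²Z_2²Z_3² by the diagram formula, while restricting to pairings without flat edges isolates E{H_2(Z_1)H_2(Z_2)H_2(Z_3)} = 8γ_{12}γ_{13}γ_{23}. A sketch with arbitrary test values of the γ_ij (helper names are ours):

```python
def pairings(dots):
    """All perfect matchings of the list of (row, position) dots."""
    if not dots:
        yield []
        return
    a, rest = dots[0], dots[1:]
    for i, b in enumerate(rest):
        for m in pairings(rest[:i] + rest[i + 1:]):
            yield [(a, b)] + m

def wick_sum(rows, gamma, flat_ok=True):
    """Sum, over pairings of the dots, of the product of covariances
    gamma[{row_i, row_j}], optionally discarding flat (same-row) edges."""
    dots = [(r, k) for k, r in enumerate(rows)]
    total = 0.0
    for m in pairings(dots):
        if not flat_ok and any(a[0] == b[0] for a, b in m):
            continue
        prod = 1.0
        for a, b in m:
            prod *= gamma[frozenset((a[0], b[0]))]
        total += prod
    return total

g12, g13, g23 = 0.3, 0.1, 0.2
gamma = {frozenset((1, 2)): g12, frozenset((1, 3)): g13,
         frozenset((2, 3)): g23,
         frozenset((1,)): 1.0, frozenset((2,)): 1.0, frozenset((3,)): 1.0}
rows = [1, 1, 2, 2, 3, 3]
```

Both identities are polynomial in the γ_ij, so checking them at one generic point is already convincing.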

4.4 The simplified method of moments on Wiener chaos

4.4.1 Real kernels

The following statement provides a classic application of the method of moments (and cumulants) to the derivation of multivariate CLTs on Wiener chaos.

Proposition 4.17 (The method of moments and cumulants) Fix integers k ≥ 1 and 1 ≤ d_1, ..., d_k < ∞, and let Z = (Z_1, ..., Z_k) ∼ N_k(0, V) be a k-dimensional centered Gaussian vector with nonnegative definite covariance matrix

V = {V(i, j) : i, j = 1, ..., k}.

Suppose that

F_n = (F_1^{(n)}, ..., F_k^{(n)}) = (I_{d_1}(f_1^{(n)}), ..., I_{d_k}(f_k^{(n)})), n ≥ 1,   (4.28)

is a sequence of k-dimensional vectors of chaotic random variables such that f_j^{(n)} ∈ L_s²([0, 1]^{d_j}) and, for every 1 ≤ i, j ≤ k,

lim_{n→∞} E[F_i^{(n)} F_j^{(n)}] = V(i, j).   (4.29)

Then, the following three conditions are equivalent, as n → ∞.

(1) The vectors F_n converge in law to Z.

(2) For every choice of nonnegative integers p_1, ..., p_k such that Σ_{j=1}^k p_j ≥ 3,

E[Π_{j=1}^k (F_j^{(n)})^{p_j}] −→ E[Π_{j=1}^k Z_j^{p_j}].   (4.30)


(3) For every r ≥ 3 and for every vector (j_1, ..., j_r) ∈ {1, ..., k}^r (note that such a vector may contain repeated indices),

Cum(F_{j_1}^{(n)}, ..., F_{j_r}^{(n)}) → Cum(Z_{j_1}, ..., Z_{j_r}) = 0.   (4.31)

Proof To prove that Point (1) and Point (2) are equivalent, combine (4.29) with (4.13) and use uniform integrability, as well as the fact that the Gaussian distribution is determined by its moments. To prove that Point (3) is equivalent to the previous two, use the Leonov–Shiryaev relation (4.25).

The use of results analogous to Proposition 4.17 in the proof of CLTs for non-linear functionals of Gaussian fields is classic. See e.g. Breuer and Major [27] or Chambers and Slud [38]. It can, however, be technically quite demanding to verify conditions (4.30)–(4.31), since they involve an infinity of asymptotic relations of increasing complexity. In recent years, great efforts have been made in order to obtain drastic simplifications of the convergence criteria implied by Proposition 4.17. The most powerful achievements in this direction (at least, in the case of one-dimensional CLTs) are given in the subsequent statement.

We recall that the total variation distance between the laws of two real-valued random variables X and Y is given by the quantity

d_TV(X, Y) = sup_A |P(X ∈ A) − P(Y ∈ A)|,

where the supremum is taken over the class of all Borel sets. Note that the topology induced by d_TV (on the class of all probability measures on R) is strictly stronger than the topology of convergence in distribution (see e.g. [56, Chapter 11]).

Recall the notion of "contraction" of order p, and the associated operator ⊗_p, as introduced in Definition 4.4.

Theorem 4.18 (Total variation bounds – see [147]) Let σ² > 0 and let Z ∼ N(0, σ²) be a centered Gaussian random variable with variance σ². For d ≥ 2, let F = I_d(f) ∈ C_d be an element of the dth chaos of W. Then, Cum_4(F) = E[F⁴] − 3(E[F²])², and

d_TV(Z, F) ≤ (2/σ²) |σ² − E[F²]| + (2/σ²) [ d² Σ_{p=1}^{d−1} (p − 1)!² C(d−1, p−1)⁴ (2d − 2p)! ‖f ⊗_p f‖²_{L²([0,1]^{2(d−p)})} ]^{1/2}   (4.32)

≤ (2/σ²) [ |σ² − E[F²]| + ( ((d − 1)/(3d)) Cum_4(F) )^{1/2} ],

where C(d − 1, p − 1) denotes the binomial coefficient "d − 1 choose p − 1". In particular, if {F_n : n ≥ 1} ⊂ C_d is a sequence of chaotic random variables such that F_n = I_d(f_n) and E[F_n²] → σ², then the following four conditions are equivalent as n diverges to infinity:

(1) F_n converges in law to Z.

(2) F_n converges to Z in the total variation distance.

(3) Cum_4(F_n) → 0 (or, equivalently, E[F_n⁴] → 3σ⁴).

(4) For every p = 1, ..., d − 1, ‖f_n ⊗_p f_n‖_{L²([0,1]^{2(d−p)})} → 0.
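To see conditions (3)–(4) in action, consider the illustrative (not from the text) sequence F_n = (2n)^{−1/2} Σ_{i=1}^n H_2(X_i), with X_1, X_2, ... i.i.d. standard Gaussian: it lives in the second chaos and has E[F_n²] = 1. Using E[H_2(X)²] = 2 and E[H_2(X)⁴] = E[(X² − 1)⁴] = 60, together with the expansion of a fourth power of a sum of independent centered terms, the fourth moment can be computed exactly, giving Cum_4(F_n) = 12/n → 0, so F_n satisfies a CLT. A sketch (names are ours):

```python
def fourth_moment(n):
    """E F_n^4 for F_n = (2n)^(-1/2) * sum_{i=1}^n H_2(X_i), X_i i.i.d. N(0,1).
    For independent centered Y_i:
        E (sum Y_i)^4 = sum_i E Y_i^4 + 3 sum_{i != j} E Y_i^2 E Y_j^2,
    with E H_2(X)^2 = 2 and E H_2(X)^4 = E (X^2 - 1)^4 = 60."""
    a2 = 1.0 / (2 * n)                      # a_i^2, where a_i = (2n)^(-1/2)
    return n * a2**2 * 60 + 3 * n * (n - 1) * a2**2 * 4

def cum4(n):
    # E F_n^2 = n * a_i^2 * 2 = 1, hence Cum_4(F_n) = E F_n^4 - 3
    return fourth_moment(n) - 3.0
```

One finds E[F_n⁴] = 3 + 12/n exactly, so condition (3) of Theorem 4.18 holds with σ² = 1.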

Remark 4.19 (1) The first proof of the "fourth moment" bound in (4.32) appears in Nourdin, Peccati and Reinert [147]. It is based on Malliavin calculus, the so-called Lindeberg principle and Stein's method for probabilistic approximations. This result also builds on previous estimates by Nourdin and Peccati [148].

(2) Note that the bound (4.32) provides an immediate proof of the equivalence of Points (1)–(4) in the statement of Theorem 4.18. Indeed, if Point (3) holds, then (4.32) yields that Points (4) and (2) necessarily hold, which also gives Point (1), since convergence in total variation implies convergence in law. On the other hand, if E[F_n²] → σ² and F_n converges to Z ∼ N(0, σ²) in law, then one can use (4.13) together with a uniform integrability argument in order to deduce that Cum_4(F_n) = E[F_n⁴] − 3(E[F_n²])² → 0.

(3) The equivalence between Points (1), (3) and (4) in the statement of Theorem 4.18 was first proved by Nualart and Peccati in [152], by means of stochastic calculus techniques.

(4) Observe that ‖f_n ⊗̃_p f_n‖_{L²([0,1]^{2(d−p)})} ≤ ‖f_n ⊗_p f_n‖_{L²([0,1]^{2(d−p)})}, where ⊗̃_p denotes the symmetrized contraction. In [152] it is also proved that ‖f_n ⊗̃_p f_n‖_{L²([0,1]^{2(d−p)})} → 0 for every p = 1, ..., d − 1 if and only if ‖f_n ⊗_p f_n‖_{L²([0,1]^{2(d−p)})} → 0 for every p = 1, ..., d − 1.

The next result, first proved in [160], allows one to deduce joint CLTs from one-dimensional convergence results.

Theorem 4.20 (Joint CLTs on Wiener chaos – see [160]) Keep the assumptions and notation of Proposition 4.17 (in particular, the sequence {F_n : n ≥ 1} verifies (4.29)). Then, the vectors F_n converge in law to Z if and only if F_j^{(n)} converges in law to Z_j for every j = 1, ..., k.


The original proof of Theorem 4.20, as given in [160], used stochastic time-changes and other tools from continuous-time stochastic calculus. A more direct proof is now available, using the following estimate, taken again from [147].

Theorem 4.21 (Bounds on vectors) Let

F = (I_{d_1}(f_1), ..., I_{d_k}(f_k))

be a k-dimensional vector of multiple integrals, where d_1, ..., d_k ≥ 1 and f_j ∈ L_s²([0, 1]^{d_j}). Let V denote the (nonnegative definite) covariance matrix of F, and let Z be a centered k-dimensional Gaussian vector with the same covariance V. Then, for every twice differentiable function φ : R^k → R,

|E[φ(F)] − E[φ(Z)]| ≤ C ‖φ″‖_∞ Σ_{j=1}^k Σ_{p=1}^{d_j−1} ‖f_j ⊗_p f_j‖,   (4.33)

where C = C(k; d_1, ..., d_k) is a strictly positive constant depending uniquely on k and d_1, ..., d_k, and ‖φ″‖_∞ is defined as

‖φ″‖_∞ := sup_{α; x_1,...,x_k} |∂^{|α|} φ(x_1, ..., x_k) / (∂x_1^{α_1} · · · ∂x_k^{α_k})|,   (4.34)

where α runs over all integer-valued multi-indices α = (α_1, ..., α_k) such that |α| := α_1 + · · · + α_k = 2.

It is also clear that the combination of Theorem 4.18 and Theorem 4.20 yields an important simplification of the method of moments and cumulants, as stated in Proposition 4.17. This fact is so useful that we prefer to write it as a separate statement.

Proposition 4.22 (The simplified method of moments) Keep the assumptions and notation of Proposition 4.17 (in particular, the sequence {F_n : n ≥ 1} verifies (4.29)). Then, the following two conditions are equivalent:

(1) The vectors F_n converge in law to Z.

(2) For every j = 1, ..., k,

E[(F_j^{(n)})⁴] → 3 V(j, j)².   (4.35)

We conclude this section with a useful estimate on moments, taken from the survey paper [149].


Proposition 4.23 Let d ≥ 2 be an integer, and let F = I_d(f) be a multiple integral of order d of some kernel f ∈ L_s²([0, 1]^d). Assume that Var(F) = E(Z²) = 1, where Z ∼ N(0, 1). Then, for every integer k ≥ 3,

|E(F^k) − E(Z^k)| ≤ c_{k,d} (E(F⁴) − E(Z⁴))^{1/2},   (4.36)

where the constant c_{k,d} is given by

c_{k,d} = (k − 1) 2^{k−5/2} ((d − 1)/(3d))^{1/2} ( (2k − 5)^{kd/2−d} + (2k − 4)!/(2^{k−2}(k − 2)!) ).

4.4.2 Further results on complex kernels

Since we will work with harmonic decompositions based on complex-valued Hilbert spaces, we will sometimes need criteria for CLTs involving random quantities taking values in C. Note that, in general, CLTs for complex-valued random variables are simply joint CLTs for the real and imaginary parts, so that the subsequent results can be regarded as corollaries of Theorem 4.20 of the previous section. However, the direct formulation in terms of complex quantities is very useful, and we believe that it deserves a separate discussion.

For every integer d ≥ 1, L_C²([0, 1]^d) and L_{s,C}²([0, 1]^d) are the Hilbert spaces, respectively, of square-integrable and of square-integrable symmetric complex-valued functions with respect to the product Lebesgue measure. For every g ∈ L_{s,C}²([0, 1]^d) of the form g = a + ib, where a, b ∈ L_s²([0, 1]^d), we set I_d(g) = I_d(a) + iI_d(b). Note that, by the isometry (4.7), writing g* for the complex conjugate of g,

E[I_d(f) I_{d′}(g)*] = 0 if d ≠ d′, and E[I_d(f) I_d(g)*] = d! ∫_{[0,1]^d} f(s_1, ..., s_d) g(s_1, ..., s_d)* ds_1 · · · ds_d.   (4.37)

Also, a random variable such as I_d(g) is real-valued if and only if g is real-valued. For every pair g_k = a_k + ib_k ∈ L_{s,C}²([0, 1]^d), k = 1, 2, and every q = 1, ..., d − 1, we set

g_1 ⊗_q g_2 = a_1 ⊗_q a_2 − b_1 ⊗_q b_2 + i(a_1 ⊗_q b_2 + b_1 ⊗_q a_2).   (4.38)
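Formula (4.38) simply extends ⊗_q to complex kernels by C-bilinearity (no conjugation on either argument). As a quick sanity check, one can discretize kernels on [0, 1]² as N × N grids, so that the contraction of order 1 becomes a scaled matrix product; the decomposition into real and imaginary parts then matches a direct complex computation. Everything below (grid size, test kernels, helper names) is our own illustrative choice.

```python
import numpy as np

def contract1(F, G, N):
    """Discretized contraction of order 1 for kernels on [0, 1]^2:
    (f ⊗_1 g)(t, s) = ∫ f(t, u) g(s, u) du  ≈  (F @ G.T) / N."""
    return F @ G.T / N

N = 4
u = np.arange(N) / N
# symmetric real and imaginary parts of two complex kernels g1, g2
A1, B1 = np.outer(u, u), np.outer(u, u) + 0.5
A2, B2 = np.outer(1 + u, 1 + u), 3.0 * np.outer(u, u)
G1, G2 = A1 + 1j * B1, A2 + 1j * B2
```

Bilinearity guarantees that the complex contraction G1 ⊗_1 G2 coincides with the combination of real contractions prescribed by (4.38).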

The following result has been proved in [134, Proposition 5].

Proposition 4.24 Suppose that the sequence g_l = a_l + ib_l ∈ L_{s,C}²(μ_d), l ≥ 1, is such that

lim_{l→+∞} d! ‖a_l‖²_{L²([0,1]^d)} = lim_{l→+∞} d! ‖b_l‖²_{L²([0,1]^d)} = 1/2 and ⟨a_l, b_l⟩_{L²([0,1]^d)} = 0.   (4.39)

Then, the following conditions are equivalent as l → ∞:


(1) I_d(g_l) converges in law to N + iN′, where N, N′ ∼ N(0, 1/2) are independent;

(2) g_l ⊗_q g_l → 0 and g_l ⊗_q g_l* → 0 in L_C²([0, 1]^{2(d−q)}) for every q = 1, ..., d − 1 (g_l* denoting the complex conjugate of g_l);

(3) g_l ⊗_q g_l* → 0 in L_C²([0, 1]^{2(d−q)}) for every q = 1, ..., d − 1;

(4) a_l ⊗_q a_l → 0, b_l ⊗_q b_l → 0 and a_l ⊗_q b_l → 0 in L²([0, 1]^{2(d−q)}) for every q = 1, ..., d − 1;

(5) a_l ⊗_q a_l → 0 and b_l ⊗_q b_l → 0 in L²([0, 1]^{2(d−q)}) for every q = 1, ..., d − 1;

(6) E[I_d(a_l)⁴] → 3/4, E[I_d(b_l)⁴] → 3/4 and E[I_d(a_l)²I_d(b_l)²] → 1/4;

(7) E[I_d(a_l)⁴] → 3/4 and E[I_d(b_l)⁴] → 3/4.

4.5 The graphical method for Wigner coeﬃcients

We now focus on some combinatorial results, known as "the graphical method", involving the class of Clebsch-Gordan and Wigner coefficients introduced in Section 3.5. These formulae enter quite naturally into the computation of moments and cumulants associated with non-linear transformations of Gaussian fields defined on the sphere S².

Graphical methods are well-known to the physicists' community, and go far beyond the results described in this section. An exhaustive discussion of these techniques can be found in textbooks such as [195, Chapter 11] and [17]; see also [23]. Here, we shall simply present the main staples of the graphical method, in a form which is suitable for the applications developed in the subsequent chapters.

4.5.1 From diagrams to graphs

Recall that a graph is a pair (I, E), where I is a set of vertices and E is a collection of edges, that is, of unordered pairs {x, y}, where x, y ∈ I. Considering unordered pairs {x, y} makes the graph undirected, that is, {x, y} and {y, x} identify the same edge. Also, we allow for repetitions, meaning that the edge {x, y} may appear more than once in E (or, equivalently, every edge is counted with a multiplicity possibly greater than one). Due to this circumstance, the term multigraph might be more appropriate, but we shall avoid this terminology.

In what follows, we will exclusively deal with graphs that are obtained by "compression" of diagrams of the class Γ(p, 3), as defined in Section 4.3.1. Recall that the elements of Γ(p, 3) are associated with a table W composed of three columns and p rows. In standard matrix notation, we shall denote by (i, j), i = 1, ..., p, j = 1, 2, 3, the element of W corresponding to the ith row and jth column. Since we are focusing on diagrams with a specific structure, in this section we switch from G to γ, in order to indicate a generic element of Γ(p, 3).

Now fix an even integer p ≥ 2. Given a diagram γ ∈ Γ(p, 3) on a p × 3 table W, we build a graph γ̂ with p vertices and 3p/2 edges by implementing the following procedure: (i) identify the ith row of W with the ith vertex of γ̂, i = 1, ..., p, and (ii) draw one edge linking the vertex i_1 and the vertex i_2 for every pair of the type ((i_1, j_1), (i_2, j_2)) appearing in γ. If one counts loops twice (recall that a loop is an edge connecting one vertex to itself), then γ̂ is such that there are exactly three edges incident to each vertex.

Example 4.25 Fig. 4.2 provides two illustrations of the construction described above. For instance, on the left of (a) one has γ = {((1, 1), (1, 3)), ((1, 2), (2, 1)), ((2, 2), (3, 1)), ((2, 3), (3, 2)), ((3, 3), (4, 2)), ((4, 1), (4, 3))}, which is in Γ(4, 3). The graph γ̂ (on the right) is given by

γ̂ = {{1, 1}, {1, 2}, {2, 3}, {2, 3}, {3, 4}, {4, 4}}.

Analogously, the diagram on the left of (b) is γ = {((1, j), (2, j)), ((3, j), (4, j)) : j = 1, 2, 3}, and

γ̂ = {{1, 2}, {1, 2}, {1, 2}, {3, 4}, {3, 4}, {3, 4}}.


Figure 4.2 (a) The diagram on the left is flat and connected, and generates a graph (right) with two loops and one two-loop. (b) The diagram on the left is paired and not connected, and so is the generated graph on the right.
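The compression step takes only a few lines; applied to the diagram γ of Example 4.25(a), it reproduces the multigraph γ̂ listed there, and the degree count confirms that exactly three edges (loops counted twice) are incident to each vertex. Helper names below are ours.

```python
from collections import Counter

def compress(diagram):
    """Replace each edge ((i1, j1), (i2, j2)) of a diagram by the row pair
    {i1, i2}; the result is a multigraph on the rows, stored as a Counter
    of sorted row pairs with multiplicities."""
    return Counter(tuple(sorted((v1[0], v2[0]))) for v1, v2 in diagram)

def degrees(graph, p):
    """Number of edge endpoints at each vertex (a loop counts twice)."""
    deg = {i: 0 for i in range(1, p + 1)}
    for (i1, i2), mult in graph.items():
        deg[i1] += mult
        deg[i2] += mult
    return deg
```

Running this on the diagram of Example 4.25(a) yields the multigraph with edges {1,1}, {1,2}, {2,3} (twice), {3,4}, {4,4}, all four vertices having degree three.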
