
\[
w_{i1} = (w_{iy}' : 1 \times p_1,\; w_{ic}' : 1 \times p_2)' : p \times 1, \qquad
w_{i2} = (w_{iz}' : 1 \times q_1,\; w_{ic}' : 1 \times p_2)' : q \times 1,
\tag{5.104}
\]
where $w_{iy}$ and $w_{iz}$ are individual-response-specific covariates and $w_{ic}$ is a common covariate vector influencing both responses of the $i$th individual. For example, the so-called WESDR (Wisconsin Epidemiologic Study of Diabetic Retinopathy) data set (see Williamson et al. 1995, for example) contains the categorical diabetic retinopathy status of the left and right eyes (two response variables) of $K = 996$ individuals along with their associated covariates. This data set did not contain any individual-response-specific covariates, but there were seven important common covariates, namely: (1) duration of diabetes (DD), (2) glycosylated hemoglobin level (GHL), (3) diastolic blood pressure (DBP), (4) gender, (5) proteinuria (Pr), (6) dose of insulin per day (DI), and (7) macular edema (ME). Thus, in the notation of (5.104), $p_1 = q_1 = 0$ and $p_2 = 7$. Note that these covariates $w_{i1}$ and $w_{i2}$ are
considered to be fixed and known. However, as the bivariate categorical responses
Yi and Zi are collected from the same ith individual, they are likely to be correlated,
and it may be reasonable to assume that this bivariate correlation is caused by a
common individual latent effect shared by both responses. Thus, conditional on
such latent/random effects, we may modify the marginal probabilities given in (5.1)
and (5.2) to incorporate the covariates and write these new marginal probabilities
conditional on the random effects as

\[
P\left[y_i = y_i^{(j)} \mid \xi_i^*, w_{i1}\right] = \pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}) =
\begin{cases}
\dfrac{\exp(\beta_{j0} + \beta_j' w_{i1} + \sigma_\xi \xi_i)}{1 + \sum_{u=1}^{J-1} \exp(\beta_{u0} + \beta_u' w_{i1} + \sigma_\xi \xi_i)} & \text{for } j = 1, \ldots, J-1, \\[2ex]
\dfrac{1}{1 + \sum_{u=1}^{J-1} \exp(\beta_{u0} + \beta_u' w_{i1} + \sigma_\xi \xi_i)} & \text{for } j = J,
\end{cases}
\tag{5.105}
\]

and
\[
P\left[z_i = z_i^{(r)} \mid \xi_i^*, w_{i2}\right] = \pi_{(i)\cdot r}^{*}(\xi_i, w_{i2}) =
\begin{cases}
\dfrac{\exp(\alpha_{r0} + \alpha_r' w_{i2} + \sigma_\xi \xi_i)}{1 + \sum_{h=1}^{R-1} \exp(\alpha_{h0} + \alpha_h' w_{i2} + \sigma_\xi \xi_i)} & \text{for } r = 1, \ldots, R-1, \\[2ex]
\dfrac{1}{1 + \sum_{h=1}^{R-1} \exp(\alpha_{h0} + \alpha_h' w_{i2} + \sigma_\xi \xi_i)} & \text{for } r = R,
\end{cases}
\tag{5.106}
\]

where
\[
\beta_j = (\beta_{j1}, \ldots, \beta_{jp})', \quad \text{and} \quad
\alpha_r = (\alpha_{r1}, \ldots, \alpha_{rm}, \ldots, \alpha_{rq})',
\]
for $j = 1, \ldots, J-1$, and $r = 1, \ldots, R-1$.

Similar to Sect. 2.1 (from Chap. 2), define
\[
\beta_j^* = [\beta_{j0}, \beta_j']', \quad
\theta_1^* = [\beta_1^{*'}, \ldots, \beta_j^{*'}, \ldots, \beta_{J-1}^{*'}]'; \qquad
\alpha_r^* = [\alpha_{r0}, \alpha_r']', \quad
\theta_2^* = [\alpha_1^{*'}, \ldots, \alpha_r^{*'}, \ldots, \alpha_{R-1}^{*'}]'.
\tag{5.107}
\]


Also define
\[
w_{i1}^* = [1, w_{i1}']', \qquad w_{i2}^* = [1, w_{i2}']'.
\tag{5.108}
\]

Using the notation from (5.107) and (5.108), re-express the marginal probabilities conditional on the random effects from (5.105)–(5.106) as

\[
P\left[y_i = y_i^{(j)} \mid \xi_i^*, w_{i1}^*\right] = \pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*) =
\begin{cases}
\dfrac{\exp(\beta_j^{*'} w_{i1}^* + \sigma_\xi \xi_i)}{1 + \sum_{u=1}^{J-1} \exp(\beta_u^{*'} w_{i1}^* + \sigma_\xi \xi_i)} & \text{for } j = 1, \ldots, J-1, \\[2ex]
\dfrac{1}{1 + \sum_{u=1}^{J-1} \exp(\beta_u^{*'} w_{i1}^* + \sigma_\xi \xi_i)} & \text{for } j = J,
\end{cases}
\tag{5.109}
\]

and
\[
P\left[z_i = z_i^{(r)} \mid \xi_i^*, w_{i2}^*\right] = \pi_{(i)\cdot r}^{*}(\xi_i, w_{i2}^*) =
\begin{cases}
\dfrac{\exp(\alpha_r^{*'} w_{i2}^* + \sigma_\xi \xi_i)}{1 + \sum_{h=1}^{R-1} \exp(\alpha_h^{*'} w_{i2}^* + \sigma_\xi \xi_i)} & \text{for } r = 1, \ldots, R-1, \\[2ex]
\dfrac{1}{1 + \sum_{h=1}^{R-1} \exp(\alpha_h^{*'} w_{i2}^* + \sigma_\xi \xi_i)} & \text{for } r = R,
\end{cases}
\tag{5.110}
\]
where $w_{i1}^*$ and $w_{i2}^*$ are fixed and known covariates, whereas $\xi_i \stackrel{\text{iid}}{\sim} N(0, 1)$ as in (5.1).
Hence, the (unconditional) marginal and joint probabilities have the formulas
\[
\pi_{(i)j\cdot}(w_{i1}^*) = P\left(y_i = y_i^{(j)}\right) = \int_{-\infty}^{\infty} \pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\, f_N(\xi_i)\, d\xi_i;
\tag{5.111}
\]
\[
\pi_{(i)\cdot r}(w_{i2}^*) = P\left(z_i = z_i^{(r)}\right) = \int_{-\infty}^{\infty} \pi_{(i)\cdot r}^{*}(\xi_i, w_{i2}^*)\, f_N(\xi_i)\, d\xi_i;
\tag{5.112}
\]
\[
\pi_{(i)jr}(w_{i1}^*, w_{i2}^*) = P\left[y_i = y_i^{(j)}, z_i = z_i^{(r)}\right] = \int_{-\infty}^{\infty} \left[\pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\, \pi_{(i)\cdot r}^{*}(\xi_i, w_{i2}^*)\right] f_N(\xi_i)\, d\xi_i,
\tag{5.113}
\]
where $f_N(\xi_i) = \frac{\exp(-\xi_i^2/2)}{\sqrt{2\pi}}$. Notice that the integrations in (5.111)–(5.113) for the computation of marginal and joint probabilities may be computed by using the binomial approximation similar to that of (5.48)–(5.50). For example,
\[
\pi_{(i)j\cdot}(w_{i1}^*) = \int_{-\infty}^{\infty} \pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\, f_N(\xi_i)\, d\xi_i
\simeq
\begin{cases}
\sum_{v_i=0}^{V} \left[\dfrac{\exp(\beta_j^{*'} w_{i1}^* + \sigma_\xi \xi_i(v_i))}{1 + \sum_{u=1}^{J-1} \exp(\beta_u^{*'} w_{i1}^* + \sigma_\xi \xi_i(v_i))}\right] \dbinom{V}{v_i} (1/2)^{v_i} (1/2)^{V-v_i} & \text{for } j = 1, \ldots, J-1, \\[3ex]
\sum_{v_i=0}^{V} \left[\dfrac{1}{1 + \sum_{u=1}^{J-1} \exp(\beta_u^{*'} w_{i1}^* + \sigma_\xi \xi_i(v_i))}\right] \dbinom{V}{v_i} (1/2)^{v_i} (1/2)^{V-v_i} & \text{for } j = J,
\end{cases}
\tag{5.114}
\]


where, for $v_i \sim \text{binomial}(V, 1/2)$ with a large $V$ of the user's choice,
\[
\xi_i(v_i) = \frac{v_i - V(1/2)}{\sqrt{V(1/2)(1/2)}}.
\]
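As a rough numerical illustration of the approximation (5.114), the following sketch (not from the source; the coefficient values, the covariate vector, and the choice $V = 20$ are hypothetical) evaluates the unconditional marginal probabilities $\pi_{(i)j\cdot}(w_{i1}^*)$ by weighting the conditional probabilities of (5.109) over the support of $v_i \sim \text{binomial}(V, 1/2)$.

```python
import numpy as np
from scipy.stats import binom

def conditional_probs(beta_star, w_star, sigma_xi, xi):
    """Conditional probabilities pi*_(i)j.(xi, w*_i1) of (5.109); last entry is category J."""
    eta = beta_star @ w_star + sigma_xi * xi          # linear predictors for j = 1,...,J-1
    denom = 1.0 + np.exp(eta).sum()
    return np.append(np.exp(eta), 1.0) / denom

def marginal_probs_binomial(beta_star, w_star, sigma_xi, V=20):
    """Approximate pi_(i)j.(w*_i1) of (5.111) by the binomial sum in (5.114)."""
    v = np.arange(V + 1)
    weights = binom.pmf(v, V, 0.5)                    # C(V, v_i) (1/2)^{v_i} (1/2)^{V-v_i}
    xi_v = (v - V * 0.5) / np.sqrt(V * 0.25)          # xi_i(v_i), the standardized points
    probs = np.zeros(beta_star.shape[0] + 1)
    for w, x in zip(weights, xi_v):
        probs += w * conditional_probs(beta_star, w_star, sigma_xi, x)
    return probs

# hypothetical example: J = 3 categories, one covariate (p = 1), so w*_i1 = (1, w_i1)'
beta_star = np.array([[0.5, 1.0],                     # beta*_1 = (beta_10, beta_11)'
                      [-0.2, 0.3]])                   # beta*_2
w_star = np.array([1.0, 0.7])
print(marginal_probs_binomial(beta_star, w_star, sigma_xi=0.8))   # sums to 1
```

The same device applies verbatim to (5.112) and, with the product of the two conditional probabilities in the summand, to the joint probabilities in (5.113).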

The parameters of the model (5.111)–(5.113), namely
\[
\theta_1^* = (\beta_1^{*'}, \ldots, \beta_j^{*'}, \ldots, \beta_{J-1}^{*'})'; \quad
\theta_2^* = (\alpha_1^{*'}, \ldots, \alpha_r^{*'}, \ldots, \alpha_{R-1}^{*'})'; \quad \text{and} \quad \sigma_\xi^2,
\]
may be estimated by using the MGQL, JGQL, or ML approach discussed in the last section. In the next section, however, we demonstrate how one can construct the MGQL estimating equations for these parameters. The formulas for the estimating equations under the other two approaches may be obtained similarly.

5.4.1 MGQL Estimation for the Parameters
Using the marginal probabilities from (5.111) and (5.112), we first write
\[
\pi_{(i)y}(\theta_1^*, \sigma_\xi, w_{i1}^*) = E[Y_i] = [\pi_{(i)1\cdot}(w_{i1}^*), \ldots, \pi_{(i)j\cdot}(w_{i1}^*), \ldots, \pi_{(i)(J-1)\cdot}(w_{i1}^*)]',
\tag{5.115}
\]
\[
\pi_{(i)z}(\theta_2^*, \sigma_\xi, w_{i2}^*) = E[Z_i] = [\pi_{(i)\cdot 1}(w_{i2}^*), \ldots, \pi_{(i)\cdot r}(w_{i2}^*), \ldots, \pi_{(i)\cdot (R-1)}(w_{i2}^*)]',
\tag{5.116}
\]
\[
\operatorname{var}(Y_i) = \operatorname{diag}[\pi_{(i)1\cdot}(w_{i1}^*), \ldots, \pi_{(i)j\cdot}(w_{i1}^*), \ldots, \pi_{(i)(J-1)\cdot}(w_{i1}^*)] - \pi_{(i)y}(\theta_1^*, \sigma_\xi, w_{i1}^*)\,\pi_{(i)y}'(\theta_1^*, \sigma_\xi, w_{i1}^*) = \Sigma_{(i)yy}(\theta_1^*, \sigma_\xi, w_{i1}^*);
\tag{5.117}
\]
\[
\operatorname{var}(Z_i) = \operatorname{diag}[\pi_{(i)\cdot 1}(w_{i2}^*), \ldots, \pi_{(i)\cdot r}(w_{i2}^*), \ldots, \pi_{(i)\cdot (R-1)}(w_{i2}^*)] - \pi_{(i)z}(\theta_2^*, \sigma_\xi, w_{i2}^*)\,\pi_{(i)z}'(\theta_2^*, \sigma_\xi, w_{i2}^*) = \Sigma_{(i)zz}(\theta_2^*, \sigma_\xi, w_{i2}^*),
\tag{5.118}
\]
and
\[
\operatorname{cov}(Y_i, Z_i) = \Sigma_{(i)yz}(\theta_1^*, \theta_2^*, \sigma_\xi, w_{i1}^*, w_{i2}^*) = \left(\pi_{(i)jr}(\theta_1^*, \theta_2^*, w_{i1}^*, w_{i2}^*)\right) - \pi_{(i)y}(\theta_1^*, \sigma_\xi, w_{i1}^*)\,\pi_{(i)z}'(\theta_2^*, \sigma_\xi, w_{i2}^*),
\tag{5.119}
\]
where
\[
\pi_{(i)jr}(\theta_1^*, \theta_2^*, w_{i1}^*, w_{i2}^*) \equiv \pi_{(i)jr}(w_{i1}^*, w_{i2}^*)
\]
is the joint probability with its formula given in (5.113).
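As an illustrative sketch (not part of the source), the moment formulas (5.115)–(5.119) and the matrix $\Sigma_{(i)11}$ of (5.121) below can be assembled with a few array operations once the marginal probabilities from (5.111)–(5.112) and the $(J-1)\times(R-1)$ joint probabilities from (5.113) are available, e.g., via the binomial approximation above; the numerical values used here are hypothetical.

```python
import numpy as np

def stack_moments(pi_y, pi_z, pi_jr):
    """Build E[(Y_i', Z_i')'] and Sigma_(i)11 of (5.121).

    pi_y  : (J-1,) marginal probabilities pi_(i)j.(w*_i1) from (5.111)
    pi_z  : (R-1,) marginal probabilities pi_(i).r(w*_i2) from (5.112)
    pi_jr : (J-1, R-1) joint probabilities pi_(i)jr(w*_i1, w*_i2) from (5.113)
    """
    Sigma_yy = np.diag(pi_y) - np.outer(pi_y, pi_y)    # var(Y_i), (5.117)
    Sigma_zz = np.diag(pi_z) - np.outer(pi_z, pi_z)    # var(Z_i), (5.118)
    Sigma_yz = pi_jr - np.outer(pi_y, pi_z)            # cov(Y_i, Z_i), (5.119)
    mean = np.concatenate([pi_y, pi_z])
    Sigma_11 = np.block([[Sigma_yy, Sigma_yz],
                         [Sigma_yz.T, Sigma_zz]])      # (5.121)
    return mean, Sigma_11

# hypothetical probabilities for J = 3, R = 3 (entries chosen only for illustration)
pi_y = np.array([0.30, 0.45])
pi_z = np.array([0.25, 0.40])
pi_jr = np.array([[0.10, 0.13],
                  [0.12, 0.20]])
mean, Sigma_11 = stack_moments(pi_y, pi_z, pi_jr)
print(mean, Sigma_11.shape)   # stacked mean of length (J-1)+(R-1) = 4; Sigma_11 is 4 x 4
```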
Next, use $\psi^* = (\theta_1^{*'}, \theta_2^{*'})'$ and construct the MGQL estimating equation for this vector parameter as in the next section.

5.4.1.1 MGQL Estimation for $\psi^* = (\theta_1^{*'}, \theta_2^{*'})'$

For known $\sigma_\xi^2$, we write the MGQL estimating equation for $\psi^*$ as
\[
f(\psi^*) = \sum_{i=1}^{K} \frac{\partial \left(\pi_{(i)y}'(\theta_1^*, \sigma_\xi, w_{i1}^*),\, \pi_{(i)z}'(\theta_2^*, \sigma_\xi, w_{i2}^*)\right)'}{\partial \psi^*}\,
\Sigma_{(i)11}^{-1}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)
\begin{pmatrix}
y_i - \pi_{(i)y}(\theta_1^*, \sigma_\xi, w_{i1}^*) \\
z_i - \pi_{(i)z}(\theta_2^*, \sigma_\xi, w_{i2}^*)
\end{pmatrix}
= 0,
\tag{5.120}
\]
(Sutradhar 2004) where $\Sigma_{(i)11}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)$ has the formula given by
\[
\Sigma_{(i)11}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*) =
\begin{pmatrix}
\Sigma_{(i)yy}(\theta_1^*, \sigma_\xi, w_{i1}^*) & \Sigma_{(i)yz}(\theta_1^*, \theta_2^*, \sigma_\xi, w_{i1}^*, w_{i2}^*) \\
\Sigma_{(i)yz}'(\theta_1^*, \theta_2^*, \sigma_\xi, w_{i1}^*, w_{i2}^*) & \Sigma_{(i)zz}(\theta_2^*, \sigma_\xi, w_{i2}^*)
\end{pmatrix}.
\tag{5.121}
\]
In (5.120), the first order derivative matrix is computed as follows.

Computation of the derivative $\frac{\partial (\pi_{(i)y}'(\theta_1^*, \sigma_\xi, w_{i1}^*),\, \pi_{(i)z}'(\theta_2^*, \sigma_\xi, w_{i2}^*))'}{\partial \psi^*}$ : $((J-1)(p+1) + (R-1)(q+1)) \times (J + R - 2)$
For this purpose, we first re-express the marginal probabilities in (5.109)–(5.110) as functions of $\theta_1^*$, $\theta_2^*$, and $\sigma_\xi$, as follows.
\[
\pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*) =
\begin{cases}
\dfrac{\exp(\theta_1^{*'} x_{ij}^* + \sigma_\xi \xi_i)}{1 + \sum_{u=1}^{J-1} \exp(\theta_1^{*'} x_{iu}^* + \sigma_\xi \xi_i)} & \text{for } j = 1, \ldots, J-1, \\[2ex]
\dfrac{1}{1 + \sum_{u=1}^{J-1} \exp(\theta_1^{*'} x_{iu}^* + \sigma_\xi \xi_i)} & \text{for } j = J,
\end{cases}
\tag{5.122}
\]
with
\[
x_{ij}^* = \begin{pmatrix} 0_{1(j-1)(p+1)}' \\ w_{i1}^* \\ 0_{1(J-1-j)(p+1)}' \end{pmatrix},
\quad \text{for } j = 1, \ldots, J-1;
\]
and
\[
\pi_{(i)\cdot r}^{*}(\xi_i, w_{i2}^*) =
\begin{cases}
\dfrac{\exp(\theta_2^{*'} \tilde{x}_{ir} + \sigma_\xi \xi_i)}{1 + \sum_{h=1}^{R-1} \exp(\theta_2^{*'} \tilde{x}_{ih} + \sigma_\xi \xi_i)} & \text{for } r = 1, \ldots, R-1, \\[2ex]
\dfrac{1}{1 + \sum_{h=1}^{R-1} \exp(\theta_2^{*'} \tilde{x}_{ih} + \sigma_\xi \xi_i)} & \text{for } r = R,
\end{cases}
\tag{5.123}
\]
with
\[
\tilde{x}_{ir} = \begin{pmatrix} 0_{1(r-1)(q+1)}' \\ w_{i2}^* \\ 0_{1(R-1-r)(q+1)}' \end{pmatrix},
\quad \text{for } r = 1, \ldots, R-1.
\]


Because $\psi^* = (\theta_1^{*'}, \theta_2^{*'})'$, the desired derivative matrix may be computed as follows:
\[
\frac{\partial \pi_{(i)y}'(\theta_1^*, \sigma_\xi, w_{i1}^*)}{\partial \theta_1^*} =
\left[\frac{\partial \pi_{(i)1\cdot}}{\partial \theta_1^*}, \ldots, \frac{\partial \pi_{(i)j\cdot}}{\partial \theta_1^*}, \ldots, \frac{\partial \pi_{(i)(J-1)\cdot}}{\partial \theta_1^*}\right],
\tag{5.124}
\]

where, for $j = 1, \ldots, J-1$,
\[
\frac{\partial \pi_{(i)j\cdot}}{\partial \theta_1^*}
= \int_{-\infty}^{\infty} \frac{\partial \pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)}{\partial \theta_1^*}\, f_N(\xi_i)\, d\xi_i
= \int_{-\infty}^{\infty} \pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\left[x_{ij}^* - \sum_{g=1}^{J-1} x_{ig}^*\, \pi_{(i)g\cdot}^{*}(\xi_i, w_{i1}^*)\right] f_N(\xi_i)\, d\xi_i,
\tag{5.125}
\]
where $x_{ij}^{*'} = [0_{1(j-1)(p+1)}, w_{i1}^{*'}, 0_{1(J-1-j)(p+1)}]$ is the $1 \times (J-1)(p+1)$ row vector as defined in (5.122). For convenience, we re-express the $(J-1)(p+1) \times 1$ vector in (5.125) as


\[
\frac{\partial \pi_{(i)j\cdot}}{\partial \theta_1^*}
= \int_{-\infty}^{\infty}
\begin{pmatrix}
-\pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\,\pi_{(i)1\cdot}^{*}(\xi_i, w_{i1}^*)\, w_{i1}^* \\
\vdots \\
-\pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\,\pi_{(i)(j-1)\cdot}^{*}(\xi_i, w_{i1}^*)\, w_{i1}^* \\
\left[\pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\left(1 - \pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\right)\right] w_{i1}^* \\
-\pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\,\pi_{(i)(j+1)\cdot}^{*}(\xi_i, w_{i1}^*)\, w_{i1}^* \\
\vdots \\
-\pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\,\pi_{(i)(J-1)\cdot}^{*}(\xi_i, w_{i1}^*)\, w_{i1}^*
\end{pmatrix}
f_N(\xi_i)\, d\xi_i
\]
\[
= \int_{-\infty}^{\infty} \left[\left\{\pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\, 1_{J-1} - \pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\, \pi_{(i)y}^{*}(\theta_1^*, \sigma_\xi, w_{i1}^*)\right\} \otimes w_{i1}^*\right] f_N(\xi_i)\, d\xi_i,
\tag{5.126}
\]

and write the formula for the derivative as
\[
\frac{\partial \pi_{(i)y}(\theta_1^*, \sigma_\xi, w_{i1}^*)}{\partial \theta_1^*}
= \int_{-\infty}^{\infty} \left[\Sigma_{(i)yy}(\theta_1^*, \sigma_\xi, w_{i1}^*) \otimes w_{i1}^*\right] f_N(\xi_i)\, d\xi_i,
\tag{5.127}
\]
where $\Sigma_{(i)yy}(\theta_1^*, \sigma_\xi, w_{i1}^*)$ is the $(J-1) \times (J-1)$ covariance matrix of $y_i$ as given by (5.117). By calculations similar to those of (5.127), one obtains

\[
\frac{\partial \pi_{(i)z}(\theta_2^*, \sigma_\xi, w_{i2}^*)}{\partial \theta_2^*}
= \int_{-\infty}^{\infty} \left[\Sigma_{(i)zz}(\theta_2^*, \sigma_\xi, w_{i2}^*) \otimes w_{i2}^*\right] f_N(\xi_i)\, d\xi_i,
\tag{5.128}
\]


where $\Sigma_{(i)zz}(\theta_2^*, \sigma_\xi, w_{i2}^*)$ is the $(R-1) \times (R-1)$ covariance matrix of $z_i$ as given by (5.118). Consequently, we obtain
\[
\frac{\partial \left(\pi_{(i)y}'(\theta_1^*, \sigma_\xi, w_{i1}^*),\, \pi_{(i)z}'(\theta_2^*, \sigma_\xi, w_{i2}^*)\right)'}{\partial \psi^*}
= \int_{-\infty}^{\infty}
\begin{pmatrix}
\Sigma_{(i)yy}(\theta_1^*, \sigma_\xi, w_{i1}^*) \otimes w_{i1}^* & 0 \\
0 & \Sigma_{(i)zz}(\theta_2^*, \sigma_\xi, w_{i2}^*) \otimes w_{i2}^*
\end{pmatrix}
f_N(\xi_i)\, d\xi_i.
\tag{5.129}
\]
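The derivative matrix (5.129) can be approximated with the same binomial device used for (5.114): at each support point $\xi_i(v_i)$ one forms the covariance matrix implied by the conditional probabilities of (5.109)–(5.110), takes its Kronecker product with the covariate vector, and sums with the binomial weights, following the derivation in (5.125)–(5.126). The sketch below is a hypothetical illustration only; the parameter values are made up.

```python
import numpy as np
from scipy.stats import binom
from scipy.linalg import block_diag

def conditional_probs(coef_star, w_star, sigma_xi, xi):
    # conditional probabilities of the non-reference categories, as in (5.109)/(5.110)
    eta = coef_star @ w_star + sigma_xi * xi
    return np.exp(eta) / (1.0 + np.exp(eta).sum())

def dpi_dtheta(coef_star, w_star, sigma_xi, V=30):
    """Approximate the integral of [conditional covariance kron w*] f_N(xi) d xi,
    i.e., the derivative of the unconditional mean vector with respect to theta*."""
    v = np.arange(V + 1)
    weights = binom.pmf(v, V, 0.5)
    xi_v = (v - V * 0.5) / np.sqrt(V * 0.25)
    deriv = 0.0
    for w, x in zip(weights, xi_v):
        p = conditional_probs(coef_star, w_star, sigma_xi, x)
        cov_cond = np.diag(p) - np.outer(p, p)        # conditional covariance at xi(v_i)
        deriv = deriv + w * np.kron(cov_cond, w_star.reshape(-1, 1))
    return deriv                                      # shape ((J-1)(p+1), J-1)

# hypothetical example: J = 3, p = 1 for y; R = 2, q = 1 for z
D_y = dpi_dtheta(np.array([[0.5, 1.0], [-0.2, 0.3]]), np.array([1.0, 0.7]), sigma_xi=0.8)
D_z = dpi_dtheta(np.array([[0.1, -0.4]]), np.array([1.0, 1.2]), sigma_xi=0.8)
print(block_diag(D_y, D_z).shape)   # block-diagonal form of (5.129): here (6, 3)
```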

Remark that, applying the aforementioned formulas for the mean vectors [(5.115)–(5.116)], the associated covariance matrix (5.121), and the derivative matrix (5.129), one may now solve the MGQL estimating equation (5.120) for $\psi^* = (\theta_1^{*'}, \theta_2^{*'})'$ by using the iterative equation
\[
\hat{\psi}^*(m+1) = \hat{\psi}^*(m)
+ \Bigg[\sum_{i=1}^{K} \frac{\partial \left(\pi_{(i)y}'(\theta_1^*, \sigma_\xi, w_{i1}^*),\, \pi_{(i)z}'(\theta_2^*, \sigma_\xi, w_{i2}^*)\right)'}{\partial \psi^*}\,
\Sigma_{(i)11}^{-1}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)\,
\frac{\partial \left(\pi_{(i)y}'(\theta_1^*, \sigma_\xi, w_{i1}^*),\, \pi_{(i)z}'(\theta_2^*, \sigma_\xi, w_{i2}^*)\right)}{\partial \psi^{*'}}\Bigg]^{-1}
\]
\[
\times \Bigg[\sum_{i=1}^{K} \frac{\partial \left(\pi_{(i)y}'(\theta_1^*, \sigma_\xi, w_{i1}^*),\, \pi_{(i)z}'(\theta_2^*, \sigma_\xi, w_{i2}^*)\right)'}{\partial \psi^*}\,
\Sigma_{(i)11}^{-1}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)
\begin{pmatrix}
y_i - \pi_{(i)y}(\theta_1^*, \sigma_\xi, w_{i1}^*) \\
z_i - \pi_{(i)z}(\theta_2^*, \sigma_\xi, w_{i2}^*)
\end{pmatrix}\Bigg]\Bigg|_{\psi^* = \hat{\psi}^*(m)}.
\tag{5.130}
\]
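The iteration (5.130) follows the generic GQL pattern: the update adds $[\sum_i D_i \Sigma_i^{-1} D_i']^{-1} \sum_i D_i \Sigma_i^{-1}(\text{response} - \text{mean})$ to the current value. A minimal skeleton of one such pass is sketched below; the helper functions `deriv_i`, `sigma11_i`, and `mean_i` are hypothetical stand-ins for (5.129), (5.121), and (5.115)–(5.116), not functions defined in the source.

```python
import numpy as np

def mgql_step(psi, data, deriv_i, sigma11_i, mean_i):
    """One iteration of (5.130): psi(m+1) = psi(m) + A^{-1} b.

    data      : list of (y_i, z_i) multinomial indicator vectors of lengths J-1 and R-1
    deriv_i   : function (psi, i) -> derivative matrix of (5.129), shape (dim_psi, J+R-2)
    sigma11_i : function (psi, i) -> covariance matrix of (5.121), shape (J+R-2, J+R-2)
    mean_i    : function (psi, i) -> stacked mean vector of (5.115)-(5.116), length J+R-2
    """
    A = 0.0
    b = 0.0
    for i, (y_i, z_i) in enumerate(data):
        D = deriv_i(psi, i)
        S_inv = np.linalg.inv(sigma11_i(psi, i))
        resid = np.concatenate([y_i, z_i]) - mean_i(psi, i)
        A = A + D @ S_inv @ D.T
        b = b + D @ S_inv @ resid
    return psi + np.linalg.solve(A, b)

# usage sketch: iterate until the change in psi is negligible
# for m in range(50):
#     psi_new = mgql_step(psi, data, deriv_i, sigma11_i, mean_i)
#     if np.max(np.abs(psi_new - psi)) < 1e-6:
#         break
#     psi = psi_new
```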

5.4.1.2 MGQL Estimation for $\sigma_\xi^2$

Similar to Sect. 5.3.1.1.2, we exploit the pair-wise products of the bivariate responses to estimate this random effects variance component parameter $\sigma_\xi^2$. Let
\[
g_i = (y_{i1} z_{i1}, \ldots, y_{ij} z_{ir}, \ldots, y_{i,J-1} z_{i,R-1})',
\tag{5.131}
\]
which has the mean
\[
E[G_i] = \left(\pi_{(i)11}(w_{i1}^*, w_{i2}^*), \ldots, \pi_{(i)jr}(w_{i1}^*, w_{i2}^*), \ldots, \pi_{(i),J-1,R-1}(w_{i1}^*, w_{i2}^*)\right)'
= \pi_{(i)yz}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*),
\tag{5.132}
\]
where, by (5.113),
\[
\pi_{(i)jr}(w_{i1}^*, w_{i2}^*) = \int_{-\infty}^{\infty} \left[\pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\, \pi_{(i)\cdot r}^{*}(\xi_i, w_{i2}^*)\right] f_N(\xi_i)\, d\xi_i,
\]
for $j = 1, \ldots, J$; $r = 1, \ldots, R$.


Next, following (5.60), one writes
\[
\operatorname{cov}(G_i) = \operatorname{diag}[\pi_{(i)11}(w_{i1}^*, w_{i2}^*), \ldots, \pi_{(i)jr}(w_{i1}^*, w_{i2}^*), \ldots, \pi_{(i)J-1,R-1}(w_{i1}^*, w_{i2}^*)]
- \pi_{(i)yz}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)\, \pi_{(i)yz}'(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)
= \Sigma_{(i)22}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*), \text{ (say)}.
\tag{5.133}
\]

Furthermore, we compute the gradient function, that is, the derivative of $E[G_i]$ with respect to $\sigma_\xi^2$, as
\[
\frac{\partial E[G_i]}{\partial \sigma_\xi^2} = \frac{\partial \pi_{(i)yz}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)}{\partial \sigma_\xi^2}
= \left(\frac{\partial \pi_{(i)11}(w_{i1}^*, w_{i2}^*)}{\partial \sigma_\xi^2}, \ldots, \frac{\partial \pi_{(i)jr}(w_{i1}^*, w_{i2}^*)}{\partial \sigma_\xi^2}, \ldots, \frac{\partial \pi_{(i)J-1,R-1}(w_{i1}^*, w_{i2}^*)}{\partial \sigma_\xi^2}\right)',
\tag{5.134}
\]

where, for example,
\[
\frac{\partial \pi_{(i)jr}(w_{i1}^*, w_{i2}^*)}{\partial \sigma_\xi^2}
= \int_{-\infty}^{\infty} \frac{\partial\, \pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\, \pi_{(i)\cdot r}^{*}(\xi_i, w_{i2}^*)}{\partial \sigma_\xi^2}\, f_N(\xi_i)\, d\xi_i
= \frac{1}{2\sigma_\xi} \int_{-\infty}^{\infty} \xi_i\, \pi_{(i)j\cdot}^{*}(\xi_i, w_{i1}^*)\, \pi_{(i)\cdot r}^{*}(\xi_i, w_{i2}^*)
\left[\pi_{(i)J\cdot}^{*}(\xi_i, w_{i1}^*) + \pi_{(i)\cdot R}^{*}(\xi_i, w_{i2}^*)\right] f_N(\xi_i)\, d\xi_i,
\tag{5.135}
\]
for all $j = 1, \ldots, J-1$; $r = 1, \ldots, R-1$.
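To make (5.135) concrete, the sketch below (illustrative only; all parameter values are hypothetical) approximates $\partial \pi_{(i)jr}/\partial \sigma_\xi^2$ in two ways with the same binomial quadrature: directly from the right-hand side of (5.135), and by a central finite difference of the joint probability (5.113) with respect to $\sigma_\xi^2$; the two values should nearly coincide.

```python
import numpy as np
from scipy.stats import binom

def cond_probs(coef_star, w_star, sigma_xi, xi):
    # conditional probabilities of all categories; last entry is the reference category
    eta = coef_star @ w_star + sigma_xi * xi
    denom = 1.0 + np.exp(eta).sum()
    return np.append(np.exp(eta), 1.0) / denom

def quad_points(V=30):
    v = np.arange(V + 1)
    return binom.pmf(v, V, 0.5), (v - V * 0.5) / np.sqrt(V * 0.25)

def joint_prob(j, r, b_star, w1, a_star, w2, sigma_xi):
    # pi_(i)jr of (5.113) via the binomial approximation
    wts, xis = quad_points()
    return sum(w * cond_probs(b_star, w1, sigma_xi, x)[j] *
                   cond_probs(a_star, w2, sigma_xi, x)[r] for w, x in zip(wts, xis))

def d_joint_dsigma2(j, r, b_star, w1, a_star, w2, sigma_xi):
    # right-hand side of (5.135)
    wts, xis = quad_points()
    total = 0.0
    for w, x in zip(wts, xis):
        py = cond_probs(b_star, w1, sigma_xi, x)
        pz = cond_probs(a_star, w2, sigma_xi, x)
        total += w * x * py[j] * pz[r] * (py[-1] + pz[-1])   # pi*_(i)J. + pi*_(i).R
    return total / (2.0 * sigma_xi)

b_star = np.array([[0.5, 1.0], [-0.2, 0.3]])   # hypothetical coefficients for y (J = 3)
w1 = np.array([1.0, 0.7])
a_star = np.array([[0.1, -0.4]])               # hypothetical coefficients for z (R = 2)
w2 = np.array([1.0, 1.2])
s, h = 0.8, 1e-5
fd = (joint_prob(0, 0, b_star, w1, a_star, w2, np.sqrt(s**2 + h)) -
      joint_prob(0, 0, b_star, w1, a_star, w2, np.sqrt(s**2 - h))) / (2 * h)
print(d_joint_dsigma2(0, 0, b_star, w1, a_star, w2, s), fd)   # should nearly agree
```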
By using (5.131)–(5.134), one may now construct the MGQL estimating equation for $\sigma_\xi^2$ as
\[
\sum_{i=1}^{K} \frac{\partial \pi_{(i)yz}'(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)}{\partial \sigma_\xi^2}\,
\Sigma_{(i)22}^{-1}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)\,\left[g_i - \pi_{(i)yz}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)\right] = 0,
\tag{5.136}
\]

which, for known $\psi^* = (\theta_1^{*'}, \theta_2^{*'})'$, may be solved iteratively by using the formula
\[
\hat{\sigma}_\xi^2(m+1) = \hat{\sigma}_\xi^2(m)
+ \Bigg[\sum_{i=1}^{K} \frac{\partial \pi_{(i)yz}'(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)}{\partial \sigma_\xi^2}\,
\Sigma_{(i)22}^{-1}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)\,
\frac{\partial \pi_{(i)yz}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)}{\partial \sigma_\xi^2}\Bigg]^{-1}
\]
\[
\times \Bigg[\sum_{i=1}^{K} \frac{\partial \pi_{(i)yz}'(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)}{\partial \sigma_\xi^2}\,
\Sigma_{(i)22}^{-1}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)\,\left[g_i - \pi_{(i)yz}(\psi^*, \sigma_\xi, w_{i1}^*, w_{i2}^*)\right]\Bigg]\Bigg|_{\sigma_\xi^2 = \hat{\sigma}_\xi^2(m)}.
\tag{5.137}
\]
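Mirroring the sketch given after (5.130), one pass of the scalar update (5.137) can be written as follows; `dpi_dsig2_i`, `sigma22_i`, and `pi_yz_i` are hypothetical stand-ins for (5.134), (5.133), and (5.132), not functions defined in the source.

```python
import numpy as np

def sigma2_step(sig2, psi, g, dpi_dsig2_i, sigma22_i, pi_yz_i):
    """One iteration of (5.137) for the random effect variance sigma_xi^2.

    g            : list of product-response vectors g_i of (5.131)
    dpi_dsig2_i  : function (psi, sig2, i) -> gradient vector of (5.134)
    sigma22_i    : function (psi, sig2, i) -> covariance matrix of (5.133)
    pi_yz_i      : function (psi, sig2, i) -> mean vector of (5.132)
    """
    num, den = 0.0, 0.0
    for i, g_i in enumerate(g):
        d = dpi_dsig2_i(psi, sig2, i)
        S_inv = np.linalg.inv(sigma22_i(psi, sig2, i))
        den += d @ S_inv @ d          # scalar, since sigma_xi^2 is one-dimensional
        num += d @ S_inv @ (g_i - pi_yz_i(psi, sig2, i))
    return sig2 + num / den
```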


5.5 Bivariate Normal Linear Conditional Multinomial Probability Model
In Sect. 5.4, the correlation between two multinomial variables is modeled in a natural way through a random effect shared by these variables. Recently, in an unpublished Ph.D. thesis, Sun (2013) (see also Sun and Sutradhar 2014) has used a bivariate normal type linear conditional multinomial probability (BNLCMP) model to explain the correlations between two multinomial variables. This model is simpler than the random effects based model discussed in the last section. However, the ranges for the correlations under such a linear conditional probability model may be narrow because of the restriction that the conditional probability of one variable is linear in the other variable. We discuss this simpler model in the following section.

5.5.1 Bivariate Normal Type Model and its Properties
In this approach, unlike in the random effects approach (5.105)–(5.110), we assume that $Y_i$ and $Z_i$ marginally follow multinomial distributions with marginal probabilities
\[
P\left[y_i = y_i^{(j)} \mid w_{i1}\right] = \pi_{(i)j\cdot}(w_{i1}) =
\begin{cases}
\dfrac{\exp(\beta_{j0} + \beta_j' w_{i1})}{1 + \sum_{u=1}^{J-1} \exp(\beta_{u0} + \beta_u' w_{i1})} & \text{for } j = 1, \ldots, J-1, \\[2ex]
\dfrac{1}{1 + \sum_{u=1}^{J-1} \exp(\beta_{u0} + \beta_u' w_{i1})} & \text{for } j = J,
\end{cases}
\tag{5.138}
\]
and
\[
P\left[z_i = z_i^{(r)} \mid w_{i2}\right] = \pi_{(i)\cdot r}(w_{i2}) =
\begin{cases}
\dfrac{\exp(\alpha_{r0} + \alpha_r' w_{i2})}{1 + \sum_{h=1}^{R-1} \exp(\alpha_{h0} + \alpha_h' w_{i2})} & \text{for } r = 1, \ldots, R-1, \\[2ex]
\dfrac{1}{1 + \sum_{h=1}^{R-1} \exp(\alpha_{h0} + \alpha_h' w_{i2})} & \text{for } r = R,
\end{cases}
\tag{5.139}
\]

where
\[
\beta_j = (\beta_{j1}, \ldots, \beta_{jp})', \quad \text{and} \quad
\alpha_r = (\alpha_{r1}, \ldots, \alpha_{rm}, \ldots, \alpha_{rq})',
\]
for $j = 1, \ldots, J-1$, and $r = 1, \ldots, R-1$. Equivalently, writing
\[
\beta_j^* = [\beta_{j0}, \beta_j']', \quad w_{i1}^* = [1, w_{i1}']', \qquad
\alpha_r^* = [\alpha_{r0}, \alpha_r']', \quad w_{i2}^* = [1, w_{i2}']',
\tag{5.140}
\]


these marginal probabilities in (5.138)–(5.139) may be re-expressed as

\[
P\left[y_i = y_i^{(j)} \mid w_{i1}^*\right] = \pi_{(i)j\cdot}(w_{i1}^*) =
\begin{cases}
\dfrac{\exp(\beta_j^{*'} w_{i1}^*)}{1 + \sum_{u=1}^{J-1} \exp(\beta_u^{*'} w_{i1}^*)} & \text{for } j = 1, \ldots, J-1, \\[2ex]
\dfrac{1}{1 + \sum_{u=1}^{J-1} \exp(\beta_u^{*'} w_{i1}^*)} & \text{for } j = J,
\end{cases}
\tag{5.141}
\]
and
\[
P\left[z_i = z_i^{(r)} \mid w_{i2}^*\right] = \pi_{(i)\cdot r}(w_{i2}^*) =
\begin{cases}
\dfrac{\exp(\alpha_r^{*'} w_{i2}^*)}{1 + \sum_{h=1}^{R-1} \exp(\alpha_h^{*'} w_{i2}^*)} & \text{for } r = 1, \ldots, R-1, \\[2ex]
\dfrac{1}{1 + \sum_{h=1}^{R-1} \exp(\alpha_h^{*'} w_{i2}^*)} & \text{for } r = R,
\end{cases}
\tag{5.142}
\]

respectively. Note that these marginal probabilities in (5.141)–(5.142) are quite different from, and simpler than, the random effects based marginal (unconditional) probabilities given in (5.111)–(5.112).
Now, to develop a correlation model for $\{y_i, z_i\}$, Sun and Sutradhar (2014) have used a conditional regression approach. More specifically, to write a conditional regression function of $y_i$ given $z_i$, these authors have used the aforementioned simpler marginal probabilities but assumed a bivariate normal type correlation structure between $y_i$ and $z_i$. Note that if $y_i$ and $z_i$ were bivariate normal responses, one would then relate them using the conditional mean of $y_i$ given $z_i$, that is, through
\[
E[Y_i \mid Z_i = z_i] = \mu_y + \Sigma_{y|z}^{-1}(z_i - \mu_z),
\tag{5.143}
\]

where $\mu_y$ and $\mu_z$ are the marginal mean vectors corresponding to $y_i$ and $z_i$, and $\Sigma_{y|z}$ is the conditional covariance matrix of $y_i$ given $z_i$. However, as $y_i$ and $z_i$ in the present setup are two multinomial responses, we follow the linear form (4.20) [see Sect. 4.3] used in Chap. 4, for example, to model the conditional probabilities, i.e., $\Pr(y_i \mid z_i)$, and write
\[
\lambda_{iy|z}^{(j)}(r; w_{i1}^*, w_{i2}^*) = \Pr\left(y_i = y_i^{(j)} \mid z_i = z_i^{(r)}\right) =
\begin{cases}
\pi_{(i)j\cdot}(w_{i1}^*) + \sum_{h=1}^{R-1} \rho_{jh}\left(z_{ih}^{(r)} - \pi_{(i)\cdot h}(w_{i2}^*)\right), & j = 1, \ldots, J-1;\ r = 1, \ldots, R, \\[1.5ex]
1 - \sum_{j=1}^{J-1} \lambda_{iy|z}^{(j)}(r), & j = J;\ r = 1, \ldots, R,
\end{cases}
\tag{5.144}
\]
where $z_{ih}^{(r)}$ is the $h$th $(h = 1, \ldots, R-1)$ component of $z_i^{(r)}$, with $z_{ih}^{(r)} = 1$ if $h = r$, and 0 otherwise; $\rho_{jh}$ is referred to as the dependence parameter relating $y_i^{(j)}$ with $z_i^{(h)}$. Note that this conditional model (5.144) may not be symmetric, especially when $J \neq R$. However, this does not cause any problems in inference, as we will use all unconditional product responses (see also Sect. 5.5.2.2) to estimate the dependence parameters.
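As a small numerical illustration of (5.144) (the marginal probabilities and the dependence parameters $\rho_{jh}$ used below are hypothetical), the sketch computes the conditional probabilities $\lambda_{iy|z}^{(j)}(r)$ for every category $r$ of $z_i$ and checks that each conditional distribution sums to one. Keeping every $\lambda$ inside $[0, 1]$ is exactly the restriction that narrows the admissible range of the dependence parameters mentioned above.

```python
import numpy as np

def bnlcmp_conditional(pi_y, pi_z, rho):
    """lambda^(j)_{iy|z}(r) of (5.144).

    pi_y : (J-1,) marginal probabilities pi_(i)j.(w*_i1) from (5.141)
    pi_z : (R-1,) marginal probabilities pi_(i).r(w*_i2) from (5.142)
    rho  : (J-1, R-1) dependence parameters rho_jh
    Returns a (J, R) matrix whose (j, r) entry is Pr(y_i = y^(j) | z_i = z^(r)).
    """
    J, R = len(pi_y) + 1, len(pi_z) + 1
    lam = np.zeros((J, R))
    for r in range(R):
        # indicator vector z^(r): z_ih = 1 if h = r (for r < R), 0 otherwise
        z_ind = np.zeros(R - 1)
        if r < R - 1:
            z_ind[r] = 1.0
        lam[:J - 1, r] = pi_y + rho @ (z_ind - pi_z)       # j = 1,...,J-1
        lam[J - 1, r] = 1.0 - lam[:J - 1, r].sum()         # reference category j = J
    return lam

pi_y = np.array([0.30, 0.45])                   # hypothetical, J = 3
pi_z = np.array([0.25, 0.40])                   # hypothetical, R = 3
rho = np.array([[0.10, -0.05],
                [0.02, 0.08]])                  # hypothetical dependence parameters
lam = bnlcmp_conditional(pi_y, pi_z, rho)
print(lam.sum(axis=0))                          # each column sums to 1 by construction
```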

5.5.1.1 Means, Variance, and Covariances of Marginal Variables

Using the marginal probabilities from (5.141) and (5.142), the mean vectors and covariance matrices for each of $y_i$ and $z_i$ are written as
\[
\pi_{(i)y}(\theta_1^*, w_{i1}^*) = E[Y_i] = [\pi_{(i)1\cdot}(w_{i1}^*), \ldots, \pi_{(i)j\cdot}(w_{i1}^*), \ldots, \pi_{(i)(J-1)\cdot}(w_{i1}^*)]',
\tag{5.145}
\]
\[
\pi_{(i)z}(\theta_2^*, w_{i2}^*) = E[Z_i] = [\pi_{(i)\cdot 1}(w_{i2}^*), \ldots, \pi_{(i)\cdot r}(w_{i2}^*), \ldots, \pi_{(i)\cdot (R-1)}(w_{i2}^*)]',
\tag{5.146}
\]
\[
\operatorname{var}(Y_i) = \operatorname{diag}[\pi_{(i)1\cdot}(w_{i1}^*), \ldots, \pi_{(i)j\cdot}(w_{i1}^*), \ldots, \pi_{(i)(J-1)\cdot}(w_{i1}^*)] - \pi_{(i)y}(\theta_1^*, w_{i1}^*)\,\pi_{(i)y}'(\theta_1^*, w_{i1}^*) = \Sigma_{(i)yy}(\theta_1^*, w_{i1}^*);
\tag{5.147}
\]
\[
\operatorname{var}(Z_i) = \operatorname{diag}[\pi_{(i)\cdot 1}(w_{i2}^*), \ldots, \pi_{(i)\cdot r}(w_{i2}^*), \ldots, \pi_{(i)\cdot (R-1)}(w_{i2}^*)] - \pi_{(i)z}(\theta_2^*, w_{i2}^*)\,\pi_{(i)z}'(\theta_2^*, w_{i2}^*) = \Sigma_{(i)zz}(\theta_2^*, w_{i2}^*),
\tag{5.148}
\]
where
\[
\theta_1^* = [\beta_1^{*'}, \ldots, \beta_j^{*'}, \ldots, \beta_{J-1}^{*'}]', \qquad
\theta_2^* = [\alpha_1^{*'}, \ldots, \alpha_r^{*'}, \ldots, \alpha_{R-1}^{*'}]'.
\]

Note that these formulas appear to be the same as in (5.111)–(5.118), except that the marginal probabilities, that is, $\pi_{(i)j\cdot}(w_{i1}^*)$ and $\pi_{(i)\cdot r}(w_{i2}^*)$, are now given by (5.141) and (5.142), and they are free of $\sigma_\xi$.
5.5.1.2 Covariances and Correlations Between $y_i$ and $z_i$

Using the proposed conditional probability from (5.144), one may write the joint probability $\pi_{(i)jr}(w_{i1}^*, w_{i2}^*)$ as
\[
\begin{aligned}
\pi_{(i)jr}(w_{i1}^*, w_{i2}^*) &= \Pr\left[y_i = y_i^{(j)}, z_i = z_i^{(r)}\right]
= \Pr\left(y_i = y_i^{(j)} \mid z_i = z_i^{(r)}\right)\Pr\left(z_i = z_i^{(r)}\right)
= \lambda_{iy|z}^{(j)}(r; w_{i1}^*, w_{i2}^*)\, \pi_{(i)\cdot r}(w_{i2}^*) \\
&= \left[\pi_{(i)j\cdot}(w_{i1}^*) + \sum_{h=1}^{R-1} \rho_{jh}\left(z_{ih}^{(r)} - \pi_{(i)\cdot h}(w_{i2}^*)\right)\right] \pi_{(i)\cdot r}(w_{i2}^*),
\end{aligned}
\tag{5.149}
\]

yielding the covariance between $Y_{ij}$ and $Z_{ir}$ as
\[
\operatorname{cov}(Y_{ij}, Z_{ir}) = E(Y_{ij} Z_{ir}) - E(Y_{ij})E(Z_{ir})
= \pi_{(i)jr}(w_{i1}^*, w_{i2}^*) - \pi_{(i)j\cdot}(w_{i1}^*)\,\pi_{(i)\cdot r}(w_{i2}^*)
= \pi_{(i)\cdot r}(w_{i2}^*)\left[\sum_{h=1}^{R-1} \rho_{jh}\left(z_{ih}^{(r)} - \pi_{(i)\cdot h}(w_{i2}^*)\right)\right].
\tag{5.150}
\]


Following (5.150), after some algebra, we can write the $R-1$ covariance quantities in
\[
\operatorname{cov}(Z_i, Y_{ij}) = [\operatorname{cov}(Y_{ij}, Z_{i1}), \ldots, \operatorname{cov}(Y_{ij}, Z_{ir}), \ldots, \operatorname{cov}(Y_{ij}, Z_{i,R-1})]'
\]
in a matrix form as
\[
\operatorname{cov}(Z_i, Y_{ij}) =
\begin{pmatrix}
\pi_{(i)\cdot 1}(w_{i2}^*)\left[\sum_{h=1}^{R-1} \rho_{jh}\left(z_{ih}^{(1)} - \pi_{(i)\cdot h}(w_{i2}^*)\right)\right] \\
\vdots \\
\pi_{(i)\cdot r}(w_{i2}^*)\left[\sum_{h=1}^{R-1} \rho_{jh}\left(z_{ih}^{(r)} - \pi_{(i)\cdot h}(w_{i2}^*)\right)\right] \\
\vdots \\
\pi_{(i)\cdot (R-1)}(w_{i2}^*)\left[\sum_{h=1}^{R-1} \rho_{jh}\left(z_{ih}^{(R-1)} - \pi_{(i)\cdot h}(w_{i2}^*)\right)\right]
\end{pmatrix}
=
\begin{pmatrix}
\pi_{(i)\cdot 1}(w_{i2}^*)\left[(1, 0_{R-2}') - \pi_{(i)z}'(\theta_2^*, w_{i2}^*)\right]\rho_j \\
\vdots \\
\pi_{(i)\cdot r}(w_{i2}^*)\left[(0_{r-1}', 1, 0_{R-1-r}') - \pi_{(i)z}'(\theta_2^*, w_{i2}^*)\right]\rho_j \\
\vdots \\
\pi_{(i)\cdot (R-1)}(w_{i2}^*)\left[(0_{R-2}', 1) - \pi_{(i)z}'(\theta_2^*, w_{i2}^*)\right]\rho_j
\end{pmatrix}
= \operatorname{var}(Z_i)\,\rho_j,
\tag{5.151}
\]

where
\[
\rho_j = [\rho_{j1}, \ldots, \rho_{jh}, \ldots, \rho_{j,R-1}]', \quad \text{and} \quad
\pi_{(i)z}(\theta_2^*, w_{i2}^*) = [\pi_{(i)\cdot 1}(w_{i2}^*), \ldots, \pi_{(i)\cdot r}(w_{i2}^*), \ldots, \pi_{(i)\cdot (R-1)}(w_{i2}^*)]',
\]
by (5.146), and the $(R-1) \times (R-1)$ covariance matrix $\operatorname{var}(Z_i)$ is given in (5.148).
Consequently, for every $j = 1, \ldots, J-1$, one obtains
\[
\operatorname{cov}(Z_i, Y_i) = [\operatorname{var}(Z_i)\rho_1, \ldots, \operatorname{var}(Z_i)\rho_j, \ldots, \operatorname{var}(Z_i)\rho_{J-1}]
= \operatorname{var}(Z_i)\,\rho_M
= \Sigma_{(i)yz}(\theta_2^*, w_{i2}^*, \rho_M),
\tag{5.152}
\]
where $\rho_M = [\rho_1, \ldots, \rho_j, \ldots, \rho_{J-1}]$ is the $(R-1) \times (J-1)$ matrix of dependence parameters. Next, by (5.152), one may compute the correlation matrix between the pair-wise components of $y_i$ and $z_i$, as