CHAPTER 4. DATA ANALYSIS AND RESULTS


Table 4.1
Respondent Demographic

Description                    Frequency    Percentage (%)
Gender
  Female                            241              48
  Male                              264              52
  Total                             505             100
Age
  Less than 20                      101              20
  20-29                             176              35
  30-40                             151              30
  Over 40                            77              15
  Total                             505             100
Education
  High School                       101              20
  College/Bachelor                  303              60
  Master                            101              20
  Total                             505             100
Occupation
  Student                           101              20
  Worker                             33               7
  Officer                           247              49
  Manager/Owner                      72              14
  Other                              52              10
  Total                             505             100
Income
  Less than 5 million                59              12
  5 - 10 million                    227              45
  10 - 15 million                    75              15
  Over 15 million                   144              29
  Total                             505             100

4.2 Confirmatory Factor Analysis

The confirmatory factor analysis (CFA) was conducted using IBM AMOS 22
software. Firstly, the overall model fit indices were assessed. There are many
indices in the SEM literature; for instance, Kline (2010) suggested: (1) Chi-square
(χ²), df and p-value; (2) an index that reflects the overall proportion of explained
variance, such as the Comparative Fit Index (CFI); (3) an index that adjusts the
proportion of explained variance for model complexity, such as the Tucker & Lewis
Index (TLI); and (4) the Root Mean Square Error of Approximation (RMSEA) (Browne
& Cudeck, 1993). Based on previous research by Nguyen Dinh Tho and
Nguyen Thi Mai Trang (2011b), the suggested model fit criteria are: (1) χ²/df < 2;
(2) Tucker & Lewis Index (TLI) values from .90 to 1; (3) Comparative Fit Index
(CFI) values from .90 to 1, which are usually associated with a model that fits well;
(4) an RMSEA value of less than .08; and (5) a non-significant χ² test
(p-value > 5%), from which it can be concluded that the model fits the data.
The assessment indices are: (1) the composite reliability (ρc); (2) the average
variance extracted (ρvc); (3) unidimensionality; (4) convergent validity;
(5) discriminant validity; and (6) nomological validity.
According to the theory presented in Chapter 3, the Maximum Likelihood (ML)
estimation method was applied to estimate the model's parameters because all
observed variables satisfied the normality assessment: all skewness and kurtosis
values fell within [-1, +1] (Muthen & Kaplan, 1985; Nguyen Dinh Tho & Nguyen Thi Mai
Trang, 2011b) (see Table E1, Appendix E).
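The skewness and kurtosis screening reported above can be reproduced outside AMOS. The following is a minimal sketch in Python with pandas, assuming the item responses are available in a CSV file; the file name and the item column names are illustrative placeholders, not taken from the thesis.

```python
import pandas as pd

# Illustrative only: "survey_items.csv" is a hypothetical file holding the
# 25 observed item scores (4 PM, 4 PU, 4 PEU, 4 PC, 3 CA, 3 AA, 3 BI items),
# with column names assumed to follow the thesis naming pattern (PM1, PU1, ...).
items = pd.read_csv("survey_items.csv")

# Univariate skewness and kurtosis for each observed variable.
# The ML-estimation criterion used in the thesis is that both statistics
# fall within [-1, +1] for every item (Muthen & Kaplan, 1985).
screening = pd.DataFrame({
    "skewness": items.skew(),
    "kurtosis": items.kurtosis(),  # excess kurtosis, as reported by pandas
})
screening["within_[-1,+1]"] = screening.abs().le(1).all(axis=1)
print(screening)
```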
All concepts in this research are unidimensional scales; hence, the author
decided to test the measurement concepts in a saturated model. A saturated model can
be defined as one in which all parameters relating the constructs to one another
are estimated (Anderson & Gerbing, 1988).
The saturated model of this study had 254 degrees of freedom. The χ² value of this
model was 328.726, the χ² normalized by degrees of freedom (χ²/df) was 1.294 with
p-value = .001, TLI = .986, CFI = .988, and the RMSEA value was .024. In summary, this
saturated model fitted the market data (see Figure 4.1).
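As a quick arithmetic check, the normed chi-square reported above follows directly from the χ² value and the degrees of freedom:

\[
\frac{\chi^2}{df} = \frac{328.726}{254} \approx 1.294
\]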


Figure 4.1. Saturated model of the theoretical model
Discriminant validity analysis refers to testing the extent of distinction between
two constructs (Hair et al., 2010), while convergent validity is tested by measuring
the degree of internal consistency within one construct (Campbell & Fiske, 1959;
Churchill, 1979).
Table 4.2 presents the discriminant validity results for the constructs. The
correlations between constructs, considered together with their standard errors, were
all significantly different from unity; thus, all constructs passed the discriminant
validity requirements (Steenkamp & van Trijp, 1991).
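For illustration, and assuming the critical ratio in Table 4.2 is computed as the distance of the correlation from unity divided by its standard error, the first pair (PM ↔ PU) gives:

\[
cr_{(1-r)} = \frac{1 - r}{se} = \frac{1 - 0.349}{0.042} \approx 15.5
\]

which matches the reported value of 15.572 up to rounding of r and se, and far exceeds the 1.96 threshold at the 5% level.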


Table 4.2
Correlations between Constructs

Relation        r       se      cr       p(r)    1-r     cr(1-r)   p(1-r)
PM  ↔ PU      .349    .042     8.365    .000    .651    15.572     .000
PM  ↔ PC      .398    .041     9.717    .000    .602    14.725     .000
PM  ↔ PEU     .372    .041     8.989    .000    .628    15.173     .000
PM  ↔ CA      .176    .044     4.006    .000    .824    18.777     .000
PM  ↔ AA      .125    .044     2.825    .005    .875    19.779     .000
PM  ↔ BI      .407    .041     9.981    .000    .593    14.567     .000
PU  ↔ PC      .453    .040    11.408    .000    .547    13.754     .000
PU  ↔ PEU     .515    .038    13.458    .000    .485    12.698     .000
PU  ↔ CA      .371    .041     8.974    .000    .629    15.183     .000
PU  ↔ AA      .278    .043     6.482    .000    .722    16.863     .000
PU  ↔ BI      .446    .040    11.178    .000    .554    13.881     .000
PC  ↔ PEU     .560    .037    15.160    .000    .440    11.911     .000
PC  ↔ CA      .268    .043     6.235    .000    .732    17.043     .000
PC  ↔ AA      .205    .044     4.706    .000    .795    18.210     .000
PC  ↔ BI      .340    .042     8.112    .000    .660    15.738     .000
PEU ↔ CA      .349    .042     8.341    .000    .651    15.587     .000
PEU ↔ AA      .315    .042     7.436    .000    .685    16.192     .000
PEU ↔ BI      .359    .042     8.631    .000    .641    15.400     .000
CA  ↔ AA      .427    .040    10.603    .000    .573    14.205     .000
CA  ↔ BI      .473    .039    12.047    .000    .527    13.412     .000
AA  ↔ BI      .418    .041    10.314    .000    .582    14.371     .000

Note. r: correlation coefficient; se: standard error; cr: critical ratio; p(r): p-value of r;
cr(1-r): critical ratio of (1-r); p(1-r): p-value of (1-r).

Besides, the factor loadings of all items were higher than .50 (ranging from .70 to .85),
all means of the estimated weights (λ̄) were higher than .50 (from .75 to .81), and all
estimates (λ) were significant at p-value < .001 (Table E2, Appendix E). Moreover, the
requirements for reliability and average variance extracted were satisfied: Cronbach's
alpha and the composite reliability (ρc) were all higher than .50 (from .80 to .89), and
the lowest average variance extracted (ρvc) was 56%. These findings indicated that all
concepts satisfied the conditions of unidimensionality, convergent validity and
discriminant validity (Nguyen Dinh Tho & Nguyen Thi Mai Trang, 2011b) (Table 4.3).
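The composite reliability and average variance extracted referenced here are not defined explicitly in this section; assuming the usual Fornell and Larcker formulation based on the standardized loadings λᵢ of the p items of a construct, they are:

\[
\rho_c = \frac{\left(\sum_{i=1}^{p}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{p}\lambda_i\right)^{2} + \sum_{i=1}^{p}\left(1-\lambda_i^{2}\right)},
\qquad
\rho_{vc} = \frac{\sum_{i=1}^{p}\lambda_i^{2}}{\sum_{i=1}^{p}\lambda_i^{2} + \sum_{i=1}^{p}\left(1-\lambda_i^{2}\right)}
\]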


Table 4.3
Measurement Validation

                                    Reliability
Concept   Component   Number                          ρvc      λ̄      Value (Discriminant,
                      of Items    Alpha     ρc        (%)              Convergent)
PM        1           4           .84       .84       56      .75     Satisfactory
PC        1           4           .85       .85       59      .77     Satisfactory
AA        1           3           .85       .85       66      .81     Satisfactory
CA        1           3           .85       .84       64      .80     Satisfactory
PEU       1           4           .83       .83       56      .75     Satisfactory
BI        1           3           .85       .85       65      .81     Satisfactory
PU        1           4           .89       .89       66      .81     Satisfactory

Note. ρvc: average variance extracted; ρc: composite reliability; λ̄: mean of estimate weights.

4.3 SEM Approach for Theoretical Model

The theoretical model was then tested using the SEM approach with AMOS 22.0
software. Model estimation resulted in an acceptable fit between the theoretical
model and the data set: χ²/df = 1.418, p = .000, TLI = .980, CFI = .983, RMSEA = .029
(Figure 4.2). According to the SEM results (Table 4.4), only the direct relation from
perceived usefulness to affective attitude was not significant (p-value = .674); all the
remaining hypotheses were supported.
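The thesis estimates this model in AMOS; as an open-source cross-check, the same structural specification could be written in lavaan-style syntax and fitted with the Python package semopy. The sketch below is illustrative only: it assumes the item-level data are loaded into a pandas DataFrame, and the item names follow the naming pattern used elsewhere in the thesis (e.g., PU1, PM1, PC4) but are otherwise assumptions.

```python
import pandas as pd
from semopy import Model, calc_stats

# Structural paths as listed in Table 4.4; measurement items (4 PM, 4 PU, 4 PEU,
# 4 PC, 3 CA, 3 AA, 3 BI) use assumed names following the thesis pattern.
MODEL_DESC = """
# measurement model
PM  =~ PM1 + PM2 + PM3 + PM4
PU  =~ PU1 + PU2 + PU3 + PU4
PEU =~ PEU1 + PEU2 + PEU3 + PEU4
PC  =~ PC1 + PC2 + PC3 + PC4
CA  =~ CA1 + CA2 + CA3
AA  =~ AA1 + AA2 + AA3
BI  =~ BI1 + BI2 + BI3
# structural model
PC ~ PEU
PU ~ PEU + PC
CA ~ PU + PEU
AA ~ PEU + CA + PU
BI ~ CA + AA + PM
"""

data = pd.read_csv("survey_items.csv")   # hypothetical data file
model = Model(MODEL_DESC)
model.fit(data)                          # maximum likelihood estimation
print(model.inspect())                   # path estimates, standard errors, p-values
print(calc_stats(model).T)               # chi2, df, CFI, TLI, RMSEA, etc.
```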
Table 4.4
Construct Relations

Relations         Estimate    p-value
PEU → PC            .684        ***
PEU → PU            .486        ***
PC  → PU            .189        .005
PU  → CA            .299        ***
PEU → CA            .235        ***
PEU → AA            .177        .007
CA  → AA            .416        ***
PU  → AA            .027        .674
CA  → BI            .360        ***
AA  → BI            .255        ***
PM  → BI            .369        ***

Note. ***p < .001


There was no residual variance less than zero, so the Heywood case
(Heywood, 1931) did not appear during ML estimation. However, there were eight
standardized residuals with absolute values greater than 2.58 (Table F10,
Appendix F). These are discussed in the next part of this study, "Optimizing the
theoretical model".

Figure 4.2. Standardized SEM results for the theoretical model


4.4 Optimizing the Theoretical Model

Even though the theoretical model indicated a good fit, several indices needed to
be improved in order to determine a better model. AMOS supports two types of
statistics that can be helpful in detecting model misspecification: the
standardized residuals and the modification indices.
Firstly, according to previous research by Nguyen Dinh Tho and Nguyen Thi Mai
Trang (2011b), standardized residuals with absolute values greater than 2.58
are considered large. Besides, Jöreskog and Sörbom (1984) stated that, for a correct
model, the residual covariance between two variables should be less than two in
absolute value.
Next, the second type of statistics related to misspecification reflects the extent to
which the hypothesized model is appropriately specified. The evidence of misfit in
this regard is captured by the modification indices (MIs), each of which can be
conceptualized as a χ² statistic with one degree of freedom (Jöreskog & Sörbom,
1993).
Hence, the author suggested optimizing the conceptual model based on the
modification indices (MI) and the standardized residuals, because the study had eight
standardized residual values above |2.58| (Table F10, Appendix F) and several
residual covariance pairs with high MI values (for example, the pair "ε05-ε23")
(Table F1, Appendix F).
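Because each MI behaves as a χ² statistic with one degree of freedom, its reference value at the 5% level is the familiar cut-off below; the MI of 20.640 reported for the pair ε05-ε23 (Table F1, Appendix F) is well above it, which is why that pair drew attention in the optimization step.

\[
\chi^2_{0.05}(1) = 3.841 \ll \mathrm{MI}(\varepsilon_{05}, \varepsilon_{23}) = 20.640
\]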
The optimization process was conducted systematically as follows:
• In the residual matrices, the PU1 column contained five high standardized
residual values (3.78, 3.99, 2.13, 2.16, 2.09), so it was flagged.
• PU1's residual "ε05" also caused the pair "ε05-ε23" to show a dramatically high MI
value (20.640, see Table F1, Appendix F).
• It was therefore decided to delete PU1 from the model.
• The PM1 and PC4 variables were processed next in the same way.


After the optimization was completed, the theoretical model is represented schematically in
Figure 4.3.

Figure 4.3. The optimized theoretical model
Overall, the optimized model achieved better fit indices: χ²/df = 1.179, CFI increased
from .983 to .993, TLI increased from .980 to .992, RMSEA improved from .029 to
.019, and the p-value increased from .000 to .043.


This model passed all requirements: no Heywood case, standardized
residuals less than |2.58|, and low MIs (see Table F11, Appendix F for details).
Table 4.5 shows the hypothesis tests for the optimized model.
Table 4.5
Relations of Constructs (Standardized)

Relations          ML       p-value
PEU → PC          .694        ***
PEU → PU          .496        ***
PC  → PU          .153        .032
PU  → CA          .271        ***
PEU → CA          .255        ***
PEU → AA          .177        .007
CA  → AA          .413        ***
PU  → AA          .032        .618
CA  → BI          .324        ***
AA  → BI          .252        ***
PM  → BI          .366        ***

Note. ML: estimate value; ***p < .001
4.5 Competitive Model Test

Based on the important role of a competitive model when testing a conceptual model
(reviewed in Chapter 2), this study suggested testing one competitive model with the
following hypothesis:
Hc: Perceived convenience has a positive effect on behavioral intention toward
usage.


Figure 4.4. Standardized SEM results of the competitive model
The estimation of the competitive model yielded a good fit to the data: χ²/df = 1.163,
p = .059, TLI = .993, CFI = .994, RMSEA = .018, indicating that the competitive model
achieved satisfactory goodness-of-fit statistics (see Figure 4.4).
Next, the χ² difference test was used to determine whether there was a significant
difference between the two models (the optimized theoretical model and the
competitive model). The χ² difference test is presented in Table 4.6 below.


Table 4.6
Competing Measurement Modeling

Model                        χ²         df
Theoretical Model (T)      231.175     196
Competitive Model (C)      226.841     195

Model comparison           Δχ²        Δdf     P-value
T – C                      4.334       1      .037 < .05

Note. Δχ² = χ²(T) - χ²(C); Δdf = df(T) - df(C).
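The comparison in Table 4.6 is simply the difference of the two χ² values and their degrees of freedom; the p-value is then read from the χ² distribution with Δdf = 1:

\[
\Delta\chi^2 = 231.175 - 226.841 = 4.334,\qquad
\Delta df = 196 - 195 = 1,\qquad
p = P\left(\chi^2_{1} > 4.334\right) \approx .037
\]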

According to the resulting value of the χ² difference test above, the p-value = .037 < .05,
indicating that there is a significant difference between the optimized theoretical model
and the competitive model. Therefore, this study needed to determine the more
appropriate model for the research (see Table 4.7 for details).
Table 4.7
Summary of Models

Model                  χ²        df    χ²/df    CFI     TLI    RMSEA     P
Theoretical Model    231.175    196    1.179    .993    .992    .019    .043
Competitive Model    226.841    195    1.163    .994    .993    .018    .059

The summary table shows that the competitive model used up one degree of freedom
(one additional estimated path) and improved the model fit indices; in particular, the
p-value changed from significant (.043) to non-significant (.059), which reflected that
the competitive model fitted the data better. Besides, the competitive hypothesis
(Hc: Perceived convenience has a positive effect on behavioral intention toward usage)
was significant, and all hypotheses that were supported in the theoretical model
continued to be supported in the competitive model (Table 4.8).
Therefore, the competitive model was selected as the final research model.