5.4 Nonlinear Relationships and Transformations




The scatterplot of these data is reproduced here as Figure 5.26. Because this plot shows a marked curved pattern, it is clear that no straight line can do a reasonable job of describing the relationship between x and y. However, the relationship can be described by a curve, and in this case the curved pattern in the scatterplot looks like a parabola (the graph of a quadratic function). This suggests trying to find a quadratic function of the form

ŷ = a + b₁x + b₂x²

that would reasonably describe the relationship. That is, the values of the coefficients a, b₁, and b₂ in this function must be selected to obtain a good fit to the data.



FIGURE 5.26 Scatterplot for the marathon data (average finish time versus age).



What are the best choices for the values of a, b1, and b2? In fitting a line to data, we

used the principle of least squares to guide our choice of slope and intercept. Least

squares can be used to fit a quadratic function as well. The deviations, y − ŷ, are still

represented by vertical distances in the scatterplot, but now they are vertical distances

from the points to a parabola (the graph of a quadratic function) rather than to a line,

as shown in Figure 5.27. We then choose values for the coefficients in the quadratic

function so that the sum of squared deviations is as small as possible.

FIGURE 5.27 Deviation for a quadratic function.



For a quadratic regression, the least-squares estimates of a, b₁, and b₂ are those values that minimize the sum of squared deviations Σ(y − ŷ)², where ŷ = a + b₁x + b₂x².

For quadratic regression, a measure that is useful for assessing fit is

R² = 1 − SSResid/SSTo









where SSResid = Σ(y − ŷ)². The measure R² is defined in a way similar to r² for simple linear regression and is interpreted in a similar fashion. The notation r² is used only with linear regression to emphasize the relationship between r² and the correlation coefficient, r, in the linear case.

The general expressions for computing the least-squares estimates are somewhat complicated, so we rely on a statistical software package or graphing calculator to do the computations for us.
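For readers who want to reproduce this kind of fit without a package such as Minitab, the short sketch below is one way to do it in Python with NumPy. It is an illustration only: the small data set is hypothetical, not data from this chapter.

import numpy as np

# Hypothetical (x, y) data with a curved pattern -- not from the text
x = np.array([10, 20, 30, 40, 50, 60], dtype=float)
y = np.array([300, 240, 205, 200, 230, 290], dtype=float)

# Least-squares quadratic: polyfit returns coefficients b2, b1, a
b2, b1, a = np.polyfit(x, y, deg=2)
y_hat = a + b1 * x + b2 * x**2

# R-squared = 1 - SSResid/SSTo, exactly as defined above
ss_resid = np.sum((y - y_hat) ** 2)
ss_to = np.sum((y - np.mean(y)) ** 2)
r_squared = 1 - ss_resid / ss_to
print(f"a = {a:.3f}, b1 = {b1:.3f}, b2 = {b2:.5f}, R-sq = {r_squared:.3f}")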



EXAMPLE 5.14 Marathon Data Revisited: Fitting a Quadratic Model






For the marathon data, the scatterplot (see Figure 5.26) showed a marked curved pattern. If the least-squares line is fit to these data, it is no surprise that the line does not do a good job of describing the relationship (r² = .001, or .1%, and sₑ = 56.9439), and the residual plot shows a distinct curved pattern as well (Figure 5.28).



FIGURE 5.28 Plots for the marathon data of Example 5.14: (a) least-squares regression line; (b) residual plot.



Part of the Minitab output from fitting a quadratic regression function to these

data is as follows:

The regression equation is
y = 462 - 14.2 x + 0.179 x-squared

Predictor        Coef    SE Coef      T      P
Constant       462.00      43.99  10.50  0.002
x             -14.205      2.460  -5.78  0.010
x-squared     0.17888    0.03025   5.91  0.010

S = 18.4813   R-Sq = 92.1%   R-Sq(adj) = 86.9%

Analysis of Variance
Source            DF       SS      MS      F      P
Regression         2  11965.0  5982.5  17.52  0.022
Residual Error     3   1024.7   341.6
Total              5  12989.7



The least-squares coefficients are

a = 462.00    b₁ = −14.205    b₂ = 0.17888






and the least-squares quadratic is

ŷ = 462.00 − 14.205x + 0.17888x²

A plot showing the curve and the corresponding residual plot for the quadratic regression are given in Figure 5.29. Notice that there is no strong pattern in the residual plot for the quadratic case, as there was in the linear case. For the quadratic regression, R² = .921 (as opposed to .001 for the least-squares line), which means that 92.1% of the variability in average marathon finish time can be explained by an approximate quadratic relationship between average finish time and age.

FIGURE 5.29 Quadratic regression of Example 5.14: (a) scatterplot; (b) residual plot.
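Once the quadratic has been fit, a prediction is obtained by substituting an x value into the fitted equation, just as with a line. The tiny sketch below is my own illustration (the choice of age 40 is arbitrary); it simply evaluates the quadratic reported above.

# Coefficients from the Minitab output above
a, b1, b2 = 462.00, -14.205, 0.17888

def predicted_finish_time(age):
    # Evaluate y-hat = a + b1*x + b2*x^2 at a given age
    return a + b1 * age + b2 * age**2

print(predicted_finish_time(40))   # about 180, in the finish-time units of the data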



Linear and quadratic regression are special cases of polynomial regression. A polynomial regression curve is described by a function of the form

ŷ = a + b₁x + b₂x² + b₃x³ + … + bₖxᵏ

which is called a kth-degree polynomial. The case of k = 1 results in linear regression (ŷ = a + b₁x), and k = 2 yields a quadratic regression (ŷ = a + b₁x + b₂x²). A quadratic curve has only one bend (see Figure 5.30(a) and (b)). A less frequently encountered special case is k = 3, where ŷ = a + b₁x + b₂x² + b₃x³, which is called a cubic regression curve. Cubic curves have two bends, as shown in Figure 5.30(c).

FIGURE 5.30 Polynomial regression curves: (a) quadratic curve with b₂ < 0; (b) quadratic curve with b₂ > 0; (c) cubic curve.



EXAMPLE 5.15 Fish Food



Data set available online



Sea bream are one type of fish that are often raised in large fish farming enterprises. These fish are usually fed a diet consisting primarily of fish meal. The authors of the paper “Growth and Economic Profit of Gilthead Sea Bream (Sparus aurata, L.) Fed Sunflower Meal” (Aquaculture [2007]: 528–534) describe a study to investigate




whether it would be more profitable to substitute plant protein in the form of sunflower meal for some of the fish meal in the sea bream’s diet.

The accompanying data are consistent with summary quantities given in the paper for x = percent of sunflower meal in the diet and y = average weight (in grams) of fish after 248 days.

Sunflower Meal (%)    Average Fish Weight
        0                     432
        6                     450
       12                     455
       18                     445
       24                     427
       30                     422
       36                     421



Figure 5.31 shows a scatterplot of these data. The relationship between x and y does

not appear to be linear, so we might try using a quadratic regression to describe the

relationship between sunflower meal content and average fish weight.



FIGURE 5.31 Scatterplot of average fish weight versus sunflower meal content for the data of Example 5.15.



Minitab was used to fit a quadratic regression function and to compute the corresponding residuals. The least-squares quadratic regression is

ŷ = 439 + 1.22x − 0.053x²

A plot of the quadratic regression curve and the corresponding residual plot are shown in Figure 5.32.

Notice that the residual plot in Figure 5.32(b) shows a curved pattern (cubic), which is not something we like to see in a residual plot. This suggests that we may want to consider something other than a quadratic curve to describe the relationship between x and y. Looking again at the scatterplot of Figure 5.31, we see that a cubic function might be a better choice because there appear to be two “bends” in the curved relationship: one at around x = 12 and another at the far right-hand side of the scatterplot.

Using the given data, Minitab was used to fit a cubic regression, resulting in the curve shown in Figure 5.33(a). The cubic regression is

ŷ = 431.5 + 5.39x − 0.37x² + 0.006x³

The corresponding residual plot, shown in Figure 5.33(b), does not reveal any troublesome patterns that would suggest a choice other than the cubic regression.




FIGURE 5.32 Quadratic regression plots for the fish food data of Example 5.15: (a) least-squares quadratic regression, with fitted line average fish weight = 439.0 + 1.220 sunflower meal (%) − 0.05324 sunflower meal (%)**2 and S = 9.46736, R-Sq = 69.0%, R-Sq(adj) = 53.5%; (b) residual plot for quadratic regression.



FIGURE 5.33 Cubic regression plots for the fish food data of Example 5.15: (a) least-squares cubic regression, with fitted line average fish weight = 431.5 + 5.387 sunflower meal (%) − 0.3657 sunflower meal (%)**2 + 0.005787 sunflower meal (%)**3 and S = 2.64725, R-Sq = 98.2%, R-Sq(adj) = 96.4%; (b) residual plot for cubic regression.



Based on analysis of these data, we might recommend using sunflower meal for

about 12% of the diet. Sunflower meal is less costly than fish meal, but using more

than about 12% sunflower meal is associated with a decrease in the average fish

weight. It is not clear what happens to average fish weight when sunflower meal is

used for more than 36% of the diet, the largest x value in the data set.
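The quadratic and cubic fits in this example were produced with Minitab. As a rough check, the sketch below shows how the same least-squares polynomial fits could be computed with NumPy from the data in the table above; the printed coefficients and R² values should agree with the Minitab output, up to rounding.

import numpy as np

# Sunflower meal (%) and average fish weight (g) from Example 5.15
x = np.array([0, 6, 12, 18, 24, 30, 36], dtype=float)
y = np.array([432, 450, 455, 445, 427, 422, 421], dtype=float)

for degree in (2, 3):
    # polyfit returns coefficients from the highest power down to the constant
    coefs = np.polyfit(x, y, deg=degree)
    y_hat = np.polyval(coefs, x)
    r_sq = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
    print(f"degree {degree}: coefficients {np.round(coefs, 5)}, R-sq = {r_sq:.3f}")

# Expect R-sq near .690 for the quadratic and .982 for the cubic,
# matching the values reported in Figures 5.32 and 5.33.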



Transformations

An alternative to finding a curve to fit the data is to find a way to transform the x values and/or y values so that a scatterplot of the transformed data has a linear appearance. A transformation (sometimes called a reexpression) involves using a simple function of a variable in place of the variable itself. For example, instead of trying to describe the relationship between x and y, it might be easier to describe the relationship between √x and y or between x and log(y). And, if we can describe the relationship between, say, √x and y, we will still be able to predict the value of y for a given x value. Common transformations involve taking square roots, logarithms, or reciprocals.



EXAMPLE 5.16 River Water Velocity and Distance from Shore

As fans of white-water rafting know, a river flows more slowly close to its banks

(because of friction between the river bank and the water). To study the nature of

the relationship between water velocity and the distance from the shore, data were

gathered on velocity (in centimeters per second) of a river at different distances (in

meters) from the bank. Suppose that the resulting data were as follows:

Distance    0.5    1.5    2.5    3.5    4.5    5.5    6.5    7.5    8.5    9.5
Velocity  22.00  23.18  25.48  25.25  27.15  27.83  28.49  28.18  28.50  28.63

A graph of the data exhibits a curved pattern, as seen in both the scatterplot and the

residual plot from a linear fit (see Figures 5.34(a) and 5.34(b)).

FIGURE 5.34 Plots for the data of Example 5.16: (a) scatterplot of the river data; (b) residual plot from linear fit.



Let’s try transforming the x values by replacing each x value by its square root. We define

x′ = √x

The resulting transformed data are given in Table 5.2.



Data set available online

TABLE 5.2 Original and Transformed Data of Example 5.16

  Original Data        Transformed Data
    x        y           x′          y
   0.5     22.00       0.7071      22.00
   1.5     23.18       1.2247      23.18
   2.5     25.48       1.5811      25.48
   3.5     25.25       1.8708      25.25
   4.5     27.15       2.1213      27.15
   5.5     27.83       2.3452      27.83
   6.5     28.49       2.5495      28.49
   7.5     28.18       2.7386      28.18
   8.5     28.50       2.9155      28.50
   9.5     28.63       3.0822      28.63






Figure 5.35(a) shows a scatterplot of y versus x′ (or, equivalently, y versus √x). The pattern of points in this plot looks linear, and so we can fit a least-squares line using the transformed data. The Minitab output from this regression appears below.

Regression Analysis
The regression equation is
Velocity = 20.1 + 3.01 sqrt distance

Predictor       Coef     StDev      T      P
Constant     20.1102    0.6097  32.99  0.000
Sqrt dis      3.0085    0.2726  11.03  0.000

S = 0.6292   R-Sq = 93.8%   R-Sq(adj) = 93.1%

Analysis of Variance
Source            DF      SS      MS       F      P
Regression         1  48.209  48.209  121.76  0.000
Residual Error     8   3.168   0.396
Total              9  51.376



The residual plot in Figure 5.35(b) shows no indication of a pattern. The resulting regression equation is

ŷ = 20.1 + 3.01x′

An equivalent equation is

ŷ = 20.1 + 3.01√x

The values of r² and sₑ (see the Minitab output) indicate that a line is a reasonable way to describe the relationship between y and x′. To predict the velocity of the river at a distance of 9 meters from shore, we first compute x′ = √x = √9 = 3 and then use the sample regression line to obtain a prediction of y:

ŷ = 20.1 + 3.01x′ = 20.1 + (3.01)(3) = 29.13

FIGURE 5.35 Plots for the transformed data of Example 5.16: (a) scatterplot of y versus x′; (b) residual plot resulting from a linear fit to the transformed data.
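The same transform-then-fit calculation is easy to sketch in code. The example below (a Python illustration, not the text's Minitab session) applies the square-root transformation to the river data, fits the least-squares line, and reproduces the prediction at a distance of 9 meters.

import numpy as np

# Distance from shore (m) and water velocity (cm/s) from Example 5.16
distance = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5])
velocity = np.array([22.00, 23.18, 25.48, 25.25, 27.15,
                     27.83, 28.49, 28.18, 28.50, 28.63])

# Transform x, then fit an ordinary least-squares line to (x', y)
x_prime = np.sqrt(distance)
slope, intercept = np.polyfit(x_prime, velocity, deg=1)
print(f"velocity-hat = {intercept:.1f} + {slope:.2f} * sqrt(distance)")

# Predicted velocity 9 meters from shore: x' = sqrt(9) = 3
print(intercept + slope * np.sqrt(9))   # close to the 29.13 computed in the text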



In Example 5.16, transforming the x values using the square root function

worked well. In general, how can we choose a transformation that will result in a

linear pattern? Table 5.3 gives some guidance and summarizes some of the properties

of the most commonly used transformations.




TABLE 5.3 Commonly Used Transformations

No transformation
  Mathematical description: ŷ = a + bx
  Try this transformation when: The change in y is constant as x changes. A 1-unit increase in x is associated with, on average, an increase of b in the value of y.

Square root of x
  Mathematical description: ŷ = a + b√x
  Try this transformation when: The change in y is not constant. A 1-unit increase in x is associated with smaller increases or decreases in y for larger x values.

Log of x *
  Mathematical description: ŷ = a + b log₁₀(x) or ŷ = a + b ln(x)
  Try this transformation when: The change in y is not constant. A 1-unit increase in x is associated with smaller increases or decreases in the value of y for larger x values.

Reciprocal of x
  Mathematical description: ŷ = a + b(1/x)
  Try this transformation when: The change in y is not constant. A 1-unit increase in x is associated with smaller increases or decreases in the value of y for larger x values. In addition, y has a limiting value of a as x increases.

Log of y * (exponential growth or decay)
  Mathematical description: log₁₀(ŷ) = a + bx or ln(ŷ) = a + bx
  Try this transformation when: The change in y is not constant. A 1-unit increase in x is associated with larger increases or decreases in the value of y for larger x values.

* The values of a and b in the regression equation will depend on whether log₁₀ or ln is used, but the ŷ's and r² values will be identical.
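Table 5.3 can be treated as a menu of candidate reexpressions: apply each candidate, fit a least-squares line to the transformed data, and see which one gives the most linear pattern (for example, by comparing r² values and residual plots). The sketch below automates that comparison; it is my own illustration, reusing the river data from Example 5.16, and the helper function name is arbitrary.

import numpy as np

def r_squared_after_transform(x, y, transform):
    # Fit a least-squares line to (transform(x), y) and return r-squared
    x_prime = transform(x)
    slope, intercept = np.polyfit(x_prime, y, deg=1)
    y_hat = intercept + slope * x_prime
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# River data from Example 5.16 (all x values are positive, so the square
# root, log, and reciprocal transformations are all well defined)
x = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5])
y = np.array([22.00, 23.18, 25.48, 25.25, 27.15,
              27.83, 28.49, 28.18, 28.50, 28.63])

candidates = {
    "no transformation": lambda v: v,
    "square root of x": np.sqrt,
    "log of x": np.log10,
    "reciprocal of x": lambda v: 1 / v,
}
for name, transform in candidates.items():
    print(f"{name:18s} r^2 = {r_squared_after_transform(x, y, transform):.3f}")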



EXAMPLE 5.17 Loons on Acidic Lakes

A study of factors that affect the survival of loon chicks is described in the paper “Does Prey Biomass or Mercury Exposure Affect Loon Chick Survival in Wisconsin?” (The Journal of Wildlife Management [2005]: 57–67). In this study, a relationship between the pH of lake water and blood mercury level in loon chicks was observed. The researchers thought that this might be because the pH of the lake water might be related to the type of fish that the loons ate. The accompanying data (read from a graph in the paper and shown in Table 5.4) are x = lake pH and y = blood mercury level (mg/g) for 37 loon chicks from different lakes in Wisconsin.



Data set available online

TABLE 5.4 Data and Transformed Data from Example 5.17

Lake pH (x)    Blood Mercury Level (y)    Log(y)
   5.28                 1.10               0.0414
   5.69                 0.76              −0.1192
   5.56                 0.74              −0.1308
   5.51                 0.60              −0.2218
   4.90                 0.48              −0.3188
   5.02                 0.43              −0.3665
   5.02                 0.29              −0.5376
   5.04                 0.09              −1.0458
   5.30                 0.10              −1.0000
   5.33                 0.20              −0.6990
   5.64                 0.28              −0.5528
   5.83                 0.17              −0.7696
   5.83                 0.18              −0.7447
   6.17                 0.55              −0.2596
   6.22                 0.43              −0.3665
   6.15                 0.40              −0.3979
   6.05                 0.33              −0.4815
   6.04                 0.26              −0.5850
   6.24                 0.18              −0.7447
   6.30                 0.16              −0.7959
   6.80                 0.45              −0.3468
   6.58                 0.30              −0.5229
   6.65                 0.28              −0.5528
   7.06                 0.22              −0.6576
   6.99                 0.21              −0.6778
   6.97                 0.13              −0.8861
   7.03                 0.12              −0.9208
   7.20                 0.15              −0.8239
   7.89                 0.11              −0.9586
   7.93                 0.11              −0.9586
   7.99                 0.09              −1.0458
   7.99                 0.06              −1.2218
   8.30                 0.09              −1.0458
   8.42                 0.09              −1.0458
   8.42                 0.04              −1.3979
   8.95                 0.12              −0.9208
   9.49                 0.14              −0.8539



A scatterplot is shown in Figure 5.36(a). The pattern in this scatterplot is typical of exponential decay, with the change in y as x increases much smaller for large x values than for small x values. You can see that a change of 1 in pH is associated with a much larger change in blood mercury level in the part of the plot where the x values are small than in the part of the plot where the x values are large. Table 5.3 suggests transforming the y values (blood mercury level in this example) by taking their logarithms.

Two standard logarithmic functions are commonly used for such transformations: the common logarithm (log base 10, denoted by log or log₁₀) and the natural logarithm (log base e, denoted ln). Either the common or the natural logarithm can be used; the only difference in the resulting scatterplots is the scale of the transformed y variable. This can be seen in Figures 5.36(b) and 5.36(c). These two scatterplots show the same pattern, and it looks like a line would be appropriate to describe this relationship.

Table 5.4 displays the original data along with the transformed y values using y′ = log(y). The following Minitab output shows the result of fitting the least-squares line to the transformed data:

Regression Analysis: Log(y) versus Lake pH
The regression equation is
Log(y) = 0.458 - 0.172 Lake pH

Predictor        Coef    SE Coef      T      P
Constant       0.4582     0.2404   1.91  0.065
Lake pH      -0.17183    0.03589  -4.79  0.000

S = 0.263032   R-Sq = 39.6%   R-Sq(adj) = 37.8%

Analysis of Variance
Source            DF      SS      MS      F      P
Regression         1  1.5856  1.5856  22.92  0.000
Residual Error    35  2.4215  0.0692
Total             36  4.0071



FIGURE 5.36 Plots for the data of Example 5.17: (a) scatterplot of the loon data (blood mercury level versus lake pH); (b) scatterplot of the transformed data with y′ = log(y); (c) scatterplot of the transformed data with y′ = ln(y).






The resulting regression equation is

y′ = 0.458 − 0.172x

or, equivalently,

log(y) = 0.458 − 0.172x
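Because the fitted equation describes log(y) rather than y itself, a prediction on the original blood mercury scale requires undoing the transformation. The sketch below illustrates this plug-in back-transformation (the choice of pH 7.0 is arbitrary, and this step is my own illustration, not part of the Minitab output shown above).

# Fitted line for the transformed response: log10(y-hat) = 0.458 - 0.172x
a, b = 0.458, -0.172

def predicted_blood_mercury(lake_ph):
    # Predict log10 of blood mercury, then back-transform to the original scale
    log_y_hat = a + b * lake_ph
    return 10 ** log_y_hat

print(round(predicted_blood_mercury(7.0), 3))   # about 0.18 mg/g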



Fitting a Curve Using Transformations

The objective of a regression analysis is usually to describe the approximate relationship between x and y with an equation of the form y = some function of x.

If we have transformed only x, fitting a least-squares line to the transformed data results in an equation of the desired form, for example,

ŷ = 5 + 3x′ = 5 + 3√x    where x′ = √x

or

ŷ = 4 + .2x′ = 4 + .2(1/x)    where x′ = 1/x

These functions specify lines when graphed using y and x′, and they specify curves when graphed using y and x, as illustrated in Figure 5.37 for the square root transformation.

FIGURE 5.37 (a) A plot of ŷ = 5 + 3x′, where x′ = √x; (b) a plot of ŷ = 5 + 3√x.
