# AP Statistics Curriculum 2007 GLM Corr

[[Image:AP_Statistics_Curriculum_2007_IntroVar_Dinov_061407_Fig1.png|500px]]

## General Advance-Placement (AP) Statistics Curriculum - Correlation

Many biomedical, social, engineering and science applications involve the analysis of relationships, if any, between two or more variables involved in the process of interest. We begin with the simplest situation, where bivariate data (X and Y) are measured for a process and we are interested in determining the association, relation or an appropriate model for these observations (e.g., fitting a straight line to the pairs of (X,Y) data). If we are successful in determining a relationship between X and Y, we can use this model to make predictions - i.e., given a value of X, predict a corresponding Y response. Note that in this design, the data consist of paired observations (X,Y) - for example, the height and weight of individuals.

### Lines in 2D

There are 3 types of lines in the 2D plane - vertical, horizontal and oblique lines. In general, a line in 2D is represented by an equation of the form $aX + bY = c$, most frequently expressed as $Y = aX + b$, provided the line is not vertical.

Recall that there is a one-to-one correspondence between any line in 2D and (linear) equations of the following form:

If the line is vertical ($X_1 = X_2$): $X = X_1$;
If the line is horizontal ($Y_1 = Y_2$): $Y = Y_1$;
Otherwise (oblique line): ${Y-Y_1 \over Y_2-Y_1}= {X-X_1 \over X_2-X_1}$ (for $X_1\not=X_2$ and $Y_1\not=Y_2$),

where $(X_1,Y_1)$ and $(X_2,Y_2)$ are two points on the line of interest (two distinct points in 2D determine a unique line).

Try drawing the following lines manually and [using this applet](http://www.pserc.cornell.edu/pserc/java/graph/examples/parse1d.html):

Y = 2X + 1
Y = -3X - 5
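The two-point correspondence above can also be checked programmatically. The following Python sketch (an illustration, not part of the original curriculum materials) recovers the slope-intercept form $Y = aX + b$ from two distinct points, handling the vertical and horizontal special cases first:

```python
def line_through(p1, p2):
    """Return the 2D line through two distinct points as a tagged tuple.

    Vertical:   ('vertical', X1)    meaning X = X1
    Horizontal: ('horizontal', Y1)  meaning Y = Y1
    Oblique:    ('oblique', a, b)   meaning Y = a*X + b
    """
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        return ('vertical', x1)
    if y1 == y2:
        return ('horizontal', y1)
    a = (y2 - y1) / (x2 - x1)   # slope from the two-point form
    b = y1 - a * x1             # intercept: the line passes through (x1, y1)
    return ('oblique', a, b)

# Y = 2X + 1 passes through (0, 1) and (1, 3)
print(line_through((0, 1), (1, 3)))    # ('oblique', 2.0, 1.0)
# Y = -3X - 5 passes through (0, -5) and (1, -8)
print(line_through((0, -5), (1, -8)))  # ('oblique', -3.0, -5.0)
```

Dividing the oblique two-point equation through isolates the slope $a = (Y_2-Y_1)/(X_2-X_1)$, which is exactly what the code computes.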

### The Correlation Coefficient

The correlation coefficient ($-1 \leq \rho \leq 1$) is a measure of linear association, or clustering around a line, of multivariate data. The main relationship between two variables (X, Y) can be summarized by $(\mu_X, \sigma_X)$, $(\mu_Y, \sigma_Y)$ and the correlation coefficient, denoted by $\rho=\rho_{(X,Y)}=R(X,Y)$.

• If $\rho = 1$, we have a perfect positive correlation (straight-line relationship between the two variables).
• If $\rho = 0$, there is no correlation (random cloud scatter), i.e., no linear relation between X and Y.
• If $\rho = -1$, there is a perfect negative correlation between the variables.

#### Computing ρ = R(X,Y)

The protocol for computing the correlation involves standardizing, multiplication and averaging.

• In general, for any random variable:

$\rho_{X,Y}={\mathrm{COV}(X,Y) \over \sigma_X \sigma_Y} ={E((X-\mu_X)(Y-\mu_Y)) \over \sigma_X\sigma_Y},$

where E is the expected value operator and COV means covariance. Since $\mu_X = E(X)$, $\sigma_X^2 = E(X^2) - E^2(X)$ and similarly for Y, we may also write

$\rho_{X,Y}=\frac{E(XY)-E(X)E(Y)}{\sqrt{E(X^2)-E^2(X)}~\sqrt{E(Y^2)-E^2(Y)}}.$
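The product-moment formula can be verified on a small discrete distribution. In this sketch, the joint pmf is a hypothetical toy example (not from the curriculum), and `E` implements the expected-value operator directly:

```python
from math import sqrt

# A toy joint pmf for (X, Y) - hypothetical values, for illustration only
pmf = {(0, 0): 0.4, (1, 1): 0.4, (0, 1): 0.1, (1, 0): 0.1}

def E(f):
    """Expected value of f(X, Y) under the joint pmf."""
    return sum(p * f(x, y) for (x, y), p in pmf.items())

EX, EY = E(lambda x, y: x), E(lambda x, y: y)
var_x = E(lambda x, y: x * x) - EX**2   # sigma_X^2 = E(X^2) - E^2(X)
var_y = E(lambda x, y: y * y) - EY**2

rho = (E(lambda x, y: x * y) - EX * EY) / (sqrt(var_x) * sqrt(var_y))
print(rho)   # ~ 0.6: X and Y tend to agree, but not perfectly
```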
• Sample correlation - when we only have sampled data, we replace the (unknown) expectations and standard deviations by their sample analogues (sample mean and sample standard deviation) to compute the sample correlation:

Suppose {$X_1, X_2, X_3, \cdots, X_n$} and {$Y_1, Y_2, Y_3, \cdots, Y_n$} are bivariate observations of the same process and $(\mu_X, \sigma_X)$ and $(\mu_Y, \sigma_Y)$ are the means and standard deviations for the X and Y measurements, respectively. Then

$r_{xy}=\frac{\sum (x_i-\bar{x})(y_i-\bar{y})}{(n-1) s_x s_y},$

where $\bar{x}$ and $\bar{y}$ are the sample means of X and Y, $s_x$ and $s_y$ are the sample standard deviations of X and Y, and the sum runs from i = 1 to n. We may rewrite this as

$r_{xy}=\frac{\sum x_iy_i-n \bar{x} \bar{y}}{(n-1) s_x s_y}=\frac{n\sum x_iy_i-\sum x_i\sum y_i} {\sqrt{n\sum x_i^2-(\sum x_i)^2}~\sqrt{n\sum y_i^2-(\sum y_i)^2}}.$
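The deviation form and the raw-sum rewrite can be computed side by side to confirm they agree. The height/weight numbers below are hypothetical illustrative values, not the SOCR data set mentioned earlier:

```python
from math import sqrt

def sample_corr(xs, ys):
    """Deviation form: r = sum((x-xbar)(y-ybar)) / ((n-1) s_x s_y)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sx = sqrt(sum((x - xbar)**2 for x in xs) / (n - 1))  # sample std of X
    sy = sqrt(sum((y - ybar)**2 for y in ys) / (n - 1))  # sample std of Y
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

def sample_corr_raw(xs, ys):
    """Equivalent computational form using raw sums only."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx, syy = sum(x * x for x in xs), sum(y * y for y in ys)
    return (n * sxy - sx * sy) / (sqrt(n * sxx - sx**2) * sqrt(n * syy - sy**2))

heights = [63, 65, 68, 70, 72]        # hypothetical paired (X, Y) data
weights = [127, 140, 152, 168, 180]
print(sample_corr(heights, weights))      # strong positive correlation
print(sample_corr_raw(heights, weights))  # same value, computed from raw sums
```

The raw-sum form is convenient when the data arrive in a single pass, since no pre-computed means are needed.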
• Note: The correlation is defined only if both standard deviations are finite and nonzero. It is a corollary of the [Cauchy-Schwarz inequality](http://en.wikipedia.org/wiki/Cauchy-Schwarz_inequality) that the correlation is always bounded: $-1 \leq \rho \leq 1$.

### Approach

Models & strategies for solving the problem, data understanding & inference.

• TBD

### Model Validation

Checking/affirming underlying assumptions.

• TBD

• TBD

### Examples

Computer simulations and real observed data.

• TBD

### Hands-on activities

Step-by-step practice problems.

• TBD

• TBD