# AP Statistics Curriculum 2007 NonParam ANOVA


## General Advance-Placement (AP) Statistics Curriculum - Means of Several Independent Samples

In this section we extend the multi-sample inference discussed in the ANOVA section to situations where the ANOVA assumptions are invalid. In such cases, we use a non-parametric analysis to study differences in centrality between two or more populations.

### Motivational Example

Suppose four groups of students are randomly assigned to be taught with four different techniques, and their achievement test scores are recorded. Are the distributions of test scores the same, or do they differ in location? The data is presented in the table below.

| Index | Method 1 | Method 2 | Method 3 | Method 4 |
|-------|----------|----------|----------|----------|
| 1     | 65       | 75       | 59       | 94       |
| 2     | 87       | 69       | 78       | 89       |
| 3     | 73       | 83       | 67       | 80       |
| 4     | 79       | 81       | 62       | 88       |

The small sample sizes, and the lack of information about the distribution of each of the four samples, suggest that ANOVA may not be appropriate for analyzing these data.

## The Kruskal-Wallis Test

Kruskal-Wallis One-Way Analysis of Variance by ranks is a non-parametric method for testing equality of two or more population medians. Intuitively, it is identical to a One-Way Analysis of Variance with the raw data (observed measurements) replaced by their ranks.

Since it is a non-parametric method, the Kruskal-Wallis Test does not assume a normal population, unlike the analogous one-way ANOVA. However, the test does assume identically-shaped distributions for all groups, except for any difference in their centers (e.g., medians).

## Calculations

Let $N$ be the total number of observations across all $k$ groups, so that $N = \sum_{i=1}^k n_i$, where $n_i$ is the number of observations in the $i^{th}$ sample.

Let $R(X_{ij})$ denote the rank assigned to $X_{ij}$, and let $R_i$ be the sum of the ranks assigned to the $i^{th}$ sample:

$R_i = \sum_{j=1}^{n_i} R(X_{ij}), \quad i = 1, 2, \ldots, k$.
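As a concrete illustration, the pooled ranking and the rank sums $R_i$ for the teaching-methods data can be sketched in Python (a minimal sketch; the `midranks` helper and the variable names are ours, not part of SOCR):

```python
def midranks(values):
    """Rank pooled values 1..N; tied values share the average (mid) rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j across a run of tied values.
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return ranks

# Teaching-methods data (columns of the table above).
groups = {
    "Method1": [65, 87, 73, 79],
    "Method2": [75, 69, 83, 81],
    "Method3": [59, 78, 67, 62],
    "Method4": [94, 89, 80, 88],
}

pooled = [x for g in groups.values() for x in g]
ranks = midranks(pooled)

# Slice the pooled ranks back into the groups and sum them: R_i.
rank_sums, pos = {}, 0
for name, g in groups.items():
    rank_sums[name] = sum(ranks[pos:pos + len(g)])
    pos += len(g)

print(rank_sums)  # {'Method1': 31.0, 'Method2': 35.0, 'Method3': 15.0, 'Method4': 55.0}
```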

The SOCR program computes $R_i$ for each sample. The test statistic is defined for the following formulation of hypotheses:

$H_0$: All of the $k$ population distribution functions are identical.
$H_1$: At least one of the populations tends to yield larger observations than at least one of the other populations.

Suppose $\{X_{i,1}, X_{i,2}, \cdots, X_{i,n_i}\}$ represents the values of the $i^{th}$ sample, where $1 \leq i \leq k$.

Test statistic:

$T = \frac{1}{S^2} \left( \sum_{i=1}^{k} \frac{R_i^2}{n_i} - \frac{N(N+1)^2}{4} \right)$,

where

$S^2 = \frac{1}{N-1} \left( \sum_{i,j} R(X_{ij})^2 - \frac{N(N+1)^2}{4} \right)$.

• Note: If there are no ties, then the test statistic reduces to:

$T = \frac{12}{N(N+1)} \sum_{i=1}^{k} \frac{R_i^2}{n_i} - 3(N+1)$.
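Continuing the teaching-methods example (a hedged numerical sketch; since the pooled ranking of these 16 distinct scores has no ties, the ranks are simply 1..N), both forms of the statistic can be checked directly:

```python
# Rank sums R_i for the four teaching methods (computed from the pooled ranking).
R = {"Method1": 31.0, "Method2": 35.0, "Method3": 15.0, "Method4": 55.0}
n = {g: 4 for g in R}        # n_i = 4 observations per group
N = sum(n.values())          # N = 16
all_ranks = range(1, N + 1)  # with no ties, the ranks are exactly 1..N

# S^2 = (1/(N-1)) * (sum of squared ranks - N(N+1)^2 / 4)
S2 = (sum(r * r for r in all_ranks) - N * (N + 1) ** 2 / 4) / (N - 1)

# Exact statistic: T = (1/S^2) * (sum of R_i^2 / n_i  -  N(N+1)^2 / 4)
T = (sum(R[g] ** 2 / n[g] for g in R) - N * (N + 1) ** 2 / 4) / S2

# The simplified no-ties form should agree.
T_simple = 12 / (N * (N + 1)) * sum(R[g] ** 2 / n[g] for g in R) - 3 * (N + 1)

print(round(T, 4), round(T_simple, 4))  # both ≈ 8.9559
```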

However, the SOCR implementation allows for the possibility of having ties; so it uses the non-simplified, exact method of computation.

Multiple comparisons must then be carried out. For each pair of groups, the following inequality is computed and printed in the Result panel; the two groups are declared significantly different whenever it holds:

$\left| \frac{R_i}{n_i} - \frac{R_j}{n_j} \right| > t_{1-\alpha/2} \left( S^2 \, \frac{N-1-T}{N-k} \right)^{1/2} \left( \frac{1}{n_i} + \frac{1}{n_j} \right)^{1/2}$.

The SOCR computation employs the exact method instead of the approximate one (Conover 1980), since it is easy and fast to implement and somewhat more accurate.
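The pairwise bound can be reproduced for the example (a sketch under the assumptions that all $n_i = 4$ and $\alpha = 0.05$; the quantile $t_{0.975,\,12} \approx 2.1788$ is hard-coded from standard t tables, since Python's standard library has no t quantile function):

```python
import math

# Rank sums and the Kruskal-Wallis statistic T from the example above.
R = {"Method1": 31.0, "Method2": 35.0, "Method3": 15.0, "Method4": 55.0}
n_i, k = 4, 4
N = n_i * k                   # 16 observations in total
S2 = N * (N + 1) / 12         # no ties, so S^2 reduces to N(N+1)/12
T = (sum(r * r / n_i for r in R.values()) - N * (N + 1) ** 2 / 4) / S2

# t_{1-alpha/2} with N - k = 12 degrees of freedom, alpha = 0.05
# (hard-coded from standard t tables -- an assumption, see lead-in).
t_crit = 2.1788

# Right-hand side of the pairwise inequality (same for every pair here,
# because all groups have equal size).
crit = (t_crit * math.sqrt(S2 * (N - 1 - T) / (N - k))
        * math.sqrt(1 / n_i + 1 / n_i))
print(round(crit, 4))         # ≈ 5.2056

names = list(R)
for a in range(len(names)):
    for b in range(a + 1, len(names)):
        diff = abs(R[names[a]] / n_i - R[names[b]] / n_i)
        flag = ">" if diff > crit else "<"
        print(f"{names[a]} vs. {names[b]}: {diff} {flag} {round(crit, 4)}")
```

The printed pairwise differences match the SOCR Result panel output quoted below.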

### The Kruskal-Wallis Test Using SOCR Analyses

It is much quicker to use SOCR Analyses to compute the statistical significance of this test. This SOCR KruskalWallis Test Activity may also be helpful in understanding how to use this test in SOCR.

For the teaching-methods example above, we can easily compute the statistical significance of the differences between the group medians (centers):

![SOCR Kruskal-Wallis analysis results](SOCR_EBook_Dinov_KruskalWallis_030108_Fig1.jpg)

After the multiple-testing correction, two group differences between medians are significant: Method1 vs. Method4 (6.0 > 5.2056) and Method3 vs. Method4 (10.0 > 5.2056), as shown below:

* Group Method1 vs. Group Method2: 1.0 < 5.2056
* Group Method1 vs. Group Method3: 4.0 < 5.2056
* **Group Method1 vs. Group Method4: 6.0 > 5.2056**
* Group Method2 vs. Group Method3: 5.0 < 5.2056
* Group Method2 vs. Group Method4: 5.0 < 5.2056
* **Group Method3 vs. Group Method4: 10.0 > 5.2056**

## Notes

The Friedman Fr Test is the rank equivalent of the randomized block design alternative to the Two-Way Analysis of Variance F Test. The SOCR Friedman Test Activity demonstrates how to use SOCR Analyses to compute the Friedman Test statistic and its p-value.

## References

Conover, W.J. (1980). *Practical Nonparametric Statistics* (2nd ed.). New York: John Wiley & Sons.