# AP Statistics Curriculum 2007 IntroDesign



## Revision as of 01:08, 17 June 2007


## General Advance-Placement (AP) Statistics Curriculum - Design and Experiments

### Design and Experiments

*Design of experiments* refers to the blueprint for planning a study or experiment, carrying out the data-collection protocol, and controlling the study parameters for accuracy and consistency. Design of experiments only makes sense in studies where variation, chance, and uncertainty are present and unavoidable. Data, or information, are typically collected about a specific process or phenomenon under study to investigate the effects of some controlled variables (independent variables, or predictors) on other observed measurements (responses, or dependent variables). Both types of variables are associated with specific observational units (living beings, components, objects, materials, etc.).

### Approach

The following are the most common components of experimental design.

1. Comparison: To make inferences about effects, associations, or predictions, one typically has to compare different groups subjected to distinct conditions. Contrasting the observed responses across groups may ultimately lead to inference about the relations between, and the influence of, the controlled and observed variables.

2. Randomization: The second fundamental design principle is randomization. It requires that treatments (levels of the controlled variables) be allocated to units by some random mechanism. This guarantees that effects that may be present in the units, but are not incorporated in the model, are equally distributed among all groups and are therefore unlikely to significantly affect the group comparisons at the end of the statistical analysis (as these effects, if present, will be similar within each group).
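The randomization principle can be sketched in a few lines of Python. The unit labels, group names, and seed below are purely illustrative:

```python
import random

def randomize(units, treatments, seed=None):
    """Randomly allocate experimental units to equally sized treatment groups."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    k = len(treatments)
    # Deal the shuffled units round-robin into the treatment groups.
    return {t: shuffled[i::k] for i, t in enumerate(treatments)}

# 20 hypothetical units split at random among 4 treatment groups of 5.
groups = randomize(range(20), ["control", "dose-1", "dose-2", "dose-3"], seed=42)
```

Because every unit is equally likely to land in any group, unmodeled unit effects are spread evenly across the groups on average.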

3. Replication: All measurements we make, observations we acquire, or data we collect are subject to variation, as [[SOCR_EduMaterials_Activities_RNG | there are no completely deterministic processes]]. Since we try to make inferences about the process that generated the observed data (not about the sample itself, even though our statistical analyses are data-driven and therefore based on the observed measurements), the more data we collect without bias, the stronger our inference is likely to be. Repeated measurements therefore allow us to tame the variability associated with the phenomenon we study.
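A small simulation illustrates why replication tames variability: the spread of the average of n replicated measurements shrinks as n grows. The true value (10), noise level (2), and replication counts below are made up for illustration:

```python
import random
import statistics

def spread_of_mean(n, reps=2000, seed=7):
    """Empirical standard deviation of the mean of n replicated measurements,
    each drawn from a noisy process (true value 10, noise sd 2)."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(10, 2) for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)
```

With more replicates per experiment, the sample mean varies less from run to run, so `spread_of_mean(25)` comes out well below `spread_of_mean(1)`.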

4. Blocking: Blocking is the arrangement of experimental units into groups (blocks) that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
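One common way to combine blocking with randomization is the randomized complete block design, sketched below; the block and treatment labels are hypothetical:

```python
import random

def randomized_block_design(blocks, treatments, seed=3):
    """Randomized complete block design: every treatment appears exactly once
    in each block, in an independently randomized order within the block."""
    rng = random.Random(seed)
    plan = {}
    for block in blocks:
        order = list(treatments)
        rng.shuffle(order)     # randomize only within the block
        plan[block] = order
    return plan

plan = randomized_block_design(["field-1", "field-2", "field-3"],
                               ["A", "B", "C", "D"])
```

Since each block contains every treatment, block-to-block differences (e.g., soil quality across fields) cancel out of the treatment comparisons.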

5. Orthogonality: Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and, if the data are normal, independently distributed. Because of this independence, each orthogonal contrast provides information different from that of the others. If there are T treatments, a set of T - 1 orthogonal contrasts captures all the information that can be obtained from the experiment.
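As a concrete check of these definitions, here is a Helmert-style set of T - 1 = 3 orthogonal contrasts for T = 4 treatment means (the particular vectors are one standard choice, not the only one):

```python
def dot(u, v):
    """Dot product of two contrast vectors."""
    return sum(a * b for a, b in zip(u, v))

# Each vector sums to zero (so it is a contrast over the 4 treatment means)
# and the vectors are pairwise orthogonal (dot products are zero).
c1 = (1, -1, 0, 0)   # treatment 1 vs treatment 2
c2 = (1, 1, -2, 0)   # treatments 1,2 vs treatment 3
c3 = (1, 1, 1, -3)   # treatments 1,2,3 vs treatment 4
contrasts = [c1, c2, c3]
```

Applied to a vector of group means, each contrast asks a question no other contrast in the set answers.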

6. Factorial experiments: Use of factorial experiments instead of the one-factor-at-a-time method. Factorial designs are efficient at evaluating the effects, and possible interactions, of several factors (independent variables) simultaneously.
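A full factorial design simply enumerates every combination of factor levels. The factors and levels below are invented for illustration:

```python
from itertools import product

# Three hypothetical factors with two levels each: a full 2x2x2 factorial design.
factors = {
    "temperature": ["low", "high"],
    "pressure":    ["low", "high"],
    "catalyst":    ["A", "B"],
}

# One experimental run per combination of levels: 2 * 2 * 2 = 8 runs.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

Varying all factors together, rather than one at a time, is what makes interactions between factors estimable.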

The analysis of designed experiments was built on the foundation of the analysis of variance (ANOVA), a collection of models in which the observed variance is partitioned into components attributable to different factors, which are then estimated and/or tested.
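The core ANOVA identity, that the total sum of squares splits into a between-groups and a within-groups component, can be verified numerically. The measurements below are made up:

```python
import statistics

def anova_partition(groups):
    """One-way ANOVA sums of squares: SS_total = SS_between + SS_within."""
    all_obs = [x for g in groups for x in g]
    grand = statistics.fmean(all_obs)
    ss_total = sum((x - grand) ** 2 for x in all_obs)
    # Between-groups: group sizes times squared deviations of group means.
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    # Within-groups: squared deviations of observations from their group mean.
    ss_within = sum((x - statistics.fmean(g)) ** 2 for g in groups for x in g)
    return ss_total, ss_between, ss_within

# Made-up measurements for three treatment groups.
sst, ssb, ssw = anova_partition([[5.1, 4.9, 5.3], [6.2, 6.0, 6.4], [4.0, 4.2, 3.8]])
```

Comparing the between-groups component with the within-groups component is what an ANOVA F-test formalizes.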

- TBD

### Model Validation

Checking/affirming underlying assumptions.

- TBD

### Computational Resources: Internet-based SOCR Tools

- TBD

### Examples

Computer simulations and real observed data.

- TBD

### Hands-on activities

Step-by-step practice problems.

- TBD

### References

- TBD

- SOCR Home page: http://www.socr.ucla.edu
