AP Statistics Curriculum 2007 IntroVar


Revision as of 23:23, 13 June 2007


General Advance-Placement (AP) Statistics Curriculum - Introduction to Statistics

The Nature of Data & Variation

No matter how controlled the environment, the protocol, or the design, virtually any repeated measurement, observation, experiment, trial, study, or survey is bound to generate data that varies because of intrinsic (internal to the system) or extrinsic (due to the ambient environment) effects.
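A minimal simulation sketch of this idea in plain Python: repeated readings of the same fixed quantity are perturbed by intrinsic and extrinsic noise, so no two readings agree exactly. The noise levels and parameter names here are illustrative assumptions, not part of any particular study protocol.

```python
import random

def repeated_measurements(true_value, n, intrinsic_sd=0.5, extrinsic_sd=0.2, seed=42):
    """Simulate n repeated measurements of one fixed quantity.

    Each reading is perturbed by intrinsic (within-system) and
    extrinsic (environmental) Gaussian noise.  The standard
    deviations are arbitrary illustrative choices.
    """
    rng = random.Random(seed)
    return [true_value + rng.gauss(0, intrinsic_sd) + rng.gauss(0, extrinsic_sd)
            for _ in range(n)]

readings = repeated_measurements(100.0, 10)
# Even though the true value never changes, the readings spread around it.
print(min(readings), max(readings))
```

Even in this fully controlled simulation, the only way to remove the spread entirely would be to set both noise levels to zero, which no real measurement process achieves.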

For example, a UCLA study of Alzheimer's disease* analyzed data from 31 mild cognitive impairment (MCI) patients and 34 probable Alzheimer's disease patients. The investigators made every attempt to control for as many variables as possible, yet the demographic information they collected on the subjects contained unavoidable variation. The same study found variation in the MMSE cognitive scores even within the same subjects.

Approach

Models & strategies for solving the problem, data understanding & inference.

  • Once we accept that all natural phenomena are inherently variable and that there are no completely deterministic processes, we need models and techniques that allow us to study acquired data in the presence of variation, uncertainty, and chance.
  • Statistics is the data science that investigates natural processes, allowing us to quantify variation and make population inferences based on limited observations.
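The second point can be illustrated with a short sketch: estimate a population mean from a limited sample and attach a rough normal-approximation interval. The population parameters, sample size, and the 1.96 multiplier are illustrative assumptions for this example, not prescribed by the curriculum.

```python
import math
import random
import statistics

def mean_confidence_interval(sample, z=1.96):
    """Point estimate and a rough 95% normal-approximation interval
    for the population mean, based only on the observed sample."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m, (m - z * se, m + z * se)

# Hypothetical population: mean 50, standard deviation 5.
rng = random.Random(7)
population_mean = 50.0
sample = [rng.gauss(population_mean, 5) for _ in range(30)]  # limited observations

estimate, (lo, hi) = mean_confidence_interval(sample)
print(round(estimate, 2), (round(lo, 2), round(hi, 2)))
```

The inference is about the whole (unobserved) population, yet it is computed entirely from the 30 observations in hand; the interval quantifies the uncertainty that the limited sample induces.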

Model Validation

Checking/affirming underlying assumptions.

  • Each model or technique for data exploration, analysis and understanding relies on a set of assumptions, which always need to be validated before the model or analysis tool is employed to study real data (observations or measurements that are perceived or detected by the investigator).
  • Such a priori model conjectures or presumptions could take the form of mathematical constraints about the properties of the underlying process, restrictions on the study design or demands on the data acquisition protocol.
  • Common assumptions include (statistical) independence of the measurements, specific limitations on the shape of the distribution that we sample/observe data from, restrictions on the parameters of the process we study, etc.
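As a concrete illustration of checking one such assumption, the sample lag-1 autocorrelation is a crude diagnostic for the independence of successive measurements; this is a sketch of my own, not one of the SOCR tools, and values far from 0 merely cast doubt on independence rather than formally testing it.

```python
import statistics

def lag1_autocorrelation(xs):
    """Sample lag-1 autocorrelation of a sequence.

    Values near 0 are consistent with independent measurements;
    values far from 0 suggest successive observations are dependent.
    (A crude diagnostic, not a formal hypothesis test.)
    """
    m = statistics.mean(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

independent = [0.3, -1.2, 0.8, -0.5, 1.1, -0.9, 0.2, -0.4, 0.7, -1.0]
trending = list(range(10))  # strongly dependent successive values

print(round(lag1_autocorrelation(independent), 2))
print(round(lag1_autocorrelation(trending), 2))
```

A steadily trending series produces a large positive value, signaling that analyses which assume independent observations would be inappropriate for such data.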

Computational Resources: Internet-based SOCR Tools

* [[SOCR_EduMaterials_Activities_Histogram_Graphs | Exploratory Data Analysis]]
* [[SOCR_EduMaterials_AnalysisActivities_ANOVA_1 | Statistical Data Analysis]]

Examples

Computer simulations and real observed data.

* For example, [[SOCR_EduMaterials_Activities_Histogram_Graphs | exploratory data analysis using data histograms]]. This SOCR activity illustrates the generation and interpretation of the histogram of quantitative data.
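As a sketch of what a histogram activity computes under the hood, the following bins quantitative data into equal-width intervals and counts the observations in each bin; this is a hypothetical minimal implementation of my own, not the SOCR applet itself.

```python
def histogram_counts(data, num_bins):
    """Bin quantitative data into equal-width intervals and count
    how many observations fall in each bin (a minimal histogram sketch)."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / num_bins
    counts = [0] * num_bins
    for x in data:
        # Clamp so the maximum value lands in the last bin.
        idx = min(int((x - lo) / width), num_bins - 1)
        counts[idx] += 1
    return counts

data = [1.2, 1.9, 2.3, 2.8, 3.1, 3.3, 3.4, 3.9, 4.6, 5.0]
print(histogram_counts(data, 4))  # counts per equal-width bin
```

Interpreting the resulting counts (where the data pile up, how spread out they are, whether the shape is symmetric) is exactly the kind of question the linked SOCR activity explores interactively.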

Hands-on activities

Step-by-step practice problems.

* [[SOCR_EduMaterials_Activities_Histogram_Graphs | Histograms and Frequency Graphs Activity]]
* [[SOCR_EduMaterials_Activities_CardsCoinsSampling | Bivariate Cards and Coins Meta-Activity]]


References



