# AP Statistics Curriculum 2007 Distrib Multinomial


## General Advance-Placement (AP) Statistics Curriculum - Multinomial Random Variables and Experiments

Multinomial experiments (and multinomial distributions) directly extend their binomial counterparts.

• Examples of multinomial experiments
• Rolling a hexagonal (six-sided) die 5 times: the outcome space is the collection of 5-tuples $(v_1, v_2, v_3, v_4, v_5)$, where each value satisfies $1\leq v_i\leq 6$.
• The multinomial random variable (RV): mathematically, a k-category multinomial trial is modeled by a random variable
$X(outcome) = \begin{cases}x_1,\\ x_2,\\ \cdots,\\ x_k.\end{cases}$

If $p_i = P(X = x_i)$, then:

• expected value of X: $E[X]=\sum_{i=1}^k{x_i\times p_i}$;
• standard deviation of X: $SD[X]=\sqrt{\sum_{i=1}^k{(x_i-E[X])^2\times p_i}}$.
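As a quick numeric check of these two formulas, here is a short Python sketch using a hypothetical example: one roll of a fair six-sided die, with $x_i = i$ and $p_i = 1/6$:

```python
# Mean and SD of a discrete RV X with outcomes x_i and probabilities p_i.
# Hypothetical example: one roll of a fair six-sided die (k = 6, p_i = 1/6).
from math import sqrt

outcomes = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6

mean = sum(x * p for x, p in zip(outcomes, probs))             # E[X]
var = sum((x - mean)**2 * p for x, p in zip(outcomes, probs))  # VAR[X]
sd = sqrt(var)                                                 # SD[X]

print(mean, sd)  # E[X] = 3.5, SD[X] = sqrt(35/12) ≈ 1.7078
```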

### Synergies between Binomial and Multinomial processes/probabilities/coefficients

• The Binomial vs. Multinomial Coefficients
${n\choose k}=\frac{n!}{k!(n-k)!}$
${n\choose i_1,i_2,\cdots, i_k}= \frac{n!}{i_1! i_2! \cdots i_k!}$
• The Binomial vs. Multinomial Formulas
$(a+b)^n = \sum_{i=0}^n{{n\choose i}a^i \times b^{n-i}}$
$(a_1+a_2+\cdots +a_k)^n = \sum_{i_1+i_2+\cdots +i_k=n}{ {n\choose i_1,i_2,\cdots, i_k} a_1^{i_1} \times a_2^{i_2} \times \cdots \times a_k^{i_k}}$
• The Binomial vs. Multinomial Probabilities
$P(X=r)={n\choose r}p^r(1-p)^{n-r}, \forall 0\leq r \leq n$
$P(X_1=r_1 \cap X_2=r_2 \cap \cdots \cap X_k=r_k)={n\choose r_1,r_2,\cdots, r_k}p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}, \forall r_1+r_2+\cdots+r_k=n$
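These coefficients and probabilities are easy to compute directly. The following Python sketch (illustrative values only) evaluates the multinomial coefficient and PMF, and confirms that with k = 2 categories they reduce to their binomial counterparts:

```python
# Multinomial coefficient n!/(r_1! r_2! ... r_k!) and multinomial PMF,
# with the binomial case recovered for k = 2 categories.
from math import comb, factorial, prod

def multinomial_coef(counts):
    """n choose (r_1, ..., r_k), where n = sum(counts)."""
    result = factorial(sum(counts))
    for r in counts:
        result //= factorial(r)
    return result

def multinomial_pmf(counts, probs):
    """P(X_1 = r_1, ..., X_k = r_k) for n = sum(counts) trials."""
    return multinomial_coef(counts) * prod(p**r for p, r in zip(probs, counts))

def binomial_pmf(n, r, p):
    """P(X = r) = C(n, r) p^r (1-p)^(n-r)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

# With k = 2 the multinomial formulas collapse to the binomial ones:
print(multinomial_coef([3, 7]), comb(10, 3))  # both 120
print(multinomial_pmf([3, 7], [0.25, 0.75]))  # same value as the next line
print(binomial_pmf(10, 3, 0.25))
```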

### Example

Suppose we study $N$ independent trials with results falling in one of $k$ possible categories labeled $1, 2, \cdots, k$. Let $p_i$ be the probability of a trial resulting in the $i^{th}$ category, where $p_1+p_2+\cdots+p_k = 1$. Let $N_i$ be the number of trials resulting in the $i^{th}$ category, where $N_1+N_2+\cdots+N_k = N$.

For instance, suppose we have 9 people arriving at a meeting according to the following information:

P(by Air) = 0.4, P(by Bus) = 0.2, P(by Automobile) = 0.3, P(by Train) = 0.1
• Compute the following probabilities:
P(3 by Air, 3 by Bus, 1 by Auto, 2 by Train) = ?
P(2 by Air) = ?
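A worked sketch of these two probabilities in Python. The first is a direct multinomial probability; the second collapses the four categories into "by Air" vs. "not by Air", so it is a binomial probability with n = 9 and p = 0.4:

```python
from math import comb, factorial, prod

# P(3 by Air, 3 by Bus, 1 by Auto, 2 by Train) for n = 9 arrivals
counts = [3, 3, 1, 2]
probs = [0.4, 0.2, 0.3, 0.1]
coef = factorial(sum(counts)) // prod(factorial(r) for r in counts)  # 9!/(3!3!1!2!) = 5040
p_multi = coef * prod(p**r for p, r in zip(probs, counts))
print(p_multi)  # ≈ 0.00774

# P(2 by Air): Air vs. not-Air, so X ~ B(9, 0.4)
p_two_by_air = comb(9, 2) * 0.4**2 * 0.6**7
print(p_two_by_air)  # ≈ 0.1612
```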