AP Statistics Curriculum 2007 Distrib Multinomial


General Advance-Placement (AP) Statistics Curriculum - Multinomial Random Variables and Experiments

Multinomial experiments (and multinomial distributions) directly extend their binomial counterparts.

Multinomial experiments

A multinomial experiment is an experiment that has the following properties:

• The experiment consists of n repeated trials.
• Each trial has a discrete number of possible outcomes.
• On any given trial, the probability that a particular outcome will occur is constant.
• The trials are independent; that is, the outcome on one trial does not affect the outcome on other trials.

Examples of Multinomial experiments

• Suppose we have an urn containing 9 marbles. Two are red, three are green, and four are blue (2+3+4=9). We randomly select 5 marbles from the urn, with replacement. What is the probability (P(A)) of the event A={selecting 2 green marbles and 3 blue marbles}?
• To solve this problem, we apply the multinomial formula. We know the following:
• The experiment consists of 5 trials, so n = 5.
• The 5 trials produce 0 red, 2 green, and 3 blue marbles; so $r_1=r_{red} = 0$, $r_2=r_{green} = 2$, and $r_3=r_{blue} = 3$.
• For any particular trial, the probability of drawing a red, green, or blue marble is 2/9, 3/9, and 4/9, respectively. Hence, $p_1=p_{red} = 2/9$, $p_2=p_{green} = 1/3$, and $p_3=p_{blue} = 4/9$.

Plugging these values into the multinomial formula we get the probability of the event of interest to be:

$P(A) = {5\choose r_1,r_2,r_3}p_1^{r_1}p_2^{r_2}p_3^{r_3}$. In this specific case, $P(A) = {5\choose 0,2,3}p_1^{0}p_2^{2}p_3^{3}$.
$P(A) = {5! \over 0!\times 2! \times 3!}\times (2/9)^0 \times (1/3)^2\times (4/9)^3=0.0975461.$

Thus, if we draw 5 marbles with replacement from the urn, the probability of drawing no red, 2 green, and 3 blue marbles is 0.0975461.
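The calculation above can be sketched in Python (an illustration, not part of the original SOCR materials); `multinomial_pmf` is a hypothetical helper that implements the multinomial formula directly:

```python
from math import factorial

def multinomial_pmf(counts, probs):
    """n!/(r_1!...r_k!) * p_1^r_1 * ... * p_k^r_k for counts r_i and probabilities p_i."""
    coef = factorial(sum(counts))
    for r in counts:
        coef //= factorial(r)
    p = float(coef)
    for r, q in zip(counts, probs):
        p *= q ** r
    return p

# Event A: 0 red, 2 green, 3 blue in 5 draws with replacement
p_A = multinomial_pmf([0, 2, 3], [2/9, 3/9, 4/9])
print(round(p_A, 7))  # 0.0975461
```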

• Let's again use the urn containing 9 marbles, where the numbers of red, green, and blue marbles are 2, 3, and 4, respectively. This time we select 5 marbles from the urn, with replacement, but are interested in the probability (P(B)) of the event B={selecting exactly 2 green marbles}. (Note that 2 < 5.)
• To solve this problem, we classify the marbles into green and other. Thus the multinomial experiment consists of 5 trials (n = 5), with $r_1=r_{green} = 2$ and $r_2=r_{other} = 3$. In this case, the probabilities of drawing a green or other marble are 3/9 and 6/9, respectively. Notice that P(other) is the sum of the probabilities of the other colors (the complement of green). Hence,
$P(B) = {5\choose 2, 3}p_1^{r_1}p_2^{r_2} = {5! \over 2! \times 3! }\times (3/9)^2 \times (6/9)^3=0.329218.$

This probability is equivalent to the binomial probability (success=green; failure=other color), B(n=5, p=1/3).
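This equivalence is easy to check numerically; a minimal sketch using Python's standard library (illustrative only):

```python
from math import comb, factorial

# Multinomial over {green, other}: r_green = 2, r_other = 3
p_multi = factorial(5) // (factorial(2) * factorial(3)) * (3/9)**2 * (6/9)**3

# Binomial B(n=5, p=1/3): exactly 2 successes (success = green)
p_binom = comb(5, 2) * (1/3)**2 * (2/3)**3

print(round(p_multi, 6), round(p_binom, 6))  # both 0.329218
```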

Synergies between Binomial and Multinomial processes/probabilities/coefficients

• The Binomial vs. Multinomial Coefficients
${n\choose i}=\frac{n!}{i!(n-i)!}$
${n\choose i_1,i_2,\cdots, i_k}= \frac{n!}{i_1! i_2! \cdots i_k!}$
• The Binomial vs. Multinomial Formulas
$(a+b)^n = \sum_{i=0}^n{{n\choose i}a^i \times b^{n-i}}$
$(a_1+a_2+\cdots +a_k)^n = \sum_{i_1+i_2+\cdots +i_k=n}{ {n\choose i_1,i_2,\cdots, i_k} a_1^{i_1} \times a_2^{i_2} \times \cdots \times a_k^{i_k}}$
• The Binomial vs. Multinomial Probabilities
$P(X=r)={n\choose r}p^r(1-p)^{n-r}, \forall\ 0\leq r \leq n$
$P(X_1=r_1 \cap X_2=r_2 \cap \cdots \cap X_k=r_k \mid r_1+r_2+\cdots+r_k=n) = {n\choose r_1,r_2,\cdots, r_k}p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}$
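The multinomial formula can be checked numerically for a small case by summing the expansion over all compositions of n into k parts (an illustrative sketch; the values of a and n are arbitrary choices):

```python
from math import factorial
from itertools import product

def multinomial_coef(counts):
    """n! / (i_1! i_2! ... i_k!) for a tuple of non-negative integers."""
    c = factorial(sum(counts))
    for r in counts:
        c //= factorial(r)
    return c

# Verify (a_1 + a_2 + a_3)^n equals the multinomial expansion
a, n = (1.5, 2.0, 0.5), 4
lhs = sum(a) ** n
rhs = sum(
    multinomial_coef(i) * a[0]**i[0] * a[1]**i[1] * a[2]**i[2]
    for i in product(range(n + 1), repeat=3) if sum(i) == n
)
print(lhs, rhs)  # both 256.0
```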

Expectation and variance

The expected number of times of observing outcome i over n trials is

$E(X_i) = n p_i.$

Since each diagonal entry is the variance of a binomially distributed random variable, the variance-covariance matrix is defined by:

Diagonal terms (variances): $VAR(X_i)=np_i(1-p_i)$, for each i, and
Off-diagonal terms (covariances): $COV(X_i,X_j)=-np_i p_j$, for $i\not= j$.
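A short sketch (illustrative, with the first urn example's parameters) that builds the mean vector and variance-covariance matrix from these formulas. Note that each row of the covariance matrix sums to zero, since the counts are constrained to sum to n:

```python
def multinomial_moments(n, probs):
    """Mean vector and variance-covariance matrix of multinomial counts."""
    mean = [n * p for p in probs]
    cov = [[n * pi * (1 - pi) if i == j else -n * pi * pj
            for j, pj in enumerate(probs)]
           for i, pi in enumerate(probs)]
    return mean, cov

mean, cov = multinomial_moments(5, [2/9, 3/9, 4/9])
print(mean)                       # expected counts n*p_i
print([sum(row) for row in cov])  # each row sums to 0
```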

Example

Suppose we study N independent trials with results falling in one of k possible categories labeled $1,2, \cdots, k$. Let $p_i$ be the probability of a trial resulting in the $i^{th}$ category, where $p_1+p_2+ \cdots +p_k = 1$. Let $N_i$ be the number of trials resulting in the $i^{th}$ category, where $N_1+N_2+ \cdots +N_k = N$.

For instance, suppose we have 9 people arriving at a meeting according to the following information:

P(by Air) = 0.4, P(by Bus) = 0.2, P(by Automobile) = 0.3, P(by Train) = 0.1
• Compute the following probabilities:
P(3 by Air, 3 by Bus, 1 by Auto, 2 by Train) = ?
P(2 by air) = ?
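One way to work out these questions (a hedged sketch, not an official solution): apply the multinomial formula directly for the first probability, and pool the non-Air modes into a single "other" category, which reduces the second question to a binomial B(9, 0.4):

```python
from math import comb, factorial

probs = {"Air": 0.4, "Bus": 0.2, "Auto": 0.3, "Train": 0.1}

# P(3 by Air, 3 by Bus, 1 by Auto, 2 by Train) for N = 9 arrivals
counts = {"Air": 3, "Bus": 3, "Auto": 1, "Train": 2}
coef = factorial(9)
for r in counts.values():
    coef //= factorial(r)
p1 = float(coef)
for mode, r in counts.items():
    p1 *= probs[mode] ** r

# P(2 by Air): pool Bus/Auto/Train into "other", a binomial B(9, 0.4)
p2 = comb(9, 2) * 0.4**2 * 0.6**7

print(round(p1, 6), round(p2, 4))
```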

SOCR Multinomial Examples

Suppose we roll 10 loaded hexagonal (6-face) dice and we are interested in the probability of observing the event A={3 ones, 3 twos, 2 threes, and 2 fours}. Assume the dice are loaded toward the small outcomes according to the following probabilities of the 6 outcomes (one is the most likely and six is the least likely outcome).

x:      1      2      3      4      5      6
P(X=x): 0.286  0.238  0.19   0.143  0.095  0.048
P(A)=? Note that the complete description of the event of interest is:
A={3 ones, 3 twos, 2 threes, 2 fours, and 0 others (5's or 6's!)}

Exact Solution

Of course, we can compute this probability exactly:

• By-hand calculations:
$P(A) = {10! \over 3!\times 3! \times 2! \times 2! \times 0! \times 0!} \times 0.286^3 \times 0.238^3\times 0.19^2 \times 0.143^2 \times 0.095^0 \times 0.048^0 = 0.00586690138260962656816896.$
• Using the SOCR Multinomial Distribution Calculator: Enter the above information in the SOCR Multinomial distribution applet to get the probability density and cumulative distribution values for the given outcome {3,3,2,2,0}, as shown on the image below. Note that since the event A does not contain the die outcomes of 5 or 6, we can reduce the case of 6 outcomes $\{1,2,3,4,5,6\}$ to a case of 5 outcomes $\{1,2,3,4,other\}$ with corresponding probabilities $\{0.286, 0.238, 0.19, 0.143, 0.143\}$, where we pool together the probabilities of the die outcomes 5 and 6, i.e., P(other)=0.095+0.048=0.143.
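The by-hand calculation can also be reproduced in a few lines of Python (an illustration; the reduced 5-category version would give the same result):

```python
from math import factorial

probs = [0.286, 0.238, 0.19, 0.143, 0.095, 0.048]
counts = [3, 3, 2, 2, 0, 0]  # A = {3 ones, 3 twos, 2 threes, 2 fours}

coef = factorial(10)  # n = 10 dice
for r in counts:
    coef //= factorial(r)
p_A = float(coef)
for p, r in zip(probs, counts):
    p_A *= p ** r

print(p_A)  # ≈ 0.0058669
```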

Approximate Solution

We can also obtain a close empirical estimate using the SOCR Dice Experiment.

For instance, running the SOCR Dice Experiment 1,000 times with number of dice n=10, and the loading probabilities listed above, we get an output like the one shown below.

Now we can count how many of these 1,000 trials generated the event A as an outcome. In one such experiment of 1,000 trials, there were 8 outcomes of the type {3 ones, 3 twos, 2 threes, and 2 fours}. Therefore, the proportion of these outcomes out of 1,000 gives a fairly accurate estimate of the exact probability computed above:

$P(A) \approx {8 \over 1,000}=0.008$.

Note that this approximation is close to the exact answer above. By the Law of Large Numbers (LLN), we know that this empirical estimate of the exact multinomial probability will improve significantly as we increase the number of trials in this experiment to 10,000.
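The sampling that the SOCR Dice Experiment performs can be mimicked with a short Monte Carlo sketch (illustrative only; random.choices draws the loaded faces with the given weights):

```python
import random
from collections import Counter

random.seed(1)  # fixed seed, for reproducibility of this illustration
probs = [0.286, 0.238, 0.19, 0.143, 0.095, 0.048]
faces = [1, 2, 3, 4, 5, 6]
target = Counter({1: 3, 2: 3, 3: 2, 4: 2})  # the event A

trials, hits = 100_000, 0
for _ in range(trials):
    roll = random.choices(faces, weights=probs, k=10)  # 10 loaded dice
    if Counter(roll) == target:
        hits += 1

print(hits / trials)  # should approach 0.0058669 by the LLN
```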