AP Statistics Curriculum 2007 Bayesian Prelim (SOCR wiki, revision of 2009-07-23 by DaveZes)

'''Bayes' Theorem'''

Bayes' Theorem, or "Bayes' Rule", can be stated succinctly by the equality

<math>P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)}</math>

In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred, times the probability of event A occurring, divided by the probability that event B occurs."

Bayes' Theorem can also be written in terms of densities or likelihood functions over continuous random variables. Let <math>f(\star)</math> denote the density (or, in some cases, the likelihood) defined by the random process <math>\star</math>.
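The event form of the rule can be checked numerically. A minimal sketch, using hypothetical screening-test probabilities (the numbers are illustrative, not from the text):

```python
# Bayes' Rule for events: P(A|B) = P(B|A) * P(A) / P(B)
# A = "has condition", B = "test is positive" (hypothetical values)
p_A = 0.01             # prior P(A)
p_B_given_A = 0.95     # sensitivity, P(B|A)
p_B_given_notA = 0.05  # false-positive rate, P(B|not A)

# Law of total probability supplies the denominator P(B)
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' Rule
p_A_given_B = p_B_given_A * p_A / p_B

print(round(p_A_given_B, 3))  # → 0.161
```

Even with a 95% sensitive test, the small prior keeps the posterior probability modest, which is exactly the prior-times-likelihood trade-off the density version below formalizes for parameters.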
If <math>X</math> and <math>Y</math> are random variables, we can say

<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)}{f(X)}</math>

What is commonly called '''Bayesian Statistics''' is a special application of Bayes' Theorem.

We will examine a number of examples in this chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data realized from some known density <math>f(\cdot)</math> that takes a parameter, <math>\mu</math>, whose value is not known with certainty.

Using Bayes' Theorem we may write

<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)}{f(\mathbf{x})}</math>

In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter <math>\mu</math>.

For this we use the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu)</math>, and, importantly, a density <math>f(\mu)</math> that describes our "prior" belief in <math>\mu</math>.

Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math> is a fixed number: a "normalizing constant" that ensures the posterior density integrates to one. It is obtained by integrating the joint density of <math>\mathbf{x}</math> and <math>\mu</math> over <math>\mu</math>:

<math>f(\mathbf{x}) = \int_{\mu} f(\mathbf{x}, \mu) \, d\mu = \int_{\mu} f(\mathbf{x}|\mu) f(\mu) \, d\mu</math>
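The posterior computation above can be sketched numerically: discretize <math>\mu</math> on a grid, form likelihood times prior, approximate the normalizing constant <math>f(\mathbf{x})</math> by numerical integration, and divide. The normal data model, normal prior, and data values below are assumed purely for illustration:

```python
import math

def norm_pdf(z, mean, sd):
    """Normal density, used here for both the prior and the likelihood terms."""
    return math.exp(-0.5 * ((z - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Hypothetical fixed data x, modeled as x_i ~ Normal(mu, 1); prior mu ~ Normal(0, 1)
x = [0.8, 1.3, 0.2, 1.9]

# Grid over the parameter mu
n_grid = 2001
mus = [-4 + 10 * i / (n_grid - 1) for i in range(n_grid)]
dmu = mus[1] - mus[0]

prior = [norm_pdf(m, 0.0, 1.0) for m in mus]                       # f(mu)
lik = [math.prod(norm_pdf(xi, m, 1.0) for xi in x) for m in mus]   # f(x | mu)

# f(x) = integral of f(x | mu) f(mu) dmu  -- the normalizing constant
f_x = sum(l * p for l, p in zip(lik, prior)) * dmu

# Posterior: likelihood * prior / normalizing constant
posterior = [l * p / f_x for l, p in zip(lik, prior)]

print(round(sum(posterior) * dmu, 6))  # → 1.0 (a proper density)
post_mean = sum(m * q for m, q in zip(mus, posterior)) * dmu
print(round(post_mean, 3))             # ≈ 0.84, matching the conjugate-normal answer
```

For this conjugate setup the posterior is known in closed form (normal with mean <math>\sum x_i / (n+1) = 0.84</math>), so the grid approximation can be checked against it; the same grid recipe works when no closed form exists.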
words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. So, if &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, and &lt;math&gt;f(\cdot)&lt;/math&gt; is a density or likelihood, we can say<br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(X)&lt;/math&gt;, that takes a parameter, &lt;math&gt;\mu&lt;/math&gt;, whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter, &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our parameter, &lt;math&gt;f(\mathbf{x}|\mu) &lt;/math&gt;, and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> Since &lt;math&gt;\mathbf{x}&lt;/math&gt; is fixed, &lt;math&gt;f(\mathbf{x})&lt;/math&gt; is a fixed number -- a &quot;normalizing constant&quot; so to ensure that the posterior density integrates to one.<br /> <br /> &lt;math&gt;f(\mathbf{x}) = \int_{\mu} f( \mathbf{x} \cap \mu) d\mu = \int_{\mu} f( 
\mathbf{x} | \mu ) f(\mu) d\mu &lt;/math&gt;</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim AP Statistics Curriculum 2007 Bayesian Prelim 2009-07-23T20:26:48Z <p>DaveZes:&#32;</p> <hr /> <div>'''Bayes Theorem'''<br /> <br /> Bayes theorem, or &quot;Bayes Rule&quot; can be stated succinctly by the equality<br /> <br /> &lt;math&gt;P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}&lt;/math&gt;<br /> <br /> In words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. So, if &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, and &lt;math&gt;f(\cdot)&lt;/math&gt; is a density or likelihood, we can say<br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(\cdot)&lt;/math&gt;, that takes a parameter, &lt;math&gt;\mu&lt;/math&gt;, whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter, &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our parameter, &lt;math&gt;f(\mathbf{x}|\mu) &lt;/math&gt;, 
and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> Since &lt;math&gt;\mathbf{x}&lt;/math&gt; is fixed, &lt;math&gt;f(\mathbf{x})&lt;/math&gt; is a fixed number -- a &quot;normalizing constant&quot; so to ensure that the posterior density integrates to one.<br /> <br /> &lt;math&gt;f(\mathbf{x}) = \int_{\mu} f( \mathbf{x} \cap \mu) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu &lt;/math&gt;</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim AP Statistics Curriculum 2007 Bayesian Prelim 2009-07-23T20:25:52Z <p>DaveZes:&#32;</p> <hr /> <div>'''Bayes Theorem'''<br /> <br /> Bayes theorem, or &quot;Bayes Rule&quot; can be stated succinctly by the equality<br /> <br /> &lt;math&gt;P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}&lt;/math&gt;<br /> <br /> In words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. 
So, if &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, and &lt;math&gt;f(\cdot)&lt;/math&gt; is a density or likelihood, then we can say<br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(\cdot)&lt;/math&gt;, that takes a parameter, &lt;math&gt;\mu&lt;/math&gt;, whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter, &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our parameter, &lt;math&gt;f(\mathbf{x}|\mu) &lt;/math&gt;, and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> Since &lt;math&gt;\mathbf{x}&lt;/math&gt; is fixed, &lt;math&gt;f(\mathbf{x})&lt;/math&gt; is a fixed number -- a &quot;normalizing constant&quot; so to ensure that the posterior density integrates to one.<br /> <br /> &lt;math&gt;f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu &lt;/math&gt;</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim AP Statistics Curriculum 2007 Bayesian Prelim 2009-07-23T20:24:17Z <p>DaveZes:&#32;</p> <hr /> <div>'''Bayes Theorem'''<br /> <br /> Bayes theorem, or &quot;Bayes Rule&quot; can be stated succinctly by the equality<br /> <br /> 
&lt;math&gt;P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}&lt;/math&gt;<br /> <br /> In words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. So, if &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, and &lt;math&gt;f(\cdot)&lt;/math&gt; is a density, then we can say<br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(\cdot)&lt;/math&gt;, that takes a parameter, &lt;math&gt;\mu&lt;/math&gt;, whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter, &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our parameter, &lt;math&gt;f(\mathbf{x}|\mu) &lt;/math&gt;, and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> Since &lt;math&gt;\mathbf{x}&lt;/math&gt; is fixed, &lt;math&gt;f(\mathbf{x})&lt;/math&gt; is a fixed number -- a &quot;normalizing constant&quot; so to ensure that the posterior density integrates to one.<br /> <br /> 
&lt;math&gt;f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu &lt;/math&gt;</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim AP Statistics Curriculum 2007 Bayesian Prelim 2009-07-23T20:21:42Z <p>DaveZes:&#32;</p> <hr /> <div>'''Bayes Theorem'''<br /> <br /> Bayes theorem, or &quot;Bayes Rule&quot; can be stated succinctly by the equality<br /> <br /> &lt;math&gt;P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}&lt;/math&gt;<br /> <br /> In words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities over continuous random variables. So, if &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, and &lt;math&gt;f(\cdot)&lt;/math&gt; is a density, then we can say<br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(\cdot)&lt;/math&gt;, that takes a parameter, &lt;math&gt;\mu&lt;/math&gt;, whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter, &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our 
parameter, &lt;math&gt;f(\mathbf{x}|\mu) &lt;/math&gt;, and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> Since &lt;math&gt;\mathbf{x}&lt;/math&gt; is fixed, &lt;math&gt;f(\mathbf{x})&lt;/math&gt; is a fixed number -- a &quot;normalizing constant&quot; so to ensure that the posterior density integrates to one.<br /> <br /> &lt;math&gt;f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu &lt;/math&gt;</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim AP Statistics Curriculum 2007 Bayesian Prelim 2009-07-23T20:19:01Z <p>DaveZes:&#32;</p> <hr /> <div>'''Bayes Theorem'''<br /> <br /> Bayes theorem, or &quot;Bayes Rule&quot; can be stated succinctly by the equality<br /> <br /> &lt;math&gt;P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}&lt;/math&gt;<br /> <br /> In words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities over continuous random variables. 
So, if &lt;math&gt;f(\cdot)&lt;/math&gt; is some density, and &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, then we can say<br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(\cdot)&lt;/math&gt;, that takes a parameter, &lt;math&gt;\mu&lt;/math&gt;, whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter, &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our parameter, &lt;math&gt;f(\mathbf{x}|\mu) &lt;/math&gt;, and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> Since &lt;math&gt;\mathbf{x}&lt;/math&gt; is fixed, &lt;math&gt;f(\mathbf{x})&lt;/math&gt; is a fixed number -- a &quot;normalizing constant&quot; so to ensure that the posterior density integrates to one.<br /> <br /> &lt;math&gt;f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu &lt;/math&gt;</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim AP Statistics Curriculum 2007 Bayesian Prelim 2009-07-23T20:15:28Z <p>DaveZes:&#32;</p> <hr /> <div>'''Bayes Theorem'''<br /> <br /> Bayes theorem, or &quot;Bayes Rule&quot; can be stated succinctly by the equality<br /> <br /> 
&lt;math&gt;P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}&lt;/math&gt;<br /> <br /> In words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities over continuous random variables. So, if &lt;math&gt;f(\cdot)&lt;/math&gt; is some density, and &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, then we can say<br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(\cdot)&lt;/math&gt; that takes a parameter, &lt;math&gt;\mu&lt;/math&gt; whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our parameter, &lt;math&gt;f(\mathbf{x}|\mu) &lt;/math&gt;, and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> Since &lt;math&gt;\mathbf{x}&lt;/math&gt; is fixed, &lt;math&gt;f(\mathbf{x})&lt;/math&gt;, is a fixed number -- a &quot;normalizing constant&quot; so to assure that the posterior density integrates to one.<br /> <br /> &lt;math&gt;f(\mathbf{x}) = \int_{\mu} 
f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu &lt;/math&gt;</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim AP Statistics Curriculum 2007 Bayesian Prelim 2009-07-23T20:14:30Z <p>DaveZes:&#32;</p> <hr /> <div>'''Bayes Theorem'''<br /> <br /> Bayes theorem, or &quot;Bayes Rule&quot; can be stated succinctly by the equality<br /> <br /> &lt;math&gt;P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}&lt;/math&gt;<br /> <br /> In words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities over continuous random variables. So, if &lt;math&gt;f(\cdot)&lt;/math&gt; is some density, and &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, then we can say<br /> <br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(\cdot)&lt;/math&gt; that takes a parameter, &lt;math&gt;\mu&lt;/math&gt; whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our parameter, &lt;math&gt;f(\mathbf{x}|\mu) 
&lt;/math&gt;, and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> Since &lt;math&gt;\mathbf{x}&lt;/math&gt; is fixed, &lt;math&gt;f(\mathbf{x})&lt;/math&gt;, is a fixed number -- a &quot;normalizing constant&quot; so to assure that the posterior density integrates to one.<br /> <br /> &lt;math&gt;f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) d\mu &lt;/math&gt;</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim AP Statistics Curriculum 2007 Bayesian Prelim 2009-07-23T19:28:54Z <p>DaveZes:&#32;</p> <hr /> <div>'''Bayes Theorem'''<br /> <br /> Bayes theorem, or &quot;Bayes Rule&quot; can be stated succinctly by the equality<br /> <br /> &lt;math&gt;P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}&lt;/math&gt;<br /> <br /> In words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities over continuous random variables. 
So, if &lt;math&gt;f(\cdot)&lt;/math&gt; is some density, and &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, then we can say<br /> <br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(\cdot)&lt;/math&gt; that takes a parameter, &lt;math&gt;\mu&lt;/math&gt; whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our parameter, &lt;math&gt;f(\mathbf{x}|\mu) &lt;/math&gt;, and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> CURRENTLY UNDER CONSTRUCTION -- THANKS FOR YOUR PATIENCE !!<br /> <br /> is associated with probability statements that relate conditional and marginal properties of two random events. These statements are often written in the form &quot;the probability of A, given B&quot; and denoted P(A|B) = P(B|A)*P(A)/P(B) where P(B) not equal to 0. 
<br /> <br /> P(A) is often known as the Prior Probability (or as the Marginal Probability)<br /> <br /> P(A|B) is known as the Posterior Probability (Conditional Probability)<br /> <br /> P(B|A) is the conditional probability of B given A (also known as the likelihood function)<br /> <br /> P(B) is the prior on B and acts as the normalizing constant. In the Bayesian framework, the posterior probability is equal to the prior belief on A times the likelihood function given by P(B|A).</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim AP Statistics Curriculum 2007 Bayesian Prelim 2009-07-23T19:28:16Z <p>DaveZes:&#32;</p> <hr /> <div>'''Bayes Theorem'''<br /> <br /> Bayes theorem, or &quot;Bayes Rule&quot; can be stated succinctly by the equality<br /> <br /> &lt;math&gt;P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}&lt;/math&gt;<br /> <br /> In words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities over continuous random variables. 
So, if &lt;math&gt;f(\cdot)&lt;/math&gt; is some density, and &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, then we can say<br /> <br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(\cdot)&lt;/math&gt; that takes a parameter, &lt;math&gt;\mu&lt;/math&gt; whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our parameter, &lt;math&gt;f(\mathbf{x}|\mu) }&lt;/math&gt;, and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> CURRENTLY UNDER CONSTRUCTION -- THANKS FOR YOUR PATIENCE !!<br /> <br /> is associated with probability statements that relate conditional and marginal properties of two random events. These statements are often written in the form &quot;the probability of A, given B&quot; and denoted P(A|B) = P(B|A)*P(A)/P(B) where P(B) not equal to 0. 
<br /> <br /> P(A) is often known as the Prior Probability (or as the Marginal Probability)<br /> <br /> P(A|B) is known as the Posterior Probability (Conditional Probability)<br /> <br /> P(B|A) is the conditional probability of B given A (also known as the likelihood function)<br /> <br /> P(B) is the prior on B and acts as the normalizing constant. In the Bayesian framework, the posterior probability is equal to the prior belief on A times the likelihood function given by P(B|A).</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim AP Statistics Curriculum 2007 Bayesian Prelim 2009-07-23T19:21:51Z <p>DaveZes:&#32;</p> <hr /> <div>'''Bayes Theorem'''<br /> <br /> Bayes theorem, or &quot;Bayes Rule&quot; can be stated succinctly by the equality<br /> <br /> &lt;math&gt;P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}&lt;/math&gt;<br /> <br /> In words, &quot;the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs.&quot;<br /> <br /> Bayes Theorem can also be written in terms of densities over continuous random variables. 
So, if &lt;math&gt;f(\cdot)&lt;/math&gt; is some density, and &lt;math&gt;X&lt;/math&gt; and &lt;math&gt;Y&lt;/math&gt; are random variables, then we can say<br /> <br /> <br /> &lt;math&gt;f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }&lt;/math&gt;<br /> <br /> What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br /> <br /> We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, &lt;math&gt;f(\cdot)&lt;/math&gt; that takes a parameter, &lt;math&gt;\mu&lt;/math&gt; whose value is not certainly known.<br /> <br /> Using Bayes Theorem we may write<br /> <br /> &lt;math&gt;f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;<br /> <br /> In this formulation, we solve for &lt;math&gt;f(\mu|\mathbf{x})&lt;/math&gt;, the &quot;posterior&quot; density of the population parameter &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> For this we utilize the likelihood function of our data given our parameter, &lt;math&gt;\frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }&lt;/math&gt;, and, importantly, a density &lt;math&gt;f(\mu)&lt;/math&gt;, that describes our &quot;prior&quot; belief in &lt;math&gt;\mu&lt;/math&gt;.<br /> <br /> <br /> is associated with probability statements that relate conditional and marginal properties of two random events. These statements are often written in the form &quot;the probability of A, given B&quot; and denoted P(A|B) = P(B|A)*P(A)/P(B) where P(B) not equal to 0. <br /> <br /> P(A) is often known as the Prior Probability (or as the Marginal Probability)<br /> <br /> P(A|B) is known as the Posterior Probability (Conditional Probability)<br /> <br /> P(B|A) is the conditional probability of B given A (also known as the likelihood function)<br /> <br /> P(B) is the prior on B and acts as the normalizing constant. 
</div> DaveZes http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Normal AP Statistics Curriculum 2007 Bayesian Normal 2009-07-23T16:04:35Z <p>DaveZes:&#32;</p> <hr /> <div>''Normal Example:''<br /> <br /> It is known that the speedometer that comes with a certain new sports car is not very accurate, which results in an estimate of the top speed of the car of 185 mph, with a standard deviation of 10 mph. Knowing that his car is capable of much higher speeds, the owner took the car to the shop. After a checkup, the speedometer was replaced with a better one, which gave a new estimate of 220 mph with a standard deviation of 4 mph. The errors are assumed to be normally distributed.<br /> <br /> We can say that the owner '''S’s''' prior beliefs about the top speed of his car were represented by:<br /> <br /> &lt;div style=&quot;text-align: center;&quot;&gt; µ ~ N(&lt;math&gt;\mu_0&lt;/math&gt;, &lt;math&gt;\phi_0&lt;/math&gt;) = µ ~ N(185,&lt;math&gt;10^2&lt;/math&gt;) &lt;/div&gt;<br /> <br /> We could then say that the measurements using the new speedometer result in a measurement of:<br /> <br /> &lt;div style=&quot;text-align: center;&quot;&gt;''' x ~ N(&lt;math&gt;\mu&lt;/math&gt;, &lt;math&gt;\phi&lt;/math&gt;) = x ~ N(µ,&lt;math&gt; 4^2&lt;/math&gt;)''' &lt;/div&gt;<br /> <br /> We note that the observation '''x''' turned out to be 220, and we see that '''S’s''' posterior beliefs about '''µ''' should be represented by:<br /> <br /> &lt;div style=&quot;text-align: center;&quot;&gt; '''µ | x ~ N(&lt;math&gt;\mu_1&lt;/math&gt;, &lt;math&gt;\phi_1&lt;/math&gt;)''' &lt;/div&gt;<br /> <br /> where (rounded)<br /> <br /> &lt;div style=&quot;text-align: center;&quot;&gt; '''&lt;math&gt;\phi_1 = (10^{-2} + 4^{-2})^{-1} \approx 14 \approx 4^2&lt;/math&gt;''' &lt;/div&gt;<br
/> <br /> &lt;div style=&quot;text-align: center;&quot;&gt; '''&lt;math&gt;\mu_1 = 14(185/10^2 + 220/4^2) \approx 218&lt;/math&gt;''' &lt;/div&gt;<br /> <br /> Therefore, the posterior for the top speed is:<br /> <br /> &lt;div style=&quot;text-align: center;&quot;&gt; '''&lt;math&gt;\mu&lt;/math&gt; | x ~ N(&lt;math&gt;218,4^2&lt;/math&gt;)''' &lt;/div&gt;<br /> <br /> Meaning 218 +/- 4 mph.<br /> <br /> If the new speedometer measurements were considered by another person '''S’''' who had no knowledge of the readings from the first speedometer, but who still had a vague idea (from knowledge of the stock speedometer) that the top speed was about 200 +/- 30 mph, then:<br /> <br /> &lt;div style=&quot;text-align: center;&quot;&gt; '''&lt;math&gt;\mu&lt;/math&gt; ~ N(&lt;math&gt;200,30^2&lt;/math&gt;)''' &lt;/div&gt;<br /> <br /> Then '''S’''' would have a posterior variance:<br /> <br /> &lt;div style=&quot;text-align: center;&quot;&gt; '''&lt;math&gt;\phi_1 = (30^{-2} + 4^{-2})^{-1} \approx 16 = 4^2&lt;/math&gt;''' &lt;/div&gt;<br /> <br /> and a posterior mean of:<br /> <br /> &lt;div style=&quot;text-align: center;&quot;&gt; '''&lt;math&gt;\mu_1 = 16(200/30^2 + 220/4^2) \approx 224&lt;/math&gt;'''&lt;/div&gt;<br /> <br /> Therefore, the posterior distribution for '''S’''' would be:<br /> <br /> &lt;div style=&quot;text-align: center;&quot;&gt; '''&lt;math&gt;\mu&lt;/math&gt; | x ~ N&lt;math&gt;(224,4^2)&lt;/math&gt;''' &lt;/div&gt;<br /> <br /> Meaning 224 +/- 4 mph. This calculation has been carried out assuming that the prior information of '''S’''' is rather vague, so the posterior is almost entirely determined by the data.<br /> <br /> The situation is summarized as follows:<br /> <br /> '''Prior Distribution / Likelihood from Data / Posterior Distribution'''<br /> <br /> '''S''': N&lt;math&gt;(185 , 10^2)&lt;/math&gt; / N&lt;math&gt;(220 , 4^2)&lt;/math&gt; / N&lt;math&gt;(218 , 4^2)&lt;/math&gt;<br /> <br /> '''S’''': N&lt;math&gt;(200 , 30^2)&lt;/math&gt; / N&lt;math&gt;(220 , 4^2)&lt;/math&gt; / N&lt;math&gt;(224 , 4^2)&lt;/math&gt;</div> DaveZes
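Both updates in the example are instances of the standard precision-weighted Normal/Normal conjugate formulas, &lt;math&gt;\phi_1 = (\phi_0^{-1} + \phi^{-1})^{-1}&lt;/math&gt; and &lt;math&gt;\mu_1 = \phi_1(\mu_0/\phi_0 + x/\phi)&lt;/math&gt;. A small sketch (the function name is ours, not from the page) reproduces the page's figures; note that, like the page, it rounds the posterior variance before computing the mean:

```python
# Precision-weighted update for a Normal prior N(mu0, phi0) and one Normal
# observation x with known variance phi (phi0 and phi are variances, as above).
# posterior_normal is an illustrative name, not from the original page.
def posterior_normal(mu0, phi0, x, phi):
    phi1 = round(1.0 / (1.0 / phi0 + 1.0 / phi))  # posterior variance, rounded first as the page does
    mu1 = round(phi1 * (mu0 / phi0 + x / phi))    # posterior mean, using the rounded variance
    return mu1, phi1

print(posterior_normal(185, 10**2, 220, 4**2))    # S:  (218, 14)
print(posterior_normal(200, 30**2, 220, 4**2))    # S': (224, 16)
```

Because the new speedometer's variance (16) is so much smaller than either prior variance (100 or 900), the data dominate both posteriors, which is why S and S’ end up within a few mph of each other despite very different priors.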