Socr - User contributions [en]
http://wiki.stat.ucla.edu/socr/index.php/Special:Contributions/DaveZes
From Socr [en], MediaWiki 1.15.1. Wed, 17 Jul 2019 16:37:47 GMT. AP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes Theorem, or "Bayes Rule," can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring, given that event B occurred, is equal to the probability of event B occurring given that event A occurred, times the probability of event A occurring, divided by the probability that event B occurs."<br />
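As a quick numerical illustration of the event-form rule (the probabilities below are invented for the example, not taken from the text), a short Python sketch:

```python
# Bayes Rule for events: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical numbers: A = "has some condition", B = "test comes back positive"
p_A = 0.01             # P(A): prior probability of the condition (assumed)
p_B_given_A = 0.95     # P(B|A): test sensitivity (assumed)
p_B_given_notA = 0.05  # P(B|not A): false-positive rate (assumed)

# Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes Rule
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 4))
```

With these made-up numbers the posterior works out to about 0.161: even a fairly accurate test gives only modest posterior probability when the prior P(A) is small, which is exactly the interplay of the three terms on the right-hand side of the equality above.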
<br />
Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. Let's call <math>f(\star)</math> the density (or, in some cases, the likelihood) associated with the random quantity <math>\star</math>. If <math>X</math> and <math>Y</math> are random variables, we can say<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data realized from some known density, <math>f(X)</math>, that depends on a parameter, <math>\mu</math>, whose value is not known with certainty.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter, <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu)</math>, and, importantly, a density, <math>f(\mu)</math>, that describes our "prior" belief about <math>\mu</math>.<br />
<br />
Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math> is a fixed number: a "normalizing constant" that ensures the posterior density integrates to one.<br />
<br />
<math>f(\mathbf{x}) = \int_{\mu} f( \mathbf{x} \cap \mu) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu </math></div>
Thu, 23 Jul 2009 20:52:27 GMT, DaveZes
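To make the normalizing constant concrete, here is a numerical sketch in Python. The setup is an assumption chosen for illustration, not taken from the text: a single observation from a Normal likelihood with known unit variance, and a Normal(0, 1) prior on <math>\mu</math>. The code approximates <math>f(\mathbf{x})</math> by the integral above on a grid, then checks that the resulting posterior integrates to one.

```python
import math

# Hypothetical setup: one observation x ~ Normal(mu, 1), prior mu ~ Normal(0, 1).
x = 1.2

def norm_pdf(z, mean, sd):
    """Normal density with the given mean and standard deviation."""
    return math.exp(-((z - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def likelihood(mu):   # f(x | mu)
    return norm_pdf(x, mu, 1.0)

def prior(mu):        # f(mu)
    return norm_pdf(mu, 0.0, 1.0)

# f(x) = integral over mu of f(x | mu) f(mu) dmu, approximated on a grid
dmu = 0.001
grid = [-8 + dmu * i for i in range(16001)]
f_x = sum(likelihood(mu) * prior(mu) for mu in grid) * dmu

# Posterior density f(mu | x) = f(x | mu) f(mu) / f(x); its total mass
posterior_mass = sum(likelihood(mu) * prior(mu) / f_x for mu in grid) * dmu
print(round(posterior_mass, 6))  # very close to 1, by construction of f(x)
```

For this conjugate Normal-Normal case the marginal has a closed form, the Normal(0, <math>\sqrt{2}</math>) density evaluated at <math>x</math> (about 0.197 here), which the grid approximation of <math>f(\mathbf{x})</math> matches; in general no closed form exists and <math>f(\mathbf{x})</math> must be handled numerically, which is exactly the role the integral plays.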
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. So, if <math>X</math> and <math>Y</math> are random variables, and <math>f(\cdot)</math> is a density or likelihood, we can say<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(X)</math>, that takes a parameter, <math>\mu</math>, whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter, <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu) </math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math> is a fixed number -- a "normalizing constant" so to ensure that the posterior density integrates to one.<br />
<br />
<math>f(\mathbf{x}) = \int_{\mu} f( \mathbf{x} \cap \mu) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu </math></div>Thu, 23 Jul 2009 20:29:32 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. So, if <math>X</math> and <math>Y</math> are random variables, and <math>f(\cdot)</math> is a density or likelihood, we can say<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math>, that takes a parameter, <math>\mu</math>, whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter, <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu) </math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math> is a fixed number -- a "normalizing constant" so to ensure that the posterior density integrates to one.<br />
<br />
<math>f(\mathbf{x}) = \int_{\mu} f( \mathbf{x} \cap \mu) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu </math></div>Thu, 23 Jul 2009 20:26:48 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. So, if <math>X</math> and <math>Y</math> are random variables, and <math>f(\cdot)</math> is a density or likelihood, then we can say<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math>, that takes a parameter, <math>\mu</math>, whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter, <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu) </math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math> is a fixed number -- a "normalizing constant" so to ensure that the posterior density integrates to one.<br />
<br />
<math>f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu </math></div>Thu, 23 Jul 2009 20:25:52 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities or likelihood functions over continuous random variables. So, if <math>X</math> and <math>Y</math> are random variables, and <math>f(\cdot)</math> is a density, then we can say<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math>, that takes a parameter, <math>\mu</math>, whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter, <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu) </math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math> is a fixed number -- a "normalizing constant" so to ensure that the posterior density integrates to one.<br />
<br />
<math>f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu </math></div>Thu, 23 Jul 2009 20:24:17 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities over continuous random variables. So, if <math>X</math> and <math>Y</math> are random variables, and <math>f(\cdot)</math> is a density, then we can say<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math>, that takes a parameter, <math>\mu</math>, whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter, <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu) </math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math> is a fixed number -- a "normalizing constant" so to ensure that the posterior density integrates to one.<br />
<br />
<math>f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu </math></div>Thu, 23 Jul 2009 20:21:42 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities over continuous random variables. So, if <math>f(\cdot)</math> is some density, and <math>X</math> and <math>Y</math> are random variables, then we can say<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math>, that takes a parameter, <math>\mu</math>, whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter, <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu) </math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math> is a fixed number -- a "normalizing constant" so to ensure that the posterior density integrates to one.<br />
<br />
<math>f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu </math></div>Thu, 23 Jul 2009 20:19:01 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities over continuous random variables. So, if <math>f(\cdot)</math> is some density, and <math>X</math> and <math>Y</math> are random variables, then we can say<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math> that takes a parameter, <math>\mu</math> whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu) </math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math>, is a fixed number -- a "normalizing constant" so to assure that the posterior density integrates to one.<br />
<br />
<math>f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) f(\mu) d\mu </math></div>Thu, 23 Jul 2009 20:15:28 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities over continuous random variables. So, if <math>f(\cdot)</math> is some density, and <math>X</math> and <math>Y</math> are random variables, then we can say<br />
<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math> that takes a parameter, <math>\mu</math> whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu) </math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
Since <math>\mathbf{x}</math> is fixed, <math>f(\mathbf{x})</math>, is a fixed number -- a "normalizing constant" so to assure that the posterior density integrates to one.<br />
<br />
<math>f(\mathbf{x}) = \int_{\mu} f(\mu \cap \mathbf{x}) d\mu = \int_{\mu} f( \mathbf{x} | \mu ) d\mu </math></div>Thu, 23 Jul 2009 20:14:30 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities over continuous random variables. So, if <math>f(\cdot)</math> is some density, and <math>X</math> and <math>Y</math> are random variables, then we can say<br />
<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math> that takes a parameter, <math>\mu</math> whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu) </math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
CURRENTLY UNDER CONSTRUCTION -- THANKS FOR YOUR PATIENCE !!<br />
<br />
is associated with probability statements that relate conditional and marginal properties of two random events. These statements are often written in the form "the probability of A, given B" and denoted P(A|B) = P(B|A)*P(A)/P(B) where P(B) not equal to 0. <br />
<br />
P(A) is often known as the Prior Probability (or as the Marginal Probability)<br />
<br />
P(A|B) is known as the Posterior Probability (Conditional Probability)<br />
<br />
P(B|A) is the conditional probability of B given A (also known as the likelihood function)<br />
<br />
P(B) is the prior on B and acts as the normalizing constant. In the Bayesian framework, the posterior probability is equal to the prior belief on A times the likelihood function given by P(B|A).</div>Thu, 23 Jul 2009 19:28:54 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities over continuous random variables. So, if <math>f(\cdot)</math> is some density, and <math>X</math> and <math>Y</math> are random variables, then we can say<br />
<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math> that takes a parameter, <math>\mu</math> whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>f(\mathbf{x}|\mu) }</math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
CURRENTLY UNDER CONSTRUCTION -- THANKS FOR YOUR PATIENCE !!<br />
<br />
is associated with probability statements that relate conditional and marginal properties of two random events. These statements are often written in the form "the probability of A, given B" and denoted P(A|B) = P(B|A)*P(A)/P(B) where P(B) not equal to 0. <br />
<br />
P(A) is often known as the Prior Probability (or as the Marginal Probability)<br />
<br />
P(A|B) is known as the Posterior Probability (Conditional Probability)<br />
<br />
P(B|A) is the conditional probability of B given A (also known as the likelihood function)<br />
<br />
P(B) is the prior on B and acts as the normalizing constant. In the Bayesian framework, the posterior probability is equal to the prior belief on A times the likelihood function given by P(B|A).</div>Thu, 23 Jul 2009 19:28:16 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities over continuous random variables. So, if <math>f(\cdot)</math> is some density, and <math>X</math> and <math>Y</math> are random variables, then we can say<br />
<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math> that takes a parameter, <math>\mu</math> whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
In this formulation, we solve for <math>f(\mu|\mathbf{x})</math>, the "posterior" density of the population parameter <math>\mu</math>.<br />
<br />
For this we utilize the likelihood function of our data given our parameter, <math>\frac{f(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math>, and, importantly, a density <math>f(\mu)</math>, that describes our "prior" belief in <math>\mu</math>.<br />
<br />
<br />
is associated with probability statements that relate conditional and marginal properties of two random events. These statements are often written in the form "the probability of A, given B" and denoted P(A|B) = P(B|A)*P(A)/P(B) where P(B) not equal to 0. <br />
<br />
P(A) is often known as the Prior Probability (or as the Marginal Probability)<br />
<br />
P(A|B) is known as the Posterior Probability (Conditional Probability)<br />
<br />
P(B|A) is the conditional probability of B given A (also known as the likelihood function)<br />
<br />
P(B) is the prior on B and acts as the normalizing constant. In the Bayesian framework, the posterior probability is equal to the prior belief on A times the likelihood function given by P(B|A).</div>Thu, 23 Jul 2009 19:21:51 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Prelim
<p>DaveZes: </p>
<hr />
<div>'''Bayes Theorem'''<br />
<br />
Bayes theorem, or "Bayes Rule" can be stated succinctly by the equality<br />
<br />
<math>P(A|B) = \frac{P(B|A) \cdot P(A)} {P(B)}</math><br />
<br />
In words, "the probability of event A occurring given that event B occurred is equal to the probability of event B occurring given that event A occurred times the probability of event A occurring divided by the probability that event B occurs."<br />
<br />
Bayes Theorem can also be written in terms of densities over continuous random variables. So, if <math>f(\cdot)</math> is some density, and <math>X</math> and <math>Y</math> are random variables, then we can say<br />
<br />
<br />
<math>f(Y|X) = \frac{f(X|Y) \cdot f(Y)} { f(X) }</math><br />
<br />
What is commonly called '''Bayesian Statistics''' is a very special application of Bayes Theorem.<br />
<br />
We will examine a number of examples in this Chapter, but to illustrate generally, imagine that '''x''' is a fixed collection of data that has been realized from under some known density, <math>f(\cdot)</math> that takes a parameter, <math>\mu</math> whose value is not certainly known.<br />
<br />
Using Bayes Theorem we may write<br />
<br />
<math>f(\mu|\mathbf{x}) = \frac{lik(\mathbf{x}|\mu) \cdot f(\mu)} { f(\mathbf{x}) }</math><br />
<br />
<br />
is associated with probability statements that relate conditional and marginal properties of two random events. These statements are often written in the form "the probability of A, given B" and denoted P(A|B) = P(B|A)*P(A)/P(B) where P(B) not equal to 0. <br />
<br />
P(A) is often known as the Prior Probability (or as the Marginal Probability)<br />
<br />
P(A|B) is known as the Posterior Probability (Conditional Probability)<br />
<br />
P(B|A) is the conditional probability of B given A (also known as the likelihood function)<br />
<br />
P(B) is the prior on B and acts as the normalizing constant. In the Bayesian framework, the posterior probability is equal to the prior belief on A times the likelihood function given by P(B|A).</div>Thu, 23 Jul 2009 19:07:16 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_PrelimAP Statistics Curriculum 2007 Bayesian Prelim
AP Statistics Curriculum 2007 Bayesian Normal
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Bayesian_Normal
<p>DaveZes: </p>
<hr />
<div>''Normal Example:''<br />
<br />
The speedometer that comes with a certain new sports car is known to be inaccurate, giving an estimated top speed of 185 mph with a standard deviation of 10 mph. Knowing that his car is capable of much higher speeds, the owner took the car to the shop. After a checkup, the speedometer was replaced with a better one, which gave a new estimate of 220 mph with a standard deviation of 4 mph. The errors are assumed to be normally distributed.<br />
<br />
We can say that the owner '''S’s''' prior beliefs about the top speed of his car were represented by:<br />
<br />
<div style="text-align: center;"> µ ~ N(<math>\mu_0</math>, <math>\phi_0</math>) = N(185, <math>10^2</math>) </div><br />
<br />
We could then say that the measurements using the new speedometer result in a measurement of:<br />
<br />
<div style="text-align: center;">''' x ~ N(<math>\mu</math>, <math>\phi</math>) = N(µ, <math>4^2</math>)''' </div><br />
<br />
We note that the observation '''x''' turned out to be 220, and we see that '''S’s''' posterior beliefs about '''µ''' should be represented by:<br />
<br />
<div style="text-align: center;"> '''µ | x ~ N(<math>\mu_1</math>, <math>\phi_1</math>)''' </div><br />
<br />
where (rounded)<br />
<br />
<div style="text-align: center;"> '''<math>\phi_1 = (10^{-2} + 4^{-2})^{-1} \approx 14 \approx 4^2</math>''' </div><br />
<br />
<div style="text-align: center;"> '''<math>\mu_1 = 14(185/10^2 + 220/4^2) \approx 218</math>''' </div><br />
<br />
Therefore, the posterior for the top speed is:<br />
<br />
<div style="text-align: center;"> '''<math>\mu</math> | x ~ N(<math>218,4^2</math>)''' </div><br />
<br />
Meaning 218 +/- 4 mph.<br />
<br />
If the new speedometer measurements were considered by another person '''S’''' who had no knowledge of the readings from the first speedometer, but who still had a vague idea (from knowledge of the stock speedometer) that the top speed was about 200 +/- 30 mph, then:<br />
<br />
<div style="text-align: center;"> '''<math>\mu</math> ~ N(<math>200,30^2</math>)''' </div><br />
<br />
Then '''S’''' would have a posterior variance:<br />
<br />
<div style="text-align: center;"> '''<math>\phi_1 = (30^{-2} + 4^{-2})^{-1} \approx 16 = 4^2</math>''' </div><br />
<br />
'''S’''' would have a posterior mean of:<br />
<br />
<div style="text-align: center;"> '''<math>\mu_1 = 16(200/30^2 + 220/4^2) \approx 224</math>'''</div><br />
<br />
Therefore, the posterior distribution for '''S’''' would be:<br />
<br />
<div style="text-align: center;"> '''<math>\mu</math> | x ~ N<math>(224,4^2)</math>''' </div><br />
<br />
Meaning 224 +/- 4 mph.<br />
This calculation has been carried out assuming that the prior information we have is rather vague, and therefore the posterior is almost entirely determined by the data.<br />
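As a sketch (not part of the original page), both posterior calculations above follow the standard normal-normal conjugate update, which can be checked numerically. Note the page's figures are rounded: it rounds the posterior variance to 14 (resp. 16) before computing the posterior mean, which yields 218 and 224; the unrounded means are about 215 and 220.

```python
def posterior(mu0, phi0, x, phi):
    """Posterior (mean, variance) for one observation x ~ N(mu, phi)
    under the prior mu ~ N(mu0, phi0): precisions (1/variance) add,
    and the posterior mean is the precision-weighted average."""
    phi1 = 1.0 / (1.0 / phi0 + 1.0 / phi)
    mu1 = phi1 * (mu0 / phi0 + x / phi)
    return mu1, phi1

# Owner S: prior N(185, 10^2); new speedometer reads 220 with sd 4.
mu_s, var_s = posterior(185, 10**2, 220, 4**2)
# Observer S': vague prior N(200, 30^2); same reading.
mu_sp, var_sp = posterior(200, 30**2, 220, 4**2)

print(round(var_s), round(mu_s))    # variance ~14; exact mean ~215
print(round(var_sp), round(mu_sp))  # variance ~16; exact mean ~220

# The page's 218 and 224 reproduce if the variance is rounded first:
print(round(14 * (185 / 10**2 + 220 / 4**2)))
print(round(16 * (200 / 30**2 + 220 / 4**2)))
```

Either way, the qualitative conclusion stands: the precise new measurement dominates both priors, and the posterior standard deviation is close to 4 mph.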
<br />
The situation is summarized as follows:<br />
<br />
<br />
'''Prior Distribution, Likelihood from Data, Posterior Distribution:'''<br />
<br />
'''S: prior N<math>(185 , 10^2)</math>, likelihood N<math>(220 , 4^2)</math>, posterior N<math>(218 , 4^2)</math>'''<br />
<br />
'''S’: prior N<math>(200 , 30^2)</math>, likelihood N<math>(220 , 4^2)</math>, posterior N<math>(224 , 4^2)</math>'''</div>Thu, 23 Jul 2009 16:04:35 GMTDaveZeshttp://wiki.stat.ucla.edu/socr/index.php/Talk:AP_Statistics_Curriculum_2007_Bayesian_Normal