AP Statistics Curriculum 2007 Bayesian Other

From SOCR

==[[EBook | Probability and Statistics Ebook]] - Bayesian Inference for the Binomial and Poisson Distributions==

The parameter of interest in this section is the probability P of success in a series of trials, each of which results in either success or failure, where the trials are independent of one another and share the same probability of success. Suppose that there are n trials and that you observe x successes from a [[EBook#Bernoulli_and_Binomial_Experiments |binomial distribution of index n and parameter P]]:

: <math>x \sim B(n,P)</math>

We can show that

: <math>p(x|P) = {n \choose x} P^x (1 - P)^{n - x}</math>, (x = 0, 1, …, n)

: p(x|P) is proportional to <math>P^x (1 - P)^{n - x}</math>.

If the prior density has the form:

: <math>p(P) \propto P^{\alpha - 1} (1-P)^{\beta - 1}</math>, (P between 0 and 1),
then it follows the [http://socr.ucla.edu/htmls/dist/Beta_Distribution.html beta distribution]

: <math>P \sim \beta(\alpha,\beta)</math>.

From this we can obtain the posterior, which has the form:

: <math>p(P|x) \propto P^{\alpha + x - 1} (1-P)^{\beta + n - x - 1}</math>.

The posterior distribution of the [[EBook#Bernoulli_and_Binomial_Experiments |Binomial]] is

: <math>(P|x) \sim \beta(\alpha + x, \beta + n - x)</math>.
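As a brief illustrative sketch (not part of the original SOCR materials), the conjugate update above can be computed directly; the helper name and the prior and data values below are made up:

```python
def beta_binomial_posterior(alpha, beta, n, x):
    """Conjugate update: a Beta(alpha, beta) prior on P combined with
    x successes in n Binomial trials gives Beta(alpha + x, beta + n - x)."""
    return alpha + x, beta + n - x

# Illustrative values: uniform Beta(1, 1) prior, 7 successes in 10 trials
a_post, b_post = beta_binomial_posterior(1, 1, n=10, x=7)
post_mean = a_post / (a_post + b_post)  # mean of Beta(a, b) is a / (a + b)
print(a_post, b_post, post_mean)
```

Because the prior and the likelihood share the same functional form in P, the update amounts to adding the observed successes and failures to the prior hyperparameters.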
===Bayesian Inference for the Poisson Distribution===

A discrete random variable x is said to have a [[EBook#Poisson_Distribution |Poisson distribution]] of mean <math>\lambda</math> if it has the density:

: <math>P(x|\lambda) = {\lambda^x e^{-\lambda}\over x!}</math>, (x = 0, 1, 2, …).

Suppose that you have n observations <math>x=(x_1, x_2, \cdots, x_n)</math> from such a distribution, so that the likelihood is:

: <math>L(\lambda|x) = \lambda^T e^{-n \lambda}</math>, where <math>T = \sum_{i=1}^{n}{x_i}</math>.
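As a numerical check (a sketch added here, with made-up data), the likelihood kernel <math>\lambda^T e^{-n \lambda}</math> equals the product of the individual Poisson densities up to the constant factor <math>1/\prod{x_i!}</math>, which does not depend on <math>\lambda</math>:

```python
import math

def poisson_pmf(x, lam):
    """Poisson density P(x | lambda) = lambda^x e^(-lambda) / x!"""
    return lam ** x * math.exp(-lam) / math.factorial(x)

# Illustrative sample and rate; both are arbitrary choices
data = [2, 0, 3, 1]
lam = 1.5
T, n = sum(data), len(data)

kernel = lam ** T * math.exp(-n * lam)                       # lambda^T e^(-n lambda)
product_of_pmfs = math.prod(poisson_pmf(x, lam) for x in data)
constant = math.prod(math.factorial(x) for x in data)         # prod(x_i!)
print(abs(product_of_pmfs - kernel / constant) < 1e-15)
```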
In Bayesian inference, the conjugate prior for the parameter <math>\lambda</math> of the [http://socr.ucla.edu/htmls/dist/Poisson_Distribution.html Poisson distribution] is the [http://socr.ucla.edu/htmls/dist/Gamma_Distribution.html Gamma distribution].

: <math>\lambda \sim \Gamma(\alpha, \beta)</math>.

The Poisson parameter <math>\lambda</math> is distributed according to the parametrized Gamma density ''g'', with shape parameter <math>\alpha</math> and inverse scale (rate) parameter <math>\beta</math>:

: <math>g(\lambda|\alpha, \beta) = \displaystyle\frac{\beta^\alpha}{\Gamma(\alpha)}\lambda^{\alpha - 1} e^{-\beta \lambda}</math>, for <math>\lambda > 0</math>.
Then, given the same sample of ''n'' measured values <math>x_i</math> from our likelihood and a prior of <math>\Gamma(\alpha, \beta)</math>, the posterior distribution becomes:

: <math>\lambda \sim \Gamma\left(\alpha + \displaystyle\sum_{i=1}^{n} x_i, \beta + n\right)</math>.

The posterior mean <math>E[\lambda]</math> approaches the [[EBook#Method_of_Moments_and_Maximum_Likelihood_Estimation | maximum likelihood estimate]] in the limit as <math>\alpha</math> and <math>\beta</math> approach 0.
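Both the conjugate update and this limiting behavior can be checked numerically. The sketch below is illustrative only; the function name, prior hyperparameters, and counts are made up:

```python
def gamma_poisson_posterior(alpha, beta, observations):
    """Conjugate update: a Gamma(alpha, beta) prior on lambda combined with
    n Poisson observations gives Gamma(alpha + sum(x_i), beta + n)."""
    return alpha + sum(observations), beta + len(observations)

# Illustrative prior hyperparameters and sample of Poisson counts
data = [3, 1, 4, 1, 5]
a_post, b_post = gamma_poisson_posterior(2.0, 1.0, data)
post_mean = a_post / b_post   # mean of Gamma(a, b) is a / b
mle = sum(data) / len(data)   # maximum likelihood estimate of lambda

# As alpha and beta approach 0, the posterior mean approaches the MLE:
a0, b0 = gamma_poisson_posterior(1e-9, 1e-9, data)
print(post_mean, mle, a0 / b0)
```

With a nearly flat <math>\Gamma(\alpha, \beta)</math> prior, the posterior mean <math>(\alpha + \sum x_i)/(\beta + n)</math> reduces to the sample mean, matching the limiting statement above.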
==See also==
* [[EBook#Chapter_III:_Probability |Probability Chapter]]
==References==
<hr>
* SOCR Home page: http://www.socr.ucla.edu
{{translate|pageName=http://wiki.stat.ucla.edu/socr/index.php?title=AP_Statistics_Curriculum_2007_Bayesian_Other}}

Revision as of 23:23, 22 October 2009
