AP Statistics Curriculum 2007 MultivariateNormal

EBook - Multivariate Normal Distribution

The multivariate normal distribution, or multivariate Gaussian distribution, is a generalization of the univariate (one-dimensional) normal distribution to higher dimensions. A random vector is said to be multivariate normally distributed if every linear combination of its components has a univariate normal distribution. The multivariate normal distribution may be used to study different associations (e.g., correlations) between real-valued random variables.

Definition

In k dimensions, a random vector X = (X_1, \cdots, X_k) is multivariate normally distributed if it satisfies any one of the following equivalent conditions (Gut, 2009):

  • Every linear combination of its components, Y = a_1X_1 + \cdots + a_kX_k, is normally distributed. In other words, for any constant vector a \in R^k, the linear combination (which is a univariate random variable) Y = a^TX = \sum_{i=1}^{k}{a_iX_i} has a univariate normal distribution.
  • There exists a random ℓ-vector Z, whose components are independent normal random variables, a k-vector μ, and a k×ℓ matrix A, such that X = AZ + μ. Here ℓ is the rank of the variance-covariance matrix (see the sketch after this definition).
  • There is a k-vector μ and a symmetric, nonnegative-definite k×k matrix Σ, such that the characteristic function of X is

    \varphi_X(u) = \exp\Big( iu^T\mu - \tfrac{1}{2} u^T\Sigma u \Big).
  • When the support of X is the entire space R^k, there exists a k-vector μ and a symmetric positive-definite k×k variance-covariance matrix Σ, such that the probability density function of X can be expressed as

    f_X(x) = \frac{1}{ (2\pi)^{k/2}|\Sigma|^{1/2} }
             \exp\!\Big( {-\tfrac{1}{2}}(x-\mu)'\Sigma^{-1}(x-\mu) \Big),

where |Σ| is the determinant of Σ, and where (2π)^{k/2}|Σ|^{1/2} = |2πΣ|^{1/2}. This formulation reduces to the density of the univariate normal distribution if Σ is a scalar (i.e., a 1×1 matrix).

If the variance-covariance matrix is singular, the corresponding distribution has no density. An example of this case is the distribution of the vector of residuals in ordinary least squares regression. Note also that the X_i are in general not independent; they can be seen as the result of applying the matrix A to a collection of independent Gaussian variables Z.
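
The second equivalent condition also suggests a direct way to simulate a multivariate normal vector. The following is a minimal numerical sketch, assuming NumPy; the parameter values are arbitrary and chosen only for illustration. It builds X = AZ + μ with A a Cholesky factor of Σ (so that AA^T = Σ), and then checks empirically that a linear combination a^TX has mean a^Tμ and variance a^TΣa, as the first condition requires.

    # Sketch of the construction X = A Z + mu (assumes NumPy).
    import numpy as np

    rng = np.random.default_rng(0)

    mu = np.array([1.0, -2.0, 0.5])              # k-vector of means (illustrative values)
    Sigma = np.array([[2.0, 0.6, 0.3],
                      [0.6, 1.0, 0.2],
                      [0.3, 0.2, 0.5]])          # symmetric positive-definite covariance

    A = np.linalg.cholesky(Sigma)                # one choice of A with A A^T = Sigma
    Z = rng.standard_normal((3, 100_000))        # independent standard normal components
    X = (A @ Z).T + mu                           # each row is one draw of the random vector

    print(X.mean(axis=0))                        # ~ mu
    print(np.cov(X, rowvar=False))               # ~ Sigma

    # First condition: a^T X is univariate normal with mean a^T mu and variance a^T Sigma a.
    a = np.array([0.5, -1.0, 2.0])
    Y = X @ a
    print(Y.mean(), a @ mu)                      # should be close
    print(Y.var(), a @ Sigma @ a)                # should be close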

Bivariate (2D) case

In two dimensions, for the nonsingular bivariate normal distribution (k = rank(Σ) = 2), the probability density function of the (bivariate) vector (X, Y) is


    f(x,y) =
      \frac{1}{2 \pi  \sigma_x \sigma_y \sqrt{1-\rho^2}}
      \exp\left(
        -\frac{1}{2(1-\rho^2)}\left[
          \frac{(x-\mu_x)^2}{\sigma_x^2} +
          \frac{(y-\mu_y)^2}{\sigma_y^2} -
          \frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x \sigma_y}
        \right]
      \right),

where ρ is the correlation between X and Y. In this case,


    \mu = \begin{pmatrix} \mu_x \\ \mu_y \end{pmatrix}, \quad
    \Sigma = \begin{pmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\
                             \rho \sigma_x \sigma_y  & \sigma_y^2 \end{pmatrix}.

In the bivariate case, the first equivalent condition for multivariate normality is less restrictive: it is sufficient to verify that countably infinitely many distinct linear combinations of X and Y are normal in order to conclude that the vector [X, Y]^T is bivariate normal.
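
As a quick check on the formula above, the sketch below (assuming NumPy; the function names bivariate_pdf and mvn_pdf are illustrative) evaluates the bivariate density both in the (σ_x, σ_y, ρ) parameterization and through the general k-dimensional density written in terms of μ and Σ; the two values should agree.

    # Sketch comparing the explicit bivariate formula with the general density (assumes NumPy).
    import numpy as np

    def bivariate_pdf(x, y, mu_x, mu_y, sigma_x, sigma_y, rho):
        """Bivariate normal density in the (sigma_x, sigma_y, rho) parameterization."""
        q = ((x - mu_x)**2 / sigma_x**2
             + (y - mu_y)**2 / sigma_y**2
             - 2 * rho * (x - mu_x) * (y - mu_y) / (sigma_x * sigma_y))
        norm = 2 * np.pi * sigma_x * sigma_y * np.sqrt(1 - rho**2)
        return np.exp(-q / (2 * (1 - rho**2))) / norm

    def mvn_pdf(x, mu, Sigma):
        """General multivariate normal density f_X(x)."""
        k = len(mu)
        diff = x - mu
        quad = diff @ np.linalg.solve(Sigma, diff)
        return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi)**k * np.linalg.det(Sigma))

    mu_x, mu_y, sigma_x, sigma_y, rho = 1.0, -0.5, 2.0, 1.5, 0.3    # illustrative values
    mu = np.array([mu_x, mu_y])
    Sigma = np.array([[sigma_x**2,              rho * sigma_x * sigma_y],
                      [rho * sigma_x * sigma_y, sigma_y**2]])

    print(bivariate_pdf(0.2, 0.7, mu_x, mu_y, sigma_x, sigma_y, rho))
    print(mvn_pdf(np.array([0.2, 0.7]), mu, Sigma))                 # should match the previous line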

Properties

Normally distributed and independent

If X and Y are normally distributed and independent, then they are "jointly normally distributed", i.e., the pair (X, Y) has a bivariate normal distribution. However, a pair of jointly normally distributed variables need not be independent; they may be correlated.

Two normally distributed random variables need not be jointly bivariate normal

The fact that two random variables X and Y both have a normal distribution does not imply that the pair (X, Y) has a joint normal distribution. A simple example is one in which X has a normal distribution with expected value 0 and variance 1, and Y = X if |X| > c and Y = −X if |X| < c, where c is about 1.54 (a value that makes X and Y uncorrelated).
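
This counterexample can be checked by simulation. Below is a small illustrative sketch assuming NumPy: Y is constructed from X as described, its sample mean and variance are consistent with a standard normal, yet the linear combination X + Y takes the value 0 with positive probability, so (X, Y) cannot be bivariate normal.

    # Simulation of the counterexample (assumes NumPy).
    import numpy as np

    rng = np.random.default_rng(1)
    c = 1.54                                 # the value quoted above; X and Y come out (nearly) uncorrelated

    X = rng.standard_normal(200_000)
    Y = np.where(np.abs(X) > c, X, -X)       # Y is also standard normal, by symmetry

    print(Y.mean(), Y.var())                 # ~ 0 and ~ 1, consistent with N(0, 1)
    print(np.mean(X + Y == 0))               # a large fraction: X + Y has an atom at 0, so it is not normal
    print(np.corrcoef(X, Y)[0, 1])           # ~ 0, yet X and Y are clearly dependent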


Problems


References



