Multivariate random variable

In mathematics, probability, and statistics, a multivariate random variable or random vector is a list of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because there may be correlations among them — often they represent different properties of an individual statistical unit (e.g. a particular person, event, etc.). Normally each element of a random vector is a real number.

Random vectors are often used as the underlying implementation of various types of aggregate random variables, e.g. a random matrix, random tree, random sequence, random process, etc.

More formally, a multivariate random variable is a column vector \mathbf{X}=(X_1,...,X_n)^T (or its transpose, which is a row vector) whose components are scalar-valued random variables on the same probability space (\Omega, \mathcal{F}, P), where \Omega is the sample space, \mathcal{F} is the sigma-algebra (the collection of all events), and P is the probability measure (a function returning each event's probability).
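
For concreteness, the following minimal sketch in Python (using NumPy; the particular mean vector and covariance matrix are arbitrary choices for illustration) draws one realization of a 3-element random vector:

    import numpy as np

    rng = np.random.default_rng(0)
    mean = np.array([1.0, -2.0, 0.5])          # the fixed mean vector E[X]
    cov = np.array([[1.0, 0.3, 0.0],
                    [0.3, 2.0, 0.5],
                    [0.0, 0.5, 1.5]])          # covariance matrix of X
    x = rng.multivariate_normal(mean, cov)     # one realization: an array of 3 real numbers
    print(x)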

Probability distribution

Every random vector gives rise to a probability measure on \mathbb{R}^n with the Borel algebra as the underlying sigma-algebra. This measure is also known as the joint probability distribution, the joint distribution, or the multivariate distribution of the random vector.

The distributions of each of the component random variables X_i are called marginal distributions. The conditional probability distribution of X_i given X_j is the probability distribution of X_i when X_j is known to be a particular value.
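
As a small worked example (the joint pmf below is made up for illustration), the marginal pmf of one component of a discrete random vector is obtained by summing the joint pmf over the other component, and a conditional pmf by renormalizing the corresponding slice:

    import numpy as np

    # joint pmf of a discrete random vector (X1, X2); rows index values of X1, columns values of X2
    joint = np.array([[0.10, 0.20],
                      [0.05, 0.25],
                      [0.30, 0.10]])
    marginal_x1 = joint.sum(axis=1)                 # marginal distribution of X1
    marginal_x2 = joint.sum(axis=0)                 # marginal distribution of X2
    cond_x1 = joint[:, 0] / marginal_x2[0]          # conditional distribution of X1 given X2 = its first value
    print(marginal_x1, marginal_x2, cond_x1)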

Operations on random vectors

Random vectors can be subjected to the same kinds of algebraic operations as can non-random vectors: addition, subtraction, multiplication by a scalar, and the taking of inner products.

Similarly, a new random vector \mathbf{Y} can be defined by applying an affine transformation g\colon \mathbb{R}^n \to \mathbb{R}^n to a random vector \mathbf{X}:

\mathbf{Y}=\mathcal{A}\mathbf{X}+b, where \mathcal{A} is an n \times n matrix and b is an n \times 1 column vector.

If \mathcal{A} is invertible and the probability density of \mathbf{X} is f_{\mathbf{X}}, then the probability density of \mathbf{Y} is

f_{\mathbf{Y}}(y)=\frac{f_{\mathbf{X}}(\mathcal{A}^{-1}(y-b))}{|\det\mathcal{A}|}.
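
A quick numerical check of this change-of-variables formula (a sketch assuming SciPy is available; the matrices and Gaussian parameters are arbitrary) for the special case where \mathbf{X} is Gaussian, so that \mathbf{Y} is known to be Gaussian with mean \mathcal{A}\mu + b and covariance \mathcal{A}C\mathcal{A}^T:

    import numpy as np
    from scipy.stats import multivariate_normal

    mu = np.array([0.0, 1.0])
    C = np.array([[2.0, 0.4],
                  [0.4, 1.0]])
    A = np.array([[1.0, 2.0],
                  [0.0, 3.0]])                      # an invertible 2 x 2 matrix
    b = np.array([1.0, -1.0])

    f_X = multivariate_normal(mean=mu, cov=C).pdf
    f_Y = multivariate_normal(mean=A @ mu + b, cov=A @ C @ A.T).pdf   # density of Y = A X + b

    y = np.array([0.5, 2.0])
    lhs = f_Y(y)
    rhs = f_X(np.linalg.solve(A, y - b)) / abs(np.linalg.det(A))
    print(np.isclose(lhs, rhs))                     # True: both sides of the formula agree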

Expected value, covariance, and cross-covariance

The expected value or mean of a random vector \mathbf{X} is a fixed vector \operatorname{E}[\mathbf{X}] whose elements are the expected values of the respective random variables.

The covariance matrix (also called the variance-covariance matrix) of an n \times 1 random vector is an n \times n matrix whose i,j^{th} element is the covariance between the i^{th} and the j^{th} random variables. The covariance matrix is the expected value, element by element, of the n \times n matrix computed as [\mathbf{X}-\operatorname{E}[\mathbf{X}]][\mathbf{X}-\operatorname{E}[\mathbf{X}]]^T, where the superscript T refers to the transpose of the indicated vector:

\operatorname{Var}[\mathbf{X}]=\operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{T}].

By extension, the cross-covariance matrix between two random vectors \mathbf{X} and \mathbf{Y} (\mathbf{X} having n elements and \mathbf{Y} having p elements) is the n \times p matrix

\operatorname{Cov}[\mathbf{X},\mathbf{Y}]=\operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^{T}],

where again the indicated matrix expectation is taken element-by-element in the matrix. The cross-covariance matrix \operatorname{Cov}[\mathbf{Y},\mathbf{X}] is simply the transpose of the matrix \operatorname{Cov}[\mathbf{X},\mathbf{Y}].
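
These matrices can be estimated from samples; the sketch below (with an arbitrarily chosen dependence between \mathbf{X} and \mathbf{Y}) computes the empirical mean vector, covariance matrix, and cross-covariance matrix, and shows that \operatorname{Cov}[\mathbf{Y},\mathbf{X}] is the transpose of \operatorname{Cov}[\mathbf{X},\mathbf{Y}]:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    X = rng.normal(size=(n, 3))                     # samples of a 3-element random vector
    Y = X[:, :2] @ np.array([[1.0, 0.5],
                             [0.0, 2.0]]) + rng.normal(size=(n, 2))   # a correlated 2-element random vector

    mean_X = X.mean(axis=0)                         # estimate of E[X]
    cov_X = np.cov(X, rowvar=False)                 # estimate of Var[X], a 3 x 3 matrix
    Xc, Yc = X - mean_X, Y - Y.mean(axis=0)
    cov_XY = Xc.T @ Yc / (n - 1)                    # estimate of Cov[X, Y], a 3 x 2 matrix
    cov_YX = Yc.T @ Xc / (n - 1)                    # estimate of Cov[Y, X]
    print(np.allclose(cov_YX, cov_XY.T))            # True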

Further properties

Expectation of a quadratic form

One can take the expectation of a quadratic form in the random vector X as follows:[1]:pp. 170-171

\operatorname{E}(X^{T}AX) = [\operatorname{E}(X)]^{T}A[\operatorname{E}(X)] + \operatorname{tr}(AC),

where C is the covariance matrix of X and tr refers to the trace of a matrix — that is, to the sum of the elements on its main diagonal (from upper left to lower right). Since the quadratic form is a scalar, so is its expectation.
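
Before turning to the proof, a quick Monte Carlo sanity check of this identity (a sketch with arbitrary choices of the mean, covariance, and A; the identity does not require X to be Gaussian, and a Gaussian is used here only for convenient sampling):

    import numpy as np

    rng = np.random.default_rng(2)
    mu = np.array([1.0, -1.0, 2.0])
    C = np.array([[1.0, 0.2, 0.0],
                  [0.2, 1.5, 0.3],
                  [0.0, 0.3, 0.5]])
    A = rng.normal(size=(3, 3))                     # any fixed (non-stochastic) matrix

    X = rng.multivariate_normal(mu, C, size=500_000)
    empirical = np.einsum('ni,ij,nj->n', X, A, X).mean()   # Monte Carlo estimate of E[X^T A X]
    theoretical = mu @ A @ mu + np.trace(A @ C)
    print(empirical, theoretical)                   # agree up to sampling error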

Proof: Let \mathbf{z} be an m \times 1 random vector with \operatorname{E}[\mathbf{z}] = \mu and \operatorname{Cov}[\mathbf{z}]= V and let A be an m \times m non-stochastic matrix.

Starting from the formula for the covariance, if we write \mathbf{X} = \mathbf{z}' and \mathbf{Y} = \mathbf{z}'A', we see that:

\operatorname{Cov}[\mathbf{X},\mathbf{Y}] = \operatorname{E}[\mathbf{X}\mathbf{Y}']-\operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]'

Hence

\begin{align}
E(XY')   &= \operatorname{Cov}(X,Y)+E(X)E(Y)' \\
E(z'Az) &= \operatorname{Cov}(z',z'A')+E(z')E(z'A')'  \\
&=\operatorname{Cov}(z', z'A') + \mu' (\mu'A')' \\
&=\operatorname{Cov}(z', z'A') + \mu' A \mu ,
\end{align}

which leaves us to show that

\operatorname{Cov}(z', z'A')=\operatorname{trace}(AV).

This holds because the trace is invariant under cyclic permutation of the factors in a matrix product (e.g. \operatorname{trace}(AB) = \operatorname{trace}(BA)).

We see that

\begin{align}
\operatorname{Cov}(z',z'A') &= E\left[\left(z' - E(z') \right)\left(z'A' - E\left(z'A'\right) \right)' \right] \\
&= E\left[ (z' - \mu') (z'A' - \mu' A' )' \right]\\
&= E\left[ (z - \mu)' (Az - A\mu) \right].
\end{align}

And since

\left( {z - \mu } \right)'\left( {Az - A\mu } \right)

is a scalar (a 1 \times 1 quantity), it is equal to its own trace, so that

(z - \mu)'(Az - A\mu) = \operatorname{trace}\left[(z - \mu)'(Az - A\mu)\right] = \operatorname{trace}\left[(z - \mu)'A(z - \mu)\right].

Cyclically permuting the factors inside the trace then gives:

\operatorname{trace}\left[(z - \mu)'A(z - \mu)\right] = \operatorname{trace}\left[A(z - \mu)(z - \mu)'\right],

and by plugging this into the original formula we get:

\begin{align}
\operatorname{Cov}(z', z'A') &= \operatorname{E}\left[(z - \mu)'(Az - A\mu)\right] \\
&= \operatorname{E}\left[\operatorname{trace}\left[A(z - \mu)(z - \mu)'\right]\right] \\
&= \operatorname{trace}\left[A \cdot \operatorname{E}\left[(z - \mu)(z - \mu)'\right]\right] \\
&= \operatorname{trace}[AV].
\end{align}

Expectation of the product of two different quadratic forms

One can take the expectation of the product of two different quadratic forms in a zero-mean Gaussian random vector X as follows:[1]:pp. 162-176

\operatorname{E}[(X^{T}AX)(X^{T}BX)] = 2\operatorname{trace}(ACBC) + \operatorname{trace}(AC)\operatorname{trace}(BC),

where again C is the covariance matrix of X. Again, since both quadratic forms are scalars and hence their product is a scalar, the expectation of their product is also a scalar.
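
A Monte Carlo check of this identity (a sketch with arbitrary symmetric matrices A and B; symmetry is assumed here, since a quadratic form depends only on the symmetric part of its matrix):

    import numpy as np

    rng = np.random.default_rng(3)
    C = np.array([[1.0, 0.3],
                  [0.3, 2.0]])                      # covariance of the zero-mean Gaussian X
    A = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
    B = np.array([[1.0, -0.2],
                  [-0.2, 3.0]])

    X = rng.multivariate_normal(np.zeros(2), C, size=1_000_000)
    qa = np.einsum('ni,ij,nj->n', X, A, X)          # X^T A X for each sample
    qb = np.einsum('ni,ij,nj->n', X, B, X)          # X^T B X for each sample
    empirical = (qa * qb).mean()
    theoretical = 2 * np.trace(A @ C @ B @ C) + np.trace(A @ C) * np.trace(B @ C)
    print(empirical, theoretical)                   # agree up to Monte Carlo error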

Applications

Portfolio theory

In portfolio theory in finance, an objective often is to choose a portfolio of risky assets such that the distribution of the random portfolio return has desirable properties. For example, one might want to choose the portfolio return having the lowest variance for a given expected value. Here the random vector is the vector r of random returns on the individual assets, and the portfolio return p (a random scalar) is the inner product of the vector of random returns with a vector w of portfolio weights (the fractions of the portfolio placed in the respective assets). Since p = w^{T}r, the expected value of the portfolio return is w^{T}\operatorname{E}(r), and the variance of the portfolio return can be shown to be w^{T}Cw, where C is the covariance matrix of r.
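
A small numerical sketch (with made-up expected returns, covariances, and weights) comparing these closed-form expressions with a direct simulation of the random return vector:

    import numpy as np

    rng = np.random.default_rng(4)
    mu_r = np.array([0.05, 0.08, 0.12])             # expected returns of three assets
    C = np.array([[0.010, 0.002, 0.001],
                  [0.002, 0.020, 0.005],
                  [0.001, 0.005, 0.040]])           # covariance matrix of returns
    w = np.array([0.5, 0.3, 0.2])                   # portfolio weights (fractions summing to 1)

    expected_p = w @ mu_r                           # E[p] = w^T E(r)
    var_p = w @ C @ w                               # Var[p] = w^T C w

    r = rng.multivariate_normal(mu_r, C, size=200_000)   # simulated return vectors
    p = r @ w                                       # simulated portfolio returns
    print(expected_p, p.mean())                     # close up to sampling error
    print(var_p, p.var())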

Regression theory

In linear regression theory, we have data on n observations on a dependent variable y and n observations on each of k independent variables x_j. The observations on the dependent variable are stacked into a column vector y; the observations on each independent variable are also stacked into column vectors, and these latter column vectors are combined into a matrix X of observations on the independent variables. Then the following regression equation is postulated as a description of the process that generated the data:

y = X \beta + e,

where β is a postulated fixed but unknown vector of k response coefficients, and e is an unknown random vector reflecting random influences on the dependent variable. By some chosen technique such as ordinary least squares, a vector \hat \beta is chosen as an estimate of β, and the estimate of the vector e, denoted \hat e, is computed as

\hat e = y - X \hat \beta.

Then the statistician must analyze the properties of \hat \beta and \hat e, which are viewed as random vectors since a randomly different selection of n cases to observe would have resulted in different values for them.
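
A minimal sketch of this setup (with simulated data, an arbitrary "true" coefficient vector, and ordinary least squares as the chosen estimation technique):

    import numpy as np

    rng = np.random.default_rng(5)
    n, k = 100, 3
    X = rng.normal(size=(n, k))                     # n observations on k independent variables
    beta = np.array([2.0, -1.0, 0.5])               # "true" coefficients, unknown in practice
    e = rng.normal(scale=0.3, size=n)               # random disturbance vector
    y = X @ beta + e

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimate of beta
    e_hat = y - X @ beta_hat                        # estimated disturbance (residual) vector
    print(beta_hat)                                 # close to the true beta
    print(e_hat[:5])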

References

1. Kendrick, David. Stochastic Control for Economic Models. McGraw-Hill, 1981.

Original content courtesy of Wikipedia: http://en.wikipedia.org/wiki/Multivariate_random_variable