Mean dependence
In probability theory, a random variable Y is said to be mean independent of a random variable X if and only if its conditional mean E(Y | X = x) equals its unconditional mean E(Y) for every value x at which the marginal density (or probability mass) of X, f_X(x), is nonzero. Y is said to be mean dependent on X if E(Y | X = x) ≠ E(Y) for some x such that f_X(x) ≠ 0.
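For illustration, consider a constructed example (not taken from the cited texts): let X be uniform on {1, 2} and, given X = x, let Y be uniform on {−x, x}. Then E(Y | X = 1) = E(Y | X = 2) = 0 = E(Y), so Y is mean independent of X. The sketch below checks this numerically from the joint probability mass function; the distribution and the helper names are assumptions made for this example only.

```python
# Minimal sketch, assuming the constructed joint distribution described above:
# P(X = x, Y = y) for the four support points.
pmf = {(1, -1): 0.25, (1, 1): 0.25, (2, -2): 0.25, (2, 2): 0.25}

def mean_Y():
    """Unconditional mean E(Y)."""
    return sum(p * y for (_, y), p in pmf.items())

def mean_Y_given_X(x):
    """Conditional mean E(Y | X = x), defined where P(X = x) > 0."""
    p_x = sum(p for (xi, _), p in pmf.items() if xi == x)
    return sum(p * y for (xi, y), p in pmf.items() if xi == x) / p_x

print(mean_Y())           # 0.0
print(mean_Y_given_X(1))  # 0.0
print(mean_Y_given_X(2))  # 0.0
# E(Y | X = x) = E(Y) for every x with positive probability,
# so Y is mean independent of X under this distribution.
```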
According to Cameron and Trivedi (2009, p. 23) and Wooldridge (2010, pp. 54, 907), stochastic independence implies mean independence, but the converse is not necessarily true.
Moreover, mean independence implies uncorrelatedness, while the converse is again not necessarily true.
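Two standard counterexamples (again constructed for illustration, not taken from the cited texts) show that neither converse holds. In the example above, Y is mean independent of X but not stochastically independent of X, since P(Y = 2 | X = 1) = 0 while P(Y = 2) = 1/4. For the second implication, let X be uniform on {−1, 0, 1} and Y = X²; then Cov(X, Y) = E(X³) − E(X)E(X²) = 0, so X and Y are uncorrelated, yet E(Y | X = 0) = 0 ≠ E(Y) = 2/3, so Y is not mean independent of X.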
The concept of mean independence is often used in econometrics as a middle ground between the strong assumption of independence and the weaker assumption of uncorrelatedness of a pair of random variables X and Y.
If X and Y are two different random variables such that Y is mean independent of X, and Z = f(X) is a function of X alone, then Y is also mean independent of Z.
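A short justification, using the standard law of iterated expectations argument (stated here for completeness rather than quoted from the cited texts): because Z = f(X) is determined by X,

E(Y | Z) = E(E(Y | X) | Z) = E(E(Y) | Z) = E(Y),

so the conditional mean of Y given Z equals the unconditional mean, which is exactly mean independence of Y from Z.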
References
- Cameron, A. Colin; Trivedi, Pravin K. (2009). Microeconometrics: Methods and Applications (8th ed.). New York: Cambridge University Press. ISBN 9780521848053.
- Wooldridge, Jeffrey M. (2010). Econometric Analysis of Cross Section and Panel Data (2nd ed.). London: The MIT Press. ISBN 9780262232586.