In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed.
Formal definition
Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z_{1}, Z_{2}, ..., Z_{n}}, written ρ_{XY·Z}, is the correlation between the residuals R_{X} and R_{Y} resulting from the linear regression of X with Z and of Y with Z, respectively. The first-order partial correlation (i.e., when n = 1) is the difference between a correlation and the product of the removable correlations, divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation with joint variance through correlation, are available in Guilford (1973, pp. 344–345).^{[1]}
Computation
Using linear regression
A simple way to compute the sample partial correlation for some data is to solve the two associated linear regression problems, get the residuals, and calculate the correlation between the residuals. Let X and Y be, as above, random variables taking real values, and let Z be the n-dimensional vector-valued random variable. If we write x_{i}, y_{i} and z_{i} to denote the i-th of N i.i.d. samples from the joint probability distribution over X, Y and Z, solving the linear regression problem amounts to finding n-dimensional vectors \mathbf{w}_X^* and \mathbf{w}_Y^* such that

\mathbf{w}_X^* = \arg\min_{\mathbf{w}} \left\{ \sum_{i=1}^N (x_i - \langle\mathbf{w}, \mathbf{z}_i \rangle)^2 \right\}

\mathbf{w}_Y^* = \arg\min_{\mathbf{w}} \left\{ \sum_{i=1}^N (y_i - \langle\mathbf{w}, \mathbf{z}_i \rangle)^2 \right\}
with N being the number of samples and \langle\mathbf{v},\mathbf{w} \rangle the scalar product between the vectors \mathbf{v} and \mathbf{w}. Note that in some formulations the regression includes a constant term, so the design matrix formed from the \mathbf{z}_i would have an additional column of ones.
The residuals are then

r_{X,i} = x_i - \langle\mathbf{w}_X^*,\mathbf{z}_i \rangle

r_{Y,i} = y_i - \langle\mathbf{w}_Y^*,\mathbf{z}_i \rangle
and the sample partial correlation is then given by the usual formula for the sample correlation, but applied to these residuals:

\hat{\rho}_{XY\cdot\mathbf{Z}}=\frac{N\sum_{i=1}^N r_{X,i}r_{Y,i}-\sum_{i=1}^N r_{X,i}\sum_{i=1}^N r_{Y,i}} {\sqrt{N\sum_{i=1}^N r_{X,i}^2-\left(\sum_{i=1}^N r_{X,i}\right)^2}~\sqrt{N\sum_{i=1}^N r_{Y,i}^2-\left(\sum_{i=1}^N r_{Y,i}\right)^2}}.
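As a minimal sketch of this regression approach, assuming a single scalar controlling variable Z (the function names are illustrative, not from the article), one can regress X and Y on Z, take residuals, and correlate them:

```python
import math

def residuals(t, z):
    """Residuals of the least-squares regression of t on z (with intercept)."""
    n = len(t)
    mt, mz = sum(t) / n, sum(z) / n
    slope = (sum((zi - mz) * (ti - mt) for zi, ti in zip(z, t))
             / sum((zi - mz) ** 2 for zi in z))
    intercept = mt - slope * mz
    return [ti - (intercept + slope * zi) for ti, zi in zip(t, z)]

def corr(u, v):
    """Usual sample (Pearson) correlation."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    den = math.sqrt(sum((ui - mu) ** 2 for ui in u)
                    * sum((vi - mv) ** 2 for vi in v))
    return num / den

def partial_corr(x, y, z):
    """Sample partial correlation of x and y, controlling for z."""
    return corr(residuals(x, z), residuals(y, z))
```

For data in which X and Y are both driven by Z plus independent noise, the raw correlation is large while the partial correlation is close to zero.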
Using recursive formula
It can be computationally expensive to solve the linear regression problems. In fact, the nth-order partial correlation (i.e., with |Z| = n) can be easily computed from three (n − 1)th-order partial correlations. The zeroth-order partial correlation ρ_{XY·Ø} is defined to be the regular correlation coefficient ρ_{XY}.
It holds, for any Z_0 \in \mathbf{Z}:

\rho_{XY\cdot \mathbf{Z} } = \frac{\rho_{XY\cdot\mathbf{Z}\setminus\{Z_0\}} - \rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}\rho_{Z_0Y\cdot\mathbf{Z}\setminus\{Z_0\}}} {\sqrt{1-\rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^2} \sqrt{1-\rho_{Z_0Y\cdot\mathbf{Z}\setminus\{Z_0\}}^2}}.
Naïvely implementing this computation as a recursive algorithm yields an exponential time complexity. However, this computation has the overlapping subproblems property, such that using dynamic programming or simply caching the results of the recursive calls yields a complexity of \mathcal{O}(n^3).
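The memoized recursion can be sketched as follows (the variable set, its correlation values, and the function names are illustrative assumptions, not from the article):

```python
import math
from functools import lru_cache

# Zeroth-order correlations among four variables 0..3; corr0[i][j] is rho_{ij}.
# An illustrative, strictly diagonally dominant (hence positive-definite) example.
corr0 = [
    [1.0, 0.4, 0.2, 0.1],
    [0.4, 1.0, 0.2, 0.1],
    [0.2, 0.2, 1.0, 0.2],
    [0.1, 0.1, 0.2, 1.0],
]

@lru_cache(maxsize=None)
def pcorr(i, j, controls):
    """rho_{ij . controls}, where controls is a tuple of variable indices.

    Caching the recursive calls exploits the overlapping subproblems,
    avoiding the exponential blow-up of the naive recursion.
    """
    if not controls:
        return corr0[i][j]
    k, rest = controls[0], controls[1:]          # peel off one controlling variable
    rij = pcorr(i, j, rest)
    rik = pcorr(i, k, rest)
    rkj = pcorr(k, j, rest)
    return (rij - rik * rkj) / (math.sqrt(1 - rik ** 2) * math.sqrt(1 - rkj ** 2))
```

For example, `pcorr(0, 1, (2, 3))` gives the second-order partial correlation of variables 0 and 1 given variables 2 and 3.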
Note that in the case where Z is a single variable, this reduces to:

\rho_{XY\cdot Z } = \frac{\rho_{XY} - \rho_{XZ}\rho_{ZY}} {\sqrt{1-\rho_{XZ}^2} \sqrt{1-\rho_{ZY}^2}}.
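This single-variable base case can be computed directly from the three pairwise sample correlations; a minimal sketch (function names are illustrative):

```python
import math

def pearson(u, v):
    """Usual sample (Pearson) correlation."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    den = math.sqrt(sum((ui - mu) ** 2 for ui in u)
                    * sum((vi - mv) ** 2 for vi in v))
    return num / den

def partial_corr_first_order(x, y, z):
    """First-order partial correlation rho_{XY.Z} from pairwise correlations."""
    rxy, rxz, rzy = pearson(x, y), pearson(x, z), pearson(z, y)
    return (rxy - rxz * rzy) / (math.sqrt(1 - rxz ** 2)
                                * math.sqrt(1 - rzy ** 2))
```

As a sanity check, the partial correlation of a variable with itself is 1, since the numerator and denominator both reduce to 1 − ρ_{XZ}².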
Using matrix inversion
In \mathcal{O}(n^3) time, another approach allows all partial correlations to be computed between any two variables X_{i} and X_{j} of a set V of cardinality n, given all others, i.e., \mathbf{V} \setminus \{X_i,X_j\}, if the correlation matrix (or alternatively covariance matrix) Ω = (ω_{ij}), where ω_{ij} = ρ_{XiXj}, is positive definite and therefore invertible. If we define P = Ω^{−1}, we have:

\rho_{X_iX_j\cdot \mathbf{V} \setminus \{X_i,X_j\}} = -\frac{p_{ij}}{\sqrt{p_{ii}p_{jj}}}.
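A sketch of this approach in pure Python (the Gauss–Jordan helper and the function names are illustrative assumptions, not from the article):

```python
import math

def invert(m):
    """Gauss-Jordan inversion of a small square matrix (list of lists)."""
    n = len(m)
    # Augment with the identity matrix on the right.
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [vr - f * vc for vr, vc in zip(a[r], a[col])]
    return [row[n:] for row in a]

def all_partial_corrs(omega):
    """All partial correlations rho_{X_i X_j . rest} from a positive-definite
    correlation matrix omega, via its inverse (the precision matrix)."""
    p = invert(omega)
    n = len(p)
    return [[-p[i][j] / math.sqrt(p[i][i] * p[j][j]) if i != j else 1.0
             for j in range(n)] for i in range(n)]
```

For the equicorrelated 3×3 matrix with all off-diagonal entries 0.5, this agrees with the first-order recursive formula: (0.5 − 0.25)/0.75 = 1/3.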
Interpretation
[Figure: Geometrical interpretation of partial correlation]
Geometrical
Let three variables X, Y, Z (where X is the independent variable (IV), Y the dependent variable (DV), and Z the "control" or "extra" variable) be chosen from a joint probability distribution over n variables V. Further, let v_{i}, 1 ≤ i ≤ N, be N n-dimensional i.i.d. samples taken from the joint probability distribution over V. We then consider the N-dimensional vectors x (formed by the successive values of X over the samples), y (formed by the values of Y) and z (formed by the values of Z).
It can be shown that the residuals R_{X} coming from the linear regression of X using Z, if also considered as an N-dimensional vector r_{X}, have a zero scalar product with the vector z generated by Z. This means that the residual vector lies in a hyperplane S_{z} that is perpendicular to z.
The same also applies to the residuals R_{Y} generating a vector r_{Y}. The desired partial correlation is then the cosine of the angle φ between the projections r_{X} and r_{Y} of x and y, respectively, onto the hyperplane perpendicular to z.^{[2]}
As conditional independence test
With the assumption that all involved variables are multivariate Gaussian, the partial correlation ρ_{XY·Z} is zero if and only if X is conditionally independent from Y given Z.^{[3]} This property does not hold in the general case.
To test whether a sample partial correlation \hat{\rho}_{XY\cdot\mathbf{Z}} vanishes, Fisher's z-transform of the partial correlation can be used:

z(\hat{\rho}_{XY\cdot\mathbf{Z}}) = \frac{1}{2} \ln\left(\frac{1+\hat{\rho}_{XY\cdot\mathbf{Z}}}{1-\hat{\rho}_{XY\cdot\mathbf{Z}}}\right).
The null hypothesis is H_0: \rho_{XY\cdot\mathbf{Z}} = 0, to be tested against the two-tailed alternative H_A: \rho_{XY\cdot\mathbf{Z}} \neq 0. We reject H_{0} with significance level α if:

\sqrt{N - |\mathbf{Z}| - 3}\cdot \left|z(\hat{\rho}_{XY\cdot\mathbf{Z}})\right| > \Phi^{-1}(1-\alpha/2),
where Φ(·) is the cumulative distribution function of a Gaussian distribution with zero mean and unit standard deviation, and N is the sample size. Note that this z-transform is approximate and that the actual distribution of the sample (partial) correlation coefficient is not straightforward. However, an exact t-test based on a combination of the partial regression coefficient, the partial correlation coefficient and the partial variances is available.^{[4]}
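The test can be sketched as follows (function names are illustrative; Python's standard-library statistics.NormalDist supplies the inverse CDF Φ⁻¹):

```python
import math
from statistics import NormalDist

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def rejects_zero_partial_corr(r, n, k, alpha=0.05):
    """True if H0: rho_{XY.Z} = 0 is rejected at level alpha (two-tailed),
    given a sample partial correlation r from n samples with k controls."""
    stat = math.sqrt(n - k - 3) * abs(fisher_z(r))
    return stat > NormalDist().inv_cdf(1 - alpha / 2)
```

For example, a sample partial correlation of 0.9 from 100 samples with one control is highly significant, while 0.05 from 20 samples is not.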
The distribution of the sample partial correlation was described by Fisher.^{[5]}
Semipartial correlation (part correlation)
The semipartial (or part) correlation statistic is similar to the partial correlation statistic. Both measure variance after certain factors are controlled for, but to calculate the semipartial correlation one holds the third variable constant for either X or Y, whereas for partial correlations one holds the third variable constant for both.^{[6]} The semipartial correlation measures unique and joint variance, while the partial correlation measures unique variance. The semipartial (or part) correlation can be viewed as more practically relevant "because it is scaled to (i.e., relative to) the total variability in the dependent (response) variable."^{[7]} Conversely, it is less theoretically useful because it is less precise about the unique contribution of the independent variable. Although it may seem paradoxical, the semipartial correlation of X with Y is always less than or equal to the partial correlation of X with Y.
Use in time series analysis
In time series analysis, the partial autocorrelation function (sometimes "partial correlation function") of a time series is defined, for lag h, as

\phi(h)= \rho_{X_0X_h\cdot \{X_1,\dots,X_{h-1} \}}.
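For instance, under stationarity ρ_{X_0X_1} = ρ_{X_1X_2} = ρ(1), so the lag-2 partial autocorrelation reduces to the first-order recursive formula applied to the sample autocorrelations; a minimal sketch (hypothetical function names):

```python
import math

def autocorr(x, h):
    """Sample autocorrelation at lag h (standard time-series form:
    lagged covariance divided by the lag-0 sum of squares)."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((xi - m) ** 2 for xi in x)
    ch = sum((x[i] - m) * (x[i + h] - m) for i in range(n - h))
    return ch / c0

def pacf_lag2(x):
    """phi(2) = (rho(2) - rho(1)^2) / (1 - rho(1)^2),
    the first-order recursive formula specialized to a stationary series."""
    r1, r2 = autocorr(x, 1), autocorr(x, 2)
    return (r2 - r1 * r1) / (1 - r1 * r1)
```

For an alternating series [1, −1, 1, −1, ...] of length 8, ρ(1) = −7/8 and ρ(2) = 6/8, giving φ(2) = −1/15.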
References

^ Guilford J. P., Fruchter B. (1973). Fundamental statistics in psychology and education. Tokyo:

^ Rummel, R. J. (1976). "Understanding Correlation".

^ Baba, Kunihiro; Ritei Shibata & Masaaki Sibuya (2004). "Partial correlation and conditional correlation as measures of conditional independence".

^ Kendall MG, Stuart A. (1973) The Advanced Theory of Statistics, Volume 2 (3rd Edition), ISBN 0852642156, Section 27.22

^ Fisher, R. A. (1924). "The distribution of the partial correlation coefficient". Metron 3: 329–332.

^ http://luna.cas.usf.edu/~mbrannic/files/regression/Partial.html.

^ StatSoft, Inc. (2010). "Semi-Partial (or Part) Correlation", Electronic Statistics Textbook. Tulsa, OK: StatSoft, accessed January 15, 2011.
External links

Prokhorov, A.V. (2001), "Partial correlation coefficient", in Hazewinkel, Michiel,

What is a partial correlation?

Mathematical formulae in the "Description" section of the IMSL Numerical Library PCORR routine

A threevariable example