[Figure: a plot of 100 random numbers containing a "hidden" sine function (top), with the autocorrelation plot (correlogram) of the series below. An example of a correlogram.]
In the analysis of data, a correlogram is a chart of correlation statistics. For example, in time series analysis, a correlogram, also known as an autocorrelation plot, is a plot of the sample autocorrelations r_h versus h (the time lags).
If cross-correlation is used, the result is called a cross-correlogram. The correlogram is a commonly used tool for checking randomness in a data set. This randomness is ascertained by computing autocorrelations for data values at varying time lags. If the data are random, such autocorrelations should be near zero for any and all time-lag separations. If the data are nonrandom, then one or more of the autocorrelations will be significantly nonzero.
In addition, correlograms are used in the model identification stage for Box–Jenkins autoregressive moving average time series models. Autocorrelations should be near zero for randomness; if the analyst does not check for randomness, then the validity of many of the statistical conclusions becomes suspect. The correlogram is an excellent way of checking for such randomness.
Sometimes, corrgrams, color-mapped matrices of correlation strengths in multivariate analysis,^{[1]} are also called correlograms.^{[2]}^{[3]}
Applications
The correlogram can help provide answers to the following questions:

Are the data random?

Is an observation related to an adjacent observation?

Is an observation related to an observation twice-removed? (etc.)

Is the observed time series white noise?

Is the observed time series sinusoidal?

Is the observed time series autoregressive?

What is an appropriate model for the observed time series?

Is the model

Y = \mathrm{constant} + \mathrm{error}
valid and sufficient?

Is the formula s_{\bar{Y}}=s/\sqrt{N} valid?
Importance
Randomness (along with fixed model, fixed variation, and fixed distribution) is one of the four assumptions that typically underlie all measurement processes. The randomness assumption is critically important for the following three reasons:

Most standard statistical tests depend on randomness. The validity of the test conclusions is directly linked to the validity of the randomness assumption.

Many commonly used statistical formulae depend on the randomness assumption, the most common formula being the formula for determining the standard deviation of the sample mean:

s_{\bar{Y}}=s/\sqrt{N}
where s is the standard deviation of the data. Although heavily used, the results from using this formula are of no value unless the randomness assumption holds.

For univariate data, the default model is

Y = \mathrm{constant} + \mathrm{error}
If the data are not random, this model is incorrect and invalid, and the estimates for the parameters (such as the constant) become nonsensical and invalid.
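The role of the randomness assumption in the s/√N formula above can be checked with a small simulation. The sketch below (illustrative: the standard-normal data, seed, and sample counts are arbitrary choices) draws many independent samples and compares the observed spread of the sample means against the formula's prediction:

```python
import math
import random
import statistics

random.seed(0)

# For independent (random) data, the standard deviation of the sample mean
# should match s / sqrt(N). Simulate many samples of standard-normal data
# and compare the empirical spread of the means with the formula.
N = 400
means = [statistics.mean(random.gauss(0, 1) for _ in range(N))
         for _ in range(2000)]
observed_se = statistics.stdev(means)
predicted_se = 1 / math.sqrt(N)  # population s = 1 for standard normal data
print(observed_se, predicted_se)
```

With autocorrelated (nonrandom) data the two values would diverge, which is exactly why the correlogram check matters before using the formula.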
Estimation of autocorrelations
The autocorrelation coefficient at lag h is given by

r_h = c_h/c_0 \,
where c_{h} is the autocovariance function

c_h = \frac{1}{N}\sum_{t=1}^{N-h} \left(Y_t - \bar{Y}\right)\left(Y_{t+h} - \bar{Y}\right)
and c_{0} is the variance function

c_0 = \frac{1}{N}\sum_{t=1}^{N} \left(Y_t - \bar{Y}\right)^2
The resulting value of r_{h} will range between −1 and +1.
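The estimate above translates directly into code. A minimal sketch (the function name is illustrative, not from any library):

```python
def autocorr(y, h):
    """Sample autocorrelation r_h = c_h / c_0, using the 1/N convention."""
    n = len(y)
    ybar = sum(y) / n
    # c_0: the variance function (divisor N)
    c0 = sum((v - ybar) ** 2 for v in y) / n
    # c_h: the autocovariance function, summing t = 1 .. N-h (divisor N)
    ch = sum((y[t] - ybar) * (y[t + h] - ybar) for t in range(n - h)) / n
    return ch / c0
```

For example, autocorr([1, 2, 3, 4, 5], 1) gives 0.4: the deviations from the mean 3 are (−2, −1, 0, 1, 2), so c_1 = (2 + 0 + 0 + 2)/5 = 0.8 and c_0 = 10/5 = 2.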
Alternate estimate
Some sources may use the following formula for the autocovariance function:

c_h = \frac{1}{N-h}\sum_{t=1}^{N-h} \left(Y_t - \bar{Y}\right)\left(Y_{t+h} - \bar{Y}\right)
Although this definition has less bias, the (1/N) formulation has some desirable statistical properties and is the form most commonly used in the statistics literature. See pages 20 and 49–50 in Chatfield for details.
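The two conventions differ only in the divisor, which a small sketch makes concrete (the helper name and keyword are illustrative):

```python
def autocov(y, h, divisor="N"):
    """Autocovariance c_h with either the 1/N or the 1/(N-h) divisor."""
    n = len(y)
    ybar = sum(y) / n
    s = sum((y[t] - ybar) * (y[t + h] - ybar) for t in range(n - h))
    # "N" is the common 1/N convention; "N-h" is the lower-bias alternative
    return s / n if divisor == "N" else s / (n - h)
```

At larger lags the sum has fewer terms, so the 1/(N−h) estimate is always at least as large in magnitude as the 1/N one.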
Statistical inference with correlograms
In the same graph one can draw upper and lower bounds for autocorrelation with significance level \alpha:

B=\pm z_{1-\alpha/2} \mathrm{SE}(r_h) with r_h as the estimated autocorrelation at lag h.
If the autocorrelation is higher (lower) than this upper (lower) bound, the null hypothesis that there is no autocorrelation at and beyond the given lag is rejected at a significance level of \alpha. This test is approximate and assumes that the time series is Gaussian.
In the above, z_{1-\alpha/2} is the quantile of the standard normal distribution; SE is the standard error, which can be computed by Bartlett's formula for MA(\ell) processes:

SE(r_1)=\frac {1} {\sqrt{N}}

SE(r_h)=\sqrt{\frac{1+2\sum_{i=1}^{h-1} r^2_i}{N}} for h>1.
In the picture above we can reject the null hypothesis that there is no autocorrelation between time points which are adjacent (lag = 1). For the other lags one cannot reject the null hypothesis of no autocorrelation.
Note that there are two distinct formulas for generating the confidence bands:
1. If the correlogram is being used to test for randomness (i.e., there is no time dependence in the data), the following formula is recommended:

\pm \frac{z_{1-\alpha/2}}{\sqrt{N}}
where N is the sample size, z is the quantile function of the standard normal distribution and α is the significance level. In this case, the confidence bands have fixed width that depends on the sample size.
2. Correlograms are also used in the model identification stage for fitting ARIMA models. In this case, a moving average model is assumed for the data and the following confidence bands should be generated:

\pm z_{1-\alpha/2}\sqrt{\frac{1}{N}\left(1+2\sum_{i=1}^{k} r_i^2\right)}
where k is the lag. In this case, the confidence bands increase as the lag increases.
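Both band formulas can be sketched in a few lines (the function names are illustrative, not any package's API):

```python
import math

Z_95 = 1.959964  # z_{1-alpha/2} for alpha = 0.05

def white_noise_band(n, z=Z_95):
    """Fixed-width band for the randomness test: +/- z / sqrt(N)."""
    return z / math.sqrt(n)

def ma_band(autocorrs, n, z=Z_95):
    """Bartlett-style band at lag k+1, given [r_1, ..., r_k] under an
    MA(k) assumption; widens as the lag (and the list) grows."""
    return z * math.sqrt((1 + 2 * sum(r * r for r in autocorrs)) / n)
```

With no earlier autocorrelations the two bands coincide; each nonzero r_i then pushes the MA band outward, which is why ARIMA identification plots show widening limits at higher lags.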
Software
Correlograms are available in most general-purpose statistical software programs. In R, the functions acf and pacf can be used to produce such a plot.
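Even without statistical software, a crude correlogram can be printed with nothing but a standard library; the helper below is an illustrative sketch (its name and output format are invented here, not any package's API), drawing one bar per lag from the 1/N autocorrelation estimate:

```python
def text_correlogram(y, max_lag=10, width=20):
    """Return a crude text correlogram: one '#' bar per lag."""
    n = len(y)
    ybar = sum(y) / n
    c0 = sum((v - ybar) ** 2 for v in y) / n
    rows = []
    for h in range(1, max_lag + 1):
        ch = sum((y[t] - ybar) * (y[t + h] - ybar) for t in range(n - h)) / n
        r = ch / c0
        rows.append(f"lag {h:2d}  r = {r:+.2f}  {'#' * int(abs(r) * width)}")
    return "\n".join(rows)

# A trending series shows long bars at short lags; random data shows stubs.
print(text_correlogram(list(range(30))))
```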
References

^ Friendly, Michael (19 August 2002). "Corrgrams: Exploratory displays for correlation matrices" (PDF).

^ "CRAN – Package corrgram". cran.r-project.org. 29 August 2013. Retrieved 19 January 2014.

^ "Quick-R: Correlograms". statmethods.net. Retrieved 19 January 2014.
Further reading

Hanke, John E.; Reitsch, Arthur G.; Wichern, Dean W. (2001). Business Forecasting (7th ed.). Upper Saddle River, NJ: Prentice Hall.

Box, G. E. P., and Jenkins, G. (1976). Time Series Analysis: Forecasting and Control. Holden-Day.

Chatfield, C. (1989). The Analysis of Time Series: An Introduction (Fourth ed.). New York, NY: Chapman & Hall.
External links
This article incorporates public domain material from websites or documents of the National Institute of Standards and Technology.