Simultaneous equations model

Simultaneous equations models are a type of statistical model in which the dependent variables are determined jointly by a set of linear simultaneous equations, so that an endogenous variable in one equation may appear as a regressor in another. They are often used in econometrics.

Contents

  • 1 Structural and reduced form
    • 1.1 Assumptions
  • 2 Estimation
    • 2.1 Two-stage least squares (2SLS)
    • 2.2 Indirect least squares
    • 2.3 Limited information maximum likelihood (LIML)
      • 2.3.1 K-class estimators
    • 2.4 Three-stage least squares (3SLS)
  • 3 See also
  • 4 Notes
  • 5 References
  • 6 External links

Structural and reduced form

Suppose there are m regression equations of the form

y_{it} = y_{-i,t}'\gamma_i + x_{it}'\beta_i + u_{it}, \quad i=1,\ldots,m,

where i is the equation number, and t = 1, ..., T is the observation index. In these equations, xit is the ki×1 vector of exogenous variables, yit is the dependent variable, y−i,t is the ni×1 vector of all other endogenous variables which enter the ith equation on the right-hand side, and uit are the error terms. The “−i” notation indicates that the vector y−i,t may contain any of the y’s except for yit (since it is already present on the left-hand side). The regression coefficients βi and γi are of dimensions ki×1 and ni×1 respectively. Vertically stacking the T observations corresponding to the ith equation, we can write each equation in vector form as

y_i = Y_{-i}\gamma_i + X_i\beta_i + u_i, \quad i=1,\ldots,m,

where yi and ui are T×1 vectors, Xi is a T×ki matrix of exogenous regressors, and Y−i is a T×ni matrix of endogenous regressors on the right-hand side of the ith equation. Finally, we can move all endogenous variables to the left-hand side and write the m equations jointly in vector form as

Y\Gamma = X\mathrm{B} + U.

This representation is known as the structural form. In this equation Y = [y1 y2 ... ym] is the T×m matrix of dependent variables. Each of the matrices Y−i is in fact an ni-columned submatrix of this Y. The m×m matrix Γ, which describes the relation between the dependent variables, has a complicated structure. It has ones on the diagonal, and all other elements of each column i are either the components of the vector −γi or zeros, depending on which columns of Y were included in the matrix Y−i. The T×k matrix X contains all exogenous regressors from all equations, but without repetitions (that is, matrix X should be of full rank). Thus, each Xi is a ki-columned submatrix of X. Matrix Β has size k×m, and each of its columns consists of the components of vectors βi and zeros, depending on which of the regressors from X were included or excluded from Xi. Finally, U = [u1 u2 ... um] is a T×m matrix of the error terms.
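
For concreteness, here is a minimal illustration (a hypothetical two-equation system chosen for exposition, not drawn from the text) with m = 2 and k = 3, in which equation 1 is y_{1t} = \gamma_1 y_{2t} + \beta_{11} x_{1t} + u_{1t} and equation 2 is y_{2t} = \gamma_2 y_{1t} + \beta_{22} x_{2t} + \beta_{23} x_{3t} + u_{2t}. The structural matrices are then

\Gamma = \begin{pmatrix} 1 & -\gamma_2 \\ -\gamma_1 & 1 \end{pmatrix}, \qquad \mathrm{B} = \begin{pmatrix} \beta_{11} & 0 \\ 0 & \beta_{22} \\ 0 & \beta_{23} \end{pmatrix},

so each column of Γ carries a one on the diagonal and the negated γ coefficients of its equation, while each column of Β places the β coefficients in the rows of the exogenous regressors that appear in that equation and zeros elsewhere.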

Postmultiplying the structural equation by Γ −1, the system can be written in the reduced form as

Y = X\mathrm{B}\Gamma^{-1} + U\Gamma^{-1} = X\Pi + V.

This is already a simple general linear model, and it can be estimated, for example, by ordinary least squares. Unfortunately, decomposing the estimated matrix \scriptstyle\hat\Pi into the individual factors Β and Γ −1 is quite complicated, and therefore the reduced form is more suitable for prediction than for inference about the structural parameters.
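
As a minimal sketch of this estimation step (hypothetical function name, assuming the stacked observations are available as NumPy arrays Y of shape T×m and X of shape T×k), the OLS estimate of Π is obtained column by column:

    import numpy as np

    def reduced_form_ols(Y, X):
        # OLS estimate of Pi in the reduced form Y = X Pi + V.
        # Y: (T, m) endogenous variables; X: (T, k) exogenous variables of full column rank.
        # lstsq solves the least-squares problem without forming (X'X)^{-1} explicitly.
        Pi_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return Pi_hat  # (k, m) matrix of reduced-form coefficients

Recovering Β and Γ from such a Π̂ is exactly the decomposition problem mentioned above, and it requires the identification conditions discussed next.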

Assumptions

Firstly, the rank of the matrix X of exogenous regressors must be equal to k, both in finite samples and in the limit as T → ∞ (this latter requirement means that in the limit the expression \scriptstyle \frac1TX'\!X should converge to a nondegenerate k×k matrix). Matrix Γ is also assumed to be non-degenerate.

Secondly, error terms are assumed to be serially independent and identically distributed. That is, if the tth row of matrix U is denoted by u(t), then the sequence of vectors {u(t)} should be iid, with zero mean and some covariance matrix Σ (which is unknown). In particular, this implies that E[U] = 0, and E[U′U] = T Σ.

Lastly, the identification conditions require that the number of unknowns in this system of equations should not exceed the number of equations. More specifically, the order condition requires that for each equation ki + ni ≤ k, which can be phrased as “the number of excluded exogenous variables is greater than or equal to the number of included endogenous variables”. The rank condition of identifiability is that rank(Πi0) = ni, where Πi0 is a (k − ki)×ni matrix which is obtained from Π by crossing out those columns which correspond to the excluded endogenous variables, and those rows which correspond to the included exogenous variables.
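
As a brief numerical illustration (the numbers are chosen only for exposition): if the system contains k = 6 exogenous variables in total and the first equation includes n1 = 2 endogenous and k1 = 3 exogenous regressors, then k1 + n1 = 5 ≤ 6; equivalently, the 6 − 3 = 3 excluded exogenous variables are at least as many as the 2 included endogenous regressors, so the order condition holds. The rank condition then additionally requires the corresponding 3×2 submatrix Π10 to have rank 2.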

Estimation

Two-stage least squares (2SLS)

The simplest and the most common[1] estimation method for the simultaneous equations model is the so-called two-stage least squares method, developed independently by Theil (1953) and Basmann (1957). It is an equation-by-equation technique, where the endogenous regressors on the right-hand side of each equation are instrumented with the exogenous regressors X from all equations of the system. The method is called “two-stage” because it conducts estimation in two steps:[2]

Step 1: Regress Y−i on X and obtain the predicted values \scriptstyle\hat{Y}_{\!-i};
Step 2: Estimate γi, βi by the ordinary least squares regression of yi on \scriptstyle\hat{Y}_{\!-i} and Xi.

If the ith equation in the model is written as

y_i = \begin{pmatrix}Y_{-i} & X_i\end{pmatrix}\begin{pmatrix}\gamma_i\\\beta_i\end{pmatrix} + u_i \equiv Z_i \delta_i + u_i,

where Zi is a T×(ni + ki) matrix of both endogenous and exogenous regressors in the ith equation, and δi is an (ni + ki)-dimensional vector of regression coefficients, then the 2SLS estimator of δi will be given by[3]

\hat\delta_i = \big(\hat{Z}'_i\hat{Z}_i\big)^{-1}\hat{Z}'_i y_i = \big( Z'_iPZ_i \big)^{-1} Z'_iPy_i,

where P = X (X ′X)−1X ′ is the projection matrix onto the linear space spanned by the exogenous regressors X.
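
The following is a minimal sketch of this estimator (hypothetical function and argument names; y_i, Y_mi, X_i and X are assumed to be NumPy arrays holding the stacked observations for one equation):

    import numpy as np

    def two_sls(y_i, Y_mi, X_i, X):
        # 2SLS estimate of delta_i = (gamma_i', beta_i')' for a single equation.
        # y_i: (T,) dependent variable; Y_mi: (T, n_i) included endogenous regressors;
        # X_i: (T, k_i) included exogenous regressors; X: (T, k) all exogenous variables.
        Z = np.column_stack([Y_mi, X_i])           # Z_i = [Y_{-i}  X_i]
        P = X @ np.linalg.solve(X.T @ X, X.T)      # projection onto the column space of X
        # delta_hat = (Z_i' P Z_i)^{-1} Z_i' P y_i
        return np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y_i)

The same numbers are produced by the explicit two-step procedure above: regressing Y−i on X and then running OLS of yi on the fitted values and Xi.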

Indirect least squares

Indirect least squares is an approach in econometrics where the coefficients in a simultaneous equations model are estimated from the reduced form model using ordinary least squares.[4][5] For this, the structural system of equations is first transformed into its reduced form. Once the reduced-form coefficients are estimated, the model is converted back into the structural form; this back-conversion yields unique structural estimates only when the equation in question is exactly identified.
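
As a sketch of the mechanics for an exactly identified equation (a hypothetical two-equation system with one exogenous variable per equation; the function and variable names are illustrative only): suppose equation 1 is y1 = γ y2 + β x1 + u1 and the only other exogenous variable in the system is x2. The restriction ΠΓ = Β for the first column then gives γ = π21/π22 and β = π11 − γ π12, which can be read off the reduced-form OLS estimates:

    import numpy as np

    def indirect_ls_equation1(y1, y2, x1, x2):
        # ILS for the exactly identified equation y1 = gamma*y2 + beta*x1 + u1,
        # where x2 is the exogenous variable excluded from this equation.
        Y = np.column_stack([y1, y2])
        X = np.column_stack([x1, x2])
        # Step 1: reduced-form OLS; Pi has rows indexed by (x1, x2), columns by (y1, y2).
        Pi, *_ = np.linalg.lstsq(X, Y, rcond=None)
        # Step 2: map back to the structural form via Pi @ [1, -gamma]' = [beta, 0]'.
        gamma = Pi[1, 0] / Pi[1, 1]
        beta = Pi[0, 0] - gamma * Pi[0, 1]
        return gamma, beta

When an equation is over-identified (more excluded exogenous variables than included endogenous regressors), this back-solving is no longer unique, which is one reason 2SLS or LIML is used instead.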

Limited information maximum likelihood (LIML)

The “limited information” maximum likelihood method was suggested by Anderson & Rubin (1949). It is used when one is interested in estimating a single structural equation at a time (hence the name “limited information”), say equation i:

y_i = Y_{-i}\gamma_i +X_i\beta_i+ u_i \equiv Z_i \delta_i + u_i

The structural equations for the remaining endogenous variables Y−i are not specified, and they are given in their reduced form:

Y_{-i} = X \Pi + U_{-i}

Notation in this context is different from that of the simple IV case. One has:

  • Y_{-i}: The endogenous variable(s).
  • X_i: The exogenous variable(s)
  • X: The instrument(s) (often denoted Z)

The explicit formula for the LIML is:[6]

\hat\delta_i = \Big(Z'_i(I-\lambda M)Z_i\Big)^{\!-1}Z'_i(I-\lambda M)y_i,

where M = I − X (X ′X)−1X ′, and λ is the smallest characteristic root of the matrix:

\Big(\begin{bmatrix}y_i & Y_{-i}\end{bmatrix}' M_i \begin{bmatrix}y_i & Y_{-i}\end{bmatrix} \Big) \Big(\begin{bmatrix}y_i & Y_{-i}\end{bmatrix}' M \begin{bmatrix}y_i & Y_{-i}\end{bmatrix} \Big)^{\!-1}

where, in a similar way, Mi = I − Xi (Xi ′Xi)−1Xi ′.

In other words, λ is the smallest solution of the generalized eigenvalue problem, see Theil (1971, p. 503):

\Big|\begin{bmatrix}y_i&Y_{-i}\end{bmatrix}' M_i \begin{bmatrix}y_i&Y_{-i}\end{bmatrix} -\lambda \begin{bmatrix}y_i&Y_{-i}\end{bmatrix}' M \begin{bmatrix}y_i&Y_{-i}\end{bmatrix} \Big|=0
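
A minimal computational sketch of the above (hypothetical function and argument names; SciPy is assumed to be available for the generalized eigenvalue problem):

    import numpy as np
    from scipy.linalg import eigvals

    def liml(y_i, Y_mi, X_i, X):
        # LIML estimate of delta_i for one structural equation.
        # Builds the T x T annihilator matrices explicitly, which is fine for a sketch
        # but memory-hungry for large samples.
        T = len(y_i)
        W = np.column_stack([y_i, Y_mi])                              # [y_i  Y_{-i}]
        M = np.eye(T) - X @ np.linalg.solve(X.T @ X, X.T)             # I - X(X'X)^{-1}X'
        Mi = np.eye(T) - X_i @ np.linalg.solve(X_i.T @ X_i, X_i.T)    # I - X_i(X_i'X_i)^{-1}X_i'
        # lambda: smallest root of |W' M_i W - lambda W' M W| = 0
        lam = np.min(eigvals(W.T @ Mi @ W, W.T @ M @ W).real)
        Z = np.column_stack([Y_mi, X_i])
        A = np.eye(T) - lam * M
        return np.linalg.solve(Z.T @ A @ Z, Z.T @ A @ y_i), lam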

K-class estimators

The LIML is a special case of the K-class estimators:[7]

\hat\delta = \Big(Z'(I-\kappa M)Z\Big)^{\!-1}Z'(I-\kappa M)y,

with:

  • \delta = \begin{bmatrix} \beta_i \\ \gamma_i\end{bmatrix}
  • Z = \begin{bmatrix} X_i & Y_{-i}\end{bmatrix}

Several estimators belong to this class:

  • κ=0: OLS
  • κ=1: 2SLS. Indeed, in this case I-\kappa M = I-M = P, the usual projection matrix of 2SLS
  • κ=λ: LIML
  • κ=λ − α/(n−K): Fuller (1977) estimator. Here K represents the number of instruments, n the sample size, and α a positive constant to be specified. A value of α=1 will yield an estimator that is approximately unbiased.[8]
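
A compact sketch covering the whole family (hypothetical function name; the caller supplies one of the κ values listed above):

    import numpy as np

    def k_class(y_i, Y_mi, X_i, X, kappa):
        # Generic K-class estimator: kappa = 0 gives OLS, kappa = 1 gives 2SLS,
        # kappa = lambda (from the LIML eigenvalue problem) gives LIML.
        # Regressors are stacked as [Y_{-i}  X_i], so the estimate is ordered (gamma_i, beta_i).
        T = len(y_i)
        Z = np.column_stack([Y_mi, X_i])
        M = np.eye(T) - X @ np.linalg.solve(X.T @ X, X.T)
        A = np.eye(T) - kappa * M
        return np.linalg.solve(Z.T @ A @ Z, Z.T @ A @ y_i)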

Three-stage least squares (3SLS)

The three-stage least squares estimator was introduced by Zellner & Theil (1962). It combines two-stage least squares (2SLS) with seemingly unrelated regressions (SUR).
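
As a hedged sketch in the notation of this article (a standard textbook form of the estimator rather than a quotation from Zellner & Theil): stacking the m equations as y = Zδ + u with Z = diag(Z_1, …, Z_m), estimating Σ̂ from the 2SLS residuals, and letting P = X(X ′X)−1X ′ as before, the 3SLS estimator can be written

\hat\delta_{3SLS} = \Big( Z'\big(\hat\Sigma^{-1} \otimes P\big) Z \Big)^{\!-1} Z'\big(\hat\Sigma^{-1} \otimes P\big) y,

which reduces to equation-by-equation 2SLS when Σ̂ is diagonal.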

See also

Notes

  1. ^ Greene (2003, p. 398)
  2. ^ Greene (2003, p. 399)
  3. ^ Greene (2003, p. 399)
  4. ^ Park, S-B. (1974) "On Indirect Least Squares Estimation of a Simultaneous Equation System", The Canadian Journal of Statistics / La Revue Canadienne de Statistique, 2 (1), 75–82 JSTOR 3314964
  5. ^ Vajda, S.; Valko, P.; Godfrey, K.R. (1987). "Direct and indirect least squares methods in continuous-time parameter estimation". Automatica 23 (6): 707–718.  
  6. ^ Amemiya (1985, p. 235)
  7. ^ Davidson & MacKinnon (1993, p. 649)
  8. ^ Davidson & MacKinnon (1993, p. 649)

References

  • Amemiya, Takeshi (1985). Advanced econometrics. Cambridge, Massachusetts: Harvard University Press.  
  • Anderson, T. W.; Rubin, H. (1949). "Estimation of the parameters of a single equation in a complete system of stochastic equations". Annals of Mathematical Statistics 20 (1): 46–63.
  • Basmann, R. L. (1957). "A generalized classical method of linear estimation of coefficients in a structural equation". Econometrica 25 (1): 77–83.
  • Davidson, Russell; MacKinnon, James G. (1993). Estimation and inference in econometrics. Oxford University Press.  
  • Fuller, Wayne A. (1977). "Some properties of a modification of the limited information estimator". Econometrica 45 (4): 939–953.
  • Greene, William H. (2003). Econometric analysis (5th ed.). Prentice Hall.
  • Theil, Henri (1953). Repeated least squares applied to complete equation systems. The Hague: Central Planning Bureau.
  • Theil, Henri (1971). Principles of Econometrics. New York: John Wiley & Sons.
  • Zellner, Arnold; Theil, Henri (1962). "Three-stage least squares: simultaneous estimation of simultaneous equations". Econometrica 30 (1): 54–78.

External links

  • About.com:economics Online dictionary of economics, entry for ILS
  • Lecture on the Identification Problem in 2SLS, and Estimation on YouTube by Mark Thoma