Continuous-Updating GMM Estimator

The jackknife interpretation of the continuous-updating estimator gives some insight into why it carries less bias.

The two-step GMM estimators of Arellano and Bond (1991) and Blundell and Bond (1998) for dynamic panel data models have been widely used in empirical work; however, neither of them performs well in small samples with weak instruments.

A standard estimation procedure is to first-difference the model so as to eliminate the unobserved heterogeneity, and then to base GMM estimation on the implied moment conditions, in which endogenous differences of the variables are instrumented by their lagged levels.

This is the well-known Arellano-Bond estimator, or first-difference (DIF) GMM estimator (see Arellano and Bond [1]).
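For a simple AR(1) panel data model (a standard textbook setup, not reproduced verbatim from the paper), the first-differencing step and the implied DIF moment conditions can be sketched as:

```latex
% AR(1) panel model with unobserved heterogeneity \eta_i
y_{it} = \rho\, y_{i,t-1} + \eta_i + \varepsilon_{it}, \qquad |\rho| < 1.
% First-differencing eliminates \eta_i:
\Delta y_{it} = \rho\, \Delta y_{i,t-1} + \Delta\varepsilon_{it}.
% DIF GMM moment conditions: lagged levels instrument the
% endogenous first differences
E\!\left[\, y_{i,t-s}\, \Delta\varepsilon_{it} \,\right] = 0, \qquad s \ge 2.
```

The number of such moment conditions grows with the panel length, which is one reason the two-step weight matrix is poorly estimated in small samples.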

Because the absolute value of the autoregressive parameter must be less than unity for the data-generating process to be stationary, we propose a computationally feasible variation on the two-step DIF and SYS GMM estimators in which the idea of continuous updating is applied solely to the autoregressive parameter. These two new estimators are denoted "SCUDIF" and "SCUSYS" below.

Following the jackknife interpretation of the continuous-updating estimator in the work of Donald and Newey [10], we show that the subset-continuous-updating method proposed in this paper does not alter the asymptotic distribution of the two-step GMM estimators, and hence it retains consistency.

It is computationally advantageous relative to the continuous-updating estimator in that it replaces a relatively high-dimensional optimization over unbounded intervals by a one-dimensional optimization limited to the stationary domain of the autoregressive parameter.
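The structure of this one-dimensional optimization can be sketched as follows. The sketch below is illustrative only and is not the paper's panel-data implementation: the data are synthetic, the model is a simple linear one with a hypothetical regressor `w` standing in for the lagged dependent variable, and the remaining coefficient is concentrated out in closed form at each trial value of the autoregressive parameter, while the weight matrix is re-evaluated continuously in that parameter alone.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=(n, 3))                              # instruments
w = z @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)   # stand-in for lagged y
x = rng.normal(size=n)                                   # exogenous regressor
u = rng.normal(size=n)
y = 0.6 * w + 1.5 * x + u                                # true rho = 0.6

def subset_cu_objective(rho):
    # Step 1: with rho held fixed, concentrate out the remaining
    # coefficient in closed form (a simple OLS step, purely for
    # illustration of the subset idea).
    resid = y - rho * w
    beta = (x @ resid) / (x @ x)
    # Step 2: evaluate a continuous-updating GMM objective in rho only;
    # the weight matrix is recomputed at every trial rho.
    g = z * (resid - beta * x)[:, None]
    gbar = g.mean(axis=0)
    W = np.linalg.inv(g.T @ g / n)
    return n * gbar @ W @ gbar

# The outer search is one-dimensional and restricted to the
# stationary interval (-1, 1), mirroring the SCU idea.
res = minimize_scalar(subset_cu_objective, bounds=(-0.999, 0.999),
                      method="bounded")
rho_scu = res.x
```

The bounded one-dimensional search is cheap and cannot wander outside the stationary region, whereas full continuous updating would require a joint search over all parameters on unbounded intervals.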

We conduct Monte Carlo experiments and show that the proposed subset-continuous-updating versions of the DIF and SYS GMM estimators outperform their standard two-step counterparts in small samples, both in estimation accuracy for the autoregressive parameter and in the rejection frequency of the Sargan-Hansen test.

The work by Hansen, Heaton, and Yaron [9] suggests that the continuous-updating GMM estimator has smaller bias than the standard two-step GMM estimator.

However, it involves high-dimensional optimizations when the number of regressors is large.
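To make the distinction concrete, the following sketch (synthetic data, a simple linear IV model, not the paper's code) shows the continuous-updating objective for a single coefficient: unlike two-step GMM, whose weight matrix is fixed at a first-step estimate, here the weight matrix is re-evaluated at every trial parameter value, which is what makes the optimization expensive when the parameter vector is long.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 2))                    # two instruments
x = z @ np.array([1.0, 0.5]) + rng.normal(size=n)
u = rng.normal(size=n)
y = 0.7 * x + u                                # true beta = 0.7

def cue_objective(beta):
    # Moment contributions g_i(beta) = z_i * (y_i - x_i * beta)
    g = z * (y - x * beta)[:, None]
    gbar = g.mean(axis=0)
    # The weight matrix depends on the trial beta: this continuous
    # updating distinguishes the CUE from two-step GMM, where W is
    # computed once from a first-step estimate and then held fixed.
    W = np.linalg.inv(g.T @ g / n)
    return n * gbar @ W @ gbar

res = minimize(lambda b: cue_objective(b[0]), x0=[0.0],
               method="Nelder-Mead")
beta_cue = res.x[0]
```

With one coefficient this search is trivial, but each evaluation inverts a weight matrix, so a derivative-free search over many regressors quickly becomes costly; this motivates restricting continuous updating to a subset of the parameters.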

In this section, we conduct Monte Carlo experiments to compare the performance of our subset-continuous-updating estimators with the standard two-step estimators in finite samples.

We consider the model and assumptions specified in Section 2.
