I Got More Data, My Model is More Refined, but My Estimator is Getting Worse! Am I Just Dumb?

Title: I Got More Data, My Model is More Refined, but My Estimator is Getting Worse! Am I Just Dumb?
Author: Meng, Xiao-Li; Xie, Xianchao

Note: Order does not necessarily reflect citation order of authors.

Citation: Meng, Xiao-Li, and Xianchao Xie. Forthcoming. I Got More Data, My Model Is More Refined, but My Estimator Is Getting Worse! Am I Just Dumb? Econometric Reviews.
Abstract: Possibly, but more likely you are merely a victim of conventional wisdom. More data or better models by no means guarantee better estimators (e.g., with a smaller mean squared error), when you are not following probabilistically principled methods such as MLE (for large samples) or Bayesian approaches. Estimating equations are particularly vulnerable in this regard, almost a necessary price for their robustness. These points will be demonstrated via common tasks of estimating regression parameters and correlations, under simple models such as bivariate normal and ARCH(1). Some general strategies for detecting and avoiding such pitfalls are suggested, including checking for self-efficiency (Meng, 1994; Statistical Science) and adopting a guiding working model. Using the example of estimating the autocorrelation \(\rho\) under a stationary AR(1) model, we also demonstrate the interaction between model assumptions and observation structures in seeking additional information, as the sampling interval \(s\) increases. Furthermore, for a given sample size, the optimal \(s\) for minimizing the asymptotic variance of \(\hat{\rho}_{\mathrm{MLE}}\) is \(s = 1\) if and only if \(\rho^2 \le 1/3\); beyond that region the optimal \(s\) increases at the rate of \(\log^{-1}(\rho^{-2})\) as \(\rho\) approaches a unit root, as does the gain in efficiency relative to using \(s = 1\). A practical implication of this result is that the so-called “non-informative” Jeffreys prior can be far from non-informative even for stationary time series models, because here it converges rapidly to a point mass at a unit root as \(s\) increases. Our overall emphasis is that intuition and conventional wisdom need to be examined via critical thinking and theoretical verification before they can be trusted fully.
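The sampling-interval threshold in the abstract can be illustrated with a rough delta-method sketch (an illustrative approximation, not the paper's actual derivation): subsampling a stationary AR(1) with parameter \(\rho\) at interval \(s\) yields an AR(1) with parameter \(\rho^s\), and recovering \(\rho\) as the \(s\)-th root gives an approximate asymptotic variance proportional to \((1-\rho^{2s})\rho^{2-2s}/s^2\). Minimizing this in \(s\) reproduces the stated condition that \(s = 1\) is optimal if and only if \(\rho^2 \le 1/3\):

```python
# Rough delta-method sketch (illustrative only, not the paper's derivation).
# Subsampling an AR(1) with parameter rho at interval s gives an AR(1) with
# parameter rho**s; estimating that and taking the s-th root has approximate
# asymptotic variance proportional to (1 - rho**(2*s)) * rho**(2 - 2*s) / s**2.
import math

def approx_avar(rho: float, s: int) -> float:
    """Approximate asymptotic variance (up to a 1/n factor) of the
    interval-s estimator of rho under this sketch."""
    return (1 - rho ** (2 * s)) * rho ** (2 - 2 * s) / s ** 2

def optimal_s(rho: float, s_max: int = 50) -> int:
    """Sampling interval minimizing the approximate asymptotic variance."""
    return min(range(1, s_max + 1), key=lambda s: approx_avar(rho, s))

print(optimal_s(0.5))   # rho^2 = 0.25 <= 1/3: s = 1 is optimal
print(optimal_s(0.9))   # rho^2 = 0.81 >  1/3: a larger interval wins
# At the boundary rho^2 = 1/3, intervals s = 1 and s = 2 tie exactly:
rho0 = math.sqrt(1 / 3)
print(math.isclose(approx_avar(rho0, 1), approx_avar(rho0, 2)))
```

Under this approximation the two candidate intervals tie exactly at \(\rho^2 = 1/3\), matching the abstract's threshold; the growth rate of the optimal \(s\) near a unit root is a finer result from the paper itself.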
Published Version: doi:10.1080/07474938.2013.808567
Terms of Use: This article is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#OAP
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:10886849