President Trump fired the Bureau of Labor Statistics commissioner, Erika McEntarfer, after July’s jobs report showed very little job growth over the past quarter. Initially, the President accused her of “rigging” the numbers to make him look bad. More recently, members of his administration have tried to soften the criticism from “rigging” to merely a complaint about substantial revisions (one representative case is Casey Mulligan’s tweet here).
Let’s take the less inflammatory reason (unreliable jobs figures) as the true motivation here to ask a probing question: What would a successful change to the statistics program look like?
Revisions would not disappear. With statistics, there will always be revisions. Any statistical report is necessarily built on assumptions: ultimately, you collect a sample and, relying on assumptions and stylized facts, use it to make claims about the entire population. Ideally, one would survey the entire population, but that is cost-prohibitive in both money and time, so one uses an (ideally) representative sample of the population. If those assumptions and stylized facts change or stop being useful, the model must be revised, and revision will, in turn, change the claims the sample can support. In that case, revised data are a sign of an improvement to the model. Without revisions, the model would become less useful over time.
What about the size of revisions? That, of course, is a concern. If the model’s revisions frequently swing by huge amounts, then the model is fundamentally flawed. But University of Central Arkansas economist Jeremy Horpedahl shows that the BLS’s data revisions have shrunk over time (see also this post by University of Louisiana economist Gary Wagner). Not much room for improvement there.
Size and frequency of revisions will depend on the sample, and most importantly, on the response rate of the sample. A major problem with the BLS data in general is that response rates have been falling. Falling response rates mean that larger and larger imputations have to be made with less data. Not ideal. Rising response rates would be a sign of better-quality data.
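To see why falling response rates matter, here is a minimal sketch in Python. The numbers are entirely made up, and the simple mean-based imputation is an illustration, not the BLS’s actual methodology: the lower the response rate, the more of the estimate rests on imputation, and the more the estimate swings from one survey draw to the next.

```python
import random
import statistics

def estimate_spread(population, response_rate, n_trials=500, seed=0):
    """Simulate repeated surveys at a given response rate.

    Each trial: a random subset of establishments responds, and total
    employment is imputed by scaling the respondents' average up to the
    full population. Returns the standard deviation of the estimates
    across trials (how much the estimate swings draw to draw).
    """
    rng = random.Random(seed)
    n = len(population)
    estimates = []
    for _ in range(n_trials):
        respondents = [x for x in population if rng.random() < response_rate]
        if not respondents:
            continue
        # Impute: assume non-respondents look like respondents on average.
        estimates.append(statistics.mean(respondents) * n)
    return statistics.stdev(estimates)

# Hypothetical payrolls for 2,000 establishments (illustrative only).
pop_rng = random.Random(42)
population = [pop_rng.randint(5, 500) for _ in range(2000)]

spread_high = estimate_spread(population, response_rate=0.6)
spread_low = estimate_spread(population, response_rate=0.2)
# With fewer respondents, more rests on imputation and estimates swing more.
assert spread_low > spread_high
```

The point of the sketch is only the comparison at the end: the same population, surveyed with a 20% response rate instead of 60%, yields noticeably noisier estimates, which is the mechanism behind larger and more frequent revisions.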
We could also see how the BLS data correspond to other sources. ADP, the payroll company, puts out its own monthly jobs report. It is not identical to the BLS report (see the FAQ at the bottom of their page for differences), but it is a useful comparison tool. Indeed, revisions to the BLS data (and ADP’s own revisions) tend to bring the two data sets closer together. Over time, the BLS’s and ADP’s private employment numbers track each other closely, with the ADP report on average 1,000 jobs lower than the BLS report. Given that monthly job gains and losses run in the tens, if not hundreds, of thousands, such a discrepancy is not bad at all.[1] A lower discrepancy between the two data sets would be a sign of improvement.
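The comparison being described can be sketched in a few lines of Python. The figures below are hypothetical placeholders, not the actual BLS or ADP series; the idea is simply to weigh the average gap between two series against the typical size of a monthly change.

```python
# Hypothetical monthly private-employment changes, in thousands of jobs
# (illustrative numbers only, not the actual BLS or ADP series).
bls = [210, 150, -30, 95, 180, 63]
adp = [205, 160, -25, 90, 170, 65]

# Average monthly discrepancy between the two series.
gaps = [b - a for b, a in zip(bls, adp)]
mean_gap = sum(gaps) / len(gaps)

# Typical magnitude of a monthly change, for scale.
mean_abs_change = sum(abs(b) for b in bls) / len(bls)
```

A mean gap that is tiny relative to the typical monthly swing, as in this toy example, is what “not bad at all” means in practice; tracking that ratio over time would be one concrete way to judge whether a statistics program is improving.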
Improvements to economic data are a good thing. But any improvement will be a difficult process. One must be very, very careful about how one evaluates whether a change is an improvement.
—
[1] Note: All comparisons use non-seasonally adjusted (NSA) figures. Since seasonal adjustment is a function of the models each agency chooses, NSA figures provide the best apples-to-apples comparison. Using seasonally adjusted figures doesn’t alter this much: the discrepancy rises to 5,000 employees per month.
The post What Would Success Look Like? appeared first on Econlib.
Author: Jon Murphy