Posts Tagged 'Karl Popper'

Paul Romer on Modern Macroeconomics, Or, the “All Models Are False” Dodge

Paul Romer has been engaged for some time in a worthy campaign against the travesty of modern macroeconomics. A little over a year ago I commented favorably on Romer’s takedown of Robert Lucas, but I also defended George Stigler against what I thought was an unfair attempt by Romer to identify Stigler as an inspiration and role model for Lucas’s transgressions. Now, just a week ago, a paper based on Romer’s Commons Memorial Lecture to the Omicron Delta Epsilon Society has become just about the hottest item in the econ-blogosphere, even drawing the attention of Daniel Drezner in the Washington Post.

I have already written critically about modern macroeconomics in my five years of blogging, and here are some links to previous posts (link, link, link, link). It’s good to see that Romer is continuing to voice his criticisms, and that they are gaining a lot of attention. But the macroeconomic hierarchy is used to criticism, and it has its standard responses, which are being dutifully deployed by defenders of the powers that be.

Romer’s most effective rhetorical strategy is to point out that the RBC core of modern DSGE models posits unobservable taste and technology shocks to account for fluctuations in the economic time series, but that these taste and technology shocks are themselves simply inferred from the fluctuations in the time-series data, so that the entire structure of modern macroeconometrics is little more than an elaborate and sophisticated exercise in question-begging.
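To see the circularity in a stripped-down form, here is a toy sketch (a deliberately crude caricature of my own, not an actual RBC model): if the unobserved shocks are simply backed out as whatever a model with an assumed parameter cannot explain, then the model plus its shocks reproduces the data perfectly by construction, and nothing has been tested.

# Toy illustration of the circularity (not a real DSGE exercise):
# back the "shocks" out as residuals, and the model "fits" by construction.
import numpy as np

rng = np.random.default_rng(3)
output = np.cumsum(rng.normal(size=100))         # some observed time series (made up)

rho = 0.9                                        # an assumed persistence parameter
implied_shocks = output[1:] - rho * output[:-1]  # "technology shocks" inferred as residuals

reconstructed = rho * output[:-1] + implied_shocks
print(np.allclose(reconstructed, output[1:]))    # True: a perfect "fit" with no testable content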

In this post, I just want to highlight one of the favorite catch-phrases of modern macroeconomics, which serves as a kind of default excuse and self-justification for its rampant empirical failures (documented by Lipsey and Carlaw, as I showed in this post). When confronted with evidence that the predictions of their models are wrong, the standard and almost comically self-confident response of the modern macroeconomists is: all models are false. By which the modern macroeconomists apparently mean something like: “And if they are all false anyway, you can’t hold us accountable, because any model can be proven wrong. What really matters is that our models, being microfounded, are not subject to the Lucas Critique, and all models other than ours, not being microfounded and therefore being subject to the Lucas Critique, are simply unworthy of consideration.” This is what I have called methodological arrogance. That response is simply not true, because the Lucas Critique applies even to microfounded models, which are strictly valid only in equilibrium settings and unable to predict the adjustment of economies in the transition between equilibrium states. All models are subject to the Lucas Critique.

Here is Romer’s take:

In response to the observation that the shocks are imaginary, a standard defense invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions (p.14).” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favorite.

Friedman’s methodological assertion would have been correct had Friedman substituted “simple” for “unrealistic.” Sometimes simplifications are unrealistic, but they don’t have to be. A simplification is a generalization of something complicated. By simplifying, we can transform a problem that had been too complex to handle into a problem that can be analyzed more easily. But such simplifications aren’t necessarily unrealistic. To say that all models are false is simply a dodge to avoid having to account for failure. The excuse, of course, is that all those other models are subject to the Lucas Critique, so my model wins. But your model is subject to the Lucas Critique even though you claim it’s not, so even according to the rules you have arbitrarily laid down, you don’t win.

So I was just curious about where the little phrase “all models are false” came from. I was expecting that Karl Popper might have said it, in which case to use the phrase as a defense mechanism against empirical refutation would have been a particularly fraudulent tactic, because it would have been a perversion of Popper’s methodological stance, which was to force our theoretical constructs to face up to, not to insulate them from, empirical testing. But when I googled “all models are false,” what I found was not Popper but the British statistician G. E. P. Box, who wrote, in his paper “Science and Statistics,” based on his R. A. Fisher Memorial Lecture to the American Statistical Association: “All models are wrong.” Here’s the exact quote:

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad. Pure mathematics is concerned with propositions like “given that A is true, does B necessarily follow?” Since the statement is a conditional one, it has nothing whatsoever to do with the truth of A nor of the consequences B in relation to real life. The pure mathematician, acting in that capacity, need not, and perhaps should not, have any contact with practical matters at all.

In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless. The physicist knows that particles have mass and yet certain results, approximating what really happens, may be derived from the assumption that they do not. Equally, the statistician knows, for example, that in nature there never was a normal distribution, there never was a straight line, yet with normal and linear assumptions, known to be false, he can often derive results which match, to a useful approximation, those found in the real world. It follows that, although rigorous derivation of logical consequences is of great importance to statistics, such derivations are necessarily encapsulated in the knowledge that premise, and hence consequence, do not describe natural truth.

It follows that we cannot know that any statistical technique we develop is useful unless we use it. Major advances in science and in the science of statistics in particular, usually occur, therefore, as the result of the theory-practice iteration.

One of the most annoying conceits of modern macroeconomists is the constant self-congratulatory references to themselves as scientists because of their ostentatious use of axiomatic reasoning, formal proofs, and higher mathematical techniques. The tiresome self-congratulation might get toned down ever so slightly if they bothered to read and take to heart Box’s lecture.


Some Popperian (and Kuhnian!) Responses to Robert Waldmann

Robert Waldmann has been criticizing my arguments for the importance of monetary policy in accounting for both the 2008 downturn and the weakness of the subsequent recovery. He raises interesting issues which I think warrant a response. In my previous response to Waldmann, I closed with the following paragraph:

I think that the way to pick out changes in monetary policy is to look at changes in inflation expectations, and I think that you can find some correlation between changes in monetary policy, so identified, and employment, though it is probably not nearly as striking as the relationship between asset prices and inflation expectations. I also don’t think that operation twist had any positive effect, but QE3 does seem to have had some. I am not familiar with the study by the San Francisco Fed economists, but I will try to find it and see what I can make out of it. In the meantime, even if Waldmann is correct about the relationship between monetary policy and employment since 2008, there are all kinds of good reasons for not rushing to reject a null hypothesis on the basis of a handful of ambiguous observations. Rushing to judgment wouldn’t be the calm and reasonable thing to do.

Waldmann replied as follows on his blog:

Get the null on your side is my motto (I admit it). You follow this. You suggest that your hypothesis is the null hypothesis, then abuse Neyman and Pearson by implying that we can draw interesting conclusions from failure to reject the null. Basically the sentence which includes the word “null” is the assertion that we should assume you are right and I am wrong until I offer solid proof. To be briefer, since we are working in social science, you are asking that I assume you are right. This is not an ideal approach to debate.
I ask you to review your sentence which contains the word “null” and reconsider if you really believe it. The choice of the null should be harmless (it is an a priori choice without a prior). How about we make the usual null hypothesis that an effect is zero? Can you reject the null that monetary policy since 2009 has had no effect? At what confidence level is the null rejected? Did you use a t-test? An F-test? “Null” is a technical term, and I ask again if you would be willing to retract the sentence including the word “null”.

First, I was careless in identifying my hypothesis that monetary policy is an important factor with the “null” hypothesis. The convention in statistical testing is to identify the null hypothesis as the alternative to the hypothesis being tested. What I meant to say was that even if the evidence is not sufficient to reject the null hypothesis that monetary policy is ineffective, there may still be good reason not to reject the alternative or maintained hypothesis that monetary policy is effective. In the real world, there is ambiguity. Evidence is not necessarily conclusive, so we accept for the most part that there really are alternative ways of looking at the world and that, as a practical matter, we don’t have sufficient evidence to reject conclusively either the null or the maintained hypothesis. With the relatively small number of observations that we are working with, statistical tests aren’t powerful enough to reject the null with a high level of confidence, so I have trouble accepting the standard statistical model of hypothesis testing in this context.
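To put the point about power in concrete terms, here is a quick simulation sketch with entirely made-up numbers (not actual macroeconomic data): with only a couple of dozen observations and a modest true effect, a standard t-test fails to reject the false null of “no effect” most of the time.

# Power simulation with hypothetical numbers: how often does a t-test
# reject a false null of "no effect" when observations are scarce?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs = 20          # roughly a handful of post-2008 quarterly observations
true_effect = 0.3   # assumed modest true effect, in standard-deviation units
n_sims = 10_000

rejections = 0
for _ in range(n_sims):
    sample = rng.normal(loc=true_effect, scale=1.0, size=n_obs)
    t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < 0.05:
        rejections += 1

print(f"Power with n={n_obs}: {rejections / n_sims:.2f}")
# Comes out near 0.25: the test misses the effect roughly three times out
# of four, so failing to reject the null says very little in this setting.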

But even aside from the paucity of observations, there is a deeper problem, which is that, as Karl Popper the arch-falsificationist was among the first to point out, observations are not independent of the underlying theory. We use the theory to interpret what we are observing. Think of Galileo: he was confronted by people telling him that the theory that the earth travels around a stationary sun is obviously refuted by the clear evidence that the earth is stationary and that it is the sun that moves across the sky. Galileo therefore had to write a whole book in which he explained, using the Copernican theory, how to interpret the apparent evidence that the earth is stationary and the sun is moving. By doing so, Galileo didn’t prove that the earth-centric model was wrong; he simply was able to show that what his opponents regarded as conclusive empirical validation of their theory was not conclusive, inasmuch as the Copernican theory was able to interpret the supposedly contradictory evidence in a manner consistent with its own premises. As Kuhn showed in The Structure of Scientific Revolutions, the initial astronomical evidence was more supportive of the Ptolemaic hypothesis than of the Copernican hypothesis. It was only because the Copernicans didn’t give up prematurely that they eventually gathered sufficient evidence to overwhelm the opposition.

Waldmann continues:

using expected inflation to identify monetary policy is only a valid statistical procedure if one is willing to assume that nothing else affects expected inflation.  If you think that say OPEC ever had any influence on expected inflation, then you can’t use your identifying assumption.  In particular TIPS breakevens can be fairly well fit (not predicted because not out of sample) using lagged data other than data on what the FOMC did.

again I refer to

http://www.angrybearblog.com/2013/02/inflation-expectations-and-lagged.html

[Here is the chart to which Waldmann refers.]


(Legend: red is the 5-year TIPS breakeven, or expected inflation; blue is the change over the *past* year of the price of a barrel of oil times 0.1, plus 1.6; green is the geometric mean of the change over the *past year* of the personal consumption deflator and the personal consumption minus food and energy deflator.)

Again, I don’t think formal statistical modeling is the issue here, because the data are neither sufficient in quantity nor unambiguous in their interpretation. The data are what they are, and if we cannot parse out what has been caused by OPEC and what has been caused by the Fed, we have to accept the ambiguity and not pretend that it doesn’t exist just in order to impose an identifying assumption. I would also make what I would have thought is an obvious observation: since 2007 the causality between the price of oil and the state of the economy has been running in both directions, so any statistical model that takes the price of oil as exogenous is simply not credible.

I don’t see how anyone could look at this graph and then claim we can identify monetary policy by the TIPS breakeven.  That is only valid if nothing but monetary policy affects inflation expectations.

I don’t understand that. Why, if monetary policy accounts for, say, 50% of the variation in inflation expectations, is it not valid to use the TIPS spread to identify monetary policy? We may have to make some plausible assumptions about when there were supply-side disturbances, or add some instrumental variables, but I don’t see why we would want to ignore monetary policy just because factors other than monetary policy may be affecting inflation expectations.
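To sketch what I mean by adding instruments (on simulated data with made-up coefficients, purely to illustrate the logic, not to estimate anything): if we can find a variable that moves inflation expectations only through policy, a simple two-stage least squares purges the oil-driven variation from the estimated effect.

# Hand-rolled two-stage least squares on invented data: instrument the
# TIPS breakeven with a policy-only variable so that oil-driven variation
# in expectations does not contaminate the estimated effect.
import numpy as np

rng = np.random.default_rng(1)
n = 60                                         # monthly observations, made up
policy_signal = rng.normal(size=n)             # instrument, assumed to reflect policy only
oil_shock = rng.normal(size=n)                 # confounding supply-side factor
breakeven = 0.8 * policy_signal + 0.5 * oil_shock + rng.normal(scale=0.3, size=n)
employment = 1.5 * breakeven - 0.7 * oil_shock + rng.normal(scale=0.5, size=n)

def add_const(x):
    return np.column_stack([np.ones(len(x)), x])

# First stage: fit the breakeven on the instrument, keep the fitted values.
beta1, *_ = np.linalg.lstsq(add_const(policy_signal), breakeven, rcond=None)
breakeven_hat = add_const(policy_signal) @ beta1

# Second stage: regress the outcome on the fitted (policy-driven) breakeven.
beta2, *_ = np.linalg.lstsq(add_const(breakeven_hat), employment, rcond=None)
print(f"IV estimate of the breakeven effect: {beta2[1]:.2f} (true value here: 1.5)")

Ordinary least squares on the same data would be biased by the oil term; the instrument is what allows the monetary component to be identified even though other factors also move expectations.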

Similarly in 1933 monetary policy wasn’t the only thing that changed. I understand that there was considerable policy reform in the so-called “first hundred days.” The idea that we can identify the effect of monetary policy by looking at the USA in 1933 is based on the assumption that Roosevelt did nothing else. This is not reasonable.

Sure, he did other things, but you can’t seriously mean that government spending increased in the first hundred days by an amount sufficient to account for the explosion in output from April to July. I would concede that other things that Roosevelt did may also have helped restore confidence, but I don’t see how you can deny that the devaluation of the dollar was at or near the top of the list of economic actions taken in the first four months of his presidency.

But I think we can detect the effect of recent monetary policy on TIPS breakevens if we agree that it (including QE) is working principally through forward guidance. There should be quick effects on asset prices when surprising shifts are announced. QE4 (December 2012) was definitely a surprise. The TIPS spread barely moved (within the range of normal fluctuations). I think the question is settled. I do not think it is optimal to ignore daily data when you have it and treat the same quarter as the same instant. Some prices are sticky and some aren’t. Bond prices aren’t.

What makes you so sure that QE4 was a surprise? I think that there was considerable disappointment that there was no increase in the inflation target, just a willingness to accept some slight amount of overshooting (2.5%) before applying the brakes as long as unemployment remains over 6.5%. Ambiguity reigns supreme.
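For what it’s worth, Waldmann’s daily-data test can be written down in a few lines (with invented numbers standing in for the actual TIPS series): compare the announcement-day change in the breakeven with the ordinary spread of daily changes. Whether such a comparison settles anything depends, of course, on whether the announcement really was a surprise, which is precisely what is in dispute.

# Event-study sketch with invented daily breakeven data: is the move on an
# announcement day large relative to ordinary day-to-day fluctuations?
import numpy as np

rng = np.random.default_rng(2)
daily_changes = rng.normal(loc=0.0, scale=0.02, size=250)  # a year of ordinary daily moves (pp), made up
announcement_move = 0.01                                   # hypothetical change on the announcement day

z = announcement_move / daily_changes.std()
print(f"Announcement-day move is {z:.1f} standard deviations of a normal day")
# A move well inside the normal daily range is what Waldmann reports for
# QE4; it counts as evidence against an effect only if the announcement
# genuinely surprised the market.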


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
