Almost 40 years ago, Robert Lucas made a huge, though not entirely original, contribution when he provided a compelling example of how the predictions of the then-standard macroeconometric models used for policy analysis were inherently vulnerable to shifts in the models’ empirically estimated parameters, shifts induced by the very policy change under consideration. Insofar as those models could provide reliable forecasts of the future course of the economy, it was because the policy environment under which the parameters of the model had been estimated was not changing during the period for which the forecasts were made. But any forecast deduced from the model conditioned on a policy change would necessarily be inaccurate, because the policy change itself would cause the agents in the model to alter their expectations, causing the parameters of the model to diverge from their previously estimated values. Lucas concluded that only models based on deep parameters reflecting the underlying tastes, technology, and resource constraints under which agents make decisions could provide a reliable basis for policy analysis.

The Lucas critique undoubtedly conveyed an important insight about how to use econometric models in analyzing the effects of policy changes, and if it had done no more than cause economists to be more cautious in offering policy advice based on their econometric models, and policy makers to be more skeptical about the advice they got from economists using such models, the Lucas critique would have performed a very valuable public service. Unfortunately, the lesson that the economics profession learned from the Lucas critique went far beyond that useful warning about the reliability of conditional forecasts potentially sensitive to unstable parameter estimates. In an earlier post, I discussed another way in which the Lucas critique has been misapplied. (One responsible way to deal with unstable parameter estimates would be to make forecasts showing a range of plausible outcomes depending on how parameter estimates might change as a result of the policy change. Such an approach is inherently messy, and, at least in the short run, would tend to make policy makers less likely to pay attention to the policy advice of economists. But the inherent sensitivity of forecasts to unstable model parameters ought to make one skeptical about the predictions derived from any econometric model.)

Instead, the Lucas critique was used by Lucas and his followers as a tool by which to advance a reductionist agenda of transforming macroeconomics into a narrow slice of microeconomics, the slice being applied general-equilibrium theory in which the models required drastic simplification before they could generate quantitative predictions. The key to deriving quantitative results from these models is to find an optimal intertemporal allocation of resources given the specified tastes, technology and resource constraints, which is typically done by describing the model in terms of an optimizing representative agent with a utility function, a production function, and a resource endowment. A kind of hand-waving is performed via the rational-expectations assumption, thereby allowing the optimal intertemporal allocation of the representative agent to be identified as a composite of the mutually compatible optimal plans of a set of decentralized agents, the hand-waving being motivated by the Arrow-Debreu welfare theorems proving that any Pareto-optimal allocation can be sustained by a corresponding equilibrium price vector. Under rational expectations, agents correctly anticipate future equilibrium prices, so that market-clearing prices in the current period are consistent with full intertemporal equilibrium.
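For concreteness, the canonical version of this representative-agent setup (the particular functional forms below are illustrative, not drawn from any specific paper) is an intertemporal planning problem of the form:

$$\max_{\{c_t,\, k_{t+1}\}_{t=0}^{\infty}} \; E_0 \sum_{t=0}^{\infty} \beta^t\, u(c_t) \qquad \text{s.t.} \qquad c_t + k_{t+1} = A_t f(k_t) + (1-\delta)\, k_t,$$

where $u$ is the representative agent’s utility function, $f$ the production function, $k_0$ the initial resource endowment, $\beta$ the discount factor, $\delta$ the depreciation rate, and $A_t$ a stochastic technology parameter. Rational expectations closes the model by requiring that the agent’s subjective distribution over future values of $A_t$ (and hence over future prices) coincide with the distribution generated by the model itself, which is what licenses reading the planner’s optimum as a decentralized equilibrium.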

What is amazing – mind-boggling might be a more apt adjective – is that this modeling strategy is held by Lucas and his followers to be invulnerable to the Lucas critique, being based supposedly on deep parameters reflecting nothing other than tastes, technology and resource endowments. The first point to make – there are many others, but we needn’t exhaust the list – is that it is borderline pathological to convert a valid and important warning about how economic models may be subject to misunderstanding or misuse into a weapon with which to demolish any model susceptible of such misunderstanding or misuse, as a prelude to replacing those models with the class of reductionist micromodels that now pass for macroeconomics.

But there is a second point to make, which is that the reductionist models adopted by Lucas and his followers are no less vulnerable to the Lucas critique than the models they replaced. All the New Classical models are explicitly conditioned on the assumption of optimality. It is only by positing an optimal solution for the representative agent that the equilibrium price vector can be inferred. The deep parameters of the model are conditioned on the assumption of optimality and the existence of an equilibrium price vector supporting that equilibrium. If the equilibrium does not obtain – the optimal plans of the individual agents or of the fantastical representative agent becoming incapable of execution – empirical estimates of the model’s parameters cannot correspond to the equilibrium values implied by the model itself. Parameter estimates are therefore sensitive to how closely the economic environment in which the parameters were estimated corresponded to conditions of equilibrium. If the conditions under which the parameters were estimated more nearly approximated the conditions of equilibrium than the period in which the model is being used to make conditional forecasts, those forecasts, from the point of view of the underlying equilibrium model, must be inaccurate. The Lucas critique devours its own offspring.

One issue is that since the macroeconomy (macrostate) is described by some finite set of parameters (interest rates, output, money supply, your favorite parameter, etc.), the details of even the correct microfoundations must be lost in the macrostate. Your idea of ‘macrofoundations’ is pretty much the only logical way to proceed …

http://informationtransfereconomics.blogspot.com/2014/12/information-equilibrium-theories.html

“The first point to make – there are many others, but we needn’t exhaust the list – is that it is borderline pathological to convert a valid and important warning about how economic models may be subject to misunderstanding or misuse into a weapon with which to demolish any model susceptible of such misunderstanding or misuse, as a prelude to replacing those models with the class of reductionist micromodels that now pass for macroeconomics.”

would you possibly break this down into simpler predicates for me?

Much of Varoufakis’s critique consists of showing the impossibility of the assumptions, which would seem to be final and complete. Nevertheless, he also argues that the models can be useful to inform.

Nice post, but did you know that almost 40 years ago Lucas and Sargent conceded your point?

Paul Romer has recently been reminding people of the famous 1978 Boston Fed conference. In their paper there Lucas and Sargent wrote:

“… the question of whether a particular model is structural is an empirical, not theoretical, one.”

Yet subsequently both these worthies, as well as their followers, found it convenient to forget about this caveat and decided that microfounded models were immune to the critique by construction.

Till recently the following three claims about the Lucas critique were conventional wisdom:

1. Lucas originated the Lucas critique.

2. Lucas showed that the CC/SEM approach was facing problems because it ignored the Lucas critique.

3. Lucas showed that models based on his preferred microfoundations would be immune to the critique.

Each of the above claims is false, and was known to be false in the 1970s. Yet, amazingly, these demonstrably false claims had become conventional wisdom by the 1980s.

*

Stanley Fischer (1983): “It is indeed remarkable that the Lucas policy evaluation critique has triumphed without any detailed empirical support beyond Lucas’s accusation that macroeconometric models in the 1960s all predicted too little inflation in the 1970s. The general [theoretical] point made by the critique is correct and was known before it was so eloquently and forcefully propounded by Lucas. That the point has been important empirically, however, is something that should have been demonstrated rather than asserted.”

*

Sargent (2005): “Lucas used evidence of coefficient drift and add factors to bash the Keynesians, but as I read his paper, at least, he didn’t claim to offer an explanation for the observed drift. His three examples are each time-invariant structures. Data from them would not have coefficient drift even if you fit one of those misspecified Keynesian models. So the connection of the first part of his paper to the second was weak.”

*

Chris Sims (h/t James Morley [then at Macro Advisers]): “The only coherent interpretation of the Lucas critique is that it states that if one uses a model which incorrectly describes the reaction of expectations formation to policy choice, it will produce incorrect evaluations of policy. The implication is not that econometric evaluation of policy using models fitted to history is impossible, but that it requires correct specification of the reaction of the economy to policy… There may be some policy issues where the simple rational expectations policy analysis paradigm – treating policy as given by a rule with deterministic parameters, which are to be changed once and for all, with no one knowing beforehand that the change may occur and no one doubting afterward that the change is permanent – is a useful approximate simplifying assumption. To the extent that the rational expectations literature has led us to suppose that all “real” policy change must fit into this internally inconsistent mold, it has led us onto sterile ground.”

“… the hand-waving being motivated by the Arrow-Debreu welfare theorems proving that any Pareto-optimal allocation can be sustained by a corresponding equilibrium price vector.”

Thank you, thank you, thank you. I’ve long been troubled by this, and even in my own mind used the phrase “hand-waving” when thinking about this, but not been able to put the pieces together the way you do in the complete sentence that this is a part of.

Jason, At the risk of resurrecting issues from my posts on accounting identities in macroeconomics, aren’t you just saying that any correct theory must satisfy the relevant conservation laws?

LAL, What I am saying is that the Lucas critique makes a very valuable point in highlighting the instability of empirical estimates of parameter values that could change as a result of a policy change, so that any prediction of the effects of a policy change should take into account the potential effects of the policy change on the parameters of the model used to predict its effects. But the possibility raised by the Lucas critique was used by Lucas et al. as the basis for rejecting a priori any macroeconomic model in which the parameters could be affected by a policy change. That was absurd (borderline pathological) because it meant that the only models left standing were the ones that Lucas et al. wanted to use. That was scientific malpractice imposed on economics under the pretense of some supposed principle of methodological individualism. But reductionism is not the same as methodological individualism.

john, I don’t know how Varoufakis got into this discussion. Care to provide a reference?

Herman, Thanks ever so much for providing this fascinating historical background, which I was not aware of.

marcel proust, Thank you, glad that you found what I wrote helpful.

David,

Interesting post, and I’m sympathetic to your point of view. But what’s your alternative?

– Are you a fan of the Cowles Commission-type of econometric models?

– Are you just arguing that the modern microfounded models should be viewed with just as much skepticism as Cowles-type models?

I’m just not sure what the alternative to microfounded DSGE models is.

What would have happened in 2008 if the Fed had used only Lucas’s models? I know the answer, don’t you?

Actually, I don’t think that even Lucas would have used his models in 2008.

Great post.

Sure. He said something like “this time there were some monetary disturbances” …

Great post as usual, David.

I am tired so I am going to make my points in bullet point form:

– Clearly preferences, technology and resource constraints are vulnerable to the Lucas Critique too (advertising campaigns, R & D and redistribution would constitute policy changes that would affect each of these, respectively).

– Even if we were to concede that these things could be considered the ‘deep parameters’ of the economy, who says the functional forms chosen in macroeconomic models are empirically accurate representations of the parameters of the real world? Do people/firms have Cobb-Douglas or Leontief preferences/production functions? What is the ‘level’ of technology as given by A? Do macroeconomic models habitually justify their choice of functional form and parameter values like this, or are the choices instead – as always – based on considerations of mathematical tractability?

– It always astounds me how much focus economists place on the mere existence of equilibrium, without actually discussing how an economy might get there from where it is now. As you mention, ratex serves as a sort of deus ex machina which magically means that the plans of agents are coordinated. We are then offered vague statements about how agents would learn and react to converge to this equilibrium, except there’s no actual formal modelling of this process. I think this has something to do with Romer’s ‘mathiness’.
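For readers who haven’t seen the functional forms mentioned above, the two standard textbook production functions (written here in generic notation, not tied to any particular model) are:

$$\text{Cobb-Douglas:}\quad F(K, L) = A\, K^{\alpha} L^{1-\alpha}, \qquad \text{Leontief:}\quad F(K, L) = \min\!\left(\frac{K}{a}, \frac{L}{b}\right),$$

where $K$ is capital, $L$ is labor, $A$ is the level of technology, and $\alpha$, $a$, $b$ are parameters. The first allows smooth substitution between inputs; the second allows none. The point of the comment is that nothing in the data dictates either choice, which is typically made for tractability.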

Nick, Thanks.

Unlearningecon, Thanks.

Yep, there are all kinds of things that happen all the time that could affect the real equilibrium. They treat those things as noise that doesn’t really systematically affect the real equilibrium or possibly the source of the technology shocks that create the “real business cycle.” The idea of the Lucas critique was that policy systematically changes the deep parameters in a way that defeats policy. I don’t buy it, but that might be the response.

The functional forms are obviously arbitrary, and, yes, the forms that are chosen are the ones that lead to tractable solutions. But that also creates opportunities for younger guys to play with different functional forms using new mathematical techniques, which means that they can publish new results.

The inability to explain the traverse from one equilibrium to another, much less from a disequilibrium (if such a state could ever be imagined) to an equilibrium, is obviously the most telling argument against the pretension that anything is actually being explained.