Almost 40 years ago, Robert Lucas made a huge, but not quite original, contribution when he provided a compelling example of how the predictions of the then-standard macroeconometric models used for policy analysis were inherently vulnerable to shifts in the empirically estimated parameters contained in the models, shifts induced by the very policy change under consideration. Insofar as those models could provide reliable forecasts of the future course of the economy, it was because the policy environment under which the parameters of the model had been estimated was not changing during the time period for which the forecasts were made. But any forecast deduced from the model conditioned on a policy change would necessarily be inaccurate, because the policy change itself would cause the agents in the model to alter their expectations, causing the parameters of the model to diverge from their previously estimated values. Lucas concluded that only models based on deep parameters reflecting the underlying tastes, technology, and resource constraints under which agents make decisions could provide a reliable basis for policy analysis.
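The logic of the critique can be stated compactly. In a schematic rendering of Lucas's argument (the notation here is a sketch, not a quotation from his 1976 paper), the estimated model takes the form

$$y_{t+1} = F(y_t, x_t, \theta, \varepsilon_t),$$

where $y_t$ is the state of the economy, $x_t$ the policy instruments, $\varepsilon_t$ random shocks, and $\theta$ the vector of estimated parameters. The critique is that $\theta$ is not a constant of nature but a function $\theta(\lambda)$ of the prevailing policy rule $\lambda$: an estimate obtained from data generated under rule $\lambda_0$ is an estimate of $\theta(\lambda_0)$, so feeding it into a forecast conditioned on a new rule $\lambda_1$ conflates two different parameter vectors.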
The Lucas critique undoubtedly conveyed an important insight about how to use econometric models in analyzing the effects of policy changes, and if it had done no more than cause economists to be more cautious in offering policy advice based on their econometric models, and policy makers to be more skeptical about the advice they got from economists using such models, the Lucas critique would have performed a very valuable public service. Unfortunately, the lesson that the economics profession learned from the Lucas critique went far beyond that useful warning about the reliability of conditional forecasts potentially sensitive to unstable parameter estimates. In an earlier post, I discussed another way in which the Lucas critique has been misapplied. (One responsible way to deal with unstable parameter estimates would be to make forecasts showing a range of plausible outcomes depending on how parameter estimates might change as a result of the policy change. Such an approach is inherently messy, and, at least in the short run, would tend to make policy makers less likely to pay attention to the policy advice of economists. But the inherent sensitivity of forecasts to unstable model parameters ought to make one skeptical about the predictions derived from any econometric model.)
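To make that parenthetical suggestion concrete, here is a minimal sketch in Python of what reporting a range of outcomes might look like. The one-equation model, the parameter values, and the candidate post-change values of the persistence parameter are all made up for illustration; the point is only the mechanics of reporting a fan of paths rather than a single path.

```python
# A fan of conditional forecasts from a stylized one-equation model:
#   y[t+1] = rho * y[t] + policy_effect
# rho was estimated under the old policy regime; under the Lucas critique
# it may shift when the policy changes, so we report one path for each
# plausible post-change value instead of a single path from the point estimate.

def forecast_path(y0, rho, policy_effect, horizon):
    """Iterate the model forward from the initial state y0."""
    path = [y0]
    for _ in range(horizon):
        path.append(rho * path[-1] + policy_effect)
    return path

point_estimate = 0.90                      # rho estimated from pre-change data
plausible_rhos = [0.70, 0.80, 0.90, 0.95]  # ways rho might shift post-change

for rho in plausible_rhos:
    path = forecast_path(y0=1.0, rho=rho, policy_effect=0.5, horizon=5)
    label = "point estimate" if rho == point_estimate else "alternative"
    print(f"rho = {rho:.2f} ({label}): " + ", ".join(f"{y:.2f}" for y in path))
```

The spread across the paths, not any single path, is the deliverable; it conveys how much the forecast depends on an assumption that the estimation sample cannot pin down.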
Instead, the Lucas critique was used by Lucas and his followers as a tool with which to advance a reductionist agenda of transforming macroeconomics into a narrow slice of microeconomics, the slice being applied general-equilibrium theory, in which the models require drastic simplification before they can generate quantitative predictions. The key to deriving quantitative results from these models is to find an optimal intertemporal allocation of resources given the specified tastes, technology, and resource constraints, which is typically done by describing the model in terms of an optimizing representative agent with a utility function, a production function, and a resource endowment. A kind of hand-waving is then performed via the rational-expectations assumption, which allows the optimal intertemporal allocation of the representative agent to be identified with a composite of the mutually compatible optimal plans of a set of decentralized agents; the hand-waving is motivated by the Arrow-Debreu welfare theorems, which prove that any Pareto-optimal allocation can be supported by a corresponding equilibrium price vector. Under rational expectations, agents correctly anticipate future equilibrium prices, so that market-clearing prices in the current period are consistent with a full intertemporal equilibrium.
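The canonical form of the exercise is worth writing down (this is a textbook sketch, with generic utility and production functions, not any particular published model). The representative agent solves

$$\max_{\{c_t,\,k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^t u(c_t) \quad \text{subject to} \quad c_t + k_{t+1} = f(k_t, z_t) + (1-\delta)k_t,$$

where $c_t$ is consumption, $k_t$ the capital stock, $z_t$ a technology shock, and the supposedly deep parameters are the discount factor $\beta$, the depreciation rate $\delta$, and the parameters of $u$ and $f$. Rational expectations is what licenses reading the solution of this planner's problem as the equilibrium of a decentralized economy, with the shadow prices of the optimum doing duty as the market-clearing price vector.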
What is amazing (mind-boggling might be a more apt adjective) is that this modeling strategy is held by Lucas and his followers to be invulnerable to the Lucas critique, being based supposedly on deep parameters reflecting nothing other than tastes, technology, and resource endowments. The first point to make (there are many others, but we needn't exhaust the list) is that it is borderline pathological to convert a valid and important warning about how economic models may be subject to misunderstanding or misuse into a weapon with which to demolish any model susceptible to such misunderstanding or misuse, as a prelude to replacing those models with the class of reductionist micromodels that now pass for macroeconomics.
But there is a second point to make, which is that the reductionist models adopted by Lucas and his followers are no less vulnerable to the Lucas critique than the models they replaced. All the New Classical models are explicitly conditioned on the assumption of optimality. It is only by positing an optimal solution for the representative agent that the equilibrium price vector can be inferred. The deep parameters of the model are thus conditioned on the assumption of optimality and on the existence of an equilibrium price vector supporting that optimum. If the equilibrium does not obtain (the optimal plans of the individual agents, or of the fantastical representative agent, becoming incapable of execution), empirical estimates of the model's parameters cannot correspond to the equilibrium values implied by the model itself. Parameter estimates are therefore sensitive to how closely the economic environment in which the parameters were estimated corresponded to conditions of equilibrium. If the conditions under which the parameters were estimated more nearly approximated the conditions of equilibrium than those of the period for which the model is being used to make conditional forecasts, those forecasts, from the point of view of the underlying equilibrium model, must be inaccurate. The Lucas critique devours its own offspring.
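The circularity is easiest to see in how the deep parameters are actually estimated. In the textbook case (sketched here with standard constant-relative-risk-aversion notation, not the procedure of any specific study), the discount factor $\beta$ and the curvature parameter $\sigma$ are recovered from the consumption Euler equation

$$E_t\!\left[\beta \left(\frac{c_{t+1}}{c_t}\right)^{-\sigma}(1 + r_{t+1})\right] = 1,$$

a moment condition that holds in the data only if the agents' optimal plans were actually being executed during the sample period. Estimates obtained from disequilibrium data are not estimates of the deep parameters the model defines, and they are every bit as regime-dependent as the reduced-form coefficients Lucas criticized.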