Posts Tagged 'reductionism'

Jack Schwartz on the Weaknesses of the Mathematical Mind

I was recently rereading an essay by Karl Popper, “A Realistic View of Logic, Physics, and History,” published in his collection of essays Objective Knowledge: An Evolutionary Approach, because it discusses the role of reductionism in science and philosophy, a topic about which I’ve written a number of previous posts in connection with the microfoundations of macroeconomics.

Here is an important passage from Popper’s essay:

What I should wish to assert is (1) that criticism is a most important methodological device: and (2) that if you answer criticism by saying, “I do not like your logic: your logic may be all right for you, but I prefer a different logic, and according to my logic this criticism is not valid”, then you may undermine the method of critical discussion.

Now I should distinguish between two main uses of logic, namely (1) its use in the demonstrative sciences – that is to say, the mathematical sciences – and (2) its use in the empirical sciences.

In the demonstrative sciences logic is used in the main for proofs – for the transmission of truth – while in the empirical sciences it is almost exclusively used critically – for the retransmission of falsity. Of course, applied mathematics comes in too, which implicitly makes use of the proofs of pure mathematics, but the role of mathematics in the empirical sciences is somewhat dubious in several respects. (There exists a wonderful article by Schwartz to this effect.)

The article to which Popper refers, by Jack Schwartz, appears in a volume edited by Ernest Nagel, Patrick Suppes, and Alfred Tarski, Logic, Methodology and Philosophy of Science. The title of the essay, “The Pernicious Influence of Mathematics on Science,” caught my eye, so I tried to track it down. The essay is unavailable on the internet except behind a paywall, so I bought a used copy of the volume for $6 including postage. It was well worth the $6 I paid to read it.

Before quoting from the essay, I would just note that Jacob T. (Jack) Schwartz was far from being innocent of mathematical and scientific knowledge. Here’s a snippet from the Wikipedia entry on Schwartz.

His research interests included the theory of linear operators, von Neumann algebras, quantum field theory, time-sharing, parallel computing, programming language design and implementation, robotics, set-theoretic approaches in computational logic, proof and program verification systems, multimedia authoring tools, experimental studies of visual perception, and multimedia and other high-level software techniques for analysis and visualization of bioinformatic data.

He authored 18 books and more than 100 papers and technical reports.

He was also the inventor of the Artspeak programming language that historically ran on mainframes and produced graphical output using a single-color graphical plotter.

He served as Chairman of the Computer Science Department (which he founded) at the Courant Institute of Mathematical Sciences, New York University, from 1969 to 1977. He also served as Chairman of the Computer Science Board of the National Research Council and was the former Chairman of the National Science Foundation Advisory Committee for Information, Robotics and Intelligent Systems. From 1986 to 1989, he was the Director of DARPA’s Information Science and Technology Office (DARPA/ISTO) in Arlington, Virginia.

Here is a link to his obituary.

Though not trained as an economist, Schwartz, an autodidact, wrote two books on economic theory.

With that introduction, I quote from, and comment on, Schwartz’s essay.

Our announced subject today is the role of mathematics in the formulation of physical theories. I wish, however, to make use of the license permitted at philosophical congresses, in two regards: in the first place, to confine myself to the negative aspects of this role, leaving it to others to dwell on the amazing triumphs of the mathematical method; in the second place, to comment not only on physical science but also on social science, in which the characteristic inadequacies which I wish to discuss are more readily apparent.

Computer programmers often make a certain remark about computing machines, which may perhaps be taken as a complaint: that computing machines, with a perfect lack of discrimination, will do any foolish thing they are told to do. The reason for this lies of course in the narrow fixation of the computing machine’s “intelligence” upon the basely typographical details of its own perceptions – its inability to be guided by any large context. In a psychological description of the computer intelligence, three related adjectives push themselves forward: single-mindedness, literal-mindedness, simple-mindedness. Recognizing this, we should at the same time recognize that this single-mindedness, literal-mindedness, simple-mindedness also characterizes theoretical mathematics, though to a lesser extent.

It is a continual result of the fact that science tries to deal with reality that even the most precise sciences normally work with more or less ill-understood approximations toward which the scientist must maintain an appropriate skepticism. Thus, for instance, it may come as a shock to the mathematician to learn that the Schrodinger equation for the hydrogen atom, which he is able to solve only after a considerable effort of functional analysis and special function theory, is not a literally correct description of this atom, but only an approximation to a somewhat more correct equation taking account of spin, magnetic dipole, and relativistic effects; that this corrected equation is itself only an ill-understood approximation to an infinite set of quantum field-theoretic equations; and finally that the quantum field theory, besides diverging, neglects a myriad of strange-particle interactions whose strength and form are largely unknown. The physicist, looking at the original Schrodinger equation, learns to sense in it the presence of many invisible terms, integral, integrodifferential, perhaps even more complicated types of operators, in addition to the differential terms visible, and this sense inspires an entirely appropriate disregard for the purely technical features of the equation which he sees. This very healthy self-skepticism is foreign to the mathematical approach. . . .

Schwartz, in other words, is noting that the mathematical equations that physicists use in many contexts cannot be relied upon without qualification as accurate or exact representations of reality. The mathematics that physicists and other physical scientists use to express their theories is often inexact or approximate, inasmuch as reality is more complicated than our theories can capture mathematically. Part of what goes into the making of a good scientist is a kind of artistic feeling for how to adjust or interpret a mathematical model to take into account what the bare mathematics cannot describe in a manageable way.
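
For concreteness (this is a standard textbook statement, not anything taken from Schwartz’s essay), the equation Schwartz has in mind, in its simplest non-relativistic, spinless form for the hydrogen atom, is

\[ -\frac{\hbar^2}{2\mu}\,\nabla^2 \psi(\mathbf{r}) \;-\; \frac{e^2}{4\pi\varepsilon_0 r}\,\psi(\mathbf{r}) \;=\; E\,\psi(\mathbf{r}), \]

where \( \mu \) is the reduced mass of the electron-proton system. The spin, magnetic-dipole, relativistic, and field-theoretic corrections that Schwartz mentions all enter as additional terms that this tidy expression omits, which is precisely his point.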

The literal-mindedness of mathematics . . . makes it essential, if mathematics is to be appropriately used in science, that the assumptions upon which mathematics is to elaborate be correctly chosen from a larger point of view, invisible to mathematics itself. The single-mindedness of mathematics reinforces this conclusion. Mathematics is able to deal successfully only with the simplest of situations, more precisely, with a complex situation only to the extent that rare good fortune makes this complex situation hinge upon a few dominant simple factors. Beyond the well-traversed path, mathematics loses its bearing in a jungle of unnamed special functions and impenetrable combinatorial particularities. Thus, mathematical technique can only reach far if it starts from a point close to the simple essentials of a problem which has simple essentials. That form of wisdom which is the opposite of single-mindedness, the ability to keep many threads in hand, to draw for an argument from many disparate sources, is quite foreign to mathematics. The inability accounts for much of the difficulty which mathematics experiences in attempting to penetrate the social sciences. We may perhaps attempt a mathematical economics – but how difficult would be a mathematical history! Mathematics adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased. Only with difficulty does it find its way to the scientist’s ready grasp of the relative importance of many factors. Quite typically, science leaps ahead and mathematics plods behind.

Schwartz having referenced mathematical economics, let me try to restate his point more concretely than he did by referring to the Walrasian theory of general equilibrium. “Mathematics,” Schwartz writes, “adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased.” The Walrasian theory is at once too general and too special to be relied on as an applied theory. It is too general because the functional forms of most of the relevant equations can’t be specified, or even meaningfully restricted, except on the basis of very special simplifying assumptions; it is too special because the simplifying assumptions about the agents, the technologies, the constraints, and the price-setting mechanism are at best only approximations and, at worst, entirely divorced from reality.

Related to this deficiency of mathematics, and perhaps more productive of rueful consequence, is the simple-mindedness of mathematics – its willingness, like that of a computing machine, to elaborate upon any idea, however absurd; to dress scientific brilliancies and scientific absurdities alike in the impressive uniform of formulae and theorems. Unfortunately however, an absurdity in uniform is far more persuasive than an absurdity unclad. The very fact that a theory appears in mathematical form, that, for instance, a theory has provided the occasion for the application of a fixed-point theorem, or of a result about difference equations, somehow makes us more ready to take it seriously. And the mathematical-intellectual effort of applying the theorem fixes in us the particular point of view of the theory with which we deal, making us blind to whatever appears neither as a dependent nor as an independent parameter in its mathematical formulation. The result, perhaps most common in the social sciences, is bad theory with a mathematical passport. The present point is best established by reference to a few horrible examples. . . . I confine myself . . . to the citation of a delightful passage from Keynes’ General Theory, in which the issues before us are discussed with a characteristic wisdom and wit:

“It is the great fault of symbolic pseudomathematical methods of formalizing a system of economic analysis . . . that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads’ the necessary reserves and qualifications and adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back’ of several pages of algebra which assume they all vanish. Too large a proportion of recent ‘mathematical’ economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.”

Although it would have been helpful if Keynes had specifically identified the pseudomathematical methods that he had in mind, I am inclined to think that he was expressing his impatience with the Walrasian general-equilibrium approach rather than with the Marshallian tradition that he carried forward even as he struggled to transcend it. Walrasian general-equilibrium analysis, he seems to be suggesting, is too far removed from reality to provide any reliable guide to macroeconomic policy-making, because the necessary qualifications required to make general-equilibrium analysis practically relevant are simply unmanageable within the framework of general-equilibrium analysis. A different kind of analysis is required. As a Marshallian he was less skeptical of partial-equilibrium analysis than of general-equilibrium analysis. But he also recognized that partial-equilibrium analysis could not be usefully applied in situations, e.g., analysis of an overall “market” for labor, where the usual ceteris paribus assumptions underlying the use of stable demand and supply curves as analytical tools cannot be maintained. But for some reason that didn’t stop Keynes from trying to explain the nominal rate of interest by positing a demand curve to hold money and a fixed stock of money supplied by a central bank. But we all have our blind spots and miss obvious implications of familiar ideas that we have already encountered and, at least partially, understand.

Schwartz concludes his essay with an arresting thought that should give us pause about how we often uncritically accept probabilistic and statistical propositions as if we actually knew how they matched up with the stochastic phenomena that we are seeking to analyze. But although there is a lot to unpack in his conclusion, I am afraid someone more capable than I will have to do the unpacking.

[M]athematics, concentrating our attention, makes us blind to its own omissions – what I have already called the single-mindedness of mathematics. Typically, mathematics knows better what to do than why to do it. Probability theory is a famous example. . . . Here also, the mathematical formalism may be hiding as much as it reveals.

The Standard Narrative on the History of Macroeconomics: An Exercise in Self-Serving Apologetics

During my recent hiatus from blogging, I have been pondering an important paper presented in June at the History of Economics Society meeting in Toronto, “The Standard Narrative on History of Macroeconomics: Central Banks and DSGE Models” by Francesco Sergi of the University of Bristol, which was selected by the History of Economics Society as the best conference paper by a young scholar in 2017.

Here is the abstract of Sergi’s paper:

How do macroeconomists write the history of their own discipline? This article provides a careful reconstruction of the history of macroeconomics told by the practitioners working today in the dynamic stochastic general equilibrium (DSGE) approach.

Such a tale is a “standard narrative”: a widespread and “standardizing” view of macroeconomics as a field evolving toward “scientific progress”. The standard narrative explains scientific progress as resulting from two factors: “consensus” about theory and “technical change” in econometric tools and computational power. This interpretation is a distinctive feature of central banks’ technical reports about their DSGE models.

Furthermore, such a view on “consensus” and “technical change” is a significantly different view with respect to similar tales told by macroeconomists in the past — which rather emphasized the role of “scientific revolutions” and struggles among competing “schools of thought”. Thus, this difference raises some new questions for historians of macroeconomics.

Sergi’s paper is too long and too rich in content to easily summarize in this post, so what I will do is reproduce and comment on some of the many quotations provided by Sergi, taken mostly from central-bank reports, but also from some leading macroeconomic textbooks and historical survey papers, about the “progress” of modern macroeconomics, and especially about the critical role played by “microfoundations” in achieving that progress. The general tenor of the standard narrative is captured well by the following quotation from V. V. Chari:

[A]ny interesting model must be a dynamic stochastic general equilibrium model. From this perspective, there is no other game in town. […] A useful aphorism in macroeconomics is: “If you have an interesting and coherent story to tell, you can tell it in a DSGE model.” (Chari 2010, 2)

I could elaborate on this quotation at length, but I will just leave it out there for readers to ponder with a link to an earlier post of mine about methodological arrogance. Instead I will focus on two other sections of Sergi’s paper: “the five steps of theoretical progress” and “microfoundations as theoretical progress.” Here is how Sergi explains the role of the five steps:

The standard narrative provides a detailed account of the progressive evolution toward the synthesis. Following a teleological perspective, each step of this evolution is an incremental, linear improvement of the theoretical tool box for model building. The standard narrative identifies five steps . . . . Each step corresponds to the emergence of a school of thought. Therefore, in the standard narrative, there are not such things as competing schools of thought and revolutions. Firstly, because schools of thought are represented as a sequence; one school (one step) is always leading to another school (the following step), hence different schools are not coexisting for a long period of time. Secondly, there are no revolutions because, while emerging, new schools of thought [do] not overthrow the previous ones; instead, they suggest improvements and amendments that are accepted as an improvement by pre-existing schools; therefore, accumulation of knowledge takes place thanks to consensus. (pp. 17-18)

The first step in the standard narrative is the family of Keynesian macroeconometric models of the 1950s and 1960s, the primitive ancestors of the modern DSGE models. The second step was the emergence of New Classical macroeconomics, which introduced the ideas of rational expectations and dynamic optimization into theoretical macroeconomic discourse in the 1970s. The third step was the development, inspired by New Classical ideas, of Real-Business-Cycle models in the 1980s, and the fourth step was the introduction of New Keynesian models in the late 1980s and 1990s that tweaked the Real-Business-Cycle models in ways that rationalized the use of counter-cyclical macroeconomic policy within the theoretical framework of the Real-Business-Cycle approach. The final step, the DSGE model, emerged more or less naturally as a synthesis of the converging Real-Business-Cycle and New Keynesian approaches.

After detailing the five steps of theoretical progress, Sergi focuses attention on “the crucial improvement” that allowed the tool box of macroeconomic modelling to be extended in such a theoretically fruitful way: the insistence on providing explicit microfoundations for macroeconomic models. He writes:

Abiding [by] the Lucasian microfoundational program is put forward by DSGE modellers as the very fundamental essence of theoretical progress allowed by [the] consensus. As Sanjay K. Chugh (University of Pennsylvania) explains in the historical chapter of his textbook, microfoundations is all what modern macroeconomics is about: (p. 20)

Modern macroeconomics begin[s] by explicitly studying the microeconomic principles of utility maximization, profit maximization and market-clearing. [. . . ] This modern macroeconomics quickly captured the attention of the profession through the 1980s [because] it actually begins with microeconomic principles, which was a rather attractive idea. Rather than building a framework of economy-wide events from the top down [. . .] one could build this framework using microeconomic discipline from the bottom up. (Chugh 2015, 170)

Chugh’s rationale for microfoundations is a naïve expression of reductionist bias dressed up as simple homespun common sense. Everyone knows that you should build from the bottom up, not from the top down, right? But things are not always quite as simple as they seem. Here is an attempt to present microfoundations as being cutting-edge and sophisticated, offered in a 2009 technical report written by Cuche-Curti et al. for the Swiss National Bank.

The key property of DSGE models is that they rely on explicit micro-foundations and a rational treatment of expectations in a general equilibrium context. They thus provide a coherent and compelling theoretical framework for macroeconomic analysis. (Cuche-Curti et al. 2009, 6)

A similar statement is made by Gomes et al. in a 2010 technical report for the European Central Bank:

The microfoundations of the model together with its rich structure allow [us] to conduct a quantitative analysis in a theoretically coherent and fully consistent model setup, clearly spelling out all the policy implications. (Gomes et al. 2010, 5)

These laudatory descriptions of the DSGE model stress its “coherence” as a primary virtue. What is meant by “coherence” is spelled out more explicitly in a 2006 technical report describing NEMO, a macromodel of the Norwegian economy, by Brubakk et al. for the Norges Bank.

Various agents’ behavior is modelled explicitly in NEMO, based on microeconomic theory. A consistent theoretical framework makes it easier to interpret relationships and mechanisms in the model in the light of economic theory. One advantage is that we can analyse the economic effects of changes of a more structural nature […] [making it] possible to provide a consistent and detailed economic rationale for Norges Bank’s projections for the Norwegian economy. This distinguishes NEMO from purely statistical models, which to a limited extent provide scope for economic interpretations. (Brubakk and Sveen 2009, 39)

By creating microfounded models, in which all agents are optimizers making choices consistent with the postulates of microeconomic theory, DSGE model-builders, in effect, create “laboratories” from which to predict the consequences of alternative monetary policies, enabling policy makers to make informed policy choices. I pause merely to note the tendentious and misleading misappropriation of the language of empirical science in these characteristically self-aggrandizing references to DSGE models as “laboratories,” as if what goes on in such models were determined by an actual physical process, as is routinely the case in the laboratories of physical and natural scientists, rather than by speculative exercises in high-level calculation derived from the manipulation of the models themselves.

As a result of recent advances in macroeconomic theory and computational techniques, it has become feasible to construct richly structured dynamic stochastic general equilibrium models and use them as laboratories for the study of business cycles and for the formulation and analysis of monetary policy. (Cuche-Curti et al. 2009, 39)

Policy makers can be confident in the conditional predictions, corresponding to the policy alternatives under consideration, that are derived from their “laboratory” DSGE models, because those models, having been constructed on the basis of the postulates of economic theory, are microfounded, embodying deep structural parameters that are invariant to policy changes. Microfounded models are thus immune to the Lucas Critique of macroeconomic policy evaluation, under which the empirically estimated coefficients of traditional Keynesian macroeconometric models cannot be assumed to remain constant under policy changes, because those coefficient estimates are themselves conditional on policy choices.
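
A standard textbook illustration (my own gloss, not an example drawn from the central-bank reports Sergi quotes) shows why reduced-form coefficients are conditional on the policy rule. Suppose output responds only to the unanticipated component of money growth, while the monetary authority follows a simple feedback rule:

\[ y_t = \alpha + \beta\,(m_t - E_{t-1} m_t) + \varepsilon_t, \qquad m_t = \rho\, m_{t-1} + \nu_t . \]

Under rational expectations \( E_{t-1} m_t = \rho\, m_{t-1} \), so the reduced form that an econometrician would estimate is

\[ y_t = \alpha + \beta\, m_t - \beta\rho\, m_{t-1} + \varepsilon_t . \]

The coefficient on lagged money, \( -\beta\rho \), is a mixture of the deep parameter \( \beta \) and the policy parameter \( \rho \); change the policy rule and the estimated reduced-form coefficient changes even though \( \beta \) itself never does. The DSGE claim is that estimating \( \beta \) directly, rather than the composite \( -\beta\rho \), immunizes the model against such shifts.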

Here is how the point is made in three different central-bank technical reports: by Argov et al. in a 2012 technical report about MOISE, a DSGE model for the Israeli economy; by Cuche-Curti et al. in their 2009 report for the Swiss National Bank; and by Medina and Soto in a 2006 technical report for the Central Bank of Chile about a new DSGE model for the Chilean economy.

Being micro-founded, the model enables the central bank to assess the effect of its alternative policy choices on the future paths of the economy’s endogenous variables, in a way that is immune to the Lucas critique. (Argov et al. 2012, 5)

[The DSGE] approach has three distinct advantages in comparison to other modelling strategies. First and foremost, its microfoundations should allow it to escape the Lucas critique. (Cuche-Curti et al. 2009, 6)

The main advantage of this type of model, over more traditional reduced-form macro models, is that the structural interpretation of their parameters allows [it] to overcome the Lucas Critique. This is clearly an advantage for policy analysis. (Medina and Soto, 2006, 2)

These quotations show clearly that escaping, immunizing against, or overcoming the Lucas Critique is viewed by DSGE modelers as the holy grail of macroeconomic model building and macroeconomic policy analysis. If the Lucas Critique cannot be neutralized, the coefficient estimates derived from reduced-form macroeconometric models cannot be treated as invariant to policy and therefore cannot provide a secure basis for predicting the effects of alternative policies. But DSGE models allow deep structural relationships, reflecting the axioms underlying microeconomic theory, to be estimated. Because those estimates reflect the deep, and presumably stable, microeconomic structure of the economy, DSGE modelers claim that they provide policy makers with a reliable basis for conditional forecasting of the effects of macroeconomic policy.

Because of the consistently poor track record of DSGE models in actual forecasting (for evidence of that poor track record, comparing the predictive performance of DSGE models with that of more traditional macroeconometric models, see the paper by Carlaw and Lipsey and my post about their paper), the emphasis placed on the Lucas Critique by DSGE modelers has an apologetic character: DSGE modelers relentlessly invoke the Lucas Critique to account for, and explain away, the relatively poor comparative predictive performance of DSGE models. But if DSGE models really are better than traditional macro models, why are their unconditional predictions not at least as good as those of traditional macroeconometric models? Obviously, estimates of the deep structural relationships provided by microfounded models are not as reliable as DSGE apologetics would suggest.

And the reason that the estimates of deep structural relationships derived from DSGE models are not reliable is that those models, no less than traditional macroeconometric models, are subject to the Lucas Critique. The deep microeconomic structural relationships embodied in DSGE models are conditional on the existence of a unique equilibrium solution that persists long enough for the structural relationships characterizing that equilibrium to be inferred from the data-generating mechanism by which those models are estimated. (I have made this point previously here.) But if the data-generating mechanism does not conform to the unique general equilibrium upon whose existence the presumed deep structural relationships of microeconomic theory embodied in DSGE models are conditioned, the econometric estimates derived from DSGE models cannot capture the desired deep structural relationships, and the resulting structural estimates are therefore incapable of providing a reliable basis for macroeconomic-policy analysis or for conditional forecasts of the effects of alternative policies, much less unconditional forecasts of endogenous macroeconomic variables.

Of course, the problem is even more intractable than the discussion above implies, because there is no reason why the deep structural relationships corresponding to a particular equilibrium should be invariant to changes in the equilibrium. So any change in economic policy that displaces a pre-existing equilibrium, let alone any unforeseen change in technology, tastes, or resource endowments that does so, will necessarily cause all the deep structural relationships to change correspondingly. The deep structural parameters on whose invariance the supposedly unique capacity of DSGE models to provide reliable policy analysis depends simply don’t exist. Policy making based on DSGE models is as much an uncertain art, requiring the exercise of finely developed judgment and intuition, as policy making based on any other kind of economic modeling. DSGE models provide no uniquely reliable basis for making macroeconomic policy.

References

Argov, E., Barnea, E., Binyamini, A., Borenstein, E., Elkayam, D., and Rozenshtrom, I. (2012). MOISE: A DSGE Model for the Israeli Economy. Technical Report 2012.06, Bank of Israel.
Brubakk, L., Husebø, T. A., Maih, J., Olsen, K., and Østnor, M. (2006). Finding NEMO: Documentation of the Norwegian Economy Model. Technical Report 2006/6, Norges Bank Staff Memo.
Carlaw, K. I., and Lipsey, R. G. (2012). “Does History Matter?: Empirical Analysis of Evolutionary versus Stationary Equilibrium Views of the Economy.” Journal of Evolutionary Economics 22(4): 735-66.
Chari, V. V. (2010). Testimony before the Committee on Science and Technology, Subcommittee on Investigations and Oversight, U.S. House of Representatives. In Building a Science of Economics for the Real World.
Chugh, S. K. (2015). Modern Macroeconomics. MIT Press, Cambridge, MA.
Cuche-Curti, N. A., Dellas, H., and Natal, J.-M. (2009). DSGE-CH: A Dynamic Stochastic General Equilibrium Model for Switzerland. Technical Report 5, Swiss National Bank.
Gomes, S., Jacquinot, P., and Pisani, M. (2010). The EAGLE: A Model for Policy Analysis of Macroeconomic Interdependence in the Euro Area. Technical Report 1195, European Central Bank.
Medina, J. P., and Soto, C. (2006). Model for Analysis and Simulations (MAS): A New DSGE Model for the Chilean Economy. Technical Report, Central Bank of Chile.

All New Classical Models Are Subject to the Lucas Critique

Almost 40 years ago, Robert Lucas made a huge, but not quite original, contribution when he provided a very compelling example of how the predictions of the then standard macroeconometric models used for policy analysis were inherently vulnerable to shifts in the empirically estimated parameters contained in the models, shifts induced by the very policy change under consideration. Insofar as those models could provide reliable forecasts of the future course of the economy, it was because the policy environment under which the parameters of the model had been estimated was not changing during the time period for which the forecasts were made. But any forecast deduced from the model conditioned on a policy change would necessarily be inaccurate, because the policy change itself would cause the agents in the model to alter their expectations in light of the policy change, causing the parameters of the model to diverge from their previously estimated values. Lucas concluded that only models based on deep parameters reflecting the underlying tastes, technology, and resource constraints under which agents make decisions could provide a reliable basis for policy analysis.
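
To see the force of the point, here is a minimal simulation sketch (my own illustration of a textbook surprise-money-growth setup, not an example taken from Lucas’s paper), in which a reduced-form regression coefficient estimated under one policy rule shifts when the rule changes, even though the underlying deep parameter stays fixed:

    # Lucas-critique sketch: the deep parameter beta is constant, but the reduced-form
    # coefficient on lagged money depends on the policy-rule parameter rho.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(rho, beta=1.5, alpha=0.0, T=50_000):
        """y_t = alpha + beta*(m_t - E[m_t|m_{t-1}]) + eps_t, with policy rule
        m_t = rho*m_{t-1} + v_t and rational expectations E[m_t|m_{t-1}] = rho*m_{t-1}."""
        m = np.zeros(T)
        v = rng.normal(size=T)
        eps = rng.normal(scale=0.5, size=T)
        for t in range(1, T):
            m[t] = rho * m[t - 1] + v[t]
        surprise = m[1:] - rho * m[:-1]   # the unanticipated component of money
        y = alpha + beta * surprise + eps[1:]
        return y, m[1:], m[:-1]

    def reduced_form(y, m_t, m_lag):
        """OLS of y_t on a constant, m_t, and m_{t-1}: the regression an
        econometrician who ignores the policy rule would run."""
        X = np.column_stack([np.ones_like(m_t), m_t, m_lag])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    for rho in (0.2, 0.9):                # two different policy regimes
        y, m_t, m_lag = simulate(rho)
        c = reduced_form(y, m_t, m_lag)
        print(f"rho = {rho}: coefficient on m_t = {c[1]:.2f}, on m_lag = {c[2]:.2f}")
    # The coefficient on m_lag comes out near -beta*rho (about -0.30 vs. -1.35), so the
    # estimated reduced form is stable within a regime but breaks down across regimes.

The reduced form looks perfectly well behaved as long as the policy rule is unchanged; the moment the rule changes, the estimated coefficient is no longer the right one to use for conditional forecasts, which is exactly Lucas’s point.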

The Lucas critique undoubtedly conveyed an important insight about how to use econometric models in analyzing the effects of policy changes, and if it did no more than cause economists to be more cautious in offering policy advice based on their econometric models and policy makers to be more skeptical about the advice they got from economists using such models, the Lucas critique would have performed a very valuable public service. Unfortunately, the lesson that the economics profession learned from the Lucas critique went far beyond that useful warning about the reliability of conditional forecasts potentially sensitive to unstable parameter estimates. In an earlier post, I discussed another way in which the Lucas Critique has been misapplied. (One responsible way to deal with unstable parameter estimates would be to make forecasts showing a range of plausible outcomes depending on how parameter estimates might change as a result of the policy change. Such an approach is inherently messy, and, at least in the short run, would tend to make policy makers less likely to pay attention to the policy advice of economists. But the inherent sensitivity of forecasts to unstable model parameters ought to make one skeptical about the predictions derived from any econometric model.)

Instead, the Lucas critique was used by Lucas and his followers as a tool by which to advance a reductionist agenda of transforming macroeconomics into a narrow slice of microeconomics, the slice being applied general-equilibrium theory in which the models required drastic simplification before they could generate quantitative predictions. The key to deriving quantitative results from these models is to find an optimal intertemporal allocation of resources given the specified tastes, technology and resource constraints, which is typically done by describing the model in terms of an optimizing representative agent with a utility function, a production function, and a resource endowment. A kind of hand-waving is performed via the rational-expectations assumption, thereby allowing the optimal intertemporal allocation of the representative agent to be identified as a composite of the mutually compatible optimal plans of a set of decentralized agents, the hand-waving being motivated by the Arrow-Debreu welfare theorems proving that any Pareto-optimal allocation can be sustained by a corresponding equilibrium price vector. Under rational expectations, agents correctly anticipate future equilibrium prices, so that market-clearing prices in the current period are consistent with full intertemporal equilibrium.
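
In schematic textbook form (a generic statement of the approach, not any particular author’s model), the representative-agent problem underlying these models is something like

\[ \max_{\{c_t,\,k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^t u(c_t) \quad \text{subject to} \quad c_t + k_{t+1} = z_t f(k_t) + (1-\delta)\,k_t , \]

where \( u \) is the representative agent’s utility function, \( f \) the production function, \( k_t \) the capital stock, \( \delta \) the depreciation rate, and \( z_t \) a stochastic productivity shock whose law of motion the agent is assumed, via rational expectations, to know. The deep parameters are \( \beta \), \( \delta \), and the parameters of \( u \), \( f \), and the shock process; the supporting equilibrium price vector is then read off the multipliers of the solution to this planner’s problem.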

What is amazing – mind-boggling might be a more apt adjective – is that this modeling strategy is held by Lucas and his followers to be invulnerable to the Lucas critique, being based supposedly on deep parameters reflecting nothing other than tastes, technology and resource endowments. The first point to make – there are many others, but we needn’t exhaust the list – is that it is borderline pathological to convert a valid and important warning about how economic models may be subject to misunderstanding or misuse into a weapon with which to demolish any model susceptible of such misunderstanding or misuse, as a prelude to replacing those models by the class of reductionist micromodels that now pass for macroeconomics.

But there is a second point to make, which is that the reductionist models adopted by Lucas and his followers are no less vulnerable to the Lucas critique than the models they replaced. All the New Classical models are explicitly conditioned on the assumption of optimality. It is only by positing an optimal solution for the representative agent that the equilibrium price vector can be inferred. The deep parameters of the model are conditioned on the assumption of optimality and the existence of an equilibrium price vector supporting that equilibrium. If the equilibrium does not obtain – the optimal plans of the individual agents or of the fantastical representative agent becoming incapable of execution – empirical estimates of the model’s parameters cannot correspond to the equilibrium values implied by the model itself. Parameter estimates are therefore sensitive to how closely the economic environment in which the parameters were estimated corresponded to conditions of equilibrium. If the conditions under which the parameters were estimated more nearly approximated the conditions of equilibrium than does the period for which the model is being used to make conditional forecasts, those forecasts, from the point of view of the underlying equilibrium model, must be inaccurate. The Lucas critique devours its own offspring.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
